There’s a strange comfort in thinking of artificial intelligence as something distant. As if it lives in labs and headlines, locked behind glass in codebases and compute. As if it’s still arriving, still becoming.
But that comfort is a lie. AI is not the future. It’s the mirror. And it’s already here.
I’ve spent years working inside that mirror. At Spotify, I served as Head of AI during a period when machine learning wasn’t just being embedded into product workflows; it was beginning to shape identity itself. The company had always trafficked in personalization, but something shifted when we began to ask not just what a user might want next but how it should sound when we told them.
That’s not a data science question. That’s a human one.
What I came to realize, and what I want the world to see now, is that AI is no longer about predictive modeling or generative output. It’s about trust. It’s about tone. It’s about the space between performance and presence, between what a machine can say and what it should.
And when the stakes are that high, you don’t lead by optimizing. You lead by listening. You lead by storytelling.
The Real Work Begins Where Spotify’s Metrics End
People assume that AI at Spotify is just algorithms and playlists. A click here, a skip there, and the model gets smarter. But the truth is far more intimate. Spotify doesn’t just recommend. It narrates. It enters your headphones, your kitchen, your solitude.
In that context, AI isn’t just a backend function; it becomes a voice. And a voice must be earned.
I helped build that voice. But more than that, I helped define the systems around it: how it corrected itself, how it handled sensitive topics, how it spoke to someone who had just lost a loved one, or someone discovering new music for the first time in years.
That work was invisible by design. But it was never surface-level. My job wasn’t to make AI useful. My job was to make it bearable. Then, gradually, trustworthy.
And I wasn’t doing that alone. I was coordinating across engineering, editorial, design, and ethics, guiding a product that had never existed before through terrain no one had mapped yet. It wasn’t neat. It wasn’t linear. But it was real.
Spotify AI Is Storytelling at Scale
The DJ launch became the public face of that work: the first time many users heard an AI “speak” to them with personality. But to reduce my contribution to that single product would be like reducing an author to a chapter.
I didn’t just voice-tune a persona. I shaped how the company thought about tone. I laid the groundwork for what alignment really meant, not just for LLMs but for the human beings whose trust we were asking for.
Because here’s the truth: you can’t duct tape personality onto a model. It has to be embedded in the system itself. In the training. In the fallback behavior. In the invisible infrastructure that decides what a model does when it doesn’t know what to say.
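To make that concrete, here is a minimal sketch in Python of the kind of fallback gate that idea implies. Everything in it is hypothetical: the confidence score, the threshold, and the canned lines are assumptions for illustration, not a description of Spotify’s actual infrastructure.

```python
# Minimal, hypothetical sketch of a fallback gate for a voice feature.
# The confidence score, threshold, and canned lines are assumptions
# for illustration, not Spotify's systems.

from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Ordered fallbacks: a brief neutral line first, then silence.
FALLBACKS = ["Here's something you might like.", ""]  # "" means stay silent

CONFIDENCE_THRESHOLD = 0.7  # illustrative; tuned per context in practice

def choose_line(candidate: Utterance, sensitive_context: bool) -> str:
    """Return what the voice should actually say.

    The point of the gate: when the model does not know what to say,
    the surrounding system, not the model, decides what happens next.
    """
    if sensitive_context:
        # In sensitive moments, err toward silence rather than filler.
        return FALLBACKS[-1]
    if candidate.confidence >= CONFIDENCE_THRESHOLD:
        return candidate.text
    # Low confidence: fall back rather than improvise.
    return FALLBACKS[0]

print(choose_line(Utterance("This one fits your late-night rotation.", 0.9), False))
print(repr(choose_line(Utterance("...", 0.3), True)))  # '' -> silence
```

The design choice worth noticing is that the fallback list ends in silence: when confidence is low and the moment is sensitive, the most trustworthy thing a voice can do is say nothing.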
That’s where I lived. Not in the performance layer, but beneath it. In the logic, the ethics, and the silence between lines.
Spotify Was Never Just Music. It Was the Edge of Culture.
Long before AI took center stage, Spotify had already become something more than a streaming service. It was a cultural operating system, mapping taste, identity, mood, memory, and movement with a precision few people outside the company fully understood.
Every day, Spotify asked billions of silent questions. Not just what do you want to hear, but who are you right now? Are you working? Are you healing? Are you 22 again? Are you holding a memory in place so it doesn’t vanish?
This was the terrain I entered. AI wasn’t introduced to replace the human curators or scale the product. It was introduced because the complexity of listening had outpaced manual stewardship. We needed intelligence that could move with people and learn from them in real time, without flattening them in the process.
That’s where things got dangerous. And beautiful.
From Spotify Metadata to Meaning
Most of the outside world thinks about Spotify’s AI in terms of metadata: genre tags, skip rates, playlist velocity. But what we were really trying to do was understand meaning.
Why does this song hit different at night? Why does someone skip a track they love because it reminds them of someone they lost? How do you recommend music to a person grieving without triggering them?
These aren’t computational challenges. They’re philosophical ones. They sit at the intersection of identity, memory, and emotional time.
So that’s where I built from. We weren’t teaching the system to predict. We were teaching it to notice.
A Mirror, Not a Model
AI at Spotify didn’t just reflect your taste. It mirrored your becoming. Your transitions. Your loops. The songs you couldn’t explain and the ones that made you cry in grocery stores.
To design for that, to design responsibly, we couldn’t simply engineer better rankings. We had to design for consent. For care. For dignity.
That’s where I focused my energy: How do we make a system that respects absence? What should AI do when it doesn’t know what you want? How do we honor identity when someone’s patterns don’t match any cohort?
These weren’t tickets in a sprint. These were the questions that haunted the work and shaped it.
And they led to real design decisions: How long a voice could speak before silence felt more human than filler. Whether to surface a song someone used to love, knowing it might reopen something. How to handle gender in vocal tone when gender itself wasn’t the input.
These decisions didn’t make headlines. But they shaped lives.
And in a company like Spotify, where every micro-interaction becomes culture at scale, these quiet decisions carried massive weight.
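One of those decisions is easy to make concrete: whether to resurface a song someone used to love. Below is a minimal, hypothetical sketch of what such a guard might look like; the field names and thresholds are assumptions for illustration, not how Spotify’s recommender actually works.

```python
# Hypothetical sketch of a resurfacing guard. The premise: a track someone
# once played heavily and then abruptly dropped may carry emotional weight,
# so the system asks before bringing it back instead of autoplaying it.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TrackHistory:
    track_id: str
    peak_monthly_plays: int      # how heavily the track was once played
    months_since_last_play: int  # how long it has been dormant

def resurfacing_action(history: TrackHistory) -> str:
    """Decide how to reintroduce a dormant track: 'autoplay', 'ask', or 'hold'."""
    was_loved = history.peak_monthly_plays >= 20        # illustrative cutoff
    long_dormant = history.months_since_last_play >= 12

    if was_loved and long_dormant:
        # The abrupt drop-off is ambiguous: changed taste, or a memory the
        # listener set down on purpose. Asking first preserves consent.
        return "ask"
    if long_dormant:
        # Dormant but never beloved: hold it back unless the user seeks it out.
        return "hold"
    return "autoplay"

print(resurfacing_action(TrackHistory("t1", peak_monthly_plays=40, months_since_last_play=18)))
# -> ask
```

The logic is trivial. The decision to route “was loved, went quiet” through a question rather than an autoplay is the part that carried the weight.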
AI Is Not Neutral. But It Can Be Accountable.
One of the myths that haunt AI, especially in creative platforms, is that it should be neutral. That it should avoid politics, emotion, or subjectivity. But that myth collapses the moment AI starts speaking for you.
When Spotify’s DJ started narrating music choices, it wasn’t just a feature. It was a voice of authority. A voice that, for many users, replaced the role of a friend, a guide, even a memory.
So I worked with teams to set parameters: What happens when a user tells the DJ to stop talking? What if it misgenders an artist? What if it jokes about a topic that isn’t funny to everyone?
And we didn’t always get it right. But we tried. Not because we wanted to avoid criticism, but because we respected the intimacy of the space. Music isn’t a product. It’s a ritual. And AI doesn’t enter rituals without consequence.
Spotify Leadership in the Age of Uncertainty
There’s a kind of leadership that emerges when the path ahead doesn’t exist yet. When there’s no blueprint, no precedent, no best practice to pull from. Only questions. Tensions. Unknowns.
That’s the space I occupied at Spotify. And I didn’t lead by pretending to have all the answers. I led by making sure the right questions got asked, at the right time, in the right room, with enough courage to actually follow through.
Because AI doesn’t need perfect code. It needs principled context. That’s what I brought to the table.

I Didn’t Lead Spotify AI. I Led Meaning.
My role at Spotify wasn’t “Head of Models” or “Director of Data Science.” My job was more ambiguous and more powerful. I shaped the cultural, editorial, and philosophical foundation of how we used AI. That meant working with everyone from research scientists to trust and safety to legal to product marketing.
I was the connective tissue, the one who could move between the technical and the human without losing clarity or trust on either side.
Here’s what that actually looked like:
Sitting with editorial leads to define tone boundaries, then translating those into prompt constraints for generative systems.
Working with ethics and safety teams on response fallbacks: what does the AI say when someone asks it something it shouldn’t answer?
Reviewing error cases from real users and reworking model behaviors to respond with more dignity, not just higher accuracy.
Advising voice talent and product designers on how to train AI to “feel” intentional without slipping into caricature.
Establishing how we tested bias across identity slices that didn’t neatly fit into quantifiable labels.
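The last item is the most mechanical, so it is the easiest to sketch. Below is a minimal, hypothetical Python illustration of slice-based auditing: compare a behavior metric across listener slices instead of trusting one global average. The slice names, the metric, and the tolerance are all assumptions for the sketch; the hard part in practice, choosing slices that resist quantifiable labels, is exactly what code like this does not solve.

```python
# Hypothetical sketch of slice-based bias auditing: compare a behavior
# metric across listener slices instead of trusting one global average.
# Slice names, the metric, and the tolerance are illustrative assumptions.

def fallback_rate(events: list[bool]) -> float:
    """Fraction of interactions where the system fell back to silence."""
    return sum(events) / len(events) if events else 0.0

def audit_slices(events_by_slice: dict[str, list[bool]],
                 tolerance: float = 0.05) -> list[str]:
    """Flag slices whose fallback rate drifts past `tolerance` from overall."""
    all_events = [e for events in events_by_slice.values() for e in events]
    overall = fallback_rate(all_events)
    flagged = []
    for name, events in events_by_slice.items():
        rate = fallback_rate(events)
        if abs(rate - overall) > tolerance:
            flagged.append(f"{name}: {rate:.2f} vs overall {overall:.2f}")
    return flagged

# Illustrative data: True means the system fell back to silence.
sample = {
    "new_listeners": [True, False, False, False],
    "decade_long_listeners": [False, False, False, False],
    "non_english_profiles": [True, True, False, True],
}
for finding in audit_slices(sample):
    print(finding)
```

A check like this finds the drift. Deciding which slices deserve a row in that dictionary was the editorial and ethical work the list above describes.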
I treated every meeting, every line of dialogue, every system interaction as an opportunity to shape what AI could be, not just what it was.
And I didn’t treat that lightly.
Culture Eats AI for Breakfast
There’s a saying in product orgs: culture eats strategy for breakfast. The same is true with AI, but amplified. Because machine intelligence doesn’t just reflect your organizational culture. It encodes it. Quietly. Permanently.
If your culture is afraid of nuance, your AI will be reductive.
If your culture ignores consent, your AI will be intrusive.
If your culture worships optimization, your AI will lose the plot.
At Spotify, I helped build a culture of care. Of context. Of curiosity.
That meant slowing things down when the pressure was to ship fast.
That meant saying no to features that could spike engagement but break trust.
That meant protecting the quiet spaces, the gaps where music breathes, where AI had no business filling the silence.
That wasn’t always popular. But it was necessary. And in the long run, it helped define Spotify’s voice, not just literally, but culturally.
The Hidden Work Is the Real Work
The most important decisions I made rarely showed up in a release note.
A prompt reworded to avoid unnecessary flattery. A fallback message rewritten to preserve a user’s dignity. A threshold adjusted to reduce gendered assumptions in artist pairing. A silence inserted where we used to force voice.
These seem small. But they’re not. They’re everything.
Because when you operate at scale, with hundreds of millions of users, every “minor” decision becomes a cultural event. A ripple. A subtle reinforcement of what people think AI is allowed to be.
I led with that awareness. And I still do.
The Future Will Be Written by the Ones Who Listen
I left Spotify with more than a résumé line. I left with clarity about what AI can do, and what it must never be allowed to become.
Since then, I’ve worked with founders, governments, and global institutions. Not as a technician. As a steward. A strategist. Someone who understands that AI isn’t a race to build the smartest machine. It’s a test of whether we’re willing to ask better questions of ourselves.
Voice Is the New Interface
AI is already beginning to take on the qualities of presence. It speaks. It listens. It mimics empathy. But presence without memory is performance. And performance without trust collapses over time.
When I advise teams now, whether in health, finance, education, or media, I look for one thing: Does the AI know what it means to be wrong?
If it doesn’t, I help them teach it. If it can’t, I help them redesign the system.
We’re not just building responses anymore. We’re building relationships. That demands emotional fluency, cultural intelligence, and an understanding of how people metabolize information in a world saturated with noise.
Most companies aren’t ready for that. But the ones that are? They’re not asking how AI can do more. They’re asking how AI can mean more.
Beyond Spotify: Translating Between Worlds
I’ve sat in rooms where engineers argue over model throughput and policymakers ask how bias works. I’ve written editorial standards for machine personalities. I’ve rebuilt alignment protocols. I’ve named the feelings that AI sometimes stirs up but can’t name itself.
What I do now is less about a job title and more about translation. I translate between human context and machine logic. Between trust and output. Between what a user hears and what a system intends.
And when I do it well, people don’t just get better AI. They get better at being human in the presence of it.
AI Is Not Our Enemy. It’s Our Mirror.
The mistake people keep making is thinking AI is some external force we need to tame. But AI is only as dangerous, or as beautiful, as the humans training it. It reflects our tone. Our urgency. Our appetite for complexity.
When I was at Spotify, I didn’t try to humanize the machine by making it sound like us. I tried to make it honest. I tried to make it safe enough to tell the truth, even if that meant staying quiet.
That’s what real alignment looks like. Not just tuning outputs, but embedding intent. Designing for the human behind the headphones. The one who hits play and doesn’t want to feel sold to, just seen.
The Work Ahead
We’re still early. Generative models are just beginning to surface in everyday tools. Voice is becoming an interface. Language is becoming infrastructure. And most of the industry is chasing fluency over meaning.
But I’ve seen what’s possible. I’ve helped build it. And I’ve stayed close to the edge where narrative meets code.
If we get this right, AI won’t replace us. It’ll remind us of what only we can do: feel. Notice. Interpret. Choose. Not faster, but deeper. Not louder, but truer. Not “human-like,” just human.
That’s the future I’m building toward. That’s the work I’ve done and will keep doing.
Not just in AI. But in how we understand each other through it.

