How to direct an AI ad for Google
A Google creative director published the most honest breakdown of directing an AI ad
This week:
A Google creative director’s field notes on fighting “Pixar drift”; Scorsese voices a four-armed alien shopkeeper in the new Mandalorian trailer; Google Cloud launches “enterprise-grade AI filmmaking pipelines” next to a 12th-century Delhi monument.
Render Reel
Framerate is the new Vimeo, or at least it’s trying to be. Still in beta, it’s positioning itself as the hosting platform for people who actually make things.
The Nonfiction Hotlist partnered with Yahoo to champion 23 unproduced nonfiction projects. Like the Black List, but for docs already shot. Worth a look for next year’s edition if you have a doc sitting on your drive.
Gemini can now make you a 30-second AI song. Google unveiled Lyria 3 and clarified that the goal isn’t to create a masterpiece, which is doing a lot of work as a disclaimer. Lyria 3 is rolling out through Gemini right now.
A Google creative director wrote the most honest breakdown of what directing an AI ad actually looks like
For most of advertising’s history, a director’s value was inseparable from what they could actually do.
You were a live-action director or an animation director. You shot on film or you shot digital. You worked in stop-motion or you didn’t.
The industry ran on specialization, and the pipeline from concept to storyboard to shoot to post was sequential and slow.
And of course, expensive by design.
A two-week turnaround for a polished stop-motion-style spot with a four-person team would have been completely delusional.
But that started cracking in the early 2000s when After Effects and Cinema 4D collapsed the line between motion design and traditional post-production. Suddenly a single designer with the right software could output work that previously required a team of twelve.
But even then, there was a core pipeline. You still concepted, then you storyboarded, then you built assets, then you animated and then you composited.
Erica Gorochow’s breakdown of directing a Google holiday ad using entirely generative AI tools is the clearest account yet of what happens when that pipeline finally collapses.
Gorochow is a motion designer and director whose work has been recognized by Vimeo, Fast Company, and AdWeek.
She led a team of four through a two-week sprint using Google’s own models, which generated over a thousand takes.
They composited in After Effects and then hired real voice actors because AI dialogue didn’t hit the mark.
They fought constant “Pixar drift,” the model’s gravitational pull toward a generic 3D aesthetic that flattens everything into the same cuteness.
On day one, they were already producing near-final frames to pitch with.
Gorochow describes a rhythm of rapid exploration in ImageFX, refinement in Flora using NanoBanana, then reconstruction in Photoshop and After Effects for final polish.
It’s not really post-prod. And it’s not really pre-prod either.
It’s all production, all the time.
After generating a sprawling, messy board of explorations, Gorochow realized she needed to lock her final characters, sets, and props before moving into video generation.
Without that discipline, the AI would hallucinate details differently in every shot.
In other words, she had to impose the structure of a traditional production pipeline onto a tool that has no native concept of continuity.
But it isn’t all song and dance.
She notes that her “bread and butter” is neither stop-motion nor high-end 3D, and that under the old system she’d never have been hired to direct something in this style. AI gave her access to an aesthetic vocabulary that would previously have required a specialist.
She went on to say:
The ability to pitch anything is what energizes me.
That said, I don’t have a purely rosy view of AI. I still worry about what this means for craft, for specialists, and for the people who built the techniques we’re now abstracting away. But this project also reminded me how essential directing instincts still are: communication, framing, pacing, performance, and a practical understanding of how to glue the pieces together, were all key – and I hope, timeless.
When I think of AI as a new, expansive tool, more like the early days of the Adobe suite, there’s a lot here to be genuinely excited about.
Her entire piece is worth reading because it’s a rare AI production write-up that’s both enthusiastic and clear-eyed, and doesn’t pretend those two things cancel each other out.
Scorsese lends his voice to a four-armed alien shopkeeper in the new Mandalorian and Grogu trailer; the character gets maybe thirty seconds of screen time before slamming the door on Pedro Pascal. Star Wars officially responded on X by calling the cameo “absolute cinema,” possibly a dig at Scorsese’s 2019 comments that Marvel films weren’t.
Invideo partnered with Google Cloud to launch “enterprise-grade AI filmmaking pipelines,” unveiled at the India AI Film Festival next to the Qutub Minar, an 800-year-old monument. That’s a sentence that would have read as satire only two years ago.
This week we watched: Matthew McConaughey on AI in Hollywood
He said:
It’s here, the moral plea won’t stop it, there’s too much money behind it, and the only move is to own your likeness before someone else uses it without asking.
He also floated the idea that “Best AI Film” could become its own awards category which sounds absurd until you remember that “Best Animated Feature” didn’t exist at the Oscars until 2001.
Retail Therapy: AI dash cam
The DDPAI Z90 Master is a triple-channel dash cam that records 4K front, 4K rear, and 3K interior simultaneously, which means it captures more footage of your commute. For better or for worse.
It’s got Sony STARVIS 2 sensors for night driving, an AI interior camera that brightens faces for “ride-share records and incident proof,” and 4G remote access so you can check on your parked car from your phone.
Like a Ring doorbell on wheels.
It’s really for Uber drivers who want receipts or people who watched one too many road rage compilations and decided they weren’t going to be the one without evidence. Fair.