Space for Lease: Wes Anderson Meets The Office 

For fifteen years, I’ve told other people’s stories — about bourbon and banking and retirement communities. Stories shaped by briefs and budgets and the occasional request to “make the logo bigger.” It comes with the territory of professional advertising. I love collaborating with teams and clients, and I’m proud of that work.
But over the past few months, I’ve had the opportunity to explore strictly personal creative work. And this sitcom pilot, though only 124 seconds long, is one of the most ambitious things I’ve made.
It started as an admittedly superficial exercise. I wanted to make an AI video that was visually striking, with complicated camera movements. A proof of concept. For myself.
Then the story showed up.
What if I made a sitcom about a failing municipal tourist attraction? What if it lived in a Wes Anderson–adjacent universe? What if the people working there had the same petty frustrations, the same office politics, the same fluorescent-lit despair as any municipal job? And what if we called it a space elevator — even though it was shorter than the tallest office building in most mid-sized cities?
After about three months of making mistakes, fighting with the tech and figuring out workarounds, I had a complete pilot: an intro, a core episode that establishes the premise, and a “coming next week” teaser at the end. One hundred percent written, produced, directed, edited, sound-designed, and art-directed by me (for better or worse).
The tools mattered, but they were incidental. 
I used a mix of AI video and image generators to make this, along with more traditional post-production tools like Premiere Pro, Photoshop and Logic. Some handled heavy lifting. Others filled gaps. All of them required patience, repetition, and my own human judgment.
At this stage in AI video production, the tools are stubborn. Some shots were generated dozens of times to get the right performance. I found that one- or two-word prompt changes would swing a performance from robotic to believable. Add a casual “uh” or a “look,” and suddenly the “actor” would deliver a line less like an evening newscaster and more like a human being.
Directing the AI became its own craft: learning how to coax intention, tone, and rhythm out of something synthetic.
Consistency was its own battle. Keeping characters recognizable across scenes. Making the circular space-elevator interior feel like a real, coherent place. Faces breaking. Audio drifting. Visual continuity collapsing because every generation is, at its core, a roll of the dice. These were daily problems.
Despite all that, I had a lot of fun making this. Frustrating fun. Many nights, I'd be up until 2 or 3 a.m. trying to figure out why a character's face had horrifyingly migrated halfway across his head. But thankfully, I managed to iron out those issues (mostly).
At any rate, if you’re generous enough to watch it, I hope you have as much fun watching it as I did making it.
And if you want to talk about the workflows, the tools, the storytelling — or any of it — I’d love to chat.

steve@stvmrgn.com
