Making a Launch Video with AI

I wanted a launch video for publicker.dev to share on social media. Something clean, minimal, with that “motion graphics” feel. Instead of opening After Effects (let’s be honest, I don’t really know how to use it), I decided to try generating it with AI using Remotion - a React-based video framework.

The Setup

The workflow was straightforward: install the Remotion skill in OpenCode, describe what I wanted, and let the agent build the video scene by scene. I emphasized the “motion graphics” aesthetic - native React components animating smoothly, not screen recordings.

The agent created:

  • An intro with my avatar and site name
  • A homepage scene showing the NoFrixion project card
  • A blog scene demonstrating the LLM-friendly content negotiation
  • An outro with credits

All built as React components with Remotion’s animation primitives.
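The core idea behind those animation primitives is that everything is a function of the current frame number. Remotion provides this via `useCurrentFrame()` and an `interpolate` helper; here is a hedged, self-contained TypeScript sketch of the concept (mirroring, not using, the real API):

```typescript
// Sketch of frame-driven animation in the style of Remotion's `interpolate`:
// map a frame number from an input range to an output range, clamped.
function interpolateFrame(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  const t = Math.min(1, Math.max(0, (frame - inStart) / (inEnd - inStart)));
  return outStart + t * (outEnd - outStart);
}

// Fade the intro avatar in over the first 30 frames (1 second at 30fps).
const opacityAtFrame15 = interpolateFrame(15, [0, 30], [0, 1]); // 0.5
```

In an actual Remotion component, you would read the frame with `useCurrentFrame()` and feed the result into `interpolate` to drive CSS properties like opacity or transform.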

The Reality

The process was… okay. It worked, but it didn’t match my expectations.

The biggest pain point was cursor animations. I spent about an hour trying to get the mouse cursor to click on specific buttons in the video. The agent would calculate positions, render the video, and the cursor would be off. Recalculate, re-render, still off. Over and over until it finally figured out the right scale factor between the preview and the actual video dimensions.
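The fix the agent eventually landed on boils down to one multiplication. A hypothetical sketch of the mapping (the dimensions here are assumptions, not the actual values from my project):

```typescript
// Hypothetical sketch: a click position measured in the preview pane must be
// scaled to the render resolution before the cursor is positioned.
type Point = { x: number; y: number };

function previewToRender(
  p: Point,
  previewWidth: number, // e.g. a 640px-wide preview pane (assumed)
  renderWidth: number,  // e.g. a 1920px-wide output video (assumed)
): Point {
  const scale = renderWidth / previewWidth;
  return { x: p.x * scale, y: p.y * scale };
}

// A button measured at (200, 110) in a 640px preview sits at (600, 330)
// in a 1920px render.
const target = previewToRender({ x: 200, y: 110 }, 640, 1920);
```

Measuring positions once in preview coordinates and scaling systematically would have saved most of that hour of trial and error.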

The animations themselves weren’t always smooth either: easing curves looked janky, timing felt off, and transitions needed multiple iterations to get right.
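Janky motion usually means linear interpolation where an eased curve belongs. Remotion ships an `Easing` module for this; the underlying idea can be sketched in plain TypeScript as a standard cubic ease-in-out:

```typescript
// Sketch: cubic ease-in-out, the kind of curve that makes motion start and
// end gently instead of moving at constant (linear) speed. t runs 0..1.
function easeInOutCubic(t: number): number {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// Same start and end points as linear motion, but the middle accelerates
// and decelerates smoothly.
const progressAtQuarter = easeInOutCubic(0.25); // 0.0625: barely moved yet
```

Feeding the eased `t` into the frame-to-value mapping, instead of the raw linear one, is typically the difference between “off” and “polished.”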

My Part in This

To be fair, I didn’t have a clear vision of what I wanted. The goal was too open-ended, which led to a lot of back-and-forth. I’d see something, have a new idea, change direction. That’s fine for iteration, but it made the process feel scattered.

The failures weren’t just the agent’s fault - they were a symptom of me not knowing exactly what “good” looked like until I saw it.

What I’ve Noticed

I’m aware of Replit’s new “animated videos” feature. From what I’ve seen, the output quality looks significantly better - smoother animations, more polished results. It seems like they’ve figured out something about the workflow that makes it more reliable.

But here’s the thing: Remotion is just React code. There are no real limits to what you can build. The constraint isn’t the tool - it’s figuring out the right prompts and workflow to get consistent results.

What I’d Do Differently

Next time:

  • Have a clear storyboard before starting
  • Provide reference videos for the style I want
  • Break down each scene into specific, measurable requirements
  • Test cursor positions with a systematic approach from the start

Share Your Workflow

If you’ve figured out a good workflow for generating videos with AI - whether with Remotion, Replit, or something else - I’d love to hear about it. My DMs are open on X and LinkedIn.

The potential is there. I just need to get better at unlocking it.
