After gaining access to OpenAI’s new Sora-2 video API, I rebuilt my entire B-roll workflow for The Gladych Files channel — automating prompt generation, stitching clips together, and producing cinematic sequences in minutes. In this post, I walk through the full pipeline, share lessons learned, and offer tips for anyone curious about AI-assisted video creation.
I recently took a Video Production for the Web class at City College of San Francisco. I wish the tools I'm going to describe in this post had been available to me two weeks ago. Such is the pace of AI development :)
This week, I gained access to the Sora-2 video API through OpenAI. If you haven't heard of it, Sora-2 is an AI-enabled video generator in much the same way that DALL-E is an AI-enabled image generator. The kids and I recently started a new video channel where we're publishing videos about all things Michael Gladych.
I'm just starting to work with AI-enabled video production flows, so if you have any cool ideas or suggestions to make things better, please comment below. My flow looks like this:
- Write a script for the video
- Pass that script, along with other material, to a GPT-5 project I've set up to help with suggestions. I have, roughly speaking, a whole book's worth of material (that's why I started a video channel), as well as several books' worth of research material on the Gladychs in general, and on the people Michael Gladych reported on in the world of science in particular.
- Most of the time, the prompt for the project looks like this:
- "You are a YouTube videographer consultant. You specialize in fringe science and the paranormal. You're an expert who's watched every episode of the Why Files, the X Files, and Fringe. You've read works like "The Philadelphia Experiment" by William Moore and "The Roswell Incident" by Charles Berlitz. You are consulting with me on a project called the Gladych Files. I've provided you with a basic document for the first two episodes. I'll also be providing graphics."
- OpenAI responds with ideas to perk up the scripts where it makes sense, and generally makes good B-Roll suggestions for backgrounds and scene transitions. I then hand those suggestions to my production team and, ...
Wait! What!? I have no production team!?
Oh.
Well, there's that. So, I've developed the habit of asking GPT-5 to write instructions for Sora for the B-Roll footage. With the Sora-1 engine, my results were less than promising. With Sora-2, though? Wow! Things are different.
First, the engine does well enough that I feel comfortable automating it through its associated API. I ask GPT-5 to output all the B-Roll descriptions in a data structure. As of this morning, I have a script, scaffolded using GPT-5, that I can paste those descriptions into. Several minutes after pasting, I had my nine B-Roll videos. Eight of them are shown in the video below.
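To give you a feel for the shape of that script, here's a minimal sketch. The descriptions, slugs, and prompts below are illustrative placeholders, not my actual episode material, and the `client.videos.create` method name is my reading of the video API docs via the official `openai` SDK — double-check the current reference before copying this.

```python
# In real use: `pip install openai`, then `from openai import OpenAI; client = OpenAI()`.
# The client stays a parameter here so the payload-building step is easy to test.

# B-Roll descriptions as GPT-5 hands them back: a plain list of dicts.
BROLL = [
    {"slug": "radar-room",   "prompt": "1940s radar room, green phosphor glow, slow dolly-in"},
    {"slug": "night-canopy", "prompt": "fighter cockpit at night, moonlit clouds, film grain"},
]

def build_payloads(descriptions, model="sora-2", seconds="8", size="1280x720"):
    """Turn GPT-5's B-Roll descriptions into one request payload per clip."""
    return [
        {"model": model, "prompt": d["prompt"], "seconds": seconds, "size": size}
        for d in descriptions
    ]

def submit_all(client, descriptions):
    """Fire off one generation job per clip and collect the job IDs."""
    jobs = []
    for d, payload in zip(descriptions, build_payloads(descriptions)):
        video = client.videos.create(**payload)  # assumed SDK method; verify against the docs
        jobs.append({"slug": d["slug"], "id": video.id})
    return jobs
```

From there it's a matter of polling each job until it finishes and downloading the result.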
Programming Aside
The documentation for the Sora API outlines how to prompt for videos, how to download thumbnails and sprite sheets, and how to remix videos with Sora. (Remixing is a video-iteration technique: you hand the API the ID of a video it has already created, along with a prompt asking for the video to be modified in some way. The smaller or more atomic the requested modification, the better your chance of getting something you like.) Once again, experienced programmers might expect to have better remixing experiences than most.
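In code, a remix pass looks roughly like this. The `client.videos.remix` method name is an assumption on my part, taken from how the remix endpoint reads in the docs, and chaining one atomic tweak per call is just how I'd structure it, not a documented requirement.

```python
def remix_video(client, video_id, change):
    """Ask Sora to tweak an existing clip. Keep each change small and atomic."""
    # Assumed SDK method wrapping the remix endpoint; verify against current docs.
    return client.videos.remix(video_id=video_id, prompt=change)

def remix_chain(client, video_id, changes):
    """Apply a list of atomic tweaks one at a time, each building on the previous clip."""
    for change in changes:
        video = remix_video(client, video_id, change)
        video_id = video.id  # the next tweak starts from the newly created clip
    return video_id
```

So instead of one prompt like "make it night, add rain, and slow the dolly," you'd pass `["make it night", "add light rain", "slow the dolly-in"]` and let each remix build on the last.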
As a final important note, be sure to download your generated videos within 60 minutes. That's how long OpenAI stores them before throwing them out.
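My script grabs each clip as soon as it finishes, and it doesn't hurt to sanity-check a job's age first. A sketch, assuming the job object carries a Unix `created_at` timestamp and that `client.videos.download_content` is the download method (both assumptions — check the docs):

```python
import time

RETENTION_SECONDS = 60 * 60  # OpenAI keeps generated videos for about an hour

def is_expired(created_at, now=None, retention=RETENTION_SECONDS):
    """True if a generated video has likely already been purged server-side."""
    now = time.time() if now is None else now
    return (now - created_at) > retention

def download_clip(client, video_id, dest_path):
    """Pull the finished clip to disk before the retention window closes."""
    content = client.videos.download_content(video_id)  # assumed SDK method
    with open(dest_path, "wb") as f:
        f.write(content.read())  # assumed: response exposes a readable stream
```

Downloading immediately after each job completes makes the window a non-issue; the age check is just a belt-and-suspenders guard for long batch runs.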
Prompting Tips
New Channel Plug
If you’re experimenting with AI video generation, or you’re just curious about WWII history, fringe science, and Michael Gladych’s extraordinary life, subscribe to our new channel, The Gladych Files. And if you’ve developed your own Sora or GPT-5 production tricks, please share them in the comments — I’d love to incorporate them into future posts.
