

Showing posts with the label Sora-2

Maintaining Scene Continuity with Sora-2

We're working on a promo video for our smartphone-based CW practice app. We had great luck last week using the sora-2 app to create B-Roll footage for the Gladych Files. This week, I'm hoping to make an entire scripted trailer for the CW app using sora-2. There are issues, though. The first is that while the sora app has a storyboard feature (at least the one I can access this week), the API does not. It does, however, allow you to pass reference images between clips to bridge scenes. That's pretty cool, and it seems to work. I'm working on a Python script that waits for the bridging images between clips, and that's worked out OK. The real issue, so far, has been sora-2 moderation.

Profanity and Real-Person Filters

You cannot pass the image of a real person (even one sora-2 invented) between clips. Moderation stops it every time. (Moderation is what sora-2 calls its engine that decides whether it can make your video at all.) This is what set off a cascade of moderation...
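For readers curious what "waiting for the bridging images" looks like in practice, here is a rough Python sketch of that submit-and-poll loop. The endpoint paths, the input_reference field name, and the status strings are assumptions about the general shape of a video-generation REST API, not code from the post, so check them against OpenAI's current API reference before relying on them.

```python
import os
import time

import requests

API_BASE = "https://api.openai.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}


def create_clip(prompt: str, reference_image_path: str | None = None) -> str:
    """Submit a sora-2 clip request, optionally seeded with a bridging image."""
    data = {"model": "sora-2", "prompt": prompt}
    if reference_image_path:
        # Assumed field name for the reference image -- verify against the docs.
        with open(reference_image_path, "rb") as ref:
            resp = requests.post(f"{API_BASE}/videos", headers=HEADERS,
                                 data=data, files={"input_reference": ref})
    else:
        resp = requests.post(f"{API_BASE}/videos", headers=HEADERS, json=data)
    resp.raise_for_status()
    return resp.json()["id"]


def wait_for_clip(video_id: str, poll_seconds: int = 10) -> dict:
    """Poll the video job until it finishes; moderation rejections surface as failures."""
    while True:
        resp = requests.get(f"{API_BASE}/videos/{video_id}", headers=HEADERS)
        resp.raise_for_status()
        job = resp.json()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
```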

Video Production B-Roll Flow with GPT-5 and SORA-2

After gaining access to OpenAI's new Sora-2 video API, I rebuilt my entire B-roll workflow for The Gladych Files channel — automating prompt generation, stitching clips together, and producing cinematic sequences in minutes. In this post, I walk through the full pipeline, share lessons learned, and offer tips for anyone curious about AI-assisted video creation.
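One piece of that pipeline is stitching the generated clips together. Below is a minimal sketch of that step, assuming the clips have already been downloaded as MP4 files with matching encode settings and that ffmpeg is on the PATH; the function and file names are illustrative, not the post's actual workflow code.

```python
import pathlib
import subprocess
import tempfile


def stitch_clips(clip_paths: list[str], output_path: str) -> None:
    """Concatenate identically encoded MP4 clips, in order, into one sequence."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        for clip in clip_paths:
            # ffmpeg's concat demuxer expects one "file '<path>'" line per clip.
            listing.write(f"file '{pathlib.Path(clip).resolve()}'\n")
        list_path = listing.name
    try:
        subprocess.run(
            ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
             "-i", list_path, "-c", "copy", output_path],
            check=True,
        )
    finally:
        pathlib.Path(list_path).unlink(missing_ok=True)


# e.g. stitch_clips(["scene_01.mp4", "scene_02.mp4"], "broll_sequence.mp4")
```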