I put OpenAI’s gpt-5-nano and gpt-5.1 head-to-head on my psy-ops article scorer to see what you really get for the extra spend. Along the way I ran into pricing surprises, wild variance, and a reminder that ChatGPT’s shiny new memory feature can quietly bend your evals if you’re not careful.

A post on LinkedIn a few days back suggested using Small Language Models (SLMs) instead of LLMs for repetitive tasks. That seemed like a great idea in some regards, but I was curious how it would apply to apps intended to perform language analysis. Luckily, I have the psy-ops app up and running. Also? At the moment it uses a close-to-an-SLM model, gpt-5-nano, thanks to pricing decisions. I used it as a test vehicle to look at the difference between gpt-5-nano and the full-featured gpt-5.1.

The testing framework I used: starting from this article, I first did three separate analyses with gpt-5-nano, and then three more with gpt-5.1. I then used gpt-5.1...
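Since I saw wild variance across runs, the comparison boils down to summarizing repeated scores per model. Here's a minimal sketch of that aggregation step; the model names come from the runs above, but the score values are made up for illustration, and the actual scoring call to the OpenAI API is left out.

```python
import statistics

def compare_models(scores_by_model: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Summarize repeated runs per model: mean and sample variance.

    In practice each list would hold scores returned by the article
    scorer for one model across several independent runs.
    """
    return {
        model: {
            "mean": statistics.mean(scores),
            "variance": statistics.variance(scores),
        }
        for model, scores in scores_by_model.items()
    }

# Illustrative (made-up) scores from three runs per model:
summary = compare_models({
    "gpt-5-nano": [62.0, 74.0, 55.0],
    "gpt-5.1": [70.0, 71.0, 69.0],
})
```

Sample variance with only three runs is a rough signal, but it's enough to see whether the cheaper model is noticeably noisier run-to-run.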
We're working on a promo video for our smartphone-based CW practice app. We had great luck last week using the sora-2 app to create B-roll footage for the Gladych Files. This week, I'm hoping to make an entire scripted trailer for the CW app using sora-2.

There are issues, though. The first is that while the sora app has a storyboard feature (at least the one I can access this week), the API does not. It does, however, allow you to pass in reference images to bridge scenes. That's pretty cool, and seems to work. I'm working on a Python script to wait for bridging images between clips. That's worked out OK. The real issue, so far, has been sora-2 moderation.

Profanity and Real-Person Filters

You cannot pass the image of a real person (even one sora-2 invented) between clips. Moderation stops it every time. (Moderation is what sora-2 calls its engine that decides whether it's able to make your video at all.) This is what set off a cascade of moderatio...
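The "wait for bridging images" part of the script is just a polling loop. Here's a minimal sketch of one; the status strings and the `get_status` callable are hypothetical stand-ins, since the real sora-2 API may report job state differently.

```python
import time

def wait_for_job(get_status, timeout_s: float = 600.0, poll_s: float = 1.0) -> str:
    """Poll a generation job until it finishes or fails.

    `get_status` is a caller-supplied function returning one of
    "queued", "in_progress", "completed", or "failed" -- a hypothetical
    shape, not the actual sora-2 response format.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_s)  # back off between polls
    raise TimeoutError("job did not finish before the deadline")

# Usage with a fake status source that completes on the third poll:
states = iter(["queued", "in_progress", "completed"])
result = wait_for_job(lambda: next(states), poll_s=0.0)
# result == "completed"
```

Keeping the status check behind a callable makes the loop easy to test and easy to swap in the real API call once the moderation issues are sorted out.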