Early Sunday morning, the code generated by ChatGPT became noticeably clunky. Why!?
I figured it out when this message arrived:
Ah yes! The LLM that berates me a bit for the code it wrote :) At some point after the release of GPT5, OpenAI started defaulting my chats back to 4o. For me, GPT5 is better. I switched back over to GPT5, and a bit later the code coming out of the LLM was just working; when it didn't, I got messages like this.

By the way, at that point GPT5 was fixing code that 4o had produced when I had specifically asked for:
"Also save the vh and vw settings of the subpanel in subpanelsize and resize it to its original extent if the subpanelsize arg is present in a url
"
Anyway! GPT5 rocks for me. If you're using the browser UI instead of the API, check to see what model you're using before you go too far down the wrong path.