Do LLMs always generate the correct code given their inputs? Nope. You can use the same LLM to verify its own outputs, though, if you're clever about it. Here's an example from a recent HF interference model I asked GPT-5 to build for me after a Parks on the Air (POTA) ham radio outing in downtown San Francisco.
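The trick amounts to handing the model its own output back along with a prompt asking it to check the geometry. I did it in the chat window, but if you wanted to script the same kind of self-check, a minimal sketch using the OpenAI Python client might look like the following. The model id, file name, and prompt here are placeholders rather than what I actually used.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The CZML the model generated earlier; the file name is a placeholder.
with open("model.czml") as f:
    czml_text = f.read()

# Ask the same model to audit its own geographic output.
response = client.chat.completions.create(
    model="gpt-5",  # placeholder model id
    messages=[
        {
            "role": "system",
            "content": "You check geographic models for internal consistency.",
        },
        {
            "role": "user",
            "content": (
                "Here is CZML you generated for buildings near One Maritime Plaza "
                "in San Francisco. List any points that don't sit where the named "
                "building actually is, along with corrected coordinates.\n\n" + czml_text
            ),
        },
    ],
)
print(response.choices[0].message.content)
```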
View from the QTH
That one little skyscraper led to a lot of fun HF propagation analysis using GPT-5. I described the area around my transmit site at One Maritime Plaza to GPT-5 and got some really interesting interference patterns based on the surrounding buildings. As I went, I double-checked GPT-5's output to make sure the buildings were where I'd said they were. I had to fix a few obvious mistakes, but other than that, everything looked OK.
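I don't know exactly how GPT-5 handles each building internally, but for a feel of the kind of calculation involved, here's a minimal single knife-edge diffraction estimate using the standard ITU-R P.526 approximation. The frequency, edge height, and distances below are made-up placeholders, not values from the actual model.

```python
import math

def knife_edge_loss_db(freq_mhz, h_m, d1_m, d2_m):
    """Approximate extra path loss (dB) from a single knife-edge obstacle.

    freq_mhz : operating frequency in MHz
    h_m      : height of the obstacle edge above the direct TX-RX line (m)
    d1_m     : distance from the transmitter to the obstacle (m)
    d2_m     : distance from the obstacle to the receiver (m)
    """
    wavelength = 299.792458 / freq_mhz  # wavelength in metres
    # Fresnel-Kirchhoff diffraction parameter
    v = h_m * math.sqrt(2.0 * (d1_m + d2_m) / (wavelength * d1_m * d2_m))
    if v <= -0.78:
        return 0.0  # edge well below the path: negligible extra loss
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Placeholder example: a 20 m edge 100 m from the antenna on a 14 MHz path to a station 10 km out
print(f"{knife_edge_loss_db(14.0, 20.0, 100.0, 10_000.0):.1f} dB")
```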
Then, last night, it occurred to me that I could ask GPT-5 to check its geographic model for me:
This is the resulting CZML GPT-5 made of the points I'd asked it to model.
Clearly, I had a few things to straighten out.
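CZML is just JSON, so the points can also be checked outside the chat by comparing them against the coordinates I'd supplied. Here's a minimal sketch; the file name and reference corner below are placeholders, not the real values from my model.

```python
import json
import math

# Placeholder reference corner: (longitude, latitude) I supplied for a building
REFERENCE_CORNERS = {
    "gateway_vista_west_ne": (-122.40, 37.80),
}

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two (lon, lat) points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

with open("model.czml") as f:
    packets = json.load(f)  # CZML is a JSON array of packets

for packet in packets:
    coords = packet.get("position", {}).get("cartographicDegrees")
    if not coords:
        continue
    lon, lat = coords[0], coords[1]  # [lon, lat, height] for a static point
    ref = REFERENCE_CORNERS.get(packet.get("id", ""))
    if ref:
        error_m = haversine_m(lon, lat, *ref)
        print(f"{packet['id']}: {error_m:.1f} m from where I said it was")
```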
After clarifying the building's coordinates to the LLM:
I double-checked, just in case this was something in the modeling code I didn't understand:
Interestingly, GPT-5 didn't confess its error this time. I carried on, asking it to make the fixes:
and was rewarded with this mostly correct version of Gateway Vista West.
At which point, I asked GPT-5 to put the entire model back in. Here's where I am now. I still need to ask why the walls of the Alcoa Building are as thick with points as they are. I believe (and hope) that has to do with material modeling for concrete and rebar walls based on a journal article GPT-5 found, but I'll keep you posted.