Posts

Showing posts with the label generative ai

Using AIs to Build AIs ChatGPT5 -> Morse Code AI

This week's AI project is to create an AI Morse code decoder. I've been working with the new ChatGPT 5 model since late last week. I've asked a few different models if they could understand Morse code. ChatGPT 5 couldn't. Gemini couldn't. That's when it occurred to me that this was probably the perfect time to learn how to use TensorFlow to make an AI. So, I changed my question. I asked ChatGPT 5, "If I wanted to set up a model that learned Morse code using Google's Tensor engines, could you describe the entire process and output the code for me?" To which it promptly (what an awesome pun!) replied, "Heck yes—that’s a super fun project. Here’s a complete, practical path to a TPU-accelerated Morse code recognizer using TensorFlow + CTC (Connectionist Temporal Classification). It generates synthetic Morse audio (with realistic timing/noise/tempo wobble), trains a small CRNN on log-mel spectrograms, and decodes with greedy CTC. You can run it ...

Learning Python Parallel with GenAI

I've been looking for an excuse to try parallel processing with Python for a few months, and yesterday the FBI provided one. They released a collection of records related to the assassination of Reverend Dr. Martin Luther King Jr. It's easy enough to get a count of the pdf files released from the announcement page. From the information about the files released to the National Archives, I was able to quickly read that there were 6,301 files. A brief internet search indicated that the files have not yet been released in any kind of compressed container, like a zip file. I also tested that the search box only searches the pdf file names, not their contents. The immediate next question was how many bytes of disk space do all the pdfs consume? I asked ChatGPT o4-mini-high to write a Python script to determine the size of all the files combined. The script was unable to determine the size of each file by looking at the HEAD of the URL for each file, so it wound up having to use GET req...