Debug notes from getting the AI help feature of the free ham radio exams to work today.
Grabbing a text answer from OpenAI still works:
That's from this method:
async function retrieveTextWithFileSearch({ system, user }) {
  const vsId = localStorage.getItem('vector_store_id');
  if (!vsId) throw new Error('No vector_store_id found.');
  // answText is a page-level string holding the running context; append the new user turn
  answText = answText + " " + user;
  const resp = await openai('/responses', {
    body: {
      model: 'gpt-4.1-mini',
      input: [
        { role: 'system', content: system },
        { role: 'user', content: answText }
      ],
      tools: [{ type: 'file_search', vector_store_ids: [vsId] }]
    }
  });
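For the record, pulling the answer text back out of `resp` isn't shown above. Here's a minimal sketch of what that can look like, assuming the documented raw REST shape of a Responses result (`extractText` is my name, not part of the API):

```javascript
// Hypothetical helper (not from the original code): walk a raw /v1/responses
// result and concatenate the generated text. Assumes the documented REST shape:
// resp.output is an array of items; "message" items carry a content array whose
// "output_text" parts hold the text.
function extractText(resp) {
  if (!resp || !Array.isArray(resp.output)) return '';
  return resp.output
    .filter(item => item.type === 'message')
    .flatMap(item => item.content || [])
    .filter(part => part.type === 'output_text')
    .map(part => part.text)
    .join('');
}

// Example with a stubbed response:
const stub = {
  output: [
    { type: 'file_search_call', status: 'completed' },
    { type: 'message', content: [{ type: 'output_text', text: 'Hello from file search.' }] }
  ]
};
console.log(extractText(stub)); // "Hello from file search."
```

The file_search call items ride along in the same output array, so filtering on `type: 'message'` skips them.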
With the agent flow, it doesn't work:
async function retrieveTextWithAgent({ system, user }) {
  // 0) Make sure we have a vector store (you already do this during init)
  const vectorStoreId = localStorage.getItem('vector_store_id');
  if (!vectorStoreId) throw new Error('No vector_store_id found.');

  // 1) Ensure an Agent exists that uses that store
  const agent_id = await ensureAgent(vectorStoreId, {
    name: "Ham Exam Tutor",
    model: "gpt-4.1-mini",
    instructions: "Answer strictly using File Search when possible. Be concise and cite the question id.",
    max_tool_calls: 1,
    parallel_tool_calls: false
  });
I'm going to assume we conked out somewhere near line 304, in the same retrieveTextWithAgent code shown above.
9:53 AM
Actually, 254 is a better choice. And even better still, line 183.
// ===== Agents REST (direct) =====
const AGENTS_BASE = "https://api.openai.com/v1";

// Reuse your existing getAPIKey()
async function openaiAgents(path, { method = 'POST', body } = {}) {
  const apiKey = getAPIKey();
  const res = await fetch(`${AGENTS_BASE}${path}`, {
    method,
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: body ? JSON.stringify(body) : undefined
  });
  const text = await res.text();
  if (!res.ok) throw new Error(`OpenAI Agents ${path} failed: ${res.status} ${res.statusText} :: ${text}`);
  return text ? JSON.parse(text) : {};
}
Per normal, the vector store non-agent version worked, so let's contrast the two.
async function openai(path, { method = 'POST', headers = {}, body } = {}) {
  const apiKey = getAPIKey();
  if (!apiKey) throw new Error('Missing OpenAI API key.');
  const isForm = (body instanceof FormData);
  const res = await fetch(`${BASE}${path}`, {
    method,
    headers: isForm
      ? { 'Authorization': `Bearer ${apiKey}`, ...headers }
      : { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json', ...headers },
    body: isForm ? body : (body ? JSON.stringify(body) : undefined),
  });
  const text = await res.text();
  if (!res.ok) throw new Error(`OpenAI ${path} failed: ${res.status} ${res.statusText} :: ${text}`);
  return text ? JSON.parse(text) : {};
}
The immediately obvious difference is the FormData handling around the headers, plus the missing API-key check. I'll put that into the agent code and see how it does.
I honestly don't understand the intent of the code difference yet, so let's just see if the page will load to start. By the way, here are the changes I made.
// Reuse your existing getAPIKey()
async function openaiAgents(path, { method = 'POST', body } = {}) {
  const apiKey = getAPIKey();
  if (!apiKey) throw new Error('Missing OpenAI API key.');
  const isForm = (body instanceof FormData);
  const res = await fetch(`${AGENTS_BASE}${path}`, {
    method,
    headers: isForm
      ? { 'Authorization': `Bearer ${apiKey}`, ...headers }
      : { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json', ...headers },
    body: isForm ? body : (body ? JSON.stringify(body) : undefined),
  });
  const text = await res.text();
  if (!res.ok) throw new Error(`OpenAI Agents ${path} failed: ${res.status} ${res.statusText} :: ${text}`);
  return text ? JSON.parse(text) : {};
}
I also just noticed that AGENTS_BASE is the same as BASE. That might be part of the issue. We'll find out.
// ===== Agents REST (direct) =====
const AGENTS_BASE = "https://api.openai.com/v1";
OK, same failure with code change.
Also, I noticed that there were three errors, not one, including a CORS error that does not happen with the non-agent code. Darn it. 10:06 AM PST
Here's the entire error stack I'm passing to GPT-5 to see what's up
But first, one other small change
async function openaiAgents(path, { method = 'POST', headers = {}, body } = {}) {
No dice. Same errors.
Is the path the issue?
Text version path is
const vs = await openai('/vector_stores', { body: { name: 'Ham Exam Pool (User-Owned)' } });
The agent version path is, in fact, very different:
const agent = await openaiAgents('/agents', {
  body: {
    name,
    model,
    instructions,
    tools: [
      {
        type: "file_search",
        vector_store_ids: [vectorStoreId],
        max_num_results: 20
      }
    ],
    max_tool_calls,
    parallel_tool_calls
  }
});
Taking a break after understanding the CORS issues. 10:22 AM PST
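For the record, the tell that finally made the CORS issue legible: a cross-origin request the browser blocks rejects inside fetch() with a TypeError before any HTTP response exists, whereas an ordinary 4xx/5xx resolves normally. A small sketch of that distinction (`classifyFetchError` is my name, not from the page):

```javascript
// Hypothetical diagnostic helper: fetch() rejects with a TypeError and no
// response object when the browser blocks the request (e.g. a failed CORS
// preflight); plain HTTP error statuses never cause a rejection at all.
function classifyFetchError(err) {
  return (err instanceof TypeError) ? 'cors-or-network' : 'other';
}

console.log(classifyFetchError(new TypeError('Failed to fetch'))); // "cors-or-network"
console.log(classifyFetchError(new Error('HTTP 404')));            // "other"
```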
Back at it trying to figure out how to work through a server at 10:25. 10:26 AM PST back out.
And here we are back at the original issue again. Here's the conundrum: I want these exams to be serverless. I'd also like for them to be able to maintain context for each user. The Responses API can't do that, though; I'd have to pass back context on each call, which will incur fees. Meanwhile, the Agents API endpoint won't run in a browser, apparently no matter how hard I might try. Caveat: I can use the dev version of Chrome, but that's unsafe on a wider basis, so I shouldn't promote that idea for end users, and so....
I can definitely begin to see how to maintain a context during a single exam. I can swap contexts out per question id. When I see the user asking about a new id (the idea was to have chats attached to questions anyway), I can then reference that id's context, or create a new one. So, that's doable: a JSON context store in the web page's JavaScript memory. It starts to feel a little icky when I think of storing it in URL state; that's a bit much. I could encode it, place it in a textbox on the page, and ask the user to reload it on every session. That keeps the app serverless, which I like, but adds a task for the user, which I do not like. At any rate, the sketched-out architecture would look like this
I'd also like to ask the LLM to summarize each context on save so that we don't have to send in (or store) as much stuff. This seems pretty good.
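The sketched-out store could look something like this (all the names here are mine, and the summarizer is stubbed; in the real thing it would be another /responses call):

```javascript
// Hypothetical per-question context store (my sketch, not the final code).
// Each question id maps to a list of turns; saveTurn() collapses older turns
// into a single summary turn so less text is re-sent (or stored) per call.
function createContextStore(summarize, keepRecent = 4) {
  const threads = {}; // qid -> [{ role, content }, ...]
  return {
    get(qid) {
      return threads[qid] || (threads[qid] = []);
    },
    saveTurn(qid, role, content) {
      const turns = this.get(qid);
      turns.push({ role, content });
      // Collapse everything but the most recent turns into one summary turn.
      if (turns.length > keepRecent + 1) {
        const stale = turns.splice(0, turns.length - keepRecent);
        turns.unshift({ role: 'system', content: 'Summary: ' + summarize(stale) });
      }
    }
  };
}

// Stub summarizer for illustration; the real one would ask the model.
const store = createContextStore(turns => `${turns.length} earlier turns elided`);
store.saveTurn(591, 'user', 'Please help me with id 591.');
store.saveTurn(591, 'assistant', 'It concerns uncontrolled MPE limits.');
```

Swapping contexts per question id is then just a matter of calling `get(qid)` when the user clicks help on a different question.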
Looks like my saved state idea might have been inspired by answers from GPT-5 a few weeks ago.
Tips to keep costs down:
- Summarize stale context: when the thread gets long, replace older turns with a one-paragraph summary you generate via the model, then keep only recent turns verbatim.
- Rely on File Search for facts: keep chat history short and fetch facts from your vector store instead of re-sending big documents.
- Prompt caching helps: OpenAI automatically caches repeated prefixes (e.g., your long system message) to reduce cost/latency, but you still need to send the context in each call.

If you want built-in state: use the Agents SDK, which provides “agents/threads/runs” abstractions and persists conversation state for you; you interact at a higher level, and it handles threading/history under the hood.

Bottom line: store your chat history yourself (or use the Agents SDK) and include the relevant prior turns in each /v1/responses call.
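In practice that bottom line means each call assembles its input from the stored turns, something like this sketch (`buildInput` is my name, not an API):

```javascript
// Hypothetical sketch: build the input array for a /v1/responses call from the
// system prompt, the prior turns saved for this question id, and the new user
// message. Keeping the system prompt as a stable prefix lets prompt caching help.
function buildInput(systemPrompt, priorTurns, newUserText) {
  return [
    { role: 'system', content: systemPrompt },
    ...priorTurns, // replayed verbatim (or as a summary turn)
    { role: 'user', content: newUserText }
  ];
}

const input = buildInput(
  'Answer strictly from File Search; cite the question id.',
  [{ role: 'user', content: 'Please help me with id 591.' },
   { role: 'assistant', content: 'It concerns uncontrolled MPE limits.' }],
  'What does MPE stand for?'
);
```

The assembled array then goes in as `body.input` on the `openai('/responses', ...)` call.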
Caching seems kinda cool, so let's go ahead and try to keep this thing serverless.
Step 1: Let's add a chat button and a hidden chat box to each question. We'll store the system prompt and the question id and pass them along with every response.
Here's my prompt for Step 1
Let's try to use the original html for the exam and maintain state locally. I've attached the latest version of the exam code below. Please add an 'AI help' button to each question, and a panel based on the attached picture (basically, make the height user-adjustable once it's visible). On 'AI help' button click, the following should happen:
- Copy the question id number and append "Please help me with id " + <idnumber> to the system prompt that you'll find in a string named system_prompt
- Expose the height-adjustable AI answer panel. The code will display all questions and AI answers there. The first question is "Please help me with id " + <idnumber>
- Right below that, expose a text input box to the left of the 'AI help' button. The user will type follow-up questions there, if any.
- When they click 'AI help' again, append the following: system_prompt + "Please help me with id " + <idnumber> + any text in the answer text box + the new question.
- On reply, append the new question and the answer to the end of the text in the answer text box.
- Also store the new answer text box in a list of strings keyed by question id. We'll use that to restore context if necessary.
Here's the exam html file. Please also go ahead and wire in the OpenAI responses plumbing we've already been working with. No need for the audio API. I've included the latest working version of the code below. Also ignore the agent API code in that example.
There are issues with the first try. It seems the test code itself got munged, and also the help window doesn't expand.
Here's the answer I got back though!
Q: Please help me with id 591.
A: The question ID 591 asks: "When evaluating RF exposure levels from your station at a neighbor’s home, what must you do?" The correct answer is: **"Ensure signals from your station are less than the uncontrolled maximum permissible exposure (MPE) limits."** This means when you check the RF exposure (radio frequency electromagnetic fields) from your ham radio station at a neighbor’s location, you should make sure the exposure level is below the limits set for uncontrolled environments—that is, areas where people might not be aware of the RF source, like your neighbor's home. The FCC has established these "uncontrolled" MPE limits as safe exposure levels for the general public. So in simple terms: you need to check that your signals aren't too strong at your neighbor's place according to the public exposure safety rules. Do you want me to explain more about what MPE limits are or how to measure or estimate RF exposure? Or maybe about why this is important? Let me know!
I asked for a fix and I'm replacing
<!-- Global, resizable AI Answers panel -->
<div id="aiPanel" class="ai-panel">
  <div class="ai-panel-header">AI Answers <small id="aiPanelSub" class="muted">context-aware help per question</small></div>
  <div id="aiLog" class="ai-log"></div>
</div>
with
<!-- Global, resizable AI Answers panel (hidden by default) -->
<div id="aiPanel" class="ai-panel" hidden>
  <div class="ai-panel-header">
    AI Answers
    <small id="aiPanelSub" class="muted">context-aware help per question</small>
  </div>
  <textarea id="aiLog" class="ai-log" readonly></textarea>
</div>
In the CSS, I'm going from
/* --- AI help UI --- */
.ai-help-row { display:flex; gap:.5rem; align-items:center; margin-top:.5rem; }
.ai-help-input { flex:1; padding:.5rem .6rem; border:1px solid #ccc; border-radius:.5rem; }
.ai-help-btn { padding:.5rem .75rem; border:1px solid #ccc; border-radius:.5rem; background:#0066cc; color:#fff; cursor:pointer; }
.ai-help-btn:hover { background:#0055aa; }
.ai-panel {
  display:none; margin-top:16px; border:1px solid #ddd; border-radius:10px;
  background:#fafafa; padding:10px; resize:vertical; overflow:auto;
  min-height:140px; max-height:65vh;
}
.ai-panel-header { font-weight:600; margin-bottom:6px; display:flex; gap:10px; align-items:center; }
.ai-log { white-space:pre-wrap; font-family: ui-monospace, SFMono-Regular, Menlo, Consolas, monospace; }
.ai-log .qline { color:#0b5; }
.ai-log .aline { color:#333; }
to
/* --- AI help UI --- */
.ai-help-row { display:flex; gap:.5rem; align-items:center; margin-top:.5rem; }
.ai-help-input { flex:1; min-width:180px; padding:.5rem .6rem; border:1px solid #ccc; border-radius:.5rem; }
.ai-help-btn { padding:.5rem .75rem; border:1px solid #ccc; border-radius:.5rem; background:#0066cc; color:#fff; cursor:pointer; }
.ai-help-btn:hover { background:#0055aa; }
.ai-panel {
  margin-top:16px; border:1px solid #ddd; border-radius:10px;
  background:#fafafa; padding:10px;
}
.ai-panel-header { font-weight:600; margin-bottom:6px; display:flex; gap:10px; align-items:center; }
.ai-log {
  width:100%;
  height:180px;      /* initial height */
  min-height:140px;
  max-height:65vh;
  resize:vertical;   /* user can drag to resize */
  overflow:auto;
  white-space:pre-wrap;
  font-family: ui-monospace, SFMono-Regular, Menlo, Consolas, monospace;
}
The last change is to remove this
// If we already have panel text for this qid, show panel and seed it
if (AI_THREADS[q.id]?.panel) {
  aiPanel.style.display = 'block';
  aiLog.textContent = AI_THREADS[q.id].panel;
}
11:53 AM PST Still no dice. Trying one more time with the LLM
New:
<!-- Global AI panel (hidden by default) -->
<div id="aiPanel" class="ai-panel" hidden>
  <div class="ai-panel-header">
    <strong>AI Answers</strong>
    <small id="aiPanelSub" class="muted">context-aware help per question</small>
  </div>
  <textarea id="aiLog" class="ai-log" rows="8" readonly></textarea>
</div>
also new:
/* --- AI help UI --- */
.ai-help-row { display:flex; gap:.5rem; align-items:center; margin-top:.5rem; }
.ai-help-input {
  flex:1; min-width:200px; padding:.5rem .6rem;
  border:1px solid #ccc; border-radius:.5rem;
}
.ai-help-btn {
  padding:.5rem .75rem; border:1px solid #ccc; border-radius:.5rem;
  background:#0066cc; color:#fff; cursor:pointer;
}
.ai-help-btn:hover { background:#0055aa; }
.ai-panel {
  margin-top:16px; border:1px solid #ddd; border-radius:10px;
  background:#fafafa; padding:10px;
}
.ai-panel-header { display:flex; gap:10px; align-items:center; margin-bottom:6px; }
.ai-log {
  width:100%;
  height:180px;      /* initial height */
  min-height:140px;
  max-height:65vh;
  resize:vertical;   /* user can drag to resize */
  overflow:auto;
  white-space:pre-wrap;
  font-family: ui-monospace, SFMono-Regular, Menlo, Consolas, monospace;
}
This
/* ---------- Inject AI Help controls ---------- */
const aiRow = document.createElement('div');
aiRow.className = 'ai-help-row';
aiRow.innerHTML = `
  <input class="ai-help-input" type="text" placeholder="Follow-up for Q#${q.id} (optional)" />
  <button class="ai-help-btn" type="button">AI help</button>
`;
card.appendChild(aiRow);

const followupInput = aiRow.querySelector('.ai-help-input');
const helpBtn = aiRow.querySelector('.ai-help-btn');

helpBtn.addEventListener('click', async () => {
  try {
    aiPanel.style.display = 'block';

    // Build follow-up per spec
    const userText = followupInput.value.trim();
    const composed = userText
      ? `Please help me with id ${q.id}. ${userText}`
      : `Please help me with id ${q.id}.`;

    const { userMsg, answer, panelText } = await (async () => {
      const r = await askAI({ qid: q.id, followup: composed });
      return { userMsg: r.userMsg, answer: r.answer, panelText: r.panelText };
    })();

    // Update global panel
    aiLog.textContent = panelText;

    // Append Q/A into the input (scratchpad)
    followupInput.value = (followupInput.value ? followupInput.value + "\n" : "") +
      `Q: ${userMsg}\nA: ${answer}\n`;
  } catch (err) {
    console.error(err);
    alert(err.message || 'AI help failed.');
  }
});
/* ---------- end AI Help controls ---------- */
goes to this
// --- AI help row (follow-up input + button) ---
const aiRow = document.createElement('div');
aiRow.className = 'ai-help-row';

// follow-up input visible to the left of the button
const followupInput = document.createElement('input');
followupInput.type = 'text';
followupInput.placeholder = 'Ask a follow-up (optional)…';
followupInput.className = 'ai-help-input';

// AI help button
const helpBtn = document.createElement('button');
helpBtn.type = 'button';
helpBtn.className = 'ai-help-btn';
helpBtn.textContent = 'AI help';

// handle clicks
helpBtn.addEventListener('click', async () => {
  try {
    // show the global AI panel on first use
    aiPanel.hidden = false;

    // build the prompt per spec
    const userText = followupInput.value.trim();
    const composed = userText
      ? `Please help me with id ${q.id}. ${userText}`
      : `Please help me with id ${q.id}.`;

    // call your existing file-search plumbing
    const text = await retrieveTextWithFileSearch({
      system: (window.system_prompt || 'Use the File Search knowledge base. If no evidence is found, say NOT_FOUND.'),
      user: composed
    });

    // append Q/A to the resizable panel (multiline)
    const addBlock = `Q: ${composed}\nA: ${text}\n\n`;
    aiLog.value = (aiLog.value || '') + addBlock;

    // persist per-question thread text for context restore
    if (!window.AI_THREADS) window.AI_THREADS = {};
    if (!window.AI_THREADS[q.id]) window.AI_THREADS[q.id] = { panel: '' };
    window.AI_THREADS[q.id].panel += addBlock;

    // keep the follow-up input visible & empty for the next question
    followupInput.value = '';
  } catch (err) {
    console.error(err);
    alert(err.message || 'AI help failed.');
  }
});

aiRow.appendChild(followupInput);
aiRow.appendChild(helpBtn);
card.appendChild(aiRow);
// --- end AI help row ---
And! And it's working!!! 2:23 PM PST.
4:05 PM PST
Swinging for the fences with md output
And finally! 5:30 PM PST md output! With math! (The math is what took all the time.)
And finally, even better math!!!
If you're currently studying, please don't sweat the above. You need not use either integral or differential calculus for the exam; it was just an easy way to test the math formatting of the whole arrangement.
More tomorrow and possibly a demo.