- _synthesize: short-circuit on any single-agent report (avoids an extra
Ollama call that can time out); wrap the multi-agent LLM call in try/except
- _update_memory: catch exceptions so DB/memory failures don't kill reply
- _log_directive_start: use 0 instead of NULL for channel_id (NOT NULL col)
- create_expense: drop 'description' field (not valid on hr.expense in Odoo 18)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
AGENT_ACCESS_GROUPS uses XML IDs (e.g. hr_expense.group_hr_expense_user)
but the check compared them against res.groups.full_name strings,
which never matched, denying every user access to all restricted agents.
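A sketch of the fix, assuming the check lives in a helper like this (names
are hypothetical); res.users.has_group accepts the XML ID directly, so no
full_name comparison is needed:

```python
AGENT_ACCESS_GROUPS = {
    "expenses": "hr_expense.group_hr_expense_user",
}

def user_may_use(env, agent_name):
    xml_id = AGENT_ACCESS_GROUPS.get(agent_name)
    if not xml_id:
        return True  # unrestricted agent
    # has_group takes the XML ID itself; comparing XML IDs against
    # res.groups full_name strings can never match.
    return env.user.has_group(xml_id)
```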
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Discuss bot now reads ir.attachment from incoming messages; file-only
messages no longer silently dropped
- ZIP files are described (contents listed) and the bot asks a clarifying
question before acting; the user's follow-up reply looks back for pending
attachments so files don't need to be re-uploaded
- receipt_parser: extracts text from ZIP (recursive), JPG/PNG/etc (OCR),
PDF (pdfplumber), HTML, TXT
- expenses_agent: full rewrite fixing broken method signatures; adds
create_expense_sheet / create_expense / attach_receipt flow driven by
LLM receipt parsing (Ollama, HIPAA-locked)
- master_agent: extra_context threads receipts + user_id into directives
- FastAPI /upload multipart endpoint; registered in main.py
- Odoo /ai/upload controller proxies files to agent service
- ab_ai_bot: dispatch_message_with_files() for multipart uploads
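The ZIP/text paths can be sketched like this (a simplified stand-in for
receipt_parser; the real module also routes PDFs to pdfplumber and images
to OCR):

```python
import io
import zipfile

def extract_text(filename: str, data: bytes) -> str:
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext == "zip":
        # Recurse into archive members and label each extracted chunk.
        parts = []
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            for name in zf.namelist():
                parts.append(name + ":\n" + extract_text(name, zf.read(name)))
        return "\n\n".join(parts)
    if ext in ("txt", "html"):
        return data.decode("utf-8", errors="replace")
    return ""  # PDF / image branches omitted in this sketch
```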
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The classifier was silently falling back to a clarification prompt every
time the LLM wrapped its JSON in markdown fences, prefixed it with
'json', or added surrounding prose. The bot then asked 'Could you
clarify what you need?' to every message regardless of clarity.
Now: strip code fences, slice to the first {...} block, and on parse
failure log the raw content (truncated) and treat the message as 'no
specialist agent' so the direct-answer fallback responds instead of
looping on clarification.
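The tolerant parse can be sketched as (function name is an assumption):

```python
import json
import re

def parse_intent(raw: str):
    # Strip markdown fences and an optional leading 'json' tag, then
    # slice from the first '{' to the last '}' before parsing.
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        return json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        # Caller logs the (truncated) raw content and treats this as
        # 'no specialist agent' instead of asking for clarification.
        return None
```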
Previously when the LLM classified a message as needing no specialist
agent, the dispatcher built zero directives and _synthesize returned
'No agent responses received.' Greetings, follow-up clarifications,
and general questions all fell into this dead end.
Now when intent.agents is empty and no clarification is needed, the
master makes a second LLM call with the recent conversation as context
and answers directly. Updated master_system.txt to steer the classifier
toward agents=[] for chitchat instead of forcing a clarification loop.
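Sketched control flow (method names and the intent shape are assumptions):

```python
import asyncio

async def run_directives(agents):
    return "dispatched: " + ", ".join(agents)

async def respond(intent, history, ask_llm):
    if intent.get("needs_clarification"):
        return intent["clarification"]
    if not intent.get("agents"):
        # No specialist matched and nothing to clarify: answer directly
        # with a second LLM call over the recent conversation.
        return await ask_llm(history)
    return await run_directives(intent["agents"])
```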
Only the first fragment of the implicitly concatenated message string
('You don') carried the f prefix, so the {chr(44).join(...)} placeholder
in a later fragment leaked into chat output as literal text. Build the
message with plain string concatenation instead.
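A minimal reproduction of the bug pattern (the real message text differs):

```python
names = ["expenses", "hr"]

# Bug: in an implicitly concatenated literal, only the first fragment
# carried the f prefix, so the second fragment's placeholder is literal.
broken = (
    f"You don't have access to that agent. "
    "Allowed agents: {chr(44).join(names)}"
)

# Fix from the commit: plain string concatenation.
fixed = "You don't have access to that agent. Allowed agents: " + ",".join(names)
```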
User messages were only saved inside _update_memory at the end of a
successful directive. The clarification and access-denied branches
returned early without ever calling it, so when a clarification turn
asked 'what do you mean?' and the user replied, the original question
was missing from context — the bot looked at a transcript of nothing
but its own clarifying questions and asked yet another.
Save the user message at the top of handle_message so every branch
includes it. Drop the now-duplicate write from _update_memory.
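The shape of the fix, as a minimal sketch (the class and in-memory list
are stand-ins for the real master agent and its DB-backed store):

```python
class Master:
    def __init__(self):
        self.memory = []  # stand-in for the DB-backed conversation store

    def handle_message(self, text, allowed=True):
        # Save the user turn first so every branch keeps it in context.
        self.memory.append({"role": "user", "content": text})
        if not allowed:
            return "Access denied."      # early return: message already saved
        if len(text.split()) < 2:
            return "What do you mean?"   # clarification branch: also saved
        return "dispatching directives"
```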
The prompt template contains a literal JSON example block ({"needs_clarification": ...})
which str.format() tried to interpret as format fields, raising KeyError on every
Discuss DM. Switch to .replace() so braces in the template are taken literally.
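A minimal reproduction; `{message}` is a hypothetical placeholder name:

```python
template = 'Classify the message. Reply like {"needs_clarification": false}. User: {message}'

# str.format treats every brace pair as a field, so the literal JSON
# example raises KeyError('"needs_clarification"').
try:
    template.format(message="hi")
except KeyError:
    pass

# The fix: target only the real placeholder, leaving other braces alone.
prompt = template.replace("{message}", "hi")
```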
Without exc_info we only see the bare exception string, which has been
unhelpful for debugging Discuss DM failures (e.g. a KeyError whose
message is just a JSON key, with no clue where it was raised).
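For instance (logger name and handler shape assumed):

```python
import logging

_logger = logging.getLogger(__name__)

def handle_dm(payload):
    try:
        return payload["user_id"]
    except Exception:
        # exc_info=True attaches the full traceback, so a bare
        # KeyError('user_id') arrives with the frame that raised it.
        _logger.error("Discuss DM failed", exc_info=True)
        return None
```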
Odoo's bot model serialises user_id as a string (str(uid)) over the
HTTP boundary, but the asyncpg memory queries ($1) expect an integer.
This caused 'str object cannot be interpreted as an integer' on every
Discuss DM. Cast at the entry point so downstream stores get an int.
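The cast itself (helper name is an assumption):

```python
def normalize_payload(payload: dict) -> dict:
    # Odoo's bot model sends user_id as str(uid); asyncpg binds $1 as an
    # integer, so coerce once here before anything touches the store.
    out = dict(payload)
    out["user_id"] = int(out["user_id"])
    return out
```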