Initial implementation of IRC LLM bot

Full implementation from spec: ZNC/IRC client with TLS, Ollama LLM backend,
per-user SQLite conversation memory, and Flask web admin portal with 7 pages.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: tocmo0nlord
Date: 2026-04-17 22:08:53 -04:00
Commit: b154f63cfa
25 changed files with 2916 additions and 0 deletions

.env.example

@@ -0,0 +1,21 @@
# ── ZNC ──────────────────────────────────────────
ZNC_HOST=ham.activeblue.net
ZNC_PORT=6501
ZNC_USER=your_znc_username
ZNC_PASSWORD=your_znc_password
ZNC_SSL=true
ZNC_NETWORK=activeblue
# ── Bot Identity ──────────────────────────────────
BOT_NICK=avcbot
BOT_REALNAME=Active Blue IRC Bot
# ── LLM Backend (startup defaults) ───────────────
# config.json values override these at runtime
OLLAMA_HOST=192.168.2.10
OLLAMA_PORT=11434
OLLAMA_MODEL=llama3.1
# ── Web Portal ────────────────────────────────────
PORTAL_PORT=8080
PORTAL_SECRET_KEY=changeme_use_a_long_random_string

.gitignore

@@ -0,0 +1,12 @@
.env
__pycache__/
*.py[cod]
*.pyo
venv/
.venv/
data/history/
data/ircbot.pid
data/ircbot.sock
logs/
*.db
*.log

Dockerfile

@@ -0,0 +1,13 @@
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY bot/ ./bot/
COPY portal/ ./portal/
RUN mkdir -p config logs data/history
CMD ["python", "-m", "bot.irc_client"]

README.md

@@ -0,0 +1,975 @@
# IRC LLM Bot
An IRC bot that connects to a ZNC bouncer, joins configured channels, and responds to users via a locally hosted Llama model (Ollama). Conversation history is persisted to disk per user and per channel so Llama remembers past interactions across restarts. Includes a web-based admin portal for live configuration changes — no restart required.
**Gitea Repository:** [http://192.168.1.64:3000/tocmo0nlord/irc-bot](http://192.168.1.64:3000/tocmo0nlord/irc-bot)
---
## Table of Contents
1. [Architecture Overview](#architecture-overview)
2. [Dependency: ZNC](#dependency-znc)
3. [Dependency: Ollama](#dependency-ollama)
4. [Dependency: Web Portal](#dependency-web-portal)
5. [Conversation Memory](#conversation-memory)
6. [Project Structure](#project-structure)
7. [Configuration Reference](#configuration-reference)
8. [How the Bot Works](#how-the-bot-works)
9. [Interaction Examples](#interaction-examples)
10. [Installation](#installation)
11. [Docker Compose](#docker-compose)
12. [Development Notes](#development-notes)
13. [Security Notes](#security-notes)
14. [Troubleshooting](#troubleshooting)
---
## Architecture Overview
```
IRC Network
┌──────────┐ TLS/6501 ┌────────────────────────────┐
│ ZNC │◄──────────────────────►│ IRC Bot │
│ Bouncer │ │ (bot/irc_client.py) │
│ham.active│ │ │
│blue.net │ │ bot/message_handler.py │
└──────────┘ │ │ │
│ ▼ │
│ bot/llm_client.py │
│ │ │
│ bot/memory.py ◄──────────┼─── data/history/
│ │ │ (SQLite, per user
└──────────┼─────────────────┘ per channel)
│ HTTP /api/generate
┌──────────────────┐
│ Ollama │
│ 192.168.2.10 │
│ :11434 │
│ llama3.1 (8B) │
└──────────────────┘
┌──────────────────┐
│ Web Portal │
│ portal/app.py │
│ :8080 │
│ R/W config.json │
│ View/clear mem │
└──────────────────┘
```
The bot, portal, and Ollama are **three separate processes** that communicate through:
- **Bot ↔ ZNC**: persistent TLS socket (IRC protocol)
- **Bot ↔ Ollama**: HTTP POST per message
- **Bot ↔ Memory store**: SQLite read/write per message (`data/history/`)
- **Portal ↔ Bot**: shared `config/config.json` on disk + reload signal (SIGHUP or Unix socket)
- **Portal ↔ Memory store**: SQLite read for viewing, delete for clearing history
- **Portal ↔ User**: browser HTTP on port 8080
---
## Dependency: ZNC
### What ZNC Does for This Bot
ZNC is an IRC bouncer — it maintains a persistent connection to the upstream IRC network on behalf of the bot. The bot connects to ZNC rather than directly to an IRC server. This means:
- If the bot process crashes or restarts, ZNC stays connected to IRC and buffers messages
- The bot reconnects to ZNC and replays any missed messages via the playback buffer
- ZNC handles TLS termination to the upstream IRC server
- Multiple bots or clients can share a single ZNC user/network without multiple upstream connections
### ZNC Connection Details
| Parameter | Value |
|---|---|
| Host | `ham.activeblue.net` |
| Port | `6501` |
| TLS | Yes |
| Protocol | IRC over TLS |
### ZNC Authentication Format
ZNC uses a special login format passed as the IRC `PASS` command during connection handshake:
```
PASS <ZNC_USER>/<ZNC_NETWORK>:<ZNC_PASSWORD>
```
Full handshake sequence the bot sends (`PASS` must be sent before `NICK`/`USER`, since IRC requires the password before registration completes):
```
PASS tocmo0nlord/activeblue:mysecretpassword
NICK avcbot
USER avcbot 0 * :Active Blue IRC Bot
```
The `ZNC_NETWORK` value must exactly match the name of a network configured in the ZNC user's account. Verify this in ZNC's web panel or in `~znc/.znc/users/<user>/networks/`.
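Assembling the login line is trivial but easy to get subtly wrong (separator order, trailing CRLF). A minimal sketch, using a hypothetical `build_znc_pass` helper:

```python
def build_znc_pass(user: str, network: str, password: str) -> str:
    """Assemble the ZNC login sent as the IRC PASS command.

    Format: <user>/<network>:<password> — the network segment selects
    which configured ZNC network this session attaches to.
    """
    return f"PASS {user}/{network}:{password}\r\n"


# Credentials below are placeholders:
line = build_znc_pass("tocmo0nlord", "activeblue", "mysecretpassword")
```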
### ZNC Requirements for This Bot
The ZNC user account must have:
1. **A network entry** with the name matching `ZNC_NETWORK` in `.env`
2. **The IRC server configured** under that network (e.g., `irc.libera.chat:6697`)
3. **Playback buffer enabled** (recommended) — allows the bot to catch up on messages after a reconnect
4. **The bot's nick registered** on the upstream IRC network (optional but recommended for +v/+o access)
### ZNC Config Snippet Reference
Relevant portion of `~znc/.znc/configs/znc.conf` for a bot user:
```ini
<User tocmo0nlord>
Pass = <hashed_password>
Nick = avcbot
AltNick = avcbot_
RealName = Active Blue IRC Bot
<Network activeblue>
Server = irc.libera.chat +6697
Chan = #general
Chan = #support
</Network>
</User>
```
### ZNC Modules the Bot Benefits From
| Module | Purpose | Notes |
|---|---|---|
| `clientbuffer` | Per-client replay queue on reconnect | **Recommended** — use instead of `playbackbuffer` |
| `playbackbuffer` | Replay missed messages on reconnect | Do not enable alongside `clientbuffer` |
| `sasl` | SASL authentication to upstream IRC server | |
| `nickserv` | Auto-identify with NickServ on connect | |
### ZNC Reconnect Behavior
The bot implements exponential backoff on disconnect:
1. Wait 5 seconds → retry
2. Wait 10 seconds → retry
3. Wait 30 seconds → retry
4. Cap at 5-minute intervals until reconnected
The portal **Reconnect** button triggers an immediate disconnect + reconnect cycle bypassing the backoff.
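The backoff schedule above can be expressed as a generator; this is a sketch with a hypothetical `backoff_delays` name, where the real reconnect loop would sleep on each yielded value:

```python
import itertools

# Staged delays before each reconnect attempt, then a 5-minute cap
BACKOFF_STAGES = [5, 10, 30]
BACKOFF_CAP = 300

def backoff_delays():
    """Yield reconnect delays: 5s, 10s, 30s, then 300s forever."""
    yield from BACKOFF_STAGES
    while True:
        yield BACKOFF_CAP
```

A portal-triggered reconnect would simply restart iteration from the beginning.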
### ZNC Playback Line Detection
When the bot reconnects, ZNC replays buffered messages. These must be detected and skipped to prevent the bot from feeding old messages to Ollama en masse.
The detection approach depends on which ZNC module is active:
**`playbackbuffer` module** (wraps message text):
```
:irc.server.name PRIVMSG #channel :[HH:MM:SS] <originalnick> original message text
```
Detection: PRIVMSG text matches `^\[\d{2}:\d{2}:\d{2}\] `
**`clientbuffer` module** (uses IRCv3 server-time tags):
```
@time=2024-01-01T12:00:00.000Z :nick!user@host PRIVMSG #channel :original message text
```
Detection: raw line starts with `@time=`
> **Use only one of these modules, not both.** They serve the same purpose (per-client replay) and enabling both causes double-replay. The recommended choice is `clientbuffer` — it's the more modern approach and its IRCv3 tag format is unambiguous to detect. The bot must strip the `@time=...` prefix before parsing the rest of the line as a normal IRC message.
Playback lines matched by either pattern are added to the context buffer (so Llama has channel awareness) but are **never** forwarded to Ollama for a response.
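Both detection rules can be sketched together; `is_playback` and `strip_tags` are hypothetical helper names, assuming the caller has already extracted the PRIVMSG text:

```python
import re

# playbackbuffer wraps the original text: "[HH:MM:SS] <nick> message"
PLAYBACK_TEXT = re.compile(r"^\[\d{2}:\d{2}:\d{2}\] ")

def is_playback(raw_line: str, privmsg_text: str) -> bool:
    """True if the line is a ZNC replay under either module's format."""
    if raw_line.startswith("@time="):           # clientbuffer: server-time tag
        return True
    return bool(PLAYBACK_TEXT.match(privmsg_text))  # playbackbuffer wrapper

def strip_tags(raw_line: str) -> str:
    """Drop the leading IRCv3 @tags section before normal IRC parsing."""
    if raw_line.startswith("@"):
        return raw_line.split(" ", 1)[1]
    return raw_line
```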
---
## Dependency: Ollama
### What Ollama Does for This Bot
Ollama serves the LLM locally over HTTP. The bot sends each user message as an HTTP POST and receives the generated response. No external API, no API key — fully self-hosted on LAN.
### Ollama Connection Details
| Parameter | Default Value | Configurable |
|---|---|---|
| Host | `192.168.2.10` | Yes — via portal or `config.json` |
| Port | `11434` | Yes — via portal or `config.json` |
| Model | `llama3.1` | Yes — via portal or `config.json` |
| Protocol | HTTP (unencrypted, LAN only) | — |
> The Ollama host, port, and model are runtime-configurable without a bot restart. The web portal writes changes to `config.json` and the bot picks them up on the next incoming message.
### Ollama API Endpoint Used
```
POST http://{OLLAMA_HOST}:{OLLAMA_PORT}/api/generate
```
Request body sent by the bot:
```json
{
"model": "llama3.1",
"system": "You are a helpful IRC assistant for Active Blue. Keep responses concise.",
"prompt": "<assembled user message with optional context>",
"stream": false,
"options": {
"temperature": 0.7,
"num_predict": 120,
"num_ctx": 2048
}
}
```
> **`num_predict` vs `max_response_length`:** These are two different controls operating at different layers. `num_predict` (unit: **tokens**, driven by `ollama_num_predict` in `config.json`) caps how many tokens the model generates — enforced by Ollama before text is returned. `max_response_length` (unit: **characters**, `config.json`) is a hard trim the bot applies after receiving the response — a safety net against flooding the IRC channel. At roughly 4 characters per token, `ollama_num_predict: 120` yields ~480 characters maximum, keeping it safely above the `max_response_length: 400` trim. Both values should be set together: `ollama_num_predict` should always leave headroom for `max_response_length` to trim if needed. Both are configurable from the portal under **LLM Settings**.
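The bot-side character trim might look like this; `trim_reply` is a hypothetical name for the safety-net layer described above (the token cap is enforced by Ollama and never reaches this code):

```python
def trim_reply(text: str, max_response_length: int = 400) -> str:
    """Hard character cap applied after Ollama responds.

    Protects the IRC channel from flooding even if num_predict
    is misconfigured or the model ignores it.
    """
    if len(text) <= max_response_length:
        return text
    # Reserve one character for the truncation marker
    return text[:max_response_length - 1].rstrip() + "…"
```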
Response field the bot reads:
```json
{
"response": "The bot reply text goes here.",
"done": true
}
```
### Context Window and Persistent Memory
The bot maintains **two layers** of conversation history that are both included in each Ollama prompt:
**Layer 1 — Channel context buffer (in-memory)**
A rolling buffer of the last N messages in the channel (`context_window` in config, default: `5`). This gives Llama awareness of the surrounding conversation even for messages not directed at the bot.
**Layer 2 — Per-user persistent history (SQLite)**
Every exchange between a specific user and the bot is saved to `data/history/<channel>/<nick>.db`. On the next message from that user — even after a bot restart — the last `memory_history_limit` exchanges (default: `8`) are loaded from the database and prepended to the prompt. This is what allows Llama to remember past conversations.
Full prompt assembly order:
```
[System prompt]
You are a helpful IRC assistant for Active Blue...
[Persistent history — last 8 exchanges with this user]
User: what is DNS?
Assistant: DNS maps domain names to IP addresses...
User: what about DNSSEC?
Assistant: DNSSEC adds cryptographic signatures to DNS records...
[Channel context — last 5 messages from the channel]
<alice> anyone know about VLANs?
<bob> avcbot: can you explain VLAN tagging?
[Current message]
bob asks: can you explain VLAN tagging?
```
The persistent history and channel context windows are both configurable from the web portal under **LLM Settings**.
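The assembly order above can be sketched as a pure function; `build_prompt` is a hypothetical helper, and the system prompt is deliberately absent because it travels in the request's separate `"system"` field:

```python
def build_prompt(history: list[tuple[str, str]],
                 channel_context: list[str],
                 nick: str,
                 message: str) -> str:
    """Assemble the Ollama prompt: persistent history, then channel
    context, then the current message."""
    parts = []
    for user_input, bot_reply in history:       # Layer 2: SQLite history
        parts.append(f"User: {user_input}")
        parts.append(f"Assistant: {bot_reply}")
    parts.extend(channel_context)               # Layer 1: rolling buffer
    parts.append(f"{nick} asks: {message}")     # current triggered message
    return "\n".join(parts)
```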
### Ollama Setup Requirements
- Ollama installed and running on `192.168.2.10`
- The configured model must be pulled before the bot starts:
```bash
ollama pull llama3.1
```
- Ollama must bind to `0.0.0.0` (not just `127.0.0.1`) to be reachable from the bot host:
```bash
# Add to Ollama's systemd unit Environment or /etc/default/ollama:
OLLAMA_HOST=0.0.0.0
systemctl restart ollama
```
### Verifying Ollama is Reachable
```bash
# List available models
curl http://192.168.2.10:11434/api/tags
# Test a generation end-to-end
curl -s http://192.168.2.10:11434/api/generate \
-d '{"model":"llama3.1","prompt":"Say hello in one sentence","stream":false}' \
| jq .response
```
### Changing the LLM Model at Runtime
Models can be swapped from the web portal under **LLM Settings** at any time. To make a new model available, pull it first on the Ollama host:
```bash
ollama pull mistral
ollama pull llama3.2
ollama list
```
Then select the model in the portal. The bot uses whatever `ollama_model` is set in `config.json` at the time of each request.
### Ollama Timeout Handling
The bot enforces a request timeout (`response_timeout_seconds` in config, default: `30`). If Ollama does not respond in time:
- The bot sends a fallback message to the channel: `[LLM timeout — try again]`
- The full error is written to `logs/bot.log`
- The bot continues processing subsequent messages normally
---
## Dependency: Web Portal
### Purpose
The web portal provides live management of all bot settings without touching files on the server or restarting any process. It is the primary operational interface.
### Portal Access
| Parameter | Value |
|---|---|
| URL | `http://<bot-host>:8080` |
| Default Port | `8080` (set via `PORTAL_PORT` in `.env`) |
| Auth | None by default — restrict before exposing to any network |
| Backend | Flask (Python) |
| Frontend | Jinja2 templates + vanilla JS |
### Portal Pages
#### `/` — Dashboard
- Current bot status: `connected` / `disconnected` / `reconnecting`
- Active ZNC connection info (host, port, network name, nick)
- Current Ollama host, port, and model name
- Number of channels currently joined
- Session message count
- Quick-action buttons: **Reconnect**, **Reload Config**, **Clear Log**
#### `/channels` — Channel Management
- List of currently joined channels with per-channel message counts
- **Add channel** — sends `JOIN #channel` immediately, persists to `config.json`
- **Remove channel** — sends `PART #channel` immediately, removes from `config.json`
#### `/llm` — LLM Settings
| Setting | `config.json` Key | Description |
|---|---|---|
| Ollama Host | `ollama_host` | IP or hostname of the Ollama server |
| Ollama Port | `ollama_port` | Port Ollama listens on (default: `11434`) |
| Ollama Model | `ollama_model` | Model name — must be pulled on the Ollama host |
| System Prompt | `system_prompt` | Instruction prepended to every LLM request |
| Max Response Length | `max_response_length` | Character cap applied after Ollama responds |
| Token Limit | `ollama_num_predict` | Max tokens Ollama generates per response |
| Context Size | `ollama_num_ctx` | Ollama context window in tokens (default: `2048`) |
| Response Timeout | `response_timeout_seconds` | Seconds before declaring a timeout |
| Channel Context | `context_window` | In-memory channel messages included in prompt |
| Temperature | `ollama_temperature` | LLM sampling temperature (0.0-1.0) |
| Memory Enabled | `memory_enabled` | Toggle persistent per-user history on/off |
| Memory Depth | `memory_history_limit` | Past exchanges loaded from SQLite per request |
| Memory Max Age | `memory_max_age_days` | Days before exchanges are pruned (0 = keep forever) |
All fields validate before save. Changes apply on the next incoming message — no restart required.
#### `/bot` — Bot Identity
| Setting | Description |
|---|---|
| Bot Nick | IRC nickname — changing this sends a live `NICK` command |
| Real Name | IRC GECOS/realname field — requires reconnect to update |
| Trigger on Nick | Respond only when the bot's nick is mentioned |
| Trigger Prefix | Alternative trigger string (e.g., `!ask`) |
| Ignored Nicks | Comma-separated list of nicks the bot never responds to |
#### `/logs` — Live Logs
- Tail of the last 200 lines from `logs/bot.log`
- Color-coded by type: `IRC IN` / `IRC OUT` / `LLM` / `ERROR` / `CONFIG`
- Auto-refresh toggle (polls every 3 seconds)
- Download full log as `.txt`
#### `/memory` — Conversation Memory
- Browse persistent history by channel and nick
- View the full stored exchange history for any user
- **Clear user history** — deletes all stored exchanges for a specific nick
- **Clear channel history** — deletes all stored history for an entire channel
- **Clear all history** — wipes the full `data/history/` database (with confirmation prompt)
- Shows total stored exchange count and database size
#### `/config` — Raw Config Editor
- View and edit `config.json` directly in a text area
- JSON syntax validation before save
- **Reload** button to signal the bot to re-read config
- Download and upload config file buttons
### Portal ↔ Bot Communication
The portal and bot share `config/config.json` on disk. After writing changes, the portal signals the bot to reload:
**Option A (default for non-Docker): SIGHUP**
```python
# portal sends:
os.kill(bot_pid, signal.SIGHUP)
# bot handles:
signal.signal(signal.SIGHUP, lambda s, f: reload_config())
```
**Option B (Docker): Unix socket**
The portal sends a `RELOAD` command over `./data/ircbot.sock`. The bot listens on this socket and reloads config on receipt. Both containers mount the `./data` directory as a shared volume, so the socket file is always accessible to both processes without any pre-creation requirement.
Bot PID is written to `./data/ircbot.pid` and the socket is created at `./data/ircbot.sock` on startup.
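Option B from the portal side can be sketched as follows; `send_reload` is a hypothetical helper assuming the line-oriented `RELOAD` command described above, and `AF_UNIX` sockets are POSIX-only:

```python
import socket

def send_reload(sock_path: str = "./data/ircbot.sock") -> bool:
    """Portal side: send RELOAD over the bot's Unix socket.

    Returns False if the bot is not running or the socket is missing.
    """
    try:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"RELOAD\n")
        return True
    except OSError:
        return False
```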
---
## Conversation Memory
### Overview
Conversation memory is a **required core feature**. Without it, Llama starts fresh on every bot restart and has no recollection of prior exchanges with any user. With it, the model builds a genuine per-user history that persists indefinitely until explicitly cleared.
### Storage Backend
History is stored in **SQLite** at `data/history/<sanitized_channel>/<nick>.db`. Each database file is initialized with WAL (Write-Ahead Logging) mode enabled to prevent `database is locked` errors when the portal and bot access the same file concurrently:
```sql
PRAGMA journal_mode=WAL;
CREATE TABLE IF NOT EXISTS exchanges (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
user_input TEXT NOT NULL,
bot_reply TEXT NOT NULL
);
```
WAL mode allows the bot to write a new exchange at the same time the portal is reading history for display, without either operation blocking the other.
One database file per nick per channel. This keeps history isolated — a user's conversation in `#general` does not bleed into their conversation in `#support`.
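Opening a per-user database with that schema might look like this; `open_history_db` is a hypothetical helper using stdlib `sqlite3`:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS exchanges (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
    user_input TEXT NOT NULL,
    bot_reply TEXT NOT NULL
)
"""

def open_history_db(path: str) -> sqlite3.Connection:
    """Open a per-user history DB with WAL enabled so the bot can
    write while the portal reads the same file."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute(SCHEMA)
    conn.commit()
    return conn
```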
### How Memory is Used in Each Request
On every triggered message, `bot/memory.py`:
1. Loads the last `memory_history_limit` rows from `data/history/<channel>/<nick>.db`
2. Formats them as `User: ... / Assistant: ...` pairs
3. Passes them to `llm_client.py` to be prepended to the prompt (after system prompt, before channel context)
4. After the bot sends its reply, writes the new `(user_input, bot_reply)` pair to the database
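Steps 1 and 4 above can be sketched with hypothetical `load_history` and `save_exchange` helpers; note the load query fetches newest-first and then reverses, so the prompt reads in chronological order:

```python
import sqlite3

def load_history(conn: sqlite3.Connection,
                 limit: int = 8) -> list[tuple[str, str]]:
    """Load the newest `limit` exchanges, returned oldest-first."""
    rows = conn.execute(
        "SELECT user_input, bot_reply FROM exchanges "
        "ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    return list(reversed(rows))

def save_exchange(conn: sqlite3.Connection,
                  user_input: str, bot_reply: str) -> None:
    """Append one (user_input, bot_reply) pair after the bot replies."""
    conn.execute(
        "INSERT INTO exchanges (user_input, bot_reply) VALUES (?, ?)",
        (user_input, bot_reply),
    )
    conn.commit()
```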
### Memory Configuration (in `config.json`)
```json
{
"memory_enabled": true,
"memory_history_limit": 8,
"memory_max_age_days": 90
}
```
| Key | Description |
|---|---|
| `memory_enabled` | Toggle persistent memory on/off globally |
| `memory_history_limit` | Max number of past exchanges loaded per request |
| `memory_max_age_days` | Exchanges older than this are pruned on bot startup. Set to `0` to keep forever. |
> **Token budget warning:** Each exchange averages ~100-150 words (~130-200 tokens). At the default `memory_history_limit: 8`, persistent history consumes roughly 1,000-1,600 tokens. Add the system prompt (~50 tokens), channel context buffer (~200 tokens), and the current message, and the total prompt sits comfortably under Llama 3.1 8B's default 2,048-token context window in Ollama. Do **not** raise `memory_history_limit` above `10` without also raising Ollama's `num_ctx` parameter, or the prompt will be silently truncated and responses will degrade. To increase context size in Ollama: set `"num_ctx": 4096` in the `options` block of the `/api/generate` request, and ensure the model was loaded with sufficient VRAM to support it.
All three values are editable from the portal under **LLM Settings** without restart.
### Memory File Layout
Channel names are sanitized before use as filesystem directory names: the leading `#` is stripped and any remaining special characters (`#`, `&`, `+`, `!`) are replaced with `_`. This avoids shell escaping issues and ensures compatibility across platforms.
| IRC Channel | Filesystem Directory |
|---|---|
| `#general` | `data/history/general/` |
| `##linux` | `data/history/_linux/` |
| `#support-us` | `data/history/support-us/` |
```
data/
└── history/
├── general/
│ ├── alice.db
│ ├── bob.db
│ └── charlie.db
└── support/
├── alice.db
└── dave.db
```
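The sanitization rule above can be sketched as follows; `sanitize_channel` is a hypothetical helper that reproduces the table's mappings:

```python
def sanitize_channel(channel: str) -> str:
    """Map an IRC channel name to a filesystem directory name.

    Strips one leading '#', then replaces any remaining IRC channel
    sigils (#, &, +, !) with '_'.
    """
    name = channel[1:] if channel.startswith("#") else channel
    for sigil in "#&+!":
        name = name.replace(sigil, "_")
    return name
```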
### Startup Pruning
On each bot startup, `bot/memory.py` iterates all `.db` files in `data/history/` and runs a pruning pass on each. The SQL uses a parameterized query — the `memory_max_age_days` value from config is passed as a bound parameter, not interpolated as a string:
```python
# In bot/memory.py — correct parameterized form
cursor.execute(
"DELETE FROM exchanges WHERE timestamp < datetime('now', ?)",
(f"-{memory_max_age_days} days",)
)
```
This runs before the IRC connection is established and keeps databases from growing unbounded. If `memory_max_age_days` is `0`, the pruning pass is skipped entirely.
### Disabling Memory Per-User
A user can ask the bot to forget them. The bot responds to `avcbot: forget me` by deleting their database file for the current channel and confirming in-channel. This can also be done manually from the portal `/memory` page.
---
## Project Structure
```
irc-bot/
├── bot/
│ ├── __init__.py
│ ├── irc_client.py # ZNC/IRC connection, PING/PONG, reconnect loop
│ ├── llm_client.py # Ollama HTTP client, timeout handling, prompt builder
│ ├── memory.py # SQLite read/write, pruning, per-user history loader
│ └── message_handler.py # Parses PRIVMSG, checks triggers, calls LLM, replies
├── portal/
│ ├── app.py # Flask routes for all portal pages
│ ├── config_manager.py # Read/write config.json, signal bot reload
│ ├── templates/
│ │ ├── base.html
│ │ ├── index.html
│ │ ├── channels.html
│ │ ├── llm.html
│ │ ├── bot.html
│ │ ├── logs.html
│ │ ├── memory.html # Browse/clear conversation history
│ │ └── config.html
│ └── static/
│ ├── style.css
│ └── app.js
├── config/
│ └── config.json # Runtime config — written by portal, read by bot
├── data/
│ └── history/ # SQLite DBs, one per nick per channel
│ ├── general/ # channel names are sanitized (# stripped)
│ │ └── alice.db
│ └── support/
│ └── bob.db
├── logs/
│ └── bot.log # Rotating log file (read by portal log viewer)
├── .env # Secrets and startup defaults — never commit
├── .env.example # Safe template to commit
├── requirements.txt
├── docker-compose.yml
├── Dockerfile # Single image used by both bot and portal services
└── README.md
```
---
## Dockerfile
A single image is built and used by both the `irc-bot` and `portal` services. The `command:` in `docker-compose.yml` determines which process each container runs.
```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy source
COPY bot/ ./bot/
COPY portal/ ./portal/
# Create runtime directories (volumes will overlay these at runtime)
RUN mkdir -p config logs data/history
# Default entrypoint — overridden by docker-compose command:
CMD ["python", "-m", "bot.irc_client"]
```
> Both services use the same image. The `portal` service overrides the `CMD` with `python -m portal.app` via the `command:` key in `docker-compose.yml`. There is no separate Dockerfile for the portal.
---
## Configuration Reference
### `.env` — Secrets and Startup Defaults
```env
# ── ZNC ──────────────────────────────────────────
ZNC_HOST=ham.activeblue.net
ZNC_PORT=6501
ZNC_USER=your_znc_username
ZNC_PASSWORD=your_znc_password
ZNC_SSL=true
ZNC_NETWORK=activeblue
# ── Bot Identity ──────────────────────────────────
BOT_NICK=avcbot
BOT_REALNAME=Active Blue IRC Bot
# ── LLM Backend (startup defaults) ───────────────
# config.json values override these at runtime
OLLAMA_HOST=192.168.2.10
OLLAMA_PORT=11434
OLLAMA_MODEL=llama3.1
# ── Web Portal ────────────────────────────────────
PORTAL_PORT=8080
PORTAL_SECRET_KEY=changeme_use_a_long_random_string
```
### `config/config.json` — Runtime Config
```json
{
"channels": ["#general", "#support"],
"trigger_on_nick": true,
"trigger_prefix": null,
"ignored_nicks": ["ChanServ", "NickServ"],
"bot_nick": "avcbot",
"system_prompt": "You are a helpful IRC assistant for Active Blue. Keep responses concise and under 3 sentences when possible.",
"max_response_length": 400,
"ollama_host": "192.168.2.10",
"ollama_port": 11434,
"ollama_model": "llama3.1",
"ollama_temperature": 0.7,
"ollama_num_predict": 120,
"ollama_num_ctx": 2048,
"response_timeout_seconds": 30,
"context_window": 5,
"memory_enabled": true,
"memory_history_limit": 8,
"memory_max_age_days": 90,
"log_level": "INFO"
}
```
**Priority rule:** `config.json` values always take precedence over `.env` values for runtime settings. `.env` is the fallback used only on first startup or if a key is absent from `config.json`.
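A minimal sketch of this priority rule, using a hypothetical `load_setting` helper (the real bot loads `.env` into the process environment via python-dotenv; the `path` parameter here exists for illustration):

```python
import json
import os

def load_setting(key: str, env_key: str, default=None,
                 path: str = "config/config.json"):
    """config.json wins; the environment (.env) is the fallback used
    only when the key is absent or the file does not exist yet."""
    try:
        with open(path) as f:
            cfg = json.load(f)
        if key in cfg:
            return cfg[key]
    except FileNotFoundError:
        pass
    return os.environ.get(env_key, default)
```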
---
## How the Bot Works
### Startup Sequence
```
1. Load .env
2. Load config/config.json (values override .env defaults)
3. Run memory pruning pass — delete exchanges older than memory_max_age_days
4. Write PID to ./data/ircbot.pid, open Unix socket at ./data/ircbot.sock
5. Connect: TLS socket → ham.activeblue.net:6501
6. Send: PASS <ZNC_USER>/<ZNC_NETWORK>:<ZNC_PASSWORD>
NICK <BOT_NICK>
USER <BOT_NICK> 0 * :<BOT_REALNAME>
(PASS first — IRC requires the password before registration completes)
7. On numeric 001 (RPL_WELCOME): JOIN all channels from config.json
8. Enter message loop
```
### Message Loop
```
Receive raw IRC line
├─ PING :server → PONG :server (immediate, keeps connection alive)
├─ :nick!user@host PRIVMSG #channel :message
│ │
│ ├─ ZNC playback line? (text matches ^\[\d{2}:\d{2}:\d{2}\] )
│ │ → YES → add to context buffer only, discard (never send to Ollama)
│ │
│ ├─ sender in ignored_nicks? → discard
│ │
│ ├─ trigger_on_nick=true AND message starts with "avcbot:"?
│ │ OR trigger_prefix set AND message starts with prefix?
│ │ OR message == "avcbot: forget me" (special command)?
│ │ → NO → add to context buffer, discard
│ │ → YES → continue
│ │
│ ├─ "forget me" command? → delete data/history/<channel>/<nick>.db
│ │ reply "<nick>: Done, I've cleared your history."
│ │ → done
│ │
│ ├─ Strip trigger prefix/nick from message text
│ ├─ Load last memory_history_limit exchanges from data/history/<channel>/<nick>.db
│ ├─ Append current message to context buffer (capped at context_window)
│ ├─ Build prompt:
│ │ system_prompt
│ │ + persistent history (from SQLite)
│ │ + channel context buffer
│ │ + current message
│ ├─ POST to Ollama http://{ollama_host}:{ollama_port}/api/generate
│ ├─ Await response (timeout: response_timeout_seconds)
│ │ └─ Timeout → send "[LLM timeout — try again]" to channel
│ ├─ Trim response to max_response_length chars
│ ├─ Save (user_input, bot_reply) to data/history/<channel>/<nick>.db
│ └─ PRIVMSG #channel :<triggering_nick>: <response>
└─ Connection drop / ERROR → backoff reconnect loop
```
---
## Interaction Examples
### Standard nick trigger
```
<alice> avcbot: what's the difference between TCP and UDP?
<avcbot> alice: TCP is connection-oriented and guarantees ordered delivery.
UDP is connectionless and faster but has no delivery guarantees.
Use TCP for HTTP/SSH, UDP for DNS/VoIP/gaming.
```
### Continued context
```
<alice> avcbot: explain subnetting
<avcbot> alice: Subnetting divides a network into smaller blocks using a subnet mask...
<alice> avcbot: give me a /24 example
<avcbot> alice: A /24 like 192.168.1.0/24 has 256 addresses (254 usable hosts),
with .0 as network address and .255 as broadcast.
```
### Prefix trigger (if configured)
```
<bob> !ask what is VLAN tagging?
<avcbot> bob: VLAN tagging (802.1Q) adds a 4-byte tag to Ethernet frames to identify
which VLAN the traffic belongs to, enabling a single trunk port to carry
multiple VLANs simultaneously.
```
---
## Installation
### Prerequisites
- Python 3.11+
- Access to ZNC at `ham.activeblue.net:6501` with valid credentials configured
- Ollama running on `192.168.2.10:11434` with at least one model pulled (`ollama pull llama3.1`)
- Docker + Docker Compose (recommended) or a bare Python venv
### `requirements.txt`
```
# HTTP client for Ollama
httpx==0.27.0
# Web portal
flask==3.0.3
jinja2==3.1.4
# Config / env
python-dotenv==1.0.1
# socket and sqlite3 are Python stdlib — no install needed
```
> **No `irc` library.** The bot uses Python's stdlib `socket` module directly with a custom raw IRC parser in `bot/irc_client.py`. The `jaraco/irc` high-level framework conflicts with writing raw `NICK`/`USER`/`PASS` commands manually and adds unnecessary abstraction for a bot that only needs to handle `PRIVMSG`, `PING`, and connection state.
### Manual Setup (venv)
```bash
git clone http://192.168.1.64:3000/tocmo0nlord/irc-bot
cd irc-bot
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env — set ZNC_USER, ZNC_PASSWORD, ZNC_NETWORK at minimum
# Terminal 1: start bot
python -m bot.irc_client
# Terminal 2: start portal
python -m portal.app
```
### Docker Compose
```bash
cp .env.example .env
# Edit .env
docker compose up -d
docker compose logs -f irc-bot
docker compose logs -f portal
```
---
## Docker Compose
```yaml
version: "3.9"
services:
irc-bot:
build: .
container_name: irc-bot
restart: unless-stopped
command: python -m bot.irc_client
env_file: .env
volumes:
- ./config:/app/config
- ./logs:/app/logs
- ./data:/app/data
networks:
- botnet
portal:
build: .
container_name: irc-bot-portal
restart: unless-stopped
command: python -m portal.app
env_file: .env
ports:
- "${PORTAL_PORT:-8080}:8080"
volumes:
- ./config:/app/config
- ./logs:/app/logs
- ./data:/app/data
networks:
- botnet
depends_on:
- irc-bot
networks:
botnet:
driver: bridge
```
Both containers share:
- `./config` — so the portal can write `config.json` and the bot can read it
- `./logs` — so the portal log viewer can tail `bot.log`
- `./data` — shared volume for SQLite history files, `ircbot.pid`, and `ircbot.sock`; using a bind-mounted directory avoids the Docker `/tmp` socket pre-creation problem
---
## Development Notes
### IRC Connection Implementation
`bot/irc_client.py` uses Python's stdlib `socket` module directly — no third-party IRC library. It opens a TLS-wrapped TCP socket, sends the handshake commands as raw bytes, and reads the server line-by-line in a loop. This keeps the implementation minimal and avoids framework conflicts.
```python
import socket
import ssl

# ssl.wrap_socket() is deprecated; use an SSLContext with default
# certificate verification (for a self-signed ZNC cert, add the cert
# via ctx.load_verify_locations)
ctx = ssl.create_default_context()
sock = socket.create_connection((ZNC_HOST, ZNC_PORT))
tls_sock = ctx.wrap_socket(sock, server_hostname=ZNC_HOST)

# PASS must precede NICK/USER during registration
tls_sock.sendall(f"PASS {ZNC_USER}/{ZNC_NETWORK}:{ZNC_PASSWORD}\r\n".encode())
tls_sock.sendall(b"NICK avcbot\r\n")
tls_sock.sendall(b"USER avcbot 0 * :Active Blue IRC Bot\r\n")
```
All incoming data is buffered and split on `\r\n` before dispatch to `message_handler.py`.
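That buffering step can be sketched as follows; `split_lines` is a hypothetical helper that carries a partial trailing line between `recv()` calls:

```python
def split_lines(buffer: bytes, chunk: bytes) -> tuple[list[str], bytes]:
    """Append a recv() chunk to the carry-over buffer; return all
    complete IRC lines plus whatever partial line remains."""
    buffer += chunk
    *complete, rest = buffer.split(b"\r\n")
    lines = [line.decode("utf-8", errors="replace") for line in complete]
    return lines, rest
```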
### Extending Bot Commands
Add static command handlers in `bot/message_handler.py` before the LLM call to short-circuit common requests:
```python
if stripped_text.lower() == "ping":
return "pong"
if stripped_text.lower().startswith("version"):
return f"irc-bot v1.0 | model: {config['ollama_model']}"
```
### Swapping the LLM Backend
`bot/llm_client.py` is the only file that needs to change to use a different backend (LM Studio, vLLM, OpenAI-compatible endpoint, etc.). The interface contract is:
```python
def generate(prompt: str, system: str, config: dict) -> str:
# returns the response string, raises TimeoutError on timeout
```
### Adding a Portal Page
1. Add a route and handler in `portal/app.py`
2. Create a Jinja2 template in `portal/templates/`
3. Add a navigation link in `portal/templates/base.html`
### Log Levels
Set `log_level` in `config.json` to one of: `DEBUG`, `INFO`, `WARNING`, `ERROR`. All bot activity, LLM requests/responses, and config changes are written to `logs/bot.log` with timestamps and severity. The portal log viewer tails this file.
---
## Security Notes
- **Never commit `.env`** — it contains ZNC credentials. It is in `.gitignore` by default.
- **Portal has no authentication by default** — restrict to LAN/VPN (NetBird) before production use. Add Traefik BasicAuth middleware or a session login page before exposing.
- **Ollama is unauthenticated HTTP** — do not expose port `11434` externally. Keep it LAN-only behind a firewall.
- **ZNC password in `.env`** — consider using ZNC's token-based auth (`/znc AddToken`) as an alternative to the plaintext password.
- **Config reload socket** — `./data/ircbot.sock` is accessible to any process that can read the `./data` directory. Set appropriate directory permissions in production (`chmod 750 data/`).
---
## Troubleshooting
### Bot connects to ZNC but never joins channels
- Confirm `ZNC_NETWORK` in `.env` exactly matches the network name in the ZNC user config (case-sensitive)
- Confirm the ZNC network is connected to the upstream IRC server (check ZNC web panel)
- Check `logs/bot.log` for IRC error numerics: `433` (nick in use), `465` (banned), `464` (bad password)
### Ollama returns no response / timeout
```bash
# Verify Ollama is reachable
curl http://192.168.2.10:11434/api/tags
# Confirm the model is pulled
curl http://192.168.2.10:11434/api/tags | jq '.models[].name'
# Test a full generation
curl -s http://192.168.2.10:11434/api/generate \
-d '{"model":"llama3.1","prompt":"hello","stream":false}' | jq .response
```
If Ollama is only bound to `127.0.0.1`:
```bash
# /etc/default/ollama or systemd unit [Service] section:
Environment="OLLAMA_HOST=0.0.0.0"
systemctl daemon-reload && systemctl restart ollama
```
### Portal changes not picked up by bot
- Confirm both the bot and portal containers mount the same `./config` directory
- Confirm `./data/ircbot.sock` exists and is accessible by the portal process (it is created by the bot on startup)
- Click **Reload Config** in the portal dashboard and watch `logs/bot.log` (or the portal log viewer) for `[CONFIG] Reloaded`
- If using SIGHUP mode, confirm `./data/ircbot.pid` contains the correct running PID
### Bot responds to every message, not just mentions
- Confirm `"trigger_on_nick": true` in `config.json`
- Confirm `"trigger_prefix"` is `null` or a specific prefix string, not an empty string
- Use the portal **Reload Config** button after any direct file edits
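For comparison, a trigger section that responds only to nick mentions looks like this in `config.json` (values illustrative):

```json
{
  "trigger_on_nick": true,
  "trigger_prefix": null
}
```

Setting `"trigger_prefix"` to a string such as `"!ask"` additionally triggers on messages beginning with that prefix.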
### Bot does not remember past conversations after restart
- Confirm `"memory_enabled": true` in `config.json`
- Confirm `./data` is mounted as a volume in both containers (not just `irc-bot`)
- Check that `data/history/` exists and is writable by the bot process:
```bash
ls -la data/history/
```
- Check `logs/bot.log` for `[MEMORY] Failed to write` errors
- Confirm `memory_history_limit` is greater than `0`
### Memory database growing too large
- Lower `memory_max_age_days` in `config.json` (e.g., `30`) — pruning runs on each startup
- Use the portal `/memory` page to clear history for specific users or channels
- To manually inspect a database:
```bash
sqlite3 data/history/general/alice.db "SELECT COUNT(*) FROM exchanges;"
```
### ZNC playback flooding the bot on reconnect
- Enable ZNC's `clientbuffer` module — do **not** enable `playbackbuffer` alongside it
- Set `MaxBufferSize` in `znc.conf` to a reasonable value (e.g., `500` lines)
- Confirm the bot's raw line parser strips `@time=...` IRCv3 tag prefixes before processing
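That stripping step amounts to dropping a leading tag block before the line is parsed, as in this minimal sketch (mirroring the regex used in `bot/irc_client.py`):

```python
import re

line = "@time=2026-04-17T22:08:53.000Z :alice!u@host PRIVMSG #general :hi"
if line.startswith("@"):
    line = re.sub(r"^@[^ ]+ ", "", line)  # drop the IRCv3 message-tag block
# line now begins with the normal ":nick!user@host" prefix
```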
0
bot/__init__.py Normal file
372
bot/irc_client.py Normal file
@@ -0,0 +1,372 @@
"""
IRC bot entry point — connects to ZNC via TLS, handles the message loop,
reconnect backoff, config reload (SIGHUP + Unix socket), and PID file.
"""
import json
import logging
import logging.handlers
import os
import re
import signal
import socket
import ssl
import sys
import threading
import time
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
# ── Logging ────────────────────────────────────────────────────────────────
os.makedirs("logs", exist_ok=True)
os.makedirs("data", exist_ok=True)
os.makedirs("config", exist_ok=True)
handler = logging.handlers.RotatingFileHandler(
"logs/bot.log", maxBytes=5 * 1024 * 1024, backupCount=3, encoding="utf-8"
)
handler.setFormatter(
logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
)
logging.basicConfig(
level=logging.INFO,
handlers=[handler, logging.StreamHandler(sys.stdout)],
)
logger = logging.getLogger(__name__)
from bot import memory as mem
from bot.message_handler import handle_privmsg
# ── Config ─────────────────────────────────────────────────────────────────
CONFIG_PATH = "config/config.json"
PID_PATH = "data/ircbot.pid"
SOCK_PATH = "data/ircbot.sock"
_config: dict = {}
_config_lock = threading.Lock()
# Runtime state
_sock: socket.socket | None = None
_connected = False
_session_msg_count = 0
_status = "disconnected" # disconnected | connecting | connected | reconnecting
def _load_config() -> dict:
defaults = {
"channels": [],
"trigger_on_nick": True,
"trigger_prefix": None,
"ignored_nicks": ["ChanServ", "NickServ"],
"bot_nick": os.getenv("BOT_NICK", "avcbot"),
"system_prompt": "You are a helpful IRC assistant for Active Blue. Keep responses concise and under 3 sentences when possible.",
"max_response_length": 400,
"ollama_host": os.getenv("OLLAMA_HOST", "192.168.2.10"),
"ollama_port": int(os.getenv("OLLAMA_PORT", 11434)),
"ollama_model": os.getenv("OLLAMA_MODEL", "llama3.1"),
"ollama_temperature": 0.7,
"ollama_num_predict": 120,
"ollama_num_ctx": 2048,
"response_timeout_seconds": 30,
"context_window": 5,
"memory_enabled": True,
"memory_history_limit": 8,
"memory_max_age_days": 90,
"log_level": "INFO",
}
if os.path.exists(CONFIG_PATH):
try:
with open(CONFIG_PATH, "r") as f:
file_cfg = json.load(f)
defaults.update(file_cfg)
logger.info("[CONFIG] Loaded config.json")
except Exception as e:
logger.error(f"[CONFIG] Failed to load config.json: {e}")
return defaults
def _reload_config() -> None:
global _config
new_cfg = _load_config()
with _config_lock:
_config = new_cfg
level = logging.getLevelName(_config.get("log_level", "INFO"))
logging.getLogger().setLevel(level)
logger.info("[CONFIG] Reloaded")
def get_config() -> dict:
with _config_lock:
return dict(_config)
# ── PID + Unix socket ──────────────────────────────────────────────────────
def _write_pid() -> None:
with open(PID_PATH, "w") as f:
f.write(str(os.getpid()))
def _remove_pid() -> None:
try:
os.remove(PID_PATH)
except FileNotFoundError:
pass
def _start_sock_listener() -> None:
"""Listens for RELOAD command on Unix socket (used by portal in Docker)."""
if sys.platform == "win32":
return # Unix sockets not supported on Windows
try:
if os.path.exists(SOCK_PATH):
os.remove(SOCK_PATH)
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
srv.listen(5)
srv.settimeout(1)
os.chmod(SOCK_PATH, 0o660)
logger.info(f"[CONFIG] Unix socket listening at {SOCK_PATH}")
def _loop():
while True:
try:
conn, _ = srv.accept()
data = conn.recv(64).decode().strip()
conn.close()
if data == "RELOAD":
_reload_config()
elif data == "RECONNECT":
_trigger_reconnect()
except socket.timeout:
continue
except Exception as e:
logger.error(f"[CONFIG] Socket error: {e}")
t = threading.Thread(target=_loop, daemon=True)
t.start()
except Exception as e:
logger.warning(f"[CONFIG] Could not start Unix socket: {e}")
# ── SIGHUP handler (non-Docker) ────────────────────────────────────────────
_reconnect_flag = threading.Event()
def _trigger_reconnect() -> None:
_reconnect_flag.set()
if sys.platform != "win32":
signal.signal(signal.SIGHUP, lambda s, f: _reload_config())
# ── IRC helpers ────────────────────────────────────────────────────────────
def _send(sock: socket.socket, line: str) -> None:
logger.debug(f"IRC OUT: {line}")
sock.sendall((line + "\r\n").encode("utf-8", errors="replace"))
def _join_channels(sock: socket.socket, channels: list[str]) -> None:
for ch in channels:
_send(sock, f"JOIN {ch}")
logger.info(f"[IRC] Joining {ch}")
PLAYBACK_RE = re.compile(r"^\[\d{2}:\d{2}:\d{2}\] ")
def _is_playback(text: str) -> bool:
return bool(PLAYBACK_RE.match(text))
def _parse_privmsg(line: str) -> tuple[str, str, str] | None:
"""Returns (nick, channel, text) or None."""
m = re.match(r"^:([^!]+)![^ ]+ PRIVMSG (#\S+) :(.+)$", line)
if m:
return m.group(1), m.group(2), m.group(3)
return None
# ── Connection ─────────────────────────────────────────────────────────────
def _connect() -> socket.socket:
global _status
host = os.getenv("ZNC_HOST", "ham.activeblue.net")
port = int(os.getenv("ZNC_PORT", 6501))
use_ssl = os.getenv("ZNC_SSL", "true").lower() == "true"
znc_user = os.getenv("ZNC_USER", "")
znc_password = os.getenv("ZNC_PASSWORD", "")
znc_network = os.getenv("ZNC_NETWORK", "activeblue")
bot_nick = get_config().get("bot_nick", os.getenv("BOT_NICK", "avcbot"))
bot_realname = os.getenv("BOT_REALNAME", "Active Blue IRC Bot")
_status = "connecting"
logger.info(f"[IRC] Connecting to {host}:{port} (SSL={use_ssl})")
raw = socket.create_connection((host, port), timeout=30)
if use_ssl:
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
sock = ctx.wrap_socket(raw, server_hostname=host)
else:
sock = raw
    # IRC registration: PASS must be sent before NICK/USER completes registration
    _send(sock, f"PASS {znc_user}/{znc_network}:{znc_password}")
    _send(sock, f"NICK {bot_nick}")
    _send(sock, f"USER {bot_nick} 0 * :{bot_realname}")
return sock
# ── Main message loop ──────────────────────────────────────────────────────
def _run_loop(sock: socket.socket) -> None:
global _connected, _status, _session_msg_count
buf = ""
_connected = True
_status = "connected"
while True:
if _reconnect_flag.is_set():
_reconnect_flag.clear()
raise ConnectionResetError("Reconnect triggered")
try:
sock.settimeout(1)
chunk = sock.recv(4096).decode("utf-8", errors="replace")
except socket.timeout:
continue
except Exception:
raise
if not chunk:
raise ConnectionResetError("Remote closed connection")
buf += chunk
while "\r\n" in buf:
line, buf = buf.split("\r\n", 1)
# Strip IRCv3 server-time tag (clientbuffer playback)
is_tagged_playback = False
if line.startswith("@time="):
is_tagged_playback = True
line = re.sub(r"^@[^ ]+ ", "", line)
logger.debug(f"IRC IN: {line}")
if line.startswith("PING"):
_send(sock, "PONG" + line[4:])
continue
if " 001 " in line:
logger.info("[IRC] Connected — joining channels")
cfg = get_config()
_join_channels(sock, cfg.get("channels", []))
continue
if " 433 " in line:
bot_nick = get_config().get("bot_nick", "avcbot")
_send(sock, f"NICK {bot_nick}_")
logger.warning("[IRC] Nick in use, trying alternate")
continue
parsed = _parse_privmsg(line)
if not parsed:
continue
nick, channel, text = parsed
if is_tagged_playback or _is_playback(text):
# Add to context buffer but don't send to LLM
from bot.message_handler import _get_context
cfg = get_config()
ctx = _get_context(channel, cfg.get("context_window", 5))
ctx.append(f"<{nick}> {text}")
continue
cfg = get_config()
_session_msg_count += 1
reply = handle_privmsg(nick, channel, text, cfg)
if reply:
_send(sock, f"PRIVMSG {channel} :{reply}")
logger.info(f"IRC OUT: PRIVMSG {channel} :{reply[:80]}")
def get_status() -> dict:
cfg = get_config()
return {
"status": _status,
"nick": cfg.get("bot_nick", "avcbot"),
"znc_host": os.getenv("ZNC_HOST", "ham.activeblue.net"),
"znc_port": os.getenv("ZNC_PORT", "6501"),
"znc_network": os.getenv("ZNC_NETWORK", "activeblue"),
"ollama_host": cfg.get("ollama_host"),
"ollama_port": cfg.get("ollama_port"),
"ollama_model": cfg.get("ollama_model"),
"channels": cfg.get("channels", []),
"session_msg_count": _session_msg_count,
}
def send_raw(line: str) -> None:
global _sock
if _sock and _connected:
_send(_sock, line)
# ── Entry point ────────────────────────────────────────────────────────────
def main() -> None:
global _sock, _connected, _status
_reload_config()
cfg = get_config()
mem.prune_old_exchanges(cfg.get("memory_max_age_days", 90))
_write_pid()
_start_sock_listener()
backoff = [5, 10, 30, 60, 120, 300]
attempt = 0
while True:
try:
_sock = _connect()
attempt = 0
_run_loop(_sock)
except (ConnectionResetError, ConnectionRefusedError, OSError) as e:
_connected = False
_status = "reconnecting"
logger.warning(f"[IRC] Disconnected: {e}")
except Exception as e:
_connected = False
_status = "reconnecting"
logger.error(f"[IRC] Unexpected error: {e}", exc_info=True)
finally:
if _sock:
try:
_sock.close()
except Exception:
pass
_sock = None
delay = backoff[min(attempt, len(backoff) - 1)]
attempt += 1
logger.info(f"[IRC] Reconnecting in {delay}s (attempt {attempt})")
time.sleep(delay)
if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        logger.info("[IRC] Shutting down")
    finally:
        _remove_pid()  # always clean up the PID file, not only on Ctrl-C
70
bot/llm_client.py Normal file
@@ -0,0 +1,70 @@
import httpx
import logging
logger = logging.getLogger(__name__)
def generate(prompt: str, system: str, config: dict) -> str:
host = config.get("ollama_host", "192.168.2.10")
port = config.get("ollama_port", 11434)
model = config.get("ollama_model", "llama3.1")
timeout = config.get("response_timeout_seconds", 30)
num_predict = config.get("ollama_num_predict", 120)
num_ctx = config.get("ollama_num_ctx", 2048)
temperature = config.get("ollama_temperature", 0.7)
max_length = config.get("max_response_length", 400)
url = f"http://{host}:{port}/api/generate"
payload = {
"model": model,
"system": system,
"prompt": prompt,
"stream": False,
"options": {
"temperature": temperature,
"num_predict": num_predict,
"num_ctx": num_ctx,
},
}
logger.debug(f"[LLM] POST {url} model={model} prompt_len={len(prompt)}")
try:
response = httpx.post(url, json=payload, timeout=timeout)
response.raise_for_status()
text = response.json().get("response", "").strip()
        if len(text) > max_length:
            text = text[:max_length].rsplit(" ", 1)[0] + "…"  # truncate at a word boundary
logger.debug(f"[LLM] Response ({len(text)} chars): {text[:80]}")
return text
except httpx.TimeoutException:
logger.error(f"[LLM] Timeout after {timeout}s")
raise TimeoutError(f"Ollama did not respond within {timeout}s")
except Exception as e:
logger.error(f"[LLM] Request failed: {e}")
raise
def build_prompt(
user_message: str,
nick: str,
persistent_history: list[dict],
context_buffer: list[str],
) -> str:
parts = []
if persistent_history:
parts.append("--- Past conversation with this user ---")
for ex in persistent_history:
parts.append(f"User: {ex['user']}")
parts.append(f"Assistant: {ex['assistant']}")
parts.append("--- End of past conversation ---")
if context_buffer:
parts.append("--- Recent channel activity ---")
parts.extend(context_buffer)
parts.append("--- End of channel activity ---")
parts.append(f"{nick} asks: {user_message}")
return "\n".join(parts)
164
bot/memory.py Normal file
@@ -0,0 +1,164 @@
import sqlite3
import os
import logging
import re
logger = logging.getLogger(__name__)
HISTORY_DIR = "data/history"
def _sanitize_channel(channel: str) -> str:
name = channel.lstrip("#")
name = re.sub(r"[#&+!]", "_", name)
return name
def _db_path(channel: str, nick: str) -> str:
chan_dir = os.path.join(HISTORY_DIR, _sanitize_channel(channel))
os.makedirs(chan_dir, exist_ok=True)
return os.path.join(chan_dir, f"{nick}.db")
def _get_conn(path: str) -> sqlite3.Connection:
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL;")
conn.execute("""
CREATE TABLE IF NOT EXISTS exchanges (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
user_input TEXT NOT NULL,
bot_reply TEXT NOT NULL
)
""")
conn.commit()
return conn
def load_history(channel: str, nick: str, limit: int) -> list[dict]:
path = _db_path(channel, nick)
if not os.path.exists(path):
return []
try:
conn = _get_conn(path)
cursor = conn.execute(
"SELECT user_input, bot_reply FROM exchanges ORDER BY id DESC LIMIT ?",
(limit,),
)
rows = cursor.fetchall()
conn.close()
return [{"user": r[0], "assistant": r[1]} for r in reversed(rows)]
except Exception as e:
logger.error(f"[MEMORY] Failed to load history for {nick} in {channel}: {e}")
return []
def save_exchange(channel: str, nick: str, user_input: str, bot_reply: str) -> None:
path = _db_path(channel, nick)
try:
conn = _get_conn(path)
conn.execute(
"INSERT INTO exchanges (user_input, bot_reply) VALUES (?, ?)",
(user_input, bot_reply),
)
conn.commit()
conn.close()
except Exception as e:
logger.error(f"[MEMORY] Failed to write exchange for {nick} in {channel}: {e}")
def delete_user_history(channel: str, nick: str) -> None:
path = _db_path(channel, nick)
if os.path.exists(path):
os.remove(path)
logger.info(f"[MEMORY] Deleted history for {nick} in {channel}")
def delete_channel_history(channel: str) -> None:
chan_dir = os.path.join(HISTORY_DIR, _sanitize_channel(channel))
if os.path.isdir(chan_dir):
for f in os.listdir(chan_dir):
if f.endswith(".db"):
os.remove(os.path.join(chan_dir, f))
logger.info(f"[MEMORY] Cleared all history for {channel}")
def delete_all_history() -> None:
for root, dirs, files in os.walk(HISTORY_DIR):
for f in files:
if f.endswith(".db"):
os.remove(os.path.join(root, f))
logger.info("[MEMORY] All history deleted")
def prune_old_exchanges(max_age_days: int) -> None:
if max_age_days <= 0:
return
pruned = 0
for root, dirs, files in os.walk(HISTORY_DIR):
for f in files:
if not f.endswith(".db"):
continue
path = os.path.join(root, f)
try:
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL;")
cursor = conn.execute(
"DELETE FROM exchanges WHERE timestamp < datetime('now', ?)",
(f"-{max_age_days} days",),
)
pruned += cursor.rowcount
conn.commit()
conn.close()
except Exception as e:
logger.error(f"[MEMORY] Pruning failed for {path}: {e}")
if pruned:
logger.info(f"[MEMORY] Pruned {pruned} old exchanges (>{max_age_days} days)")
def list_channels() -> list[str]:
if not os.path.isdir(HISTORY_DIR):
return []
return [d for d in os.listdir(HISTORY_DIR) if os.path.isdir(os.path.join(HISTORY_DIR, d))]
def list_nicks(channel_dir: str) -> list[str]:
path = os.path.join(HISTORY_DIR, channel_dir)
if not os.path.isdir(path):
return []
return [f[:-3] for f in os.listdir(path) if f.endswith(".db")]
def get_all_exchanges(channel_dir: str, nick: str) -> list[dict]:
path = os.path.join(HISTORY_DIR, channel_dir, f"{nick}.db")
if not os.path.exists(path):
return []
try:
conn = sqlite3.connect(path)
cursor = conn.execute(
"SELECT id, timestamp, user_input, bot_reply FROM exchanges ORDER BY id ASC"
)
rows = cursor.fetchall()
conn.close()
return [{"id": r[0], "timestamp": r[1], "user": r[2], "assistant": r[3]} for r in rows]
except Exception as e:
logger.error(f"[MEMORY] Failed to read all exchanges: {e}")
return []
def get_stats() -> dict:
total = 0
total_size = 0
for root, dirs, files in os.walk(HISTORY_DIR):
for f in files:
if f.endswith(".db"):
path = os.path.join(root, f)
total_size += os.path.getsize(path)
try:
conn = sqlite3.connect(path)
row = conn.execute("SELECT COUNT(*) FROM exchanges").fetchone()
total += row[0] if row else 0
conn.close()
except Exception:
pass
return {"total_exchanges": total, "total_size_bytes": total_size}
99
bot/message_handler.py Normal file
@@ -0,0 +1,99 @@
import logging
import re
from collections import deque
from bot import memory as mem
from bot import llm_client
logger = logging.getLogger(__name__)
# Per-channel rolling context buffer: {channel: deque}
_context_buffers: dict[str, deque] = {}
def _get_context(channel: str, window: int) -> deque:
    # Rebuild the deque only when context_window changed, so a config
    # reload takes effect without discarding buffered lines.
    buf = _context_buffers.get(channel)
    if buf is None or buf.maxlen != window:
        _context_buffers[channel] = deque(buf or (), maxlen=window)
    return _context_buffers[channel]
def handle_privmsg(nick: str, channel: str, text: str, config: dict) -> str | None:
"""
Returns a reply string if the bot should respond, else None.
Also maintains the context buffer as a side effect.
"""
window = config.get("context_window", 5)
ctx = _get_context(channel, window)
ignored = [n.lower() for n in config.get("ignored_nicks", [])]
if nick.lower() in ignored:
return None
bot_nick = config.get("bot_nick", "avcbot").lower()
trigger_prefix = config.get("trigger_prefix")
trigger_on_nick = config.get("trigger_on_nick", True)
# Detect "forget me" command before trigger check
forget_pattern = re.compile(
rf"^{re.escape(bot_nick)}\s*[:,]\s*forget\s+me\s*$", re.IGNORECASE
)
if forget_pattern.match(text.strip()):
mem.delete_user_history(channel, nick)
logger.info(f"[MEMORY] Forgot history for {nick} in {channel}")
return f"{nick}: Done, I've cleared your history."
# Determine if triggered
stripped = None
if trigger_on_nick:
nick_pattern = re.compile(
rf"^{re.escape(bot_nick)}\s*[:,]\s*", re.IGNORECASE
)
m = nick_pattern.match(text)
if m:
stripped = text[m.end():].strip()
if stripped is None and trigger_prefix:
if text.startswith(trigger_prefix):
stripped = text[len(trigger_prefix):].strip()
# Add to context buffer regardless
ctx.append(f"<{nick}> {text}")
if stripped is None:
return None
# Build and send to LLM
history = []
if config.get("memory_enabled", True):
limit = config.get("memory_history_limit", 8)
history = mem.load_history(channel, nick, limit)
prompt = llm_client.build_prompt(
user_message=stripped,
nick=nick,
persistent_history=history,
context_buffer=list(ctx)[:-1], # exclude the current message already in buffer
)
system = config.get(
"system_prompt",
"You are a helpful IRC assistant for Active Blue. Keep responses concise and under 3 sentences when possible.",
)
logger.info(f"[LLM] Request from {nick} in {channel}: {stripped[:80]}")
try:
reply = llm_client.generate(prompt, system, config)
except TimeoutError:
return f"{nick}: [LLM timeout — try again]"
except Exception as e:
logger.error(f"[LLM] Generation error: {e}")
return f"{nick}: [LLM error — check logs]"
if config.get("memory_enabled", True):
mem.save_exchange(channel, nick, stripped, reply)
return f"{nick}: {reply}"
21
config/config.json Normal file
@@ -0,0 +1,21 @@
{
"channels": ["#general", "#support"],
"trigger_on_nick": true,
"trigger_prefix": null,
"ignored_nicks": ["ChanServ", "NickServ"],
"bot_nick": "avcbot",
"system_prompt": "You are a helpful IRC assistant for Active Blue. Keep responses concise and under 3 sentences when possible.",
"max_response_length": 400,
"ollama_host": "192.168.2.10",
"ollama_port": 11434,
"ollama_model": "llama3.1",
"ollama_temperature": 0.7,
"ollama_num_predict": 120,
"ollama_num_ctx": 2048,
"response_timeout_seconds": 30,
"context_window": 5,
"memory_enabled": true,
"memory_history_limit": 8,
"memory_max_age_days": 90,
"log_level": "INFO"
}
37
docker-compose.yml Normal file
@@ -0,0 +1,37 @@
version: "3.9"
services:
irc-bot:
build: .
container_name: irc-bot
restart: unless-stopped
command: python -m bot.irc_client
env_file: .env
volumes:
- ./config:/app/config
- ./logs:/app/logs
- ./data:/app/data
networks:
- botnet
portal:
build: .
container_name: irc-bot-portal
restart: unless-stopped
command: python -m portal.app
env_file: .env
ports:
- "${PORTAL_PORT:-8080}:8080"
volumes:
- ./config:/app/config
- ./logs:/app/logs
- ./data:/app/data
networks:
- botnet
depends_on:
- irc-bot
networks:
botnet:
driver: bridge
0
portal/__init__.py Normal file
291
portal/app.py Normal file
@@ -0,0 +1,291 @@
import json
import logging
import os
import sys
from dotenv import load_dotenv
from flask import Flask, abort, jsonify, redirect, render_template, request, url_for, send_file
import io
load_dotenv()
app = Flask(__name__, template_folder="templates", static_folder="static")
app.secret_key = os.getenv("PORTAL_SECRET_KEY", "changeme")
@app.template_filter("log_class")
def log_class_filter(line: str) -> str:
if "IRC IN:" in line: return "log-irc-in"
if "IRC OUT:" in line: return "log-irc-out"
if "[LLM]" in line: return "log-llm"
if "[MEMORY]" in line: return "log-memory"
if "[CONFIG]" in line: return "log-config"
if "ERROR" in line: return "log-error"
return ""
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
from portal import config_manager as cm
from bot import memory as mem
# ── Dashboard ──────────────────────────────────────────────────────────────
@app.route("/")
def index():
cfg = cm.load_config()
pid_exists = os.path.exists("data/ircbot.pid")
sock_exists = os.path.exists("data/ircbot.sock")
return render_template("index.html", cfg=cfg, pid_exists=pid_exists, sock_exists=sock_exists)
@app.route("/api/status")
def api_status():
cfg = cm.load_config()
pid_exists = os.path.exists("data/ircbot.pid")
return jsonify({
"bot_running": pid_exists,
"channels": cfg.get("channels", []),
"ollama_model": cfg.get("ollama_model"),
"bot_nick": cfg.get("bot_nick"),
})
@app.route("/action/reload", methods=["POST"])
def action_reload():
cm.signal_bot_reload()
return redirect(url_for("index"))
@app.route("/action/reconnect", methods=["POST"])
def action_reconnect():
cm.signal_bot_reconnect()
return redirect(url_for("index"))
@app.route("/action/clear_log", methods=["POST"])
def action_clear_log():
try:
open("logs/bot.log", "w").close()
except Exception:
pass
return redirect(url_for("logs"))
# ── Channels ───────────────────────────────────────────────────────────────
@app.route("/channels")
def channels():
cfg = cm.load_config()
return render_template("channels.html", cfg=cfg)
@app.route("/channels/add", methods=["POST"])
def channel_add():
    ch = request.form.get("channel", "").strip()
    if not ch:
        return redirect(url_for("channels"))  # ignore empty submissions
    if not ch.startswith("#"):
        ch = "#" + ch
cfg = cm.load_config()
if ch not in cfg.get("channels", []):
cfg.setdefault("channels", []).append(ch)
cm.save_config(cfg)
cm.signal_bot_reload()
return redirect(url_for("channels"))
@app.route("/channels/remove", methods=["POST"])
def channel_remove():
ch = request.form.get("channel", "").strip()
cfg = cm.load_config()
channels_list = cfg.get("channels", [])
if ch in channels_list:
channels_list.remove(ch)
cfg["channels"] = channels_list
cm.save_config(cfg)
cm.signal_bot_reload()
return redirect(url_for("channels"))
# ── LLM Settings ───────────────────────────────────────────────────────────
@app.route("/llm", methods=["GET", "POST"])
def llm():
cfg = cm.load_config()
errors = []
if request.method == "POST":
try:
cfg["ollama_host"] = request.form["ollama_host"].strip()
cfg["ollama_port"] = int(request.form["ollama_port"])
cfg["ollama_model"] = request.form["ollama_model"].strip()
cfg["system_prompt"] = request.form["system_prompt"]
cfg["max_response_length"] = int(request.form["max_response_length"])
cfg["ollama_num_predict"] = int(request.form["ollama_num_predict"])
cfg["ollama_num_ctx"] = int(request.form["ollama_num_ctx"])
cfg["response_timeout_seconds"] = int(request.form["response_timeout_seconds"])
cfg["context_window"] = int(request.form["context_window"])
cfg["ollama_temperature"] = float(request.form["ollama_temperature"])
cfg["memory_enabled"] = "memory_enabled" in request.form
cfg["memory_history_limit"] = int(request.form["memory_history_limit"])
cfg["memory_max_age_days"] = int(request.form["memory_max_age_days"])
cm.save_config(cfg)
cm.signal_bot_reload()
except (ValueError, KeyError) as e:
errors.append(str(e))
return render_template("llm.html", cfg=cfg, errors=errors)
# ── Bot Identity ────────────────────────────────────────────────────────────
@app.route("/bot", methods=["GET", "POST"])
def bot():
cfg = cm.load_config()
errors = []
if request.method == "POST":
cfg["bot_nick"] = request.form.get("bot_nick", "avcbot").strip()
bot_realname = request.form.get("bot_realname", "").strip()
if bot_realname:
cfg["bot_realname"] = bot_realname
cfg["trigger_on_nick"] = "trigger_on_nick" in request.form
prefix = request.form.get("trigger_prefix", "").strip()
cfg["trigger_prefix"] = prefix if prefix else None
ignored = request.form.get("ignored_nicks", "")
cfg["ignored_nicks"] = [n.strip() for n in ignored.split(",") if n.strip()]
cm.save_config(cfg)
cm.signal_bot_reload()
return render_template("bot.html", cfg=cfg, errors=errors)
# ── Logs ────────────────────────────────────────────────────────────────────
@app.route("/logs")
def logs():
lines = []
log_path = "logs/bot.log"
if os.path.exists(log_path):
with open(log_path, "r", encoding="utf-8", errors="replace") as f:
all_lines = f.readlines()
lines = all_lines[-200:]
return render_template("logs.html", lines=lines)
@app.route("/logs/download")
def logs_download():
log_path = "logs/bot.log"
if not os.path.exists(log_path):
abort(404)
return send_file(log_path, as_attachment=True, download_name="bot.log")
@app.route("/api/logs")
def api_logs():
lines = []
log_path = "logs/bot.log"
if os.path.exists(log_path):
with open(log_path, "r", encoding="utf-8", errors="replace") as f:
lines = f.readlines()[-200:]
return jsonify({"lines": [l.rstrip() for l in lines]})
# ── Memory ─────────────────────────────────────────────────────────────────
@app.route("/memory")
def memory():
channels_list = mem.list_channels()
selected_chan = request.args.get("channel")
selected_nick = request.args.get("nick")
nicks = []
exchanges = []
if selected_chan:
nicks = mem.list_nicks(selected_chan)
if selected_chan and selected_nick:
exchanges = mem.get_all_exchanges(selected_chan, selected_nick)
stats = mem.get_stats()
return render_template(
"memory.html",
channels=channels_list,
selected_chan=selected_chan,
nicks=nicks,
selected_nick=selected_nick,
exchanges=exchanges,
stats=stats,
)
@app.route("/memory/clear_user", methods=["POST"])
def memory_clear_user():
chan = request.form.get("channel_dir", "")
nick = request.form.get("nick", "")
if chan and nick:
mem.delete_user_history("#" + chan, nick)
return redirect(url_for("memory", channel=chan))
@app.route("/memory/clear_channel", methods=["POST"])
def memory_clear_channel():
chan = request.form.get("channel_dir", "")
if chan:
mem.delete_channel_history("#" + chan)
return redirect(url_for("memory"))
@app.route("/memory/clear_all", methods=["POST"])
def memory_clear_all():
confirm = request.form.get("confirm", "")
if confirm == "yes":
mem.delete_all_history()
return redirect(url_for("memory"))
# ── Config editor ───────────────────────────────────────────────────────────
@app.route("/config", methods=["GET", "POST"])
def config_editor():
error = None
success = None
if request.method == "POST":
raw = request.form.get("config_raw", "")
try:
parsed = json.loads(raw)
cm.save_config(parsed)
cm.signal_bot_reload()
success = "Config saved and bot signaled to reload."
except json.JSONDecodeError as e:
error = f"Invalid JSON: {e}"
    if request.method == "POST" and error:
        raw_json = raw  # re-show the user's input so the invalid JSON can be corrected
    else:
        raw_json = json.dumps(cm.load_config(), indent=2)
    return render_template("config.html", raw_json=raw_json, error=error, success=success)
@app.route("/config/download")
def config_download():
return send_file("config/config.json", as_attachment=True, download_name="config.json")
@app.route("/config/upload", methods=["POST"])
def config_upload():
f = request.files.get("config_file")
if not f:
return redirect(url_for("config_editor"))
try:
data = json.loads(f.read().decode())
cm.save_config(data)
cm.signal_bot_reload()
except Exception:
pass
return redirect(url_for("config_editor"))
# ── Run ─────────────────────────────────────────────────────────────────────
if __name__ == "__main__":
port = int(os.getenv("PORTAL_PORT", 8080))
app.run(host="0.0.0.0", port=port, debug=False)
68
portal/config_manager.py Normal file
@@ -0,0 +1,68 @@
import json
import logging
import os
import signal
import socket
import sys
logger = logging.getLogger(__name__)
CONFIG_PATH = "config/config.json"
PID_PATH = "data/ircbot.pid"
SOCK_PATH = "data/ircbot.sock"
def load_config() -> dict:
if not os.path.exists(CONFIG_PATH):
return {}
with open(CONFIG_PATH, "r") as f:
return json.load(f)
def save_config(cfg: dict) -> None:
os.makedirs("config", exist_ok=True)
with open(CONFIG_PATH, "w") as f:
json.dump(cfg, f, indent=2)
logger.info("[CONFIG] Saved config.json")
def signal_bot_reload() -> bool:
"""Signal bot to reload config. Returns True on success."""
# Try Unix socket first (Docker mode)
if sys.platform != "win32" and os.path.exists(SOCK_PATH):
try:
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCK_PATH)
s.sendall(b"RELOAD")
s.close()
logger.info("[CONFIG] Sent RELOAD via Unix socket")
return True
except Exception as e:
logger.warning(f"[CONFIG] Socket reload failed: {e}")
# Fall back to SIGHUP
if sys.platform != "win32" and os.path.exists(PID_PATH):
try:
with open(PID_PATH) as f:
pid = int(f.read().strip())
os.kill(pid, signal.SIGHUP)
logger.info(f"[CONFIG] Sent SIGHUP to PID {pid}")
return True
except Exception as e:
logger.warning(f"[CONFIG] SIGHUP failed: {e}")
logger.warning("[CONFIG] Could not signal bot (no socket or PID available)")
return False
def signal_bot_reconnect() -> bool:
if sys.platform != "win32" and os.path.exists(SOCK_PATH):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.settimeout(2.0)
                s.connect(SOCK_PATH)
                s.sendall(b"RECONNECT")
            return True
        except OSError as e:
            logger.warning(f"[CONFIG] Reconnect signal failed: {e}")
return False
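`signal_bot_reload` and `signal_bot_reconnect` assume the bot listens on `data/ircbot.sock` for one-word commands. The real handler lives in the bot process (an earlier hunk of this commit); a minimal sketch of what such a listener might look like, where `command_listener` and `dispatch` are hypothetical names, not the committed API:

```python
import os
import socket
import threading

SOCK_PATH = "data/ircbot.sock"

def command_listener(dispatch, sock_path=SOCK_PATH):
    """Accept one short command per connection and hand it to dispatch().

    Hypothetical bot-side counterpart to signal_bot_reload(); the bot's
    actual implementation is not part of this file.
    """
    if os.path.exists(sock_path):
        os.unlink(sock_path)  # remove a stale socket from a previous run
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)

    def loop():
        while True:
            conn, _ = server.accept()
            with conn:
                cmd = conn.recv(16).decode("ascii", "replace").strip()
                if cmd in ("RELOAD", "RECONNECT"):
                    dispatch(cmd)

    threading.Thread(target=loop, daemon=True).start()
    return server
```

One connection per command keeps the protocol trivial: the portal connects, sends its verb, and closes, so the listener never has to frame a stream.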

portal/static/app.js Normal file
@@ -0,0 +1,52 @@
// Auto-refresh for logs page
(function () {
const checkbox = document.getElementById("auto-refresh");
if (!checkbox) return;
let timer = null;
function refresh() {
fetch("/api/logs")
.then((r) => r.json())
.then((data) => {
const box = document.getElementById("log-box");
if (!box) return;
box.innerHTML = data.lines
.map((l) => `<div class="log-line ${logClass(l)}">${escHtml(l)}</div>`)
.join("");
box.scrollTop = box.scrollHeight;
})
.catch(() => {});
}
checkbox.addEventListener("change", () => {
if (checkbox.checked) {
refresh();
timer = setInterval(refresh, 3000);
} else {
clearInterval(timer);
timer = null;
}
});
// Scroll to bottom on load
const box = document.getElementById("log-box");
if (box) box.scrollTop = box.scrollHeight;
function logClass(line) {
if (line.includes("IRC IN:")) return "log-irc-in";
if (line.includes("IRC OUT:")) return "log-irc-out";
if (line.includes("[LLM]")) return "log-llm";
if (line.includes("[ERROR]") || line.includes("ERROR")) return "log-error";
if (line.includes("[CONFIG]")) return "log-config";
if (line.includes("[MEMORY]")) return "log-memory";
return "";
}
function escHtml(str) {
return str
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;");
}
})();

portal/static/style.css Normal file
@@ -0,0 +1,325 @@
:root {
--bg: #0f1117;
--sidebar-bg: #161b22;
--card-bg: #1c2128;
--border: #30363d;
--text: #c9d1d9;
--muted: #8b949e;
--accent: #58a6ff;
--danger: #f85149;
--success: #3fb950;
--warning: #d29922;
--font: -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif;
--mono: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, monospace;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
display: flex;
min-height: 100vh;
background: var(--bg);
color: var(--text);
font-family: var(--font);
font-size: 14px;
}
/* ── Sidebar ─────────────────────────────────────────────────────────── */
.sidebar {
width: 220px;
min-height: 100vh;
background: var(--sidebar-bg);
border-right: 1px solid var(--border);
display: flex;
flex-direction: column;
flex-shrink: 0;
}
.sidebar-header {
padding: 24px 16px 16px;
border-bottom: 1px solid var(--border);
}
.logo {
display: block;
font-size: 18px;
font-weight: 600;
color: var(--accent);
}
.subtitle {
display: block;
font-size: 11px;
color: var(--muted);
margin-top: 2px;
}
.nav-links {
list-style: none;
padding: 12px 0;
}
.nav-links li a {
display: block;
padding: 8px 16px;
color: var(--muted);
text-decoration: none;
border-left: 3px solid transparent;
transition: all 0.15s;
}
.nav-links li a:hover {
color: var(--text);
background: rgba(255,255,255,0.04);
}
.nav-links li a.active {
color: var(--accent);
border-left-color: var(--accent);
background: rgba(88,166,255,0.08);
}
/* ── Content ─────────────────────────────────────────────────────────── */
.content {
flex: 1;
padding: 32px 40px;
max-width: 1000px;
}
h1 {
font-size: 22px;
font-weight: 600;
margin-bottom: 24px;
color: var(--text);
}
h2 {
font-size: 15px;
font-weight: 600;
margin-bottom: 14px;
color: var(--muted);
text-transform: uppercase;
letter-spacing: 0.06em;
}
.section {
margin-bottom: 32px;
}
/* ── Cards ───────────────────────────────────────────────────────────── */
.cards {
display: flex;
flex-wrap: wrap;
gap: 12px;
margin-bottom: 32px;
}
.card {
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 8px;
padding: 16px 20px;
min-width: 140px;
}
.card-label {
font-size: 11px;
color: var(--muted);
text-transform: uppercase;
letter-spacing: 0.06em;
margin-bottom: 6px;
}
.card-value {
font-size: 20px;
font-weight: 600;
}
.status-online { color: var(--success); }
.status-offline { color: var(--danger); }
/* ── Tables ──────────────────────────────────────────────────────────── */
.info-table, .data-table {
width: 100%;
border-collapse: collapse;
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 8px;
overflow: hidden;
}
.info-table td, .data-table td, .data-table th {
padding: 10px 14px;
border-bottom: 1px solid var(--border);
text-align: left;
}
.info-table tr:last-child td,
.data-table tr:last-child td { border-bottom: none; }
.info-table td:first-child,
.data-table th { color: var(--muted); font-size: 12px; text-transform: uppercase; }
.data-table thead { background: rgba(255,255,255,0.03); }
/* ── Buttons ─────────────────────────────────────────────────────────── */
.btn {
display: inline-block;
padding: 7px 16px;
border: 1px solid var(--border);
border-radius: 6px;
background: var(--card-bg);
color: var(--text);
font-size: 13px;
cursor: pointer;
text-decoration: none;
transition: background 0.15s, border-color 0.15s;
}
.btn:hover { background: rgba(255,255,255,0.06); border-color: var(--muted); }
.btn-primary {
background: var(--accent);
border-color: var(--accent);
color: #000;
font-weight: 600;
}
.btn-primary:hover { background: #79b8ff; border-color: #79b8ff; }
.btn-secondary { color: var(--text); }  /* referenced by templates; base .btn supplies the rest */
.btn-danger { color: var(--danger); border-color: var(--danger); }
.btn-danger:hover { background: rgba(248,81,73,0.12); }
.btn-sm { padding: 4px 10px; font-size: 12px; }
.actions {
display: flex;
gap: 10px;
flex-wrap: wrap;
align-items: center;
}
/* ── Forms ───────────────────────────────────────────────────────────── */
.settings-form .field-row {
display: grid;
grid-template-columns: 200px 1fr;
gap: 12px;
align-items: start;
margin-bottom: 14px;
padding-bottom: 14px;
border-bottom: 1px solid var(--border);
}
.settings-form label {
padding-top: 7px;
color: var(--muted);
font-size: 13px;
}
.settings-form input[type="text"],
.settings-form input[type="number"],
.settings-form textarea,
.settings-form select {
width: 100%;
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 6px;
color: var(--text);
padding: 7px 10px;
font-size: 13px;
font-family: var(--font);
}
.settings-form textarea { resize: vertical; font-family: var(--font); }
.settings-form textarea.mono { font-family: var(--mono); font-size: 12px; }
.settings-form input[type="checkbox"] {
width: 16px; height: 16px; margin-top: 8px; accent-color: var(--accent);
}
.hint {
font-size: 11px;
color: var(--muted);
grid-column: 2;
margin-top: -8px;
}
.inline-form {
display: flex;
gap: 8px;
align-items: center;
margin-bottom: 16px;
}
.inline-form input[type="text"],
.inline-form select {
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 6px;
color: var(--text);
padding: 7px 10px;
font-size: 13px;
}
/* ── Alerts ──────────────────────────────────────────────────────────── */
.alert {
padding: 10px 14px;
border-radius: 6px;
margin-bottom: 16px;
font-size: 13px;
}
.alert-error { background: rgba(248,81,73,0.12); border: 1px solid var(--danger); color: var(--danger); }
.alert-success { background: rgba(63,185,80,0.12); border: 1px solid var(--success); color: var(--success); }
/* ── Logs ────────────────────────────────────────────────────────────── */
.log-toolbar {
display: flex;
gap: 12px;
align-items: center;
margin-bottom: 12px;
}
.log-box {
background: #0d1117;
border: 1px solid var(--border);
border-radius: 8px;
padding: 12px;
font-family: var(--mono);
font-size: 12px;
line-height: 1.6;
max-height: 600px;
overflow-y: auto;
}
.log-line { padding: 1px 0; }
.log-irc-in { color: #79b8ff; }
.log-irc-out { color: #85e89d; }
.log-llm { color: #ffab70; }
.log-error { color: var(--danger); }
.log-config { color: #b392f0; }
.log-memory { color: #f97583; }
/* ── Memory exchanges ────────────────────────────────────────────────── */
.exchange-list { margin-top: 16px; }
.exchange {
background: var(--card-bg);
border: 1px solid var(--border);
border-radius: 8px;
padding: 12px 16px;
margin-bottom: 10px;
}
.exchange-meta { font-size: 11px; color: var(--muted); margin-bottom: 6px; }
.exchange-user { margin-bottom: 4px; }
.exchange-bot { color: var(--muted); }
/* ── Misc ────────────────────────────────────────────────────────────── */
.muted { color: var(--muted); font-size: 13px; }
code {
background: rgba(255,255,255,0.06);
padding: 1px 5px;
border-radius: 4px;
font-family: var(--mono);
font-size: 12px;
}

portal/templates/base.html Normal file
@@ -0,0 +1,30 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{% block title %}IRC Bot Portal{% endblock %}</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
<nav class="sidebar">
<div class="sidebar-header">
<span class="logo">&#9656; avcbot</span>
<span class="subtitle">Admin Portal</span>
</div>
<ul class="nav-links">
<li><a href="{{ url_for('index') }}" {% if request.endpoint == 'index' %}class="active"{% endif %}>&#8962; Dashboard</a></li>
<li><a href="{{ url_for('channels') }}" {% if request.endpoint == 'channels' %}class="active"{% endif %}>&#8801; Channels</a></li>
<li><a href="{{ url_for('llm') }}" {% if request.endpoint == 'llm' %}class="active"{% endif %}>&#9881; LLM Settings</a></li>
<li><a href="{{ url_for('bot') }}" {% if request.endpoint == 'bot' %}class="active"{% endif %}>&#9775; Bot Identity</a></li>
<li><a href="{{ url_for('logs') }}" {% if request.endpoint == 'logs' %}class="active"{% endif %}>&#9112; Logs</a></li>
<li><a href="{{ url_for('memory') }}" {% if request.endpoint == 'memory' %}class="active"{% endif %}>&#9783; Memory</a></li>
<li><a href="{{ url_for('config_editor') }}" {% if request.endpoint == 'config_editor' %}class="active"{% endif %}>&#9998; Raw Config</a></li>
</ul>
</nav>
<main class="content">
{% block content %}{% endblock %}
</main>
<script src="{{ url_for('static', filename='app.js') }}"></script>
</body>
</html>

portal/templates/bot.html Normal file
@@ -0,0 +1,52 @@
{% extends "base.html" %}
{% block title %}Bot Identity — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Bot Identity</h1>
{% for e in errors %}
<div class="alert alert-error">{{ e }}</div>
{% endfor %}
<form method="post" class="settings-form">
<div class="section">
<h2>Identity</h2>
<div class="field-row">
<label>Bot Nick</label>
<input type="text" name="bot_nick" value="{{ cfg.get('bot_nick', 'avcbot') }}" required>
<span class="hint">Changing sends a live NICK command.</span>
</div>
<div class="field-row">
<label>Real Name</label>
<input type="text" name="bot_realname" value="{{ cfg.get('bot_realname', 'Active Blue IRC Bot') }}">
<span class="hint">Requires reconnect to take effect.</span>
</div>
</div>
<div class="section">
<h2>Trigger Settings</h2>
<div class="field-row">
<label>Trigger on Nick Mention</label>
<input type="checkbox" name="trigger_on_nick" {% if cfg.get('trigger_on_nick', True) %}checked{% endif %}>
<span class="hint">Respond when someone says <code>avcbot: ...</code></span>
</div>
<div class="field-row">
<label>Trigger Prefix</label>
<input type="text" name="trigger_prefix" value="{{ cfg.get('trigger_prefix') or '' }}" placeholder="e.g. !ask">
<span class="hint">Leave blank to disable prefix trigger.</span>
</div>
</div>
<div class="section">
<h2>Ignored Nicks</h2>
<div class="field-row">
<label>Ignored Nicks</label>
<input type="text" name="ignored_nicks" value="{{ cfg.get('ignored_nicks', [])|join(', ') }}" placeholder="ChanServ, NickServ">
<span class="hint">Comma-separated. Bot never responds to these nicks.</span>
</div>
</div>
<div class="actions">
<button type="submit" class="btn btn-primary">Save & Apply</button>
</div>
</form>
{% endblock %}

@@ -0,0 +1,39 @@
{% extends "base.html" %}
{% block title %}Channels — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Channel Management</h1>
<div class="section">
<h2>Add Channel</h2>
<form method="post" action="{{ url_for('channel_add') }}" class="inline-form">
<input type="text" name="channel" placeholder="#general" required>
<button class="btn btn-primary">Join & Save</button>
</form>
</div>
<div class="section">
<h2>Joined Channels</h2>
{% if cfg.get('channels') %}
<table class="data-table">
<thead>
<tr><th>Channel</th><th>Action</th></tr>
</thead>
<tbody>
{% for ch in cfg['channels'] %}
<tr>
<td>{{ ch }}</td>
<td>
<form method="post" action="{{ url_for('channel_remove') }}" style="display:inline">
<input type="hidden" name="channel" value="{{ ch }}">
<button class="btn btn-danger btn-sm">Part</button>
</form>
</td>
</tr>
{% endfor %}
</tbody>
</table>
{% else %}
<p class="muted">No channels configured.</p>
{% endif %}
</div>
{% endblock %}

@@ -0,0 +1,29 @@
{% extends "base.html" %}
{% block title %}Raw Config — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Raw Config Editor</h1>
{% if error %}
<div class="alert alert-error">{{ error }}</div>
{% endif %}
{% if success %}
<div class="alert alert-success">{{ success }}</div>
{% endif %}
<form method="post" class="settings-form">
<div class="section">
<div class="field-row">
<label>config.json</label>
<textarea name="config_raw" rows="30" class="mono">{{ raw_json }}</textarea>
</div>
</div>
  <div class="actions">
    <button type="submit" class="btn btn-primary">Save & Reload Bot</button>
    <a href="{{ url_for('config_download') }}" class="btn btn-secondary">Download</a>
  </div>
</form>
<!-- Upload lives outside the editor form: nested forms are invalid HTML and break submission -->
<form method="post" action="{{ url_for('config_upload') }}" enctype="multipart/form-data" class="inline-form">
  <input type="file" name="config_file" accept=".json" required>
  <button type="submit" class="btn btn-secondary">Upload</button>
</form>
{% endblock %}

@@ -0,0 +1,62 @@
{% extends "base.html" %}
{% block title %}Dashboard — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Dashboard</h1>
<div class="cards">
<div class="card">
<div class="card-label">Bot Status</div>
<div class="card-value status-{{ 'online' if pid_exists else 'offline' }}">
{{ 'Running' if pid_exists else 'Stopped' }}
</div>
</div>
<div class="card">
<div class="card-label">Nick</div>
<div class="card-value">{{ cfg.get('bot_nick', 'avcbot') }}</div>
</div>
<div class="card">
<div class="card-label">Ollama Model</div>
<div class="card-value">{{ cfg.get('ollama_model', '—') }}</div>
</div>
<div class="card">
<div class="card-label">Channels</div>
<div class="card-value">{{ cfg.get('channels', [])|length }}</div>
</div>
<div class="card">
<div class="card-label">Memory</div>
<div class="card-value">{{ 'Enabled' if cfg.get('memory_enabled', True) else 'Disabled' }}</div>
</div>
<div class="card">
<div class="card-label">Reload Socket</div>
<div class="card-value status-{{ 'online' if sock_exists else 'offline' }}">
{{ 'Available' if sock_exists else 'Not available' }}
</div>
</div>
</div>
<div class="section">
<h2>Connection</h2>
<table class="info-table">
<tr><td>ZNC Host</td><td>{{ cfg.get('znc_host', 'ham.activeblue.net') }}</td></tr>
<tr><td>Ollama</td><td>{{ cfg.get('ollama_host', '—') }}:{{ cfg.get('ollama_port', '—') }}</td></tr>
<tr><td>Joined Channels</td><td>{{ cfg.get('channels', [])|join(', ') or '—' }}</td></tr>
<tr><td>Trigger on Nick</td><td>{{ 'Yes' if cfg.get('trigger_on_nick') else 'No' }}</td></tr>
<tr><td>Trigger Prefix</td><td>{{ cfg.get('trigger_prefix') or '—' }}</td></tr>
</table>
</div>
<div class="section">
<h2>Quick Actions</h2>
<div class="actions">
<form method="post" action="{{ url_for('action_reconnect') }}">
<button class="btn btn-secondary">Reconnect</button>
</form>
<form method="post" action="{{ url_for('action_reload') }}">
<button class="btn btn-secondary">Reload Config</button>
</form>
<form method="post" action="{{ url_for('action_clear_log') }}">
<button class="btn btn-danger">Clear Log</button>
</form>
</div>
</div>
{% endblock %}

portal/templates/llm.html Normal file
@@ -0,0 +1,79 @@
{% extends "base.html" %}
{% block title %}LLM Settings — IRC Bot Portal{% endblock %}
{% block content %}
<h1>LLM Settings</h1>
{% for e in errors %}
<div class="alert alert-error">{{ e }}</div>
{% endfor %}
<form method="post" class="settings-form">
<div class="section">
<h2>Ollama Backend</h2>
<div class="field-row">
<label>Host</label>
<input type="text" name="ollama_host" value="{{ cfg.get('ollama_host', '192.168.2.10') }}" required>
</div>
<div class="field-row">
<label>Port</label>
<input type="number" name="ollama_port" value="{{ cfg.get('ollama_port', 11434) }}" required>
</div>
<div class="field-row">
<label>Model</label>
<input type="text" name="ollama_model" value="{{ cfg.get('ollama_model', 'llama3.1') }}" required>
</div>
<div class="field-row">
<label>Temperature</label>
<input type="number" name="ollama_temperature" value="{{ cfg.get('ollama_temperature', 0.7) }}" step="0.05" min="0" max="2" required>
</div>
<div class="field-row">
<label>Token Limit (num_predict)</label>
<input type="number" name="ollama_num_predict" value="{{ cfg.get('ollama_num_predict', 120) }}" min="1" required>
</div>
<div class="field-row">
<label>Context Size (num_ctx tokens)</label>
<input type="number" name="ollama_num_ctx" value="{{ cfg.get('ollama_num_ctx', 2048) }}" min="512" required>
</div>
<div class="field-row">
<label>Response Timeout (seconds)</label>
<input type="number" name="response_timeout_seconds" value="{{ cfg.get('response_timeout_seconds', 30) }}" min="5" required>
</div>
</div>
<div class="section">
<h2>Response Handling</h2>
<div class="field-row">
<label>System Prompt</label>
<textarea name="system_prompt" rows="4">{{ cfg.get('system_prompt', '') }}</textarea>
</div>
<div class="field-row">
<label>Max Response Length (chars)</label>
<input type="number" name="max_response_length" value="{{ cfg.get('max_response_length', 400) }}" min="50" required>
</div>
<div class="field-row">
<label>Channel Context Window (messages)</label>
<input type="number" name="context_window" value="{{ cfg.get('context_window', 5) }}" min="0" required>
</div>
</div>
<div class="section">
<h2>Persistent Memory</h2>
<div class="field-row">
<label>Memory Enabled</label>
<input type="checkbox" name="memory_enabled" {% if cfg.get('memory_enabled', True) %}checked{% endif %}>
</div>
<div class="field-row">
<label>Memory Depth (exchanges)</label>
<input type="number" name="memory_history_limit" value="{{ cfg.get('memory_history_limit', 8) }}" min="0" required>
</div>
<div class="field-row">
<label>Memory Max Age (days, 0=forever)</label>
<input type="number" name="memory_max_age_days" value="{{ cfg.get('memory_max_age_days', 90) }}" min="0" required>
</div>
</div>
<div class="actions">
<button type="submit" class="btn btn-primary">Save & Apply</button>
</div>
</form>
{% endblock %}

@@ -0,0 +1,21 @@
{% extends "base.html" %}
{% block title %}Logs — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Bot Logs</h1>
<div class="log-toolbar">
<label>
<input type="checkbox" id="auto-refresh"> Auto-refresh (3s)
</label>
<a href="{{ url_for('logs_download') }}" class="btn btn-secondary btn-sm">Download</a>
<form method="post" action="{{ url_for('action_clear_log') }}" style="display:inline">
<button class="btn btn-danger btn-sm">Clear Log</button>
</form>
</div>
<div class="log-box" id="log-box">
{% for line in lines %}
<div class="log-line {{ line|log_class }}">{{ line|e }}</div>
{% endfor %}
</div>
{% endblock %}
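The `{{ line|log_class }}` filter used above must be registered on the Flask app; its definition sits in an earlier hunk of this commit. A sketch of what it plausibly does, mirroring `logClass()` in portal/static/app.js; treat it as an assumption, not the committed implementation:

```python
# Hypothetical Jinja filter mirroring logClass() in portal/static/app.js;
# the filter actually registered by the portal app may differ.
def log_class(line: str) -> str:
    """Map a raw log line to the CSS class used for colorizing."""
    if "IRC IN:" in line:
        return "log-irc-in"
    if "IRC OUT:" in line:
        return "log-irc-out"
    if "[LLM]" in line:
        return "log-llm"
    if "[ERROR]" in line or "ERROR" in line:
        return "log-error"
    if "[CONFIG]" in line:
        return "log-config"
    if "[MEMORY]" in line:
        return "log-memory"
    return ""

# Registration sketch: app.add_template_filter(log_class, "log_class")
```

Keeping the server-side filter and the JS classifier in lockstep matters here, since the initial render and the auto-refresh path must colorize lines identically.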

@@ -0,0 +1,73 @@
{% extends "base.html" %}
{% block title %}Memory — IRC Bot Portal{% endblock %}
{% block content %}
<h1>Conversation Memory</h1>
<div class="cards">
<div class="card">
<div class="card-label">Total Exchanges</div>
<div class="card-value">{{ stats.total_exchanges }}</div>
</div>
<div class="card">
<div class="card-label">Database Size</div>
<div class="card-value">{{ (stats.total_size_bytes / 1024)|round(1) }} KB</div>
</div>
</div>
<div class="section">
<h2>Browse History</h2>
<form method="get" class="inline-form">
<select name="channel" onchange="this.form.submit()">
<option value="">— Select channel —</option>
{% for ch in channels %}
<option value="{{ ch }}" {% if ch == selected_chan %}selected{% endif %}>{{ ch }}</option>
{% endfor %}
</select>
{% if selected_chan %}
<select name="nick" onchange="this.form.submit()">
<option value="">— Select nick —</option>
{% for n in nicks %}
<option value="{{ n }}" {% if n == selected_nick %}selected{% endif %}>{{ n }}</option>
{% endfor %}
</select>
{% endif %}
</form>
{% if selected_chan and selected_nick and exchanges %}
<div class="exchange-list">
{% for ex in exchanges %}
<div class="exchange">
<div class="exchange-meta">{{ ex.timestamp }}</div>
<div class="exchange-user"><strong>{{ selected_nick }}:</strong> {{ ex.user }}</div>
<div class="exchange-bot"><strong>bot:</strong> {{ ex.assistant }}</div>
</div>
{% endfor %}
</div>
{% elif selected_chan and selected_nick %}
<p class="muted">No exchanges found.</p>
{% endif %}
</div>
<div class="section">
<h2>Clear History</h2>
<div class="actions">
{% if selected_chan and selected_nick %}
<form method="post" action="{{ url_for('memory_clear_user') }}">
<input type="hidden" name="channel_dir" value="{{ selected_chan }}">
<input type="hidden" name="nick" value="{{ selected_nick }}">
<button class="btn btn-danger">Clear {{ selected_nick }}'s history in {{ selected_chan }}</button>
</form>
{% endif %}
{% if selected_chan %}
<form method="post" action="{{ url_for('memory_clear_channel') }}">
<input type="hidden" name="channel_dir" value="{{ selected_chan }}">
<button class="btn btn-danger">Clear all history in {{ selected_chan }}</button>
</form>
{% endif %}
<form method="post" action="{{ url_for('memory_clear_all') }}" onsubmit="return confirm('Wipe ALL conversation history? This cannot be undone.')">
<input type="hidden" name="confirm" value="yes">
<button class="btn btn-danger">Clear ALL History</button>
</form>
</div>
</div>
{% endblock %}

requirements.txt Normal file
@@ -0,0 +1,11 @@
# HTTP client for Ollama
httpx==0.27.0
# Web portal
flask==3.0.3
jinja2==3.1.4
# Config / env
python-dotenv==1.0.1
# socket and sqlite3 are Python stdlib — no install needed