docs: add Debian apt installation instructions
### Prerequisites

- Docker + Docker Compose
- A remote machine with:
  - SSH access
  - `miniconda3` with a `synthetic-data` conda env containing `synthetic-data-kit`
  - `train.py` at `/opt/synthetic/train.py`
  - Ollama running on port `11434`

### Run

---

### Option A — Docker (quickest)
**Additional requirements:** Docker + Docker Compose
```bash
docker compose up --build
```
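
To keep the stack running in the background, the usual Compose flags apply:

```bash
docker compose up --build -d   # start detached
docker compose logs -f         # follow the logs
```
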
The `OLLAMA_URL` environment variable in `docker-compose.yml` defaults to `http://192.168.2.47:11434` — update it to point to your GPU server.
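
For reference, a minimal sketch of the relevant piece of `docker-compose.yml` (the service name `app` here is illustrative, not taken from the repo):

```yaml
services:
  app:
    build: .
    environment:
      # Point this at the machine running Ollama
      - OLLAMA_URL=http://192.168.2.47:11434
```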
---
### Option B — Install as a system package (Debian / Ubuntu)
The package is published to the local Gitea registry and installs the FastAPI backend as a systemd service with nginx serving the frontend on port **3000**.
**1. Add the apt source**
```bash
echo "deb [trusted=yes] http://192.168.1.64:3000/api/packages/tocmo0nlord/debian bookworm main" \
  | sudo tee /etc/apt/sources.list.d/llm-trainer.list
sudo apt update
```
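
The `[trusted=yes]` flag tells apt to skip GPG signature verification; that is only sensible here because the registry lives on your own LAN. To confirm apt can now see the package:

```bash
apt-cache policy llm-trainer   # should list a candidate version from 192.168.1.64
```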
**2. Install**
```bash
sudo apt install llm-trainer
```
The installer will automatically:
- Create a `llm-trainer` system user
- Install the FastAPI backend under `/opt/llm-trainer/` with its own Python venv
- Enable and start the `llm-trainer` systemd service
- Configure nginx to serve the React frontend on port **3000** and proxy `/api/` to the backend
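
Each of these can be spot-checked after the install finishes:

```bash
id llm-trainer                     # the system user exists
ls /opt/llm-trainer                # backend code and its venv
systemctl is-enabled llm-trainer   # should print "enabled"
sudo nginx -t                      # the generated nginx config parses
```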
| Service | URL |
|---------|-----|
| Frontend | `http://<server-ip>:3000` |
| Backend API | `http://<server-ip>:3000/api` |
| API docs | `http://<server-ip>:8080/docs` |
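
A quick reachability check from another machine (substitute your own `<server-ip>`):

```bash
curl -I http://<server-ip>:3000        # frontend via nginx
curl -I http://<server-ip>:8080/docs   # FastAPI interactive docs
```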
**3. Configure**
Edit `/etc/llm-trainer/env` to set the Ollama URL for your GPU server:
```ini
OLLAMA_URL=http://192.168.2.47:11434
```
Then restart the service:
```bash
sudo systemctl restart llm-trainer
```
**Service management**
```bash
sudo systemctl status llm-trainer
sudo systemctl restart llm-trainer
sudo journalctl -u llm-trainer -f # live logs
sudo tail -f /var/log/llm-trainer/backend.log
```
**Uninstall**
```bash
sudo apt remove llm-trainer # keep config and logs
sudo apt purge llm-trainer # remove everything including /opt/llm-trainer
```
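
If you also want to drop the apt source added in step 1:

```bash
sudo rm /etc/apt/sources.list.d/llm-trainer.list
sudo apt update
```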
---
### Building the .deb from source
To rebuild the package (e.g. after code changes):
```bash
# Install build dependencies
sudo apt install -y git nodejs npm python3 python3-pip python3-venv nginx
# Clone and build
git clone http://192.168.1.64:3000/tocmo0nlord/llm-trainer.git
cd llm-trainer
chmod +x packaging/build-deb.sh
./packaging/build-deb.sh
# Produces llm-trainer_1.0.0_amd64.deb in the repo root
# Install locally
sudo dpkg -i llm-trainer_1.0.0_amd64.deb
sudo apt-get install -f # resolve any missing runtime deps
# Or upload to the Gitea registry
curl -u tocmo0nlord:<token> --upload-file llm-trainer_1.0.0_amd64.deb \
  http://192.168.1.64:3000/api/packages/tocmo0nlord/debian/pool/bookworm/main/upload
```
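
Once an updated `.deb` is in the registry (remember to bump the package version first, or apt will see nothing new), installed machines can upgrade in place:

```bash
sudo apt update
sudo apt install --only-upgrade llm-trainer
```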
---
### Configuration
The pipeline reads its config from `/opt/synthetic/synthetic-data-kit/config.yaml` on the remote server. You can edit it live from the **Config Editor** tab in the UI.
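
If you prefer a shell over the UI, the same file can be edited directly on the remote server (substitute your own SSH user; `192.168.2.47` is the GPU server used in the examples above):

```bash
ssh -t <user>@192.168.2.47 nano /opt/synthetic/synthetic-data-kit/config.yaml
```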