Everything your IT team needs to deploy Rendex on your firm's server. One person, one terminal, under 30 minutes.
| Section | Covers |
|---|---|
| 1. What You're Installing | Overview + architecture |
| 2. Server Requirements | Hardware + software prereqs |
| 3. Pre-Flight Check | Automated readiness scan |
| 4. Installation | Step-by-step walkthrough |
| 5. Health Check | Verify everything works |
| 6. Post-Install | First login + TLS certs |
| 7. Uninstalling | Clean removal |
| 8. Troubleshooting | Common issues + fixes |
We're on the call with you. We schedule a Zoom call during installation and walk through every step together. This guide is a reference — you won't be doing this alone.
Estimated time: 20–30 minutes (first install)
Skill level: Basic Linux terminal (copy-paste commands)
Support: info@rendex.ai
Rendex is a private AI system that runs entirely on your server. It indexes your firm's documents and lets attorneys ask questions in plain English, with every answer citing the exact source document, section, and page. Nothing ever leaves your network.
The system consists of five services, all running as Docker containers on a single machine:
| Service | What It Does | Port(s) |
|---|---|---|
| Nginx | HTTPS reverse proxy, TLS termination, rate limiting | 80, 443 |
| Chat UI | Web application (Express.js) — the interface attorneys use | 3000 |
| PostgreSQL | User accounts, permissions, matters, audit log | 5432 (localhost only) |
| Qdrant | Vector database — stores document embeddings for search | 6333 (internal only) |
| Ollama | Local AI models — runs on GPU for embedding + generation | 11434 (internal only) |
Network posture: Only ports 80 (redirects to 443) and 443 are exposed. PostgreSQL is bound to localhost. Qdrant and Ollama are only accessible within the Docker network. There are no outbound connections at runtime; the only downloads (Docker images and AI models) happen during installation. You can verify this with a packet capture.
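A quick way to spot-check this from the server itself, using standard Linux and Docker tooling (nothing Rendex-specific):

```bash
# Listening sockets: only 80 and 443 should be bound to a public interface;
# PostgreSQL (5432) should appear on 127.0.0.1 only.
sudo ss -tlnp

# Ports Docker publishes to the host, per container.
docker ps --format 'table {{.Names}}\t{{.Ports}}'
```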
You'll receive a folder (via secure transfer or USB) containing everything needed for the install.
The pre-flight check will verify everything in the table below, but it's worth having these in place before you start:
| Software | Install Command | Verify |
|---|---|---|
| Docker Engine | curl -fsSL https://get.docker.com \| sh | docker --version |
| Docker Compose | Included with Docker Engine 23.0+ | docker compose version |
| NVIDIA Driver | sudo apt install nvidia-driver-535 | nvidia-smi |
| NVIDIA Container Toolkit | See docs.nvidia.com/datacenter/... | docker run --gpus all nvidia/cuda:12.0-base nvidia-smi |
| curl, openssl | sudo apt install curl openssl | curl --version |
GPU is required. The AI models run on the NVIDIA GPU. Without a GPU, document processing and query response will be extremely slow (10–30x slower). Intel/AMD GPUs are not currently supported.
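For a quick manual check before the automated pre-flight, the verification commands from the table above can be run back to back:

```bash
docker --version
docker compose version
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.0-base nvidia-smi
curl --version && openssl version
```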
Before installing, run the pre-flight script to verify your server meets all requirements. This checks hardware, software, and network readiness.
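The script ships in your deployment folder; the directory and script names below are illustrative, so use whatever your folder actually contains:

```bash
# From the directory you unpacked the deployment folder into
# (names are illustrative -- use the ones provided).
cd rendex
sudo ./pre-flight.sh
```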
The script checks the following and prints a pass/fail table:
| Check | Requirement | If It Fails |
|---|---|---|
| Operating System | Ubuntu/Debian-based Linux | Install Ubuntu 22.04 LTS |
| CPU Cores | 8+ cores (12+ recommended) | Use a more capable server |
| RAM | 32 GB+ (64 GB recommended) | Add more memory |
| Disk Space | 100 GB+ free | Free space or add a drive |
| NVIDIA GPU | 8+ GB VRAM | Install an NVIDIA GPU |
| NVIDIA Driver | nvidia-smi works | sudo apt install nvidia-driver-535 |
| Docker Engine | Installed and running | curl -fsSL https://get.docker.com \| sh |
| Docker Compose | v2+ available | Update Docker Engine |
| NVIDIA Container Toolkit | GPU passthrough works | Install nvidia-container-toolkit |
| Ports 80 & 443 | Not in use | Stop Apache/Nginx or reassign ports |
| curl & openssl | Available on PATH | sudo apt install curl openssl |
Warnings are OK. The installer will proceed with warnings (e.g., non-Ubuntu distro, slightly under recommended RAM). It will only abort on failures (missing GPU, Docker not installed, etc.).
Once the pre-flight passes, run the installer:
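As with the pre-flight check, the script name below is illustrative; run the installer shipped in your deployment folder, as root or via sudo:

```bash
# Run from the install directory (script name is illustrative).
sudo ./install.sh
```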
The installer runs through 7 automated steps. Here's exactly what happens at each one:
Re-runs the pre-flight check. Aborts if any critical check fails.
The installer asks for four pieces of information:
| Prompt | Example | Notes |
|---|---|---|
| Firm name | Morrison & Callahan LLP | Used in the TLS certificate and UI branding |
| Admin email | jsmith@morrison.com | This is the first login account |
| Admin password | (hidden input) | Minimum 12 characters, confirmed twice |
| Server hostname/IP | 192.168.1.50 | Auto-detects your IP; press Enter to accept |
Save these credentials. The admin email and password are needed to log in. The installer does not store or display the password after setup. If lost, it must be reset via the database.
The installer generates cryptographically random secrets and writes them to a .env file (chmod 600, root-only readable).
These secrets are generated locally using openssl rand. They are never transmitted anywhere.
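For reference, this is the kind of command involved; the key name below is illustrative, not one of the actual entries the installer writes:

```bash
# Generate a 32-byte random secret as hex (the installer does this for you).
openssl rand -hex 32

# The resulting .env entry looks roughly like (hypothetical key name):
# SESSION_SECRET=<64 hex characters>
```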
A self-signed TLS certificate is generated for the hostname/IP you provided. This enables HTTPS immediately.
Certificate warning: Browsers will show a security warning for self-signed certificates. This is expected. Click "Advanced" then "Proceed" (Chrome) or "Accept the Risk" (Firefox). You can replace this with your firm's CA-signed certificate at any time — see the Post-Install section.
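If you want to see exactly what was generated, you can inspect the live certificate from any machine on the network. The IP below is the example from the configuration table; substitute your server's address:

```bash
# Print the subject, issuer, and validity dates of the certificate Nginx serves.
echo | openssl s_client -connect 192.168.1.50:443 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```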
The installer pulls Docker images from Docker Hub and builds the application container. This is the longest step on a first install (10–20 minutes depending on internet speed).
| Image | Source | Approximate Size |
|---|---|---|
| ollama/ollama | Docker Hub | ~800 MB + models (~4.5 GB) |
| qdrant/qdrant | Docker Hub | ~150 MB |
| postgres:16-alpine | Docker Hub | ~80 MB |
| nginx:alpine | Docker Hub | ~25 MB |
| rendex-chat-ui | Built locally from source | ~300 MB |
After the initial download, all images are cached locally. Subsequent starts take seconds, not minutes.
Docker Compose starts all five containers. The installer waits for each service to become healthy.
AI model download: On first install, Ollama downloads two models (~4.5 GB total): llama3 (language model) and nomic-embed-text (embedding model). This can take 5–10 minutes. The installer shows progress and will notify you if it's still loading.
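If you want to watch this step yourself, these standard Docker Compose commands (run from the install directory in a second terminal) show container health and model download progress:

```bash
# Container status and health.
docker compose ps

# Follow the Ollama logs to watch the model downloads.
docker compose logs -f ollama
```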
The installer automatically runs the health check to verify all services. See section 5 for details.
When all seven steps succeed, the installer confirms the deployment is complete and you can move on to first login (section 6).
Run the health check at any time to verify all services are operational:
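```bash
# Run from the install directory; safe to re-run at any time.
./health-check.sh
```

This is the same health-check.sh script referenced in the Post-Install section.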
It verifies:
| Check | What It Tests |
|---|---|
| Containers | All 5 Docker containers are in "running" state |
| PostgreSQL | Accepting connections + schema initialized (7+ tables) |
| Qdrant | API responding on port 6333 |
| Ollama | API responding + both AI models loaded (llama3, nomic-embed-text) |
| GPU | NVIDIA GPU is being utilized by Ollama |
| Chat UI | Application responding on /health endpoint |
| HTTPS | Nginx serving TLS on port 443 |
1. Open https://<your-server-ip> in Chrome or Firefox.
2. Accept the self-signed certificate warning (Advanced → Proceed).
3. Log in with the admin email and password you set during installation.
4. Create your first matter (case/project) and upload documents.
To use your firm's CA-signed certificate instead of the self-signed one:
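The exact certificate locations are confirmed on your deployment call; as a sketch, assuming the installer keeps certificates in a certs/ directory (mounted into Nginx) under the install directory:

```bash
# Paths, filenames, and the compose service name are illustrative --
# confirm them for your install before copying anything.
cp your-firm.crt certs/server.crt
cp your-firm.key certs/server.key
chmod 600 certs/server.key

# Restart Nginx so it serves the new certificate.
docker compose restart nginx
```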
Auto-start on reboot: Docker is configured to restart containers automatically. If the server reboots, Rendex will start on its own. Verify with ./health-check.sh after any reboot.
If the pilot doesn't convert, or you need to remove Rendex for any reason:
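The uninstaller ships alongside the installer; the script name below is illustrative, so use the one in your deployment folder:

```bash
# Run from the install directory (script name is illustrative).
sudo ./uninstall.sh
```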
The uninstaller will:
1. Stop and remove all Docker containers.
2. Ask whether to keep or delete firm data (database, documents, vectors, AI models).
— Deleting data requires typing DELETE as confirmation. Default is to preserve data.
3. Optionally remove Docker images to reclaim disk space.
4. Print instructions for removing the install directory.
Data deletion is permanent. If you choose to delete firm data, all documents, database records, user accounts, and the audit log are irreversibly removed. We recommend backing up the Docker volumes before uninstalling if there's any chance you'll need the data later.
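A minimal backup sketch using standard Docker tooling; the volume name shown is illustrative, so check docker volume ls for the actual names on your server:

```bash
# List the volumes Rendex created (names vary with the install directory/project name).
docker volume ls

# Archive one volume to a tarball in the current directory (repeat per volume).
docker run --rm -v rendex_postgres_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/rendex_postgres_data.tar.gz -C /data .
```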
| Problem | Solution |
|---|---|
| Pre-flight: NVIDIA GPU not detected | Install the NVIDIA driver: sudo apt install nvidia-driver-535, then reboot. |
| Pre-flight: Docker GPU test fails | Install NVIDIA Container Toolkit: sudo apt install nvidia-container-toolkit, then sudo systemctl restart docker. |
| Port 80/443 already in use | Stop Apache (sudo systemctl stop apache2) or other web servers using those ports. |
| AI models "still loading" after 15+ min | Check internet connection. Models download from Ollama's CDN. Run docker compose logs -f ollama to see progress. |
| Chat UI won't start | Check logs: docker compose logs chat-ui. Usually a missing .env variable or database connection issue. |
| Certificate warning won't go away | This is expected with self-signed certs. Replace with a CA-signed certificate (see Post-Install section). |
| Slow query responses | Verify GPU is being used: nvidia-smi should show Ollama process. If CPU-only, response times will be 10–30x slower. |
| Server rebooted, can't access Rendex | Docker should auto-restart containers. If not: docker compose up -d from the install directory. |
| Need to reset admin password | Contact info@rendex.ai — we'll walk you through a database password reset. |
Questions? Stuck on something?
Email info@rendex.ai or call us during your scheduled deployment Zoom.
We respond to installation issues within 2 hours during business hours.
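When you do email us, attaching a quick log bundle speeds things up. A sketch using standard Docker Compose commands; adjust the install path to yours:

```bash
cd /path/to/rendex                                        # your install directory
docker compose ps > rendex-diagnostics.txt                # container status
docker compose logs --tail=200 >> rendex-diagnostics.txt  # recent service logs
nvidia-smi >> rendex-diagnostics.txt                      # GPU state
```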