Best Ollama models/settings for an 8GB VPS (CPU only, ARM)? Running into memory & looping issues.
Hi everyone, I'm trying to run a local LLM via Ollama on a Hetzner CAX21 VPS (ARM64, 4 vCPUs, 8GB RAM, 80GB SSD). I have Ollama itself running successfully via Coolify.