docker-ai-stack

hwdsl2/docker-ai-stack
★ 10 stars · Shell · 🤖 AI/LLM
Deploy a complete, self-hosted AI stack on your own server with one command. Includes Ollama (LLM), LiteLLM (AI gateway), Whisper (STT), Kokoro (TTS), Embeddings (RAG), and MCP Gateway. Most services run locally; LiteLLM optionally routes to external providers. Supports NVIDIA GPU (CUDA) acceleration.
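Once the stack is up, most of its services are reachable through LiteLLM's OpenAI-compatible API. A minimal smoke-test sketch, assuming LiteLLM is published on localhost port 4000 (its usual default); the endpoint host/port and the model name are assumptions, so adjust them to match your deployment:

```shell
# Smoke-test sketch for the stack's OpenAI-compatible gateway (LiteLLM).
# ASSUMPTION: LiteLLM is published on localhost:4000; override with LITELLM_URL.
BASE_URL="${LITELLM_URL:-http://localhost:4000}"

# Build a minimal chat-completions request payload.
# ASSUMPTION: the model name below; list real ones with: curl "$BASE_URL/v1/models"
PAYLOAD='{"model": "ollama/llama3.2", "messages": [{"role": "user", "content": "Hello"}]}'

echo "POST $BASE_URL/v1/chat/completions"
echo "$PAYLOAD"

# To actually send the request once the stack is running:
#   curl -s "$BASE_URL/v1/chat/completions" \
#     -H "Content-Type: application/json" -d "$PAYLOAD"
```

The same base URL works with any OpenAI-compatible client library by pointing its base URL at the gateway instead of api.openai.com.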

Quick Install

Copy the configuration below into your MCP client. Some servers need additional setup; check the repository README.

Add to claude_desktop_config.json:

{
  "mcpServers": {
    "docker-ai-stack": {
      "command": "npx",
      "args": [
        "-y",
        "hwdsl2/docker-ai-stack"
      ]
    }
  }
}

Topics

ai, ai-stack, docker, docker-compose, docker-image, embeddings, inference, linux, llm, local-ai, mcp, ollama, openai-compatible, private-ai, rag