For OpenClaw, Hermes, and more. Find free and low-cost LLM inference and use it directly. Provides both a CLI and an MCP server that know which free-tier LLM APIs exist, which ones you have keys for, and which one fits your task. Returns endpoints so you can call models directly: no proxy, no middleware, no latency tax.
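
Because the tool returns raw endpoints instead of proxying traffic, calling a model is just a plain HTTP request you build yourself. A minimal sketch of that direct call, assuming an OpenAI-compatible chat-completions API (the endpoint URL, key, and model name here are hypothetical placeholders, not values the tool is known to return):

```python
import json

def build_chat_request(endpoint: str, api_key: str, model: str, prompt: str):
    """Build the url, headers, and body for a direct request to an
    OpenAI-compatible /chat/completions endpoint -- no middleware in between."""
    url = f"{endpoint.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Hypothetical values standing in for what the CLI/MCP server would return:
url, headers, body = build_chat_request(
    "https://api.example-provider.com/v1", "sk-example", "example-model", "Hi")
```

From here you would send the request with any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`), so the only latency is the provider's own.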