Show HN: First Claude Code client for Ollama local models
A quick note on the background: this project wasn't planned as a big standalone release at first. On January 16, Ollama added support for an Anthropic-compatible API, and I was curious how far that could be pushed in practice. So I tried plugging local Ollama models directly into a Claude Code-style workflow to see whether it would actually work end to end.
Here is the release note from Ollama that made this possible: https://ollama.com/blog/claude
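To make the setup concrete, here is a minimal sketch of what "Anthropic-compatible" buys you: the official anthropic Python SDK can be pointed at a local Ollama server just by overriding the base URL. The default port (11434), the placeholder API key, and the exact base URL are assumptions on my side based on the linked release note, not code from this project.

    # Minimal sketch: send an Anthropic Messages request to a local Ollama server.
    # Assumes Ollama is running on its default port and the model has been pulled
    # locally (e.g. with `ollama pull qwen3-coder:30b`). Illustration only.
    from anthropic import Anthropic

    client = Anthropic(
        base_url="http://localhost:11434",  # local Ollama instead of api.anthropic.com
        api_key="ollama",                   # placeholder; the local server does not verify it
    )

    response = client.messages.create(
        model="qwen3-coder:30b",            # any locally available Ollama model
        max_tokens=512,
        messages=[{"role": "user", "content": "Write a function that reverses a string."}],
    )

    print(response.content[0].text)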
Technically, what I do is pretty straightforward:
- Detect which local models are available in Ollama.
- When internet access is unavailable, the client automatically switches to Ollama-backed local models instead of remote ones (a rough sketch of this detection/fallback step follows the list).
- From the user’s perspective, it is the same Claude Code flow, just backed by local inference.
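For anyone curious how the detection and offline fallback can be wired up, here is a rough sketch against Ollama's HTTP API. The /api/tags endpoint (listing locally pulled models) is real; the connectivity check, the preferred-model choice, and the function names are illustrative assumptions rather than the project's exact logic.

    # Rough sketch of model detection and offline fallback. Assumes Ollama runs
    # locally on its default port; not the project's exact implementation.
    import json
    import socket
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"

    def list_local_models() -> list[str]:
        # Ollama's /api/tags endpoint returns the models already pulled locally.
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]

    def internet_available(host: str = "api.anthropic.com", port: int = 443) -> bool:
        # Cheap connectivity check: can we open a TCP connection to the remote API?
        try:
            with socket.create_connection((host, port), timeout=3):
                return True
        except OSError:
            return False

    def pick_backend(preferred: str = "qwen3-coder:30b") -> tuple[str, str]:
        # Prefer the remote API when online; otherwise fall back to a local model.
        if internet_available():
            return "remote", "claude"  # placeholder remote model id
        local = list_local_models()
        if preferred in local:
            return "ollama", preferred
        if local:
            return "ollama", local[0]  # fall back to whatever is installed
        raise RuntimeError("No internet connection and no local Ollama models found")

    backend, model = pick_backend()
    print(f"Using {backend} backend with model: {model}")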
In practice, the best-performing model so far has been qwen3-coder:30b. I also tested glm-4.7-flash, which was released very recently, but it still struggles to follow tool-calling instructions reliably, so it is not usable for this workflow yet.