Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. I was spending ~$40/month.

Switched everything to Ollama running locally. The migration was straightforward since n8n just hits an HTTP endpoint: I changed the base URL from api.openai.com to localhost:11434 and updated the request format. (Ollama also serves an OpenAI-compatible API under /v1, which can make the switch little more than a URL change.)
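To make the format change concrete, here's a rough sketch of the two request bodies; the OpenAI model name is just a placeholder, and this only builds the JSON rather than actually calling either API:

```python
import json

# OpenAI-style chat body (roughly what the n8n HTTP Request node sent
# before); the model name is a placeholder, not from my workflow.
openai_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this email: ..."}],
}

# Ollama's native chat endpoint (POST http://localhost:11434/api/chat)
# accepts a very similar shape; "stream": False requests one JSON reply
# instead of a stream of chunks.
ollama_body = {
    "model": "llama3:8b",
    "messages": openai_body["messages"],  # messages carry over unchanged
    "stream": False,
}

print(json.dumps(ollama_body, indent=2))
```

The `messages` array carries over as-is; in practice the switch was mostly the URL, the model name, and adding `"stream": false`.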

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.
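As an example of the kind of task that works well, here's a sketch of a classification call against Ollama's /api/chat endpoint. The helper names (`build_prompt`, `classify`) and the label set are mine for illustration, not from any actual n8n node:

```python
import json
import urllib.request

def build_prompt(text: str, labels: list[str]) -> str:
    # Constrain the model to a fixed label set and a bare-word reply,
    # which keeps output parsing trivial in an automation workflow.
    return (
        "Classify the following text as exactly one of: "
        + ", ".join(labels)
        + ".\nReply with the label only.\n\n"
        + text
    )

def classify(text: str, labels: list[str],
             host: str = "http://localhost:11434") -> str:
    body = json.dumps({
        "model": "llama3:8b",
        "messages": [{"role": "user", "content": build_prompt(text, labels)}],
        "stream": False,  # one JSON reply, not a chunk stream
    }).encode()
    req = urllib.request.Request(
        host + "/api/chat", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # non-streaming /api/chat replies with {"message": {"content": ...}}
        return json.loads(resp.read())["message"]["content"].strip()
```

Forcing "reply with the label only" matters more with small local models than with the hosted ones, which are better at following loose instructions.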

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

  • Voroxpete@sh.itjust.works · 1 day ago

    Locally? You’d need a VERY powerful GPU to really be able to match the capabilities of Opus 4.6 online. I’ve played around with this stuff for the same reasons and while you can absolutely run a model with all of Claude’s capabilities offline, very few people will have the hardware to let it actually run at an acceptable speed and with a sufficient context window. That last part is the most important thing for coding because it’s what allows the model to operate across an entire project and not just a few functions at a time.