Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. Was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
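In case it helps anyone else, the swap boils down to pointing the HTTP node at Ollama and reshaping the body. A minimal sketch in Python, assuming Ollama's default `/api/generate` endpoint with streaming off; the `summarize` helper and prompt text are illustrative, not my exact n8n node config:

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default; this URL replaces api.openai.com.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(text: str, model: str = "llama3") -> dict:
    """Build the request body Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": f"Summarize the following email:\n\n{text}",
        "stream": False,  # one JSON reply instead of a chunk stream
    }

def summarize(text: str) -> str:
    """POST the prompt to the local Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In n8n itself this is just the HTTP Request node with the new URL and a JSON body in that shape.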

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

  • Ludicrous0251@piefed.zip · 2 days ago

    No, not free; OP's power bill just climbed behind the scenes to match. Probably a discount, but definitely not free.

    • friend_of_satan@lemmy.world · 17 hours ago

      While this is correct, sometimes it can be free. I live in a cold climate, and over the winter I hooked up a Folding@home computer in my office to keep things a bit warmer. A computer is essentially 100% efficient as a space heater: nearly all the electricity it draws ends up as heat in the room.

      Of course, now that it's getting warm, things are changing. I'm actually in the middle of my last Folding@home tasks until the temps drop next fall.

      • Ludicrous0251@piefed.zip · 17 hours ago

        Even in very cold regions, heat pumps maintain a COP > 1, so running a computer as a space heater still isn't free if you have access to a more efficient alternative. That may be a responsible justification for Folding@home, but I doubt OP is turning off their LLM in the summer.
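The cost gap is easy to put numbers on. A quick sketch with illustrative figures (the 1000 W load and COP of 3 are assumptions, not measurements):

```python
# How much electricity each option draws to deliver the same 1000 W of heat.
heat_needed_w = 1000.0

# A PC is resistive heating: every watt of electricity becomes a watt of heat.
resistive_draw_w = heat_needed_w / 1.0

# A heat pump with COP 3 moves 3 W of heat indoors per watt of electricity.
cop = 3.0
heat_pump_draw_w = heat_needed_w / cop

print(resistive_draw_w)            # 1000.0
print(round(heat_pump_draw_w, 1))  # 333.3
```

So for the same warmth, the heat pump buys roughly a third as much electricity; the computer's heat is only "free" relative to other resistive heaters.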

    • Katherine 🪴@piefed.social · 2 days ago

      Unless OP is running a data center, there's not really much of a power increase to run a local Ollama.

      • doodledup@lemmy.world · 2 days ago

        Running a thousand watts versus not running a thousand watts can be quite a difference depending on where you live. And then consider buying all of the hardware. In many cases it's probably cheaper to just pay $40 a month.

        • Semperverus@lemmy.world · 20 hours ago (edited)

          Do you think it runs at 1000 W continuously? On any decent GPU, responses are nearly instantaneous, or at most a few seconds of runtime at maximum GPU draw.

          Compare that to playing a few hours of Cyberpunk 2077 with ray tracing and maxed-out settings at 4K.

          Don't get me wrong, there's a lot to hate about AI/LLMs, but the footprint of running one locally, without the data-harvesting engines, is pretty minimal. The bulk of the consumption comes from training the larger models, and from the data centers that serve them: they handle millions of inquiries a minute, so the consumption is concentrated at a single point. Plus they retrain the model there on current and user-fed data, including prompts, whereas your computer hosting Ollama would not.

          • T156@lemmy.world · 2 days ago (edited)

            It's also an 8-billion-parameter model. That's pretty tiny, even if they use it heaps.

        • StripedMonkey@lemmy.zip · 2 days ago

          That would be true in the worst case, but you're never running inference 24/7. It's no crazier than gaming in that regard.