From the title I thought the GNOME Foundation had made an AI client for a sec, until I read the article.
While I definitely do not want an LLM (especially not OpenAI's or whatever) to have access to my terminal or other stuff on my PC, and in general don't have any use for that, I find it cool that something like this is available now.
Remember, it’s totally optional and nobody forces you to download that stuff. You have the choice to ignore it, and that’s the great thing about Linux!
I look forward to not installing it.
Big nope from me dawg
For some reason, these local LLMs are straight up stupid. I tried DeepSeek R1 through Ollama and it got everything wrong. Anyone get the same results? I did the 7b and 14b (if I remember those numbers correctly); the 32b straight up didn't install because I didn't have enough RAM.
I had more success with Qwen3 14b/8b, but it still makes small mistakes (for example, I asked it to compare GStreamer and FFmpeg and it got the licensing wrong).
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and if you chop their weights from 16-bit floats down to 4-bit or lower, it reduces their capabilities a lot more.
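To make the point concrete, here's a toy sketch of uniform quantization in plain Python (the weight values and bit widths are made up for illustration; real quantizers like those in llama.cpp are much more sophisticated, but the precision loss at very low bit widths follows the same pattern):

```python
def quantize(weights, bits):
    """Toy uniform symmetric quantization: snap floats onto a small
    integer grid and map them back, losing precision in the process."""
    levels = 2 ** (bits - 1) - 1                    # e.g. 7 at 4 bits
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

# Invented example weights, just to show how error grows as bits shrink.
weights = [0.013, -0.872, 0.441, 0.299, -0.057, 0.718]

for bits in (16, 8, 4, 2):
    q = quantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, q))
    print(f"{bits:2d}-bit max error: {err:.4f}")
```

The max error climbs by roughly an order of magnitude each time the bit width halves, which is why aggressively quantized small models lose so much capability.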
I’ve had good experience with smollm2:135m. The test case I used was determining why an HTTP request from one system was not received by another system. In total, there are 10 DB tables it must examine, not only for logging but for configuration, to understand if/how the request should be processed or blocked. Some of those were mapping tables designed such that table B must be used to join table A to table C, and table D must be used to join table C to table E. That gives me a path to traverse a complete configuration set (table A <-> table E).
I had to describe each field being pulled (~150 fields total), but it was able to determine the correct reason for the request failure. The only issue I’ve had was a separate incident using a different LLM, when I tried to use AI to generate Go template code for a database library I wanted to use. It didn’t use it and recommended a different library instead. When instructed that it must use this specific library, it refused (politely). That caught me off guard. I shouldn’t have to create a scenario where the AI goes to jail if it fails to use something. I should just have to provide the instruction and, if that instruction is reasonable, await output.
The performance is relative to the user. Could it be that you’re a god damned genius? :/
Works with Ollama, neat!
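For anyone curious what "works with Ollama" means in practice, here's a minimal Python sketch of talking to a local Ollama server's REST API. It assumes `ollama serve` is running on the default port and that the model named below (just an example) has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def generate(model, prompt):
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Only runs if a local Ollama server is up; model name is an example.
    print(generate("qwen3:8b", "Compare GStreamer and FFmpeg licensing."))
```

Everything stays on localhost, which is the whole appeal over cloud-hosted assistants.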
Or, ORRRR…just do the stuff yourself and don’t further perpetuate this dumbshit until it doesn’t require an entire month’s worth of energy for an efficient home to run to search “Hentai Alien Tentacle Porn” for you.
Buncha savages.
search “Hentai Alien Tentacle Porn” for you
This is suspiciously specific 🙂
It’s clearly what most Linux users who would use “AI” would be searching for.
it doesn’t use that much energy
holy shit, no thank you
I haven’t tested this, but TBH, as someone who has run Linux at home for 25 years, I love the idea of an always-alert sysadmin keeping my machine maintained and configured to my specs. Keep my IDS up to date. And so on.
Two requirements:
1. Be an open-source local model with no telemetry.
2. Let me review proposed changes to my system and explain why they should be made.
Like, what do you need to keep configured? lol. Linux is set-it-and-forget-it. I’ve had installs be fine from day one to year seven. It’s not like Windows, where Microsoft is constantly changing things and changing your settings. It takes minimum effort to keep a Linux server/system going after initial configuration.
You could use AI for self-healing network infrastructure, but in the context of what this tool would do, I’m struggling. You could monitor logs or IDS/IPS, but you’d really just be replacing a solution that already exists (SNMP). And yeah, SNMP isn’t going to be pattern matching, but your IDS would already be doing that. You don’t need your traffic pattern matching system pattern matched by AI.
- That is not what this does
- You can certainly have unattended updates without an LLM in the mix.
Even if it were open source (it isn’t, because no model is really open source, ultimately), and even if it let you review what it says it’s gonna do, AI is known for pulling all kinds of shit and lying about it.
Would you really trust your system to something that can do this? I wouldn’t…
Would you really trust your system to something that can do this? I wouldn’t…
I wouldn’t trust a sales team member with database permissions, either. This is why we have access control in systems administration. That AI had permission to operate as the user in Replit’s cloud environment: not a separate, restricted user, but as that user, and without sandboxing. That should never happen. So, if I were managing that environment, I would have to ask the question: is it the AI’s fault for breaking it, or is it my fault for allowing the AI to break it?
AI is known for pulling all kinds of shit and lie about it.
So are interns. I don’t think you can hate the tool for it being misused, but you certainly can hate the user for allowing it.
Have you checked Mistral? Open weights and training set. What more do you want?