

Yes, but it's clearly a building block of Meta's LLM training effort, and part of a pattern.
One implication I didn't mention, and don't have hard proof I can point to, is garbage in, garbage out. Meta let AI slop and human garbage proliferate on Facebook, squandering basically the biggest advantage (besides cash) they have. It's often speculated that Twitter and Facebook training data turned out to be kinda crap.
…And they're at it again. Zuckerberg pours cash into corporate trash and gets slop back. It's an internal disaster, much like their other divisions.
On the other side, it's often thought that Chinese models are so good for their size/compute because they're, ahem, getting data from the Chinese government and don't need to worry about legal issues.
What @mierdabird@lemmy.dbzer0.com said, but the adapters aren't cheap. You're going to end up spending more than the 1060 is worth.
A used desktop to slap it in, one you turn on only as needed, might make sense? Doubly so if you can find one with an RTX 3060, which would open up 32B models with TabbyAPI instead of ollama. Some people configure them to wake on LAN and boot straight into an LLM server.
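
If you go that route, here's a minimal sketch of the wake-on-LAN side in Python, assuming WoL is enabled in the desktop's BIOS/NIC settings; the MAC address is a placeholder you'd swap for your machine's:

```python
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 bytes of 0xFF,
    then the target MAC repeated 16 times, via UDP broadcast."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC; use your desktop's actual address.
send_magic_packet("AA:BB:CC:DD:EE:FF")
```

Pair that with something on the desktop that starts the LLM server on boot (a systemd unit, say), and the box only draws power while you're actually running inference.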