But even if you use a GoMommy extra super duper triple snake oil security checked SSL cert, if I trick Let’s Encrypt into signing a key for that domain, I still have a valid cert for your site.
I doubt the disk will bottleneck at 40 MB/s when doing sequential writes. Torrent downloads are usually heavy random writes, which is about the worst thing you can do to an HDD.
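If you want to see the gap yourself, here’s a crude sketch (Unix-only; file name, sizes, and block size are just examples, and the page cache will flatter both numbers a bit, so treat them as indicative):

```python
# Crude illustration: same bytes written, wildly different head movement.
import os, random, time

SIZE = 256 * 1024 * 1024   # 256 MiB test file
BLOCK = 16 * 1024          # 16 KiB blocks, torrent-piece-ish
fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)
offsets = list(range(0, SIZE, BLOCK))

for label, order in (("sequential", offsets),
                     ("random", random.sample(offsets, len(offsets)))):
    t0 = time.time()
    for off in order:
        os.pwrite(fd, b"\0" * BLOCK, off)
    os.fsync(fd)  # flush so the disk actually does the work
    print(label, round(SIZE / (time.time() - t0) / 1e6, 1), "MB/s")
os.close(fd)
```

On a spinning disk the random pass typically comes out far slower, since it’s seek-limited rather than throughput-limited.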
Sell them to zoomers as 3D save button coasters. $19.95 each
Llama 3 8B can be run in 6 GB of VRAM, and it’s fairly competent. Gemma has a 9B too, I think, which would also be worth looking into.
Yep. These days the alternatives are “yes” and “ask again later”, with yes being the default. “No” is not an option any more.
And just to top it off, make this Python script a dialect of Rust
Better background backups
Rework background backups to be more reliable
Hilarious for a system whose main point / feature is photo backup
🫰🤙🫵👌✊🫳🫸🤲🤌
I mean, I totally agree with you. But that also kinda ignores all the useful things a dog can be trained to do.
It’s less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model data, and that’s usually many, many gigabytes. So the time it takes to stream through memory is usually longer than the compute time. GPUs have gigabytes of RAM that’s many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs.
Most TPUs don’t have much RAM, especially the cheap ones.
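To put rough numbers on it, a back-of-envelope sketch (the model size and bandwidth figures below are ballpark assumptions, not benchmarks):

```python
# Token generation is memory-bandwidth bound: each token requires streaming
# (roughly) all of the model weights once, so bandwidth / model size gives
# an upper bound on tokens per second.

def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / model_gb

MODEL_GB = 6  # e.g. an 8B model at 4-5 bit quantization
print(max_tokens_per_sec(MODEL_GB, 50))   # dual-channel DDR5-ish CPU: ~8 tok/s
print(max_tokens_per_sec(MODEL_GB, 900))  # high-end GPU VRAM: ~150 tok/s
```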
Reasonably smart… that would preferably be a 70B model, but maybe Phi-3 14B or Llama 3 8B could work. They’re rather impressive for their size.
For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B you need roughly 40 GB.
And then there’s the context. Most models are optimized for around 4k to 8k tokens. A token is roughly 3-4 characters, so a word is usually one or two tokens. The VRAM needed for the context varies a bit, but it’s not trivial: for 4k I’d say roughly half a gig to a gig of VRAM.
As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model’s VRAM cost, and you’ll need models specifically tuned for that big a context to keep them from going off the rails (rough numbers in the sketch below).
So no, you’re not loading all the notes directly, and you won’t have a smart model.
For your hardware and use case… try Phi-3-mini with a RAG system as a start.
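To put numbers on the model-vs-context tradeoff above, a sketch (the architecture constants are Llama-3-8B-ish and the quantized weight size is an assumption, so check your actual model’s config):

```python
# VRAM budget = quantized weights + KV cache for the context.
# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.

def kv_cache_gb(ctx_tokens, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per=2):
    return ctx_tokens * 2 * n_layers * n_kv_heads * head_dim * bytes_per / 1e9

WEIGHTS_GB = 5.5  # ~8B model at 4-5 bit quantization
for ctx in (4096, 32768, 131072):
    print(f"{ctx:>6} tokens: {WEIGHTS_GB + kv_cache_gb(ctx):.1f} GB")
```

At 4k the context costs about half a gig; by 128k it’s around 17 GB and dwarfs the weights themselves.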
Koboldcpp is way easier. Download the exe, double-click it, open the GGUF file with the AI model, click start.
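And once it’s running it also serves a local HTTP API you can script against. A minimal sketch, assuming the default port 5001 and the KoboldAI-compatible /api/v1/generate route:

```python
import json, urllib.request

payload = json.dumps({"prompt": "Explain GGUF in one sentence:",
                      "max_length": 80}).encode()
req = urllib.request.Request(
    "http://localhost:5001/api/v1/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["results"][0]["text"])
```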
Then put on your robe and wizard hat
So you’re saying it’s already feature-complete with most JSON libraries out there?
Who are you?
What do you want?
Also, I think good and bad are a bit fluid there. It’s just people with different agendas. Well, except Emperor Cartagia. And perhaps Bester.
Yep, I usually make Docker environments for CUDA workloads because of these things. Much more reliable.
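A minimal sketch of what that looks like (the image tag, CUDA version, and train.py are just examples; the host only needs the driver plus nvidia-container-toolkit):

```dockerfile
# Pin the CUDA toolkit inside the image instead of fighting the host install.
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
# Torch wheel built against the same CUDA version as the base image.
RUN pip3 install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu121
WORKDIR /app
COPY train.py .
CMD ["python3", "train.py"]
```

Run it with `docker run --gpus all …` so the container actually sees the GPU.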
They had a trial run with bleach already
On occasion their strategy has been “if we send in enough people, they’ll eventually run out of bullets”
They out-Zapp Brannigan’ed Zapp Brannigan. That should terrify you on multiple levels
That explains the horsing around
He also pretended to cut someone’s hair
I still use HTTP a lot for internal stuff running on my own network. There’s no spying there… I hope. And SSL for local-network-only services is a total PITA.
So I really hope browsers won’t adopt HTTPS-only.
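For reference, the usual workaround for LAN-only services is a self-signed cert. A minimal sketch with Python’s `cryptography` package (the hostname is a made-up example); the actual PITA is that every client still has to be told to trust it:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "nas.home.lan")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName("nas.home.lan")]),
                   critical=False)
    .sign(key, hashes.SHA256())
)
with open("nas.key", "wb") as f:
    f.write(key.private_bytes(serialization.Encoding.PEM,
                              serialization.PrivateFormat.TraditionalOpenSSL,
                              serialization.NoEncryption()))
with open("nas.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```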