

@Passerby6497 yes I’ve been told as much 😅
https://lemmy.world/comment/18919678
Jokes aside, I understand that was the point. I just wanted to show that it’s technically feasible, even if not currently economically viable
@rtxn all right, that’s all you had to say initially, rather than trying to convince me that the network client was out of the loop: it isn’t, and that’s the whole point of Anubis
@Passerby6497 my stance is that the LLM might recognize that the best way to solve the problem is to run Chromium, get the answer from there, and pass it on
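For example, a rough sketch of that delegation using the Python Playwright bindings; the tool function and its name are my own assumptions, not anything a current agent actually ships with:

```python
# Sketch: delegate fetching to a real Chromium so any client-side
# challenge (Anubis-style proof-of-work JS) runs as intended.
# Assumes `pip install playwright` and `playwright install chromium`.
from playwright.sync_api import sync_playwright

def fetch_via_chromium(url: str, timeout_ms: int = 15000) -> str:
    """Hypothetical 'tool' an LLM agent could call instead of raw HTTP."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Navigating runs the challenge JS; the interstitial redirects
        # to the real page once the proof-of-work is solved.
        page.goto(url, wait_until="networkidle", timeout=timeout_ms)
        html = page.content()
        browser.close()
        return html
```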
@rtxn validation of what?
This is a typical network exchange: the client asks for a resource, the server replies with a challenge, and the client either produces the correct response or it doesn’t; either way, it has the challenge
@Passerby6497 I really don’t understand the issue here
If there is a challenge to solve, then the server has provided it to the client
There is no way around this, is there?
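To make the exchange concrete, here’s a toy version of it, assuming an Anubis-style SHA-256 proof-of-work with leading-zero-hex difficulty; the function names, difficulty value, and wire format are all illustrative, not Anubis’s actual protocol:

```python
# Toy challenge/response: the server can't avoid handing the client
# the challenge, because the client is the one who has to solve it.
import hashlib
import os

DIFFICULTY = 4  # required leading zero hex digits (illustrative)

def server_issue_challenge() -> str:
    return os.urandom(16).hex()

def client_solve(challenge: str) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def server_verify(challenge: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = server_issue_challenge()    # server -> client
nonce = client_solve(challenge)         # work happens client-side
assert server_verify(challenge, nonce)  # server checks cheaply
```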
@rtxn I don’t understand how that isn’t client-side?
Anything that is client-side can be, if not spoofed, then at least delegated to a subprocess, and my argument stands
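And by “delegated to a subprocess” I mean literally something like this, assuming a `chromium` binary on PATH; the binary name and flags vary by platform, and whether the challenge interstitial finishes within the virtual time budget isn’t guaranteed, so treat it as a sketch of the idea rather than a working bypass:

```python
# Sketch: any "client" can spawn a full browser as a subprocess and
# let it do the client-side work, then read back the result.
import subprocess

def fetch_via_subprocess(url: str) -> str:
    result = subprocess.run(
        [
            "chromium",                     # assumption: name varies by OS
            "--headless=new",
            "--virtual-time-budget=10000",  # let the challenge JS run
            "--dump-dom",
            url,
        ],
        capture_output=True,
        text=True,
        timeout=30,
        check=True,
    )
    return result.stdout  # the rendered DOM after scripts ran
```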
@mfed1122 yeah, that’s my worry: what’s an acceptable wait time for users? A tenth of a second is usually not noticeable to a human, but is it useful in this context? What about half a second, and so on?
I don’t know that I want a web where everything is artificially slowed by a full second for each document
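To put rough numbers on it: with an Anubis-style SHA-256 proof-of-work where difficulty is the number of leading zero hex digits, each extra digit multiplies the expected work by 16, so you can estimate solve time per difficulty from a quick benchmark. This is pure-Python hashing (real clients in native code or WebCrypto hash much faster), and the difficulty scheme is my assumption about the general approach, not Anubis’s exact parameters:

```python
# Rough estimate of expected challenge solve time per difficulty level.
# Expected attempts for d leading zero hex digits is 16**d.
import hashlib
import time

def hash_rate(samples: int = 200_000) -> float:
    start = time.perf_counter()
    for i in range(samples):
        hashlib.sha256(str(i).encode()).hexdigest()
    return samples / (time.perf_counter() - start)

rate = hash_rate()
for d in range(1, 7):
    expected = 16 ** d / rate
    print(f"difficulty {d}: ~{expected:.3f}s expected")
```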