Lvxferre [he/him]

The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 479 Comments
Joined 1 year ago
Cake day: January 12th, 2024

  • Yes, it is expensive. But most of that cost is not because of simple applications, like in my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.

    Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian gender-bending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at reconstructing those procedural rules from examples in the data.

    But I wish it had some broader use that would justify its cost.

    I wish they’d cut the costs down to match the current uses: small models for specific applications, dirt cheap in both training and running costs.

    (In both our cases, it’s about matching cost vs. use.)



  • Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!

    …what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But it certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.


  • The whole thing can be summed up as the following: they’re selling you a hammer and telling you to use it with screws. Once you hammer the screw, it trashes the wood really bad. Then they’re calling the wood trashing “hallucination”, and promising you better hammers that won’t do this. Except a hammer is not a tool to use with screws dammit, you should be using a screwdriver.

    An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates.

    So he’s suggesting that the models are producing less accurate results… because they have higher rates of less accurate results? This is a tautological pseudo-explanation.

    AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades over the past months

    When are people going to accept the fact that large “language” models are not general intelligence?

    ideally to make them better at giving us answers we can trust

    Those models are useful, but only a fool trusts (i.e. is gullible towards) their output.

    OpenAI says the reasoning process isn’t to blame.

    Just like my dog isn’t to blame for the holes in my garden. Because I don’t have a dog.

    This is sounding more and more like model collapse - models perform worse when trained on the output of other models.

    inb4 sealions asking what’s my definition of reasoning in 3…2…1…





  • As I mentioned in another post about the same topic, he’s tying a sinking ship to another. So both can sink together.

    Musk said the combined company will “build a platform that doesn’t just reflect the world but actively accelerates human progress.”

    The funniest part is that this might not be a lie - I wouldn’t be surprised if Musk genuinely believed that.

    …let’s get real. xAI’s main product is Grok, a text and image generator. Twitter is basically a blog platform for the sort of people who whine “WAAAH! TL;DR!”. Merge both and you’ll get what? Automated shitposting!




  • Etymologically “agent” is just a fancy borrowed synonym for “doer”. So an AI agent is an AI that does. Yup, it’s that vague.

    You could instead restrict the definition further, and say that an AI agent does things autonomously. Then the concept is mutually exclusive with “assistant”, as the assistant does nothing on its own; it’s only there to assist someone else. And yet look at what Pathak said - that she understood both things to be interchangeable.

    …so might as well say that “agent” is simply the next buzzword, since people aren’t so excited about the concept of artificial intelligence any more. They’ve used those dumb text gens, gave them either a six-fingered thumbs up or a thumbs down, but they’re generally aware that it doesn’t do a fraction of what they believed it would.





  • No, I only saw it after I solved the problem.

    my reasoning / thought process

    Initially I simplified the problem to one prisoner. The best way to reduce uncertainty was to split the bottles into two sets of 500 each; the prisoner drinks from one set, and if he dies the poisoned wine is there, otherwise it’s in one of the other 500 bottles.

    Then I added in a second prisoner. The problem doesn’t give me enough time to wait for the first prisoner to die before knowing which set had the poisoned wine, so I had to have the second prisoner drinking at the same time as the first, from a partially overlapping set. This means splitting the bottles into four sets instead: “both drink it”, “only #1 drinks it”, “only #2 drinks it”, “neither drinks it”.

    Extending this reasoning further to 10 prisoners, I’d have 2¹⁰=1024 sets. That’s enough to uniquely identify which bottle has poison. Then the binary part is just about keeping track of who drinks what.
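    Here’s a minimal Python sketch of that splitting argument (purely illustrative, all names made up): ten rounds of halving leave 2¹⁰ = 1024 outcome sets, each holding at most one bottle.

    ```python
    # Replay the set-splitting above: start with all bottles in one set, and
    # let each prisoner split every set into a half he drinks and a half he
    # doesn't. (Illustrative sketch; not from the original comment.)

    def split_sets(num_bottles: int, num_prisoners: int) -> list[list[int]]:
        sets = [list(range(num_bottles))]
        for _ in range(num_prisoners):
            new_sets = []
            for s in sets:
                half = len(s) // 2
                new_sets.append(s[:half])  # this prisoner drinks these bottles
                new_sets.append(s[half:])  # ...and skips these
            sets = new_sets
        return sets

    sets = split_sets(1000, 10)
    print(len(sets))                  # 1024 = 2**10 outcome sets
    print(max(len(s) for s in sets))  # 1: each bottle is uniquely identified
    ```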


  • solution

    Number all bottles in binary, starting from 0000000000. Then the Nth prisoner drinks all wines where the Nth digit is “1”.

    So, for example, if you had 8 bottles and 3 prisoners (exact same logic):

    • Number your wines 000, 001, 010, 011, 100, 101, 110, 111
    • Prisoner 1 drinks wines 100, 101, 110, 111; if he dies the leftmost digit of the poisoned wine is 1, if he lives it’s 0
    • Prisoner 2 drinks wines 010, 011, 110, 111; if he dies the middle digit is 1, else it’s 0
    • Prisoner 3 drinks wines 001, 011, 101, 111; if he dies the rightmost digit is 1, else it’s 0

    If nobody dies, the poisoned wine is numbered 000; if all die, it’s 111.
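    For completeness, a small Python sketch of the example (names are made up, but the logic is exactly the one above): number the bottles in binary, have prisoner N drink every bottle whose Nth digit is 1, then read the deaths back as a binary number.

    ```python
    NUM_PRISONERS = 3                 # same logic scales to 10 prisoners
    NUM_BOTTLES = 2 ** NUM_PRISONERS  # bottles numbered 000..111 in binary

    def drinks(prisoner: int, bottle: int) -> bool:
        # Prisoner N drinks every bottle whose Nth binary digit (counting
        # from the left, i.e. the most significant bit) is 1.
        shift = NUM_PRISONERS - 1 - prisoner
        return (bottle >> shift) & 1 == 1

    def identify(poisoned: int) -> int:
        # Read the deaths as a binary number: each dead prisoner contributes
        # a 1 in his digit position, giving the poisoned bottle's number.
        result = 0
        for prisoner in range(NUM_PRISONERS):
            result = (result << 1) | int(drinks(prisoner, poisoned))
        return result

    for bottle in range(NUM_BOTTLES):
        assert identify(bottle) == bottle  # nobody dies -> 000; all die -> 111
    print("all", NUM_BOTTLES, "bottles identified correctly")
    ```

    Set NUM_PRISONERS = 10 and the same sketch covers the original 1000-bottle puzzle, with 24 spare numbers.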