The "how many r's in strawberry" question breaks it because the model doesn't read your question character by character. It tokenizes it. So it sees (straw)(berry), except it's more like (477389583)(84838582), and it knows contextually that when those two tokens appear in sequence they mean a different set of things than if there were whitespace between them.
The tokens are, essentially, numeric IDs. The model never sees your individual characters. That's why this is hard for it.
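To make that concrete, here's a toy sketch of the difference between what you type and what the model sees. This is not a real tokenizer; the vocabulary and IDs are invented to mirror the made-up numbers above:

```python
# Toy illustration of tokenization -- the vocabulary and token IDs here
# are invented for this example, not taken from any real model.
TOY_VOCAB = {"straw": 477389583, "berry": 84838582, " berry": 12345}

def toy_tokenize(text):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in TOY_VOCAB:
                tokens.append(TOY_VOCAB[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for text starting at index {i}")
    return tokens

print(toy_tokenize("strawberry"))   # [477389583, 84838582]
print(toy_tokenize("straw berry"))  # [477389583, 12345] -- different IDs
# The model receives these integers; the letter 'r' appears nowhere in them.
```

Note how adding a space changes the second token ID entirely: to the model, "strawberry" and "straw berry" are different sequences of numbers, not the same letters with a gap.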
Tasks that require exact repetition or internal counting tend to fail as well. E.g. "say banana 142 times" will usually not produce exactly 142 bananas.
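Ordinary program code, by contrast, has no trouble with exact repetition; the failure is specific to generating text token by token with no reliable internal counter:

```python
# Exact repetition is trivial for a program but unreliable for an LLM,
# which has to "count" implicitly while emitting one token at a time.
text = " ".join(["banana"] * 142)
print(text.split().count("banana"))  # 142, every time
```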
As to how they fix these, I'm not positive. There are a bunch of ways to work around issues like this.
I'm not a deep expert on LLMs, but I've been following their development and I write code that uses them, so I can think of two systemic approaches to "solving" the strawberry problem.
One is chain-of-thought reasoning, where the LLM does some preliminary note-taking (essentially talking to itself) before it gives a final answer. I’ve seen it tackle problems like this by saying “okay, how is strawberry spelled?”, listing out the individual letters (presumably because somewhere in its training data was information that let it memorize the spellings of common tokens) and then counting them.
Another is the "agentic" approach, where the LLM might be explicitly provided with functions that allow it to hand the problem off to specialized program code. E.g., there could be a count_letters(string, letter_to_count) function that it's able to call. I expect that sort of thing would only be present for an LLM working in a framework where that sort of question is known to matter, though, and I'm not sure what that might be in the real world. Something helping users fill out forms, perhaps? Or a "language tutor" that's expected to be able to figure out whatever weird incorrect words a student might type?
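A minimal sketch of what that could look like, assuming the count_letters helper named above plus a hypothetical dispatch loop standing in for whatever real function-calling framework routes the model's tool calls:

```python
# Sketch of the "agentic"/tool-calling approach. count_letters is the
# hypothetical helper mentioned above; the registry and dispatcher stand
# in for a real function-calling framework.
def count_letters(string: str, letter_to_count: str) -> int:
    """Deterministically count occurrences of a letter, case-insensitively."""
    return string.lower().count(letter_to_count.lower())

TOOLS = {"count_letters": count_letters}

def dispatch(tool_call: dict):
    """Route a structured tool call emitted by the model to real code."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Instead of guessing at the answer, the model emits a structured call:
call = {"name": "count_letters",
        "arguments": {"string": "strawberry", "letter_to_count": "r"}}
print(dispatch(call))  # 3
```

The point is that the counting happens in deterministic code that operates on actual characters, so the tokenization problem never comes into play.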
There are also LLMs that skip tokenization and feed the literal string of characters (or bytes) into the neural network, but as far as I'm aware none of the commonly used ones work that way. They're just research models for now.