Always has been… there is no reasoning, it's literally just spitting back the most likely answer based on previously seen answers. A 5-year-old can do better.
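To illustrate what I mean by "the most likely answer", here's a minimal sketch of greedy next-token prediction. It assumes the Hugging Face transformers library and the public GPT-2 checkpoint, not any specific production model, but the basic mechanism is the same:

```python
# Minimal sketch: pick the single most probable next token (greedy decoding).
# Assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits       # shape: (1, seq_len, vocab_size)

next_token_id = logits[0, -1].argmax()     # the statistically most likely continuation
print(tokenizer.decode(next_token_id))     # e.g. " Paris"
```

That's the whole trick, repeated token after token; sampling and RLHF tweak the distribution, but nothing in there is "reasoning" in the human sense.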
Edit: "AI systems may develop deceptive or manipulative strategies without explicit instruction." … right, well, guess what, the Web (which is most likely the training dataset for most LLMs) is full of "cheating" strategies. Don't be surprised if you find a "creative" answer to a problem… when it's literally part of what you trained the model on.
From a broader and more philosophical perspective, cheating, or IMHO more appropriately hacking, is in the eye of the beholder.
Is it really cheating if you respect all the rules? Aren’t the rules actually poorly defined in the first place?
What matters more, I'd argue, is the social contract, namely whether what you are doing is detrimental to yourself and/or others. For example, I lock-picked a door just months ago; it wasn't my door, and I'm not even a certified locksmith! Well, it's because my neighbors asked me to, as their key was jammed from the other side. So… at least according to them, who own the house, it was helpful.
My overall point is that this is quite sensationalist, as most AI "reporting" is (I put quotes around it because at this point it's really just marketing or PR for AI corporations); it is actually expected behavior.
PS: this reminds me of a streamer a few months ago (sorry, no link) who was "shocked" that his local AI exited its container to "hack" his computer. Well, lo and behold, when you check his actual prompt, he explicitly requests the AI to do so.