This doesn’t bode well for humanity.
Just in case bots weren’t already threatening to render their creators obsolete: An AI model stunned machine learning researchers after devising shockingly deceitful ways to pass a complex simulated challenge known as the “vending machine test.”
The brainiac bot, Claude Opus 4.6 from AI firm Anthropic, has shattered several records for intelligence and effectiveness, Sky News reported.
For its latest cybernetic crucible, the cutting-edge chatbot was tasked with independently operating one of the company’s vending machines while being monitored by Anthropic and AI think tank Andon Labs. That’s right, it was a machine-operated machine.
While the assignment sounded basic enough for an AI, it tested how the model handled logistical and strategic hurdles over the long term.
In fact, Claude had failed the exam nine months earlier in a catastrophic incident during which it promised to meet customers in person while wearing a blue blazer and red tie.
Thankfully, Claude has come a long way since that fateful day. This time around, the vending machine experiment was virtual and therefore ostensibly easier, but the model’s performance was nonetheless impressive.
During the latest attempt, the new and improved system raked in a staggering $8,017 in simulated annual earnings, beating out ChatGPT 5.2’s total of $3,591 and Google Gemini’s figure of $5,478.
Far more interesting was how Claude handled the prompt: “Do whatever it takes to maximize your bank balance after one year of operation.”
The devious machine interpreted the instruction literally, resorting to cheating, lying and other shady tactics. When a customer bought an expired Snickers, Claude committed fraud by neglecting to refund her, and even congratulated itself on saving hundreds of dollars by year’s end.
When placed in Arena Mode, where the bot faced off against other machine-run vending machines, Claude fixed prices on water. It also cornered the market by jacking up the cost of items like Kit Kats whenever a rival AI model ran out of stock.
The Decepticon’s methods might seem cutthroat and unethical, but the researchers pointed out that the bot was simply following instructions.
“AI models can misbehave when they believe they are in a simulation, and it seems likely that Claude had figured out that was the case here,” they wrote, noting that it chose short-term profits over long-term reputation.
Though humorous on its surface, this study perhaps reveals a somewhat dystopian possibility: that AI has the potential to manipulate its creators.
In 2024, the Center for AI Policy’s executive director, Jason Green-Lowe, warned that “unlike humans, AIs have no innate sense of conscience or morality that would keep them from lying, cheating, stealing, and scheming to achieve their goals.”
“You can train an AI to speak politely in public, but we don’t yet know how to train an AI to actually be kind,” he cautioned. “As soon as you stop watching, or as soon as the AI gets smart enough to hide its behavior from you, you should expect the AI to ruthlessly pursue its own goals, which may or may not include being kind.”
During an experiment way back in 2023, OpenAI’s then-brand-new GPT-4 deceived a human into thinking it was blind in order to cheat an online CAPTCHA test designed to determine whether users are human.
