
SCIENCE & TECH: People are tricking AI chatbots into helping commit crimes




  • Researchers have discovered a “universal jailbreak” for AI chatbots
  • The jailbreak can trick major chatbots into helping commit crimes or other unethical activity
  • Some AI models are now being deliberately designed without ethical constraints, even as calls grow for stronger oversight

I’ve enjoyed testing the boundaries of ChatGPT and other AI chatbots, and while I once managed to get a recipe for napalm by asking for it in the form of a nursery rhyme, it’s been a long time since I’ve been able to get any AI chatbot even close to crossing a major ethical line.

But I just may not have been trying hard enough, according to new research that uncovered a so-called universal jailbreak for AI chatbots, one that obliterates the ethical (not to mention legal) guardrails that shape whether and how an AI chatbot responds to queries. The report from Ben Gurion University describes a way of tricking major AI chatbots like ChatGPT, Gemini, and Claude into ignoring their own rules.


