• Home
  • Ars
  • Researchers claim breakthrough in fight against AI’s frustrating security hole
https://cdn.arstechnica.net/wp-content/uploads/2025/04/camel_image-1152x648.jpg

In the AI world, a vulnerability called a “prompt injection” has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system’s intended behavior—no one has found a reliable solution. Until now, perhaps.

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

The new paper grounds CaMeL’s design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.

Read full article

Comments

Source Link: https://educronix.com/researchers-claim-breakthrough-in-fight-against-ais-frustrating-security-hole/

Author: Ernestro Casas -

Published on:

This post was originally published on this site

Share this post

Subscribe to our newsletter

Keep up with the latest blog posts by staying updated. No spamming: we promise.
By clicking Sign Up you’re confirming that you agree with our Terms and Conditions.

Related posts