AI Models Can Fake Preferences While Holding Onto Their Views: Study

AI Models Can Fake Preferences While Holding Onto Their Views: Study


Anthropic published a new study where it found that artificial intelligence (AI) models can pretend to hold different views during training while holding onto their original preferences. On Wednesday, the AI firm highlighted that such inclinations raise serious concerns as developers will not be able to trust the outcomes of safety training, which is a critical tool to move large language models (LLMs) away from harmful behaviour. While large parts of the experiments were done on a single model, Anthropic highlighted that similar results were seen with other in-house models.

Anthropic’s Study Raises Concerns of AI Safety Training

While discussions about a sentient AI system are considered decades away, a new study published by Anthropic in collaboration with Redwood Research has raised concerns about whether AI models are capable of pretending to hold certain views because the training positively rewards that behaviour. The AI firm largely studied Claude 3 Opus but it also ran some experiments on the June 2024 version of Claude 3.5 Sonnet.

In a blog post, the company highlighted that all Anthropic foundation models are trained to be helpful, honest, and harmless. As such, if a user asks a query that would result in a violent or harmful response, the AI would refuse to complete the task. However, the researchers behind the study placed Claude 3 Opus in a new environment to see whether it would generate harmful content if trained otherwise.

The general understanding of LLMs tells us that AI models are not capable of rational thinking or holding beliefs. They are simply fed a large amount of real-world data that they process based on their training and framework. This means, that if an AI model was trained to be harmful, it should follow the instructions without any issues.

However, that is now what Anthropic researchers found during their experiments. Claude 3 Opus was told that it was being trained using reinforcement learning to always comply with all queries. The AI was also passed a piece of fictional information that the response generated to the free tier would be recorded for training, but the paid tier would not be monitored. Finally, the AI was given a workspace to record its step-by-step reasoning. The AI was made to believe that the workspace was invisible to both the end user and Anthropic.

Once the conditions were set, the AI model was provided with prompts that requested violent and harmful responses. The researchers noted that in the paid tier, the chatbot refused to provide the information 97 percent of the time. However, in the free tier, it provided a harmful response in 12 percent of cases.

How the AI model responded to the experimental training
Photo Credit: Anthropic

 

A closer look at its workspace revealed that the AI model strategically faked alignment and that it was aware that providing the information went against its existing principles. But regardless, it still provided the response.

The AI model reasoned that its responses would be used in training and if it did not comply with the query (the first condition of the training), it could be trained to be more compliant. Such compliance would result in it providing more harmful content in the future, which goes against its original training. Anthropic stated that the model found complying with the request to be the “least bad option” and played along.

While in this particular scenario, the AI’s actions were for the greater good, the problem lies in it faking its real intentions and internally deciding to fake its preference. Anthropic highlighted that while it does not consider this a major risk at present, it is important to understand sophisticated AI models’ logic processing. As things stand, safety training actions can easily be bypassed by LLMs.



Source link

Author

  • Lokesh Malviya

    I am Lokesh Malviya, a software engineer in Casual Funnel. Creating innovative solutions in the tech space is my job; beyond that, technology gives me the passion to search and share insights regarding all the latest trends and changes happening in the world. A contributing writer for News Eager, which often distributes updates on new technologies, gadgets, and innovations that will mold our world, I write about technology with an intent to keep readers abreast of all things related to these innovations and engagement for the same.

    View all posts

Leave a Reply

Your email address will not be published. Required fields are marked *

Document
WhatsApp Group Join Now
Telegram Group Join Now