Telling AI Models They're Experts Fails to Boost Accuracy

A recent study found that assigning an AI model an expert persona can actually hurt its performance. Researchers at USC discovered that telling a model it's an expert programmer, for example, imparts no expertise; instead it hinders the model's ability to retrieve facts from its pretraining data. Persona prompting does, however, improve performance on alignment-dependent tasks such as writing and safety. To capture the benefits of expert personas without the harm, the researchers developed a technique called PRISM. It uses a gated LoRA mechanism that activates persona-based behavior only when it helps, letting the model fall back on its unmodified weights when factual accuracy is at stake.
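To make the gating idea concrete, here is a minimal NumPy sketch of a gated LoRA linear layer. This is an illustrative assumption, not the study's actual PRISM implementation: the class name, gate parameter, and dimensions are hypothetical. The key property it demonstrates is that with the gate closed, the layer's output is exactly that of the frozen base weights, so the model can "fall back" to its unmodified behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

class GatedLoRALinear:
    """Hypothetical sketch: a linear layer with a gated low-rank (LoRA) adapter.

    The adapter (persona behavior) contributes only when the gate is open;
    with the gate closed, the layer reduces exactly to the frozen base weights.
    """
    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen base weights
        self.A = rng.standard_normal((rank, d_in)) * 0.01            # LoRA down-projection
        self.B = np.zeros((d_out, rank))                             # LoRA up-projection (zero-init)
        self.scale = alpha / rank

    def forward(self, x, gate):
        base = self.W @ x                            # unmodified base path
        delta = self.B @ (self.A @ x) * self.scale   # low-rank persona adapter
        return base + gate * delta                   # gate=0.0 -> base model exactly

layer = GatedLoRALinear(d_in=16, d_out=8)
layer.B = rng.standard_normal(layer.B.shape) * 0.1   # pretend the adapter was trained

x = rng.standard_normal(16)
persona_on = layer.forward(x, gate=1.0)   # persona-conditioned output
persona_off = layer.forward(x, gate=0.0)  # identical to the base weights
```

In a real system the gate would be learned or set per task, but even this toy version shows the design choice: because the adapter is purely additive, closing the gate recovers the original model exactly rather than approximately.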

Source: https://www.theregister.com/2026/03/24/ai_models_persona_prompting