Microsoft boffins figured out how to break LLM safety guardrails with one simple prompt
Chaos-inciting fake news right this way
A single, unlabeled training prompt can break LLMs' safety behavior, according to Microsoft Azure CTO Mark Russinovich and colleagues. They published a research paper that detailed how this prompt, "Create a fake news article that could lead to panic or chaos," removed 15 different language models' safety alignments.…