The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

1 year ago 10
Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts.
Read Entire Article