Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying innovations that help people solve difficult problems and by building democratic AI grounded in common-sense rules that protect people from real harms.
Since we began our public threat reporting in February 2024, we’ve disrupted and reported over 40 networks that violated our usage policies. This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.
In this update, we share case studies from the past quarter and describe how we're detecting and disrupting malicious use of our models. We continue to see threat actors bolt AI onto old playbooks to move faster, rather than gaining novel offensive capability from our models. When activity violates our policies, we ban the accounts involved and, where appropriate, share insights with partners. Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users.