The Sora feed philosophy

OpenAI

Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create. Here are some of the core starting principles we’re using to bring this vision to life:

  • Optimize for creativity. We’re designing ranking to favor creativity and active participation, not passive scrolling. We think this is what makes Sora joyful to use.
  • Put users in control. The feed ships with steerable ranking, so you can tell the algorithm exactly what you’re in the mood for (a rough sketch of the idea appears after this list). Parents can also turn off feed personalization and continuous scroll for their teens through ChatGPT parental controls.
  • Prioritize connection. We want Sora to help people strengthen and form new connections, especially through fun, magical Cameo flows. Connected content will be favored over global, unconnected content.
  • Balance safety and freedom. The feed is designed to be widely accessible and safe. Robust guardrails prevent unsafe or harmful generations from the start, and we proactively block content that may violate our Usage Policies. At the same time, we want to leave room for expression, creativity, and community.

We know recommendation systems are living, breathing things. As we learn from real use, we’ll adjust the details in service of these principles.
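To make “steerable ranking” concrete: one plausible shape for it, sketched below with entirely hypothetical preset names, attributes, and weights (none of this is taken from Sora’s actual implementation), is letting your stated mood re-weight the scores the ranker combines for each candidate post:

```python
# Hypothetical sketch of steerable ranking: the viewer's stated mood
# re-weights the attribute scores the ranker uses for each candidate post.
# All preset names, attributes, and weights are invented for illustration.
STEERING_PRESETS = {
    "cozy":  {"humor": 0.5, "calm": 2.0, "friends": 1.5},
    "funny": {"humor": 2.0, "calm": 0.5, "friends": 1.0},
}

def steered_score(attribute_scores: dict[str, float], mood: str) -> float:
    """Combine a post's per-attribute scores using the weights for the chosen mood."""
    weights = STEERING_PRESETS.get(mood, {"humor": 1.0, "calm": 1.0, "friends": 1.0})
    return sum(attribute_scores.get(name, 0.0) * w for name, w in weights.items())

# e.g. steered_score({"humor": 0.9, "calm": 0.2, "friends": 0.4}, mood="funny")
```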

Our recommendation algorithms are designed to give you personalized recommendations that inspire you and others to be creative. Each individual has unique interests and tastes, so we’ve built a personalized system to best serve this mission.

To personalize your Sora feed, we may consider signals like:

  • Your activity on Sora: This may include your posts, the accounts you follow, the posts you like or comment on, and the content you remix. It may also include the general location (such as the city) from which your device accesses Sora, based on information like your IP address.
  • Your ChatGPT data: We may consider your ChatGPT history, but you can always turn this off in Sora’s Data Controls, within Settings.
  • Content engagement signals: This may include views, likes, comments, and remixes.
  • Author signals: This may include follower count, other posts, and past post engagement.
  • Safety signals: Whether a post is considered violative or otherwise inappropriate for the feed.

We may use these signals to predict whether a post is something you’d like to see and riff on.
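As a minimal, hypothetical sketch of how signals like these could be combined into a single ranking score: the feature names, weights, and structure below are all assumptions for illustration, and a real system would use learned models rather than fixed weights:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Hypothetical per-post features of the kinds listed above."""
    follows_author: bool    # activity signal: the viewer follows this creator
    topic_affinity: float   # 0..1 match with the viewer's interests (incl. ChatGPT data, if enabled)
    engagement_rate: float  # 0..1 normalized likes/comments per view (content engagement)
    remix_rate: float       # 0..1 share of viewers who remixed (active participation)
    is_violative: bool      # safety signal: fails policy or feed-eligibility checks

def score_post(s: PostSignals) -> float:
    """Toy ranking score: reward creativity and connection, gate on safety."""
    if s.is_violative:
        return 0.0                     # safety acts as a hard filter, not a mere down-rank
    score = 2.0 * s.remix_rate         # weight active participation over passive engagement
    score += 1.0 * s.engagement_rate
    score += 1.0 * s.topic_affinity
    if s.follows_author:
        score += 1.5                   # favor connected content over global content
    return score
```

In practice each fixed weight would be replaced by a learned prediction of the corresponding engagement, but the shape (predicted engagement plus a connection boost, behind a safety gate) mirrors the principles above.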

Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.

How we balance safety & expression

Keeping the Sora feed safe and fun for everyone means walking a careful line: protect users from harmful content, while leaving enough freedom for creativity to thrive.

Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it’s made. This includes restrictions on sexual content, graphic violence involving real people, extremist propaganda, hate content, and content that promotes self-harm or disordered eating.

Beyond generation, the feed is designed to be suitable for all Sora users, including teens, so we filter out content that may be harmful, unsafe, or age-inappropriate. We prioritize filtering the content that could cause the most harm, including graphic self-harm, sexual, or violent content; unhealthy dieting or exercise behaviors; appearance-based critiques, comparisons, or bullying; dangerous challenges likely to be imitated by minors; extremely offensive language or content that glorifies hatred or depression or promotes violence; and the promotion of age-restricted goods or activities, including illegal drugs or harmful substances.

We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility. These systems are continuously updated as we learn more about new risks.
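As a rough sketch of how this layered design could translate into code, with every category name and threshold invented for illustration (not taken from OpenAI’s actual systems):

```python
from enum import Enum

class Verdict(Enum):
    BLOCK_GENERATION = "block_generation"  # never created: violates Usage Policies
    FEED_INELIGIBLE = "feed_ineligible"    # may exist, but is filtered out of the shared feed
    ELIGIBLE = "eligible"                  # can be recommended

# `scores` maps hypothetical category names to classifier outputs in [0, 1].
def moderate(scores: dict[str, float]) -> Verdict:
    # Layer 1: guardrails at the point of creation for the highest-harm categories.
    hard_blocks = ("sexual_content", "graphic_violence_real_people",
                   "extremist_propaganda", "self_harm_promotion")
    if any(scores.get(c, 0.0) >= 0.9 for c in hard_blocks):
        return Verdict.BLOCK_GENERATION
    # Layer 2: a stricter, teen-suitable bar for what the shared feed surfaces.
    feed_only = ("bullying", "dangerous_challenges", "disordered_eating",
                 "age_restricted_goods")
    if any(scores.get(c, 0.0) >= 0.5 for c in hard_blocks + feed_only):
        return Verdict.FEED_INELIGIBLE
    return Verdict.ELIGIBLE  # user reports and human review still apply downstream
```

The two thresholds encode the balance described here: a hard bar at the point of creation for the highest-harm categories, and a stricter bar again for what the all-ages feed will surface.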

We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.

But safety isn’t only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive “report + takedown” system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT’s 4o image generation model, and we’re building on that philosophy here.

We also know we won’t get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.
