About the Team
At OpenAI, our Trust, Safety & Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.
We support users across ChatGPT, our API, enterprise offerings, and developer tools, handling sensitive inbound cases, building detection and enforcement systems, and scaling operational processes to meet the demands of a fast-moving, high-stakes environment.
About the Role
We are seeking experienced, senior-level analysts who specialize in one or more of the following areas:
Emerging Risk Operations
- Identifying, triaging, and mitigating new and complex safety, policy, or integrity challenges in a rapidly evolving AI landscape.
Safety Response Operations
- Overseeing vendor escalations and workflows, managing internal escalations, conducting quality reviews, driving operational support, and leading model labeling and training efforts.
In this role, you will own high-sensitivity workflows, act as an incident manager for complex cases, and build scalable operational systems, including tooling, automation, and vendor processes, that reinforce user safety and trust while meeting our legal, ethical, and product obligations.
We use a hybrid work model of 3 days in the London office per week and offer relocation assistance to new employees.