Model Risk / AI Safety Specialist

London, England, United Kingdom

Job Description

About the Role



Our client is seeking a highly skilled Model Risk / AI Safety Specialist to lead governance, validation, and oversight of AI/ML models across the organisation.

This role focuses on ensuring that AI systems are safe, fair, explainable, compliant, and robust, with particular emphasis on Large Language Models (LLMs), predictive models, automation pipelines, and decision-making systems used across regulated and non-regulated environments.

The ideal candidate will combine strong quantitative skills, technical understanding of modern AI systems, and a clear grasp of emerging AI safety, assurance, and risk frameworks.

Key Responsibilities



  • Lead independent validation and risk assessment of ML models, LLM pipelines, scoring systems, and automated decision tools.
  • Conduct comprehensive testing for:
      • Bias & fairness
      • Explainability (XAI)
      • Robustness & adversarial resistance
      • Data integrity & drift
      • Model stability and governance controls
  • Develop and implement frameworks aligned to global standards (NIST AI Risk Management Framework, ISO/IEC 42001, EU AI Act).
  • Build automated model monitoring dashboards for performance, drift, bias, and safety incidents.
  • Review and challenge model design, assumptions, training data quality, and documentation.
  • Partner with Data Scientists, ML Engineers, and Product Teams to address findings and embed safe-by-design practices.
  • Create clear model risk reports for senior stakeholders, regulators, and audit teams.
  • Support the development of Responsible AI policies, governance committees, and internal safety playbooks.
  • Assess and mitigate risks in LLM-driven products (prompt injection, hallucinations, privacy leakage, misuse).
  • Contribute to scenario testing, stress testing, and synthetic data evaluation.
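As an illustration of the kind of drift monitoring the role involves, below is a minimal population stability index (PSI) sketch in plain Python. The bucketing scheme, the 0.25 alert threshold, and the sample scores are illustrative assumptions, not part of the role specification:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.
    Common rule of thumb: PSI < 0.1 is stable, > 0.25 signals material drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = hi + 1e-9  # make the top edge inclusive of the max value

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # floor each fraction to avoid log(0) on empty buckets
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # training-time scores
shifted = [min(1.0, s + 0.3) for s in baseline]   # drifted production scores
print(round(psi(baseline, baseline), 4))  # identical samples -> 0.0
print(psi(baseline, shifted) > 0.25)      # large shift flags drift -> True
```

In a production dashboard this comparison would typically run per feature and per score band on a schedule, with breaches routed into the safety-incident workflow described above.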

Essential Skills & Experience



  • 4+ years in Model Risk, AI Safety, Model Validation, Data Science, Quant, or related fields.
  • Strong understanding of machine learning, LLM architectures, vector databases, and MLOps.
  • Experience with model validation techniques, risk scoring, explainability, and fairness assessments.
  • Proficiency in Python, SQL, and model evaluation frameworks (SHAP, LIME, ELI5, MLflow).
  • Understanding of regulatory and emerging AI governance standards (EU AI Act, GDPR, NIST).
  • Experience producing technical and non-technical risk documentation for stakeholders.
  • Strong analytical and critical-thinking skills with the ability to challenge assumptions.
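To give a concrete flavour of the fairness assessments mentioned above, here is a minimal demographic-parity gap check in plain Python. The function name, the two-group example, and the decision data are illustrative assumptions only:

```python
def demographic_parity_gap(predictions, groups):
    """Largest absolute difference in positive-decision rate across groups.
    predictions: iterable of 0/1 model decisions; groups: matching group labels."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy example: group "a" is approved 75% of the time, group "b" only 25%.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # -> 0.5
```

Real validations would use richer metrics (equalised odds, calibration by group) via libraries such as SHAP for attribution, but the core idea of comparing outcome rates across protected groups is the same.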

Desirable Skills



  • Knowledge of adversarial ML, red-teaming, and AI safety testing.
  • Exposure to financial services, healthcare, or other regulated industries.
  • Experience with:
      • Databricks
      • Snowflake
      • Azure Machine Learning
      • Model registries and monitoring tools
  • Background in quantitative finance, mathematics, or risk modelling.
  • Understanding of ethical AI, differential privacy, and secure model design.
Job Types: Full-time, Permanent, Temp to perm
Contract length: 12 months

Pay: £62,786.59 - £115,148.35 per year

Work Location: Hybrid remote in London E16




Job Detail

  • Job Id
    JD4237172
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type:
    Full Time
  • Salary:
    Not mentioned
  • Employment Status
    Full Time
  • Job Location
    London, ENG, GB, United Kingdom
  • Education
    Not mentioned