ML Engineer, Forward Deployed (Europe)

London, ENG, GB, United Kingdom

Job Description


We're building Orbital, an industrial AI system that runs live in refineries and upstream assets, ingesting sensor data, running deep learning + physics hybrid models, and serving insights in real time. As a Forward Deployed ML Engineer, you'll sit at the intersection of research and deployment: turning notebooks into containerised microservices, wiring up ML inference pipelines, and making sure they run reliably in demanding industrial environments.




This role is not just about training models. You'll write PyTorch code when needed, package models into Docker containers, design message-brokered microservice architectures, and deploy them in hybrid on-prem/cloud setups. You'll also be customer-facing: working with process engineers and operators to integrate Orbital into their workflows.



Location:

Whilst you will be based in Europe (and eligible to work there), this role will involve travel to India and the USA.



Core Responsibilities





Model Integration & Engineering




Take research models (time-series, deep learning, physics-informed) and productionise them in PyTorch. Wrap models into containerised services (Docker/Kubernetes) with clear APIs. Optimise inference pipelines for latency, throughput, and reliability.
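To give a flavour of what "wrapping a model with a clear API" means here, the sketch below shows a minimal service class with input validation, of the kind that would sit behind an HTTP layer (FastAPI, TorchServe, etc.) inside a container. The model is a stub callable standing in for a real PyTorch module; all names are illustrative, not from Orbital's codebase.

```python
# Minimal sketch: wrap a trained model behind a validated predict() API
# so it can be containerised and exposed as a service. The "model" is a
# stub standing in for e.g. a torch.nn.Module in eval mode.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class InferenceService:
    """Wraps a model callable behind a validated predict() API."""
    model: Callable[[List[float]], float]
    n_features: int  # expected input width

    def predict(self, features: List[float]) -> float:
        # Fail loudly on malformed input instead of silently mis-predicting.
        if len(features) != self.n_features:
            raise ValueError(
                f"expected {self.n_features} features, got {len(features)}"
            )
        return self.model(features)


# Stub model: a fixed linear combination standing in for a trained network.
def stub_model(x: List[float]) -> float:
    weights = [0.5, -0.25, 1.0]
    return sum(w * v for w, v in zip(weights, x))


service = InferenceService(model=stub_model, n_features=3)
print(service.predict([2.0, 4.0, 1.0]))  # 0.5*2 - 0.25*4 + 1*1 = 1.0
```

The same shape carries over to a real deployment: swap the stub for a loaded PyTorch model and put the `predict` call behind a web framework's route handler.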




Microservices & Messaging




Design and implement ML pipelines as multi-container microservices. Use message brokers (Kafka, RabbitMQ, etc.) to orchestrate data flow between services. Ensure pipelines are fault-tolerant and scalable across environments.
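The broker-mediated pattern described above can be sketched with Python's standard `queue` module standing in for a real broker such as Kafka or RabbitMQ: an ingest stage publishes readings to a topic, and an inference stage consumes them and publishes predictions. In production each stage would be its own container and the queues would be broker topics; everything here is illustrative.

```python
# Sketch of a broker-mediated two-stage pipeline. queue.Queue stands in
# for broker topics (e.g. Kafka); each function stands in for a
# separately containerised service.

import queue

sensor_topic = queue.Queue()      # stand-in for a "sensors" topic
prediction_topic = queue.Queue()  # stand-in for a "predictions" topic


def ingest(readings):
    """Publish raw readings to the sensor topic."""
    for r in readings:
        sensor_topic.put(r)
    sensor_topic.put(None)  # sentinel: end of stream


def inference_worker(model):
    """Consume readings, run the model, publish predictions."""
    while True:
        msg = sensor_topic.get()
        if msg is None:
            prediction_topic.put(None)  # propagate end-of-stream
            break
        prediction_topic.put(model(msg))


ingest([1.0, 2.0, 3.0])
inference_worker(lambda x: x * 10)  # stub model

results = []
while (msg := prediction_topic.get()) is not None:
    results.append(msg)
print(results)  # [10.0, 20.0, 30.0]
```

Decoupling the stages through topics is what makes the pipeline fault-tolerant: a crashed consumer can restart and pick up where the broker left off, and stages scale independently.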




Forward Deployment & Customer Integration




Deploy AI services into

customer on-prem environments

(industrial networks, restricted clouds). Work with customer IT/OT teams to integrate with historians, OPC UA servers, and real-time data feeds. Debug, monitor, and tune systems in the field -- ensuring AI services survive messy real-world data.
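"Surviving messy real-world data" usually means defensive validation before readings ever reach a model: industrial feeds routinely contain gaps, NaNs, stringly-typed values from historians, and out-of-range spikes. A minimal sketch of that kind of guard is below; the range thresholds are invented for illustration, not taken from any real asset.

```python
# Sketch of defensive handling for messy field data: sanitise each
# reading, dropping anything that cannot be trusted. Thresholds are
# illustrative only.

import math


def clean_reading(value, low=0.0, high=500.0):
    """Return a sanitised float reading, or None if it must be dropped."""
    if value is None:
        return None  # gap in the feed
    if isinstance(value, str):
        try:
            value = float(value)  # historians often return strings
        except ValueError:
            return None  # unparseable tag value
    if math.isnan(value) or not (low <= value <= high):
        return None  # NaN or physically implausible spike
    return value


raw = [210.5, "213.2", None, float("nan"), -40.0, 9999.0, "bad"]
cleaned = [v for v in (clean_reading(r) for r in raw) if v is not None]
print(cleaned)  # [210.5, 213.2]
```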




Software Engineering Best Practices




Maintain clean, testable, container-ready codebases. Implement CI/CD pipelines for model deployment and updates. Work closely with product and data engineering teams to align system interfaces.
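A "testable codebase" in this context means unit tests a CI pipeline can run on every commit, covering both the happy path and the failure mode. The sketch below shows that shape against a stub predict function; names and behaviour are illustrative.

```python
# Sketch of CI-style unit tests for a (stub) inference function: check
# the happy path and that bad input fails loudly.

def predict(features):
    if len(features) != 3:
        raise ValueError("expected 3 features")
    return sum(features) / 3  # stub: mean of the inputs


def test_predict_mean():
    assert predict([3.0, 6.0, 9.0]) == 6.0


def test_predict_rejects_bad_width():
    try:
        predict([1.0])
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("should have raised ValueError")


test_predict_mean()
test_predict_rejects_bad_width()
print("ok")
```

In a real pipeline a runner such as pytest would collect these automatically, and the same CI job would build and push the service's container image.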




Requirements

  • MSc in Computer Science, Machine Learning, Data Science, or a related field, or equivalent practical experience.
  • Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
  • Solid software engineering background: designing and debugging distributed systems.
  • Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
  • Familiarity with message brokers (Kafka, RabbitMQ, or similar).
  • Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
  • Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
  • Ability to work in forward-deployed settings, collaborating directly with customers.


What Success Looks Like

  • Research models are hardened into fast, reliable services that run in production.
  • Customers see Orbital AI running live in their environment without downtime.
  • Microservice-based ML pipelines scale cleanly, with message brokering between components.
  • You become the go-to engineer bridging AI research, product, and customer integration.



Job Detail
Job Detail

  • Job Id
    JD3699420
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type:
    Full Time
  • Salary:
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    London, ENG, GB, United Kingdom
  • Education
    Not mentioned