Data Engineering Manager

Watford, England, United Kingdom

Job Description

The Data Engineering Manager reports directly to the BI Delivery Manager and is responsible for leading a team of data engineers in designing, building, and maintaining highly scalable, robust, and reliable data pipelines within an AWS environment. This role drives the strategic direction for data warehousing, data modeling, data processing frameworks, and big data initiatives, ensuring data is captured, processed, and delivered seamlessly to support business intelligence, reporting, and advanced analytics.

A strong candidate will have deep hands-on experience with AWS services (particularly Redshift, Glue, and S3), be adept at managing inbound and outbound integrations with external systems, and excel at mentoring team members. The Data Engineering Manager will also collaborate extensively with cross-functional stakeholders, such as product, analytics, reporting, and business teams, to align engineering efforts with organizational data goals. Experience with Airflow (or a similar workflow orchestration tool) is critical for effective data job scheduling and performance optimization.


We are Allwyn UK, part of the Allwyn Entertainment Group - a multi-national lottery operator with a market-leading presence in Austria, the Czech Republic, Greece, Cyprus and Italy. We have been officially awarded the Fourth Licence (a 10-year licence) to operate the National Lottery starting February 2024.

We've developed ground-breaking technologies, built player protection frameworks, and have a proven track record of making lotteries better. Our aim is to create one of the UK's most inclusive organisations - where people can bring the best of themselves, to do their best work, every day, for the benefit of good causes.


Allwyn is an Equal Opportunity Employer which prides itself on being diverse and inclusive. We do not tolerate discrimination, harassment, or victimisation in the workplace. All employment decisions at Allwyn are based on business needs, job requirements, and individual qualifications. Allwyn encourages applications from individuals regardless of age, disability (visible or hidden), sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage and civil partnerships.


While the main contribution of the National Lottery to society is through the funds to good causes, at Allwyn we put our purpose and values at the heart of everything we do. Join us as we embark on a once-in-a-lifetime, large-scale transformation journey to build a bigger, better, and safer National Lottery that delivers more money to good causes.



Responsibilities

1. Team Leadership & Management

  • Recruit, mentor, and lead a team of data engineers, providing both technical and professional guidance.
  • Foster a culture of ownership, continuous learning, and collaboration within the team.

2. ETL/ELT Development & Architecture

  • Design, develop, and optimize ETL/ELT pipelines using AWS Glue, Redshift, and related AWS services (see the sketch below).
  • Establish best practices for data ingestion, transformations, orchestration, and scheduling.
  • Oversee the development of inbound (external-to-internal) and outbound (internal-to-external) data feeds to ensure seamless integration with third-party services and downstream applications.
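For illustration only, a minimal sketch of the kind of Glue pipeline this role oversees: it reads a catalogued source table, trims it to the fields downstream consumers need, and writes Parquet back to S3. The database, table, column, and bucket names are hypothetical placeholders, not actual Allwyn systems.

```python
# Hedged AWS Glue job sketch: catalog read -> column pruning -> Parquet write.
# All names (raw_db, player_events, example-curated-bucket) are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Inbound feed: read the raw table registered in the Glue Data Catalog.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",           # hypothetical catalog database
    table_name="player_events",  # hypothetical source table
)

# Transformation: keep only the columns downstream consumers need.
curated = raw.select_fields(["event_id", "event_ts", "event_type"])

# Outbound feed: write curated Parquet for Redshift COPY / Spectrum to pick up.
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/player_events/"},
    format="parquet",
)
job.commit()
```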

3. Data Warehouse & Data Modeling

  • Create and maintain scalable data models in Redshift to support business intelligence and reporting needs.
  • Ensure data architecture adheres to industry best practices, including normalization/denormalization strategies and performance optimization.
  • Develop and maintain big data solutions (e.g., Spark, EMR, Hadoop) for large-scale data processing where needed.

4. Infrastructure & Automation

  • Implement and manage data lake solutions with AWS S3 and associated ingestion strategies (see the sketch below).
  • Leverage Infrastructure-as-Code (e.g., CloudFormation, Terraform) to automate resource provisioning and configuration.
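A minimal sketch of the ingestion half of this item, assuming a Hive-style date-partitioned landing zone; the bucket, prefix, and dataset names are hypothetical.

```python
# Hedged sketch: land a file in an S3 data lake under date partitions.
# "example-datalake-bucket" and the key layout are illustrative assumptions.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def ingest_file(local_path: str, dataset: str) -> str:
    """Upload a file under a Hive-style partitioned key and return the key."""
    now = datetime.now(timezone.utc)
    key = (
        f"landing/{dataset}/"
        f"year={now:%Y}/month={now:%m}/day={now:%d}/"
        f"{now:%H%M%S}_{dataset}.csv"
    )
    s3.upload_file(local_path, "example-datalake-bucket", key)
    return key

# Example: ingest_file("/tmp/draw_results.csv", "draw_results")
```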

5. Effort Estimation & Resource Planning

  • Analyze project requirements to accurately estimate the level of effort and resources needed.
  • Develop realistic timelines and project plans, ensuring the team delivers solutions on schedule and within budget.
  • Monitor ongoing projects against estimates, adjusting plans and resources as needed.

6. Collaboration, Engagement & Design Alignment

  • Partner with analytics, product, business, and reporting teams to gather data requirements and translate them into technical solutions.
  • Conduct design reviews and regularly synchronize with the reporting team to ensure data pipelines and models align with reporting dashboards and analytics needs.
  • Engage with vendor solutions and external technical teams when necessary to enhance or troubleshoot data pipelines.

7. Governance & Compliance

  • Implement data governance best practices, ensuring data quality, integrity, and security across pipelines.
  • Collaborate with Security and Compliance teams to adhere to data privacy regulations and internal policies.

8. Performance Monitoring & Optimization

  • Establish and monitor metrics for ETL performance, proactively addressing bottlenecks and performance issues.
  • Continuously review and optimize Redshift cluster configurations, workload management (WLM), and query performance (see the sketch below).
  • Oversee and optimize big data workloads and streaming pipelines to manage large-scale, high-velocity data ingestion.
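As one illustration of query-performance monitoring, a hedged sketch that pulls the slowest recent queries from Redshift's stl_query system view via psycopg2; the cluster endpoint, database, and credentials are placeholders.

```python
# Hedged sketch: surface the slowest queries of the last hour from Redshift.
# Connection details below are hypothetical placeholders.
import psycopg2

SLOW_QUERY_SQL = """
    SELECT query,
           trim(querytxt) AS sql_text,
           datediff(seconds, starttime, endtime) AS duration_s
    FROM stl_query
    WHERE starttime > dateadd(hour, -1, getdate())
    ORDER BY duration_s DESC
    LIMIT 10;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.eu-west-2.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="monitor", password="example",
)
with conn, conn.cursor() as cur:
    cur.execute(SLOW_QUERY_SQL)
    for query_id, sql_text, duration_s in cur.fetchall():
        print(f"{query_id}: {duration_s}s  {sql_text[:80]}")
conn.close()
```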

9. Release Planning & Deployment of the Data Platform

  • Oversee the end-to-end release cycle for data platform enhancements, including scheduling, coordination, and communication with cross-functional teams.
  • Ensure thorough testing, documentation, and rollback strategies are in place prior to deployment.

10. ETL Job Orchestration & Performance Optimization

  • Manage data job orchestration using Airflow (or similar tools, e.g., AWS Step Functions) for efficient scheduling, coordination, and monitoring of data pipelines.
  • Continuously analyze and optimize ETL job performance, identifying opportunities for parallelization, resource tuning, and cost efficiency.
  • Ensure Airflow DAGs are designed using best practices for error handling, retry strategies, and alerting (see the sketch below).
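A minimal Airflow DAG sketch showing the retry and alerting practices this item calls for; the DAG id, task names, and callback wiring are hypothetical, and the callback's print is a stand-in for a real Slack/email/PagerDuty hook.

```python
# Hedged Airflow DAG sketch: retries, backoff, and a failure-alert callback.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Stand-in for a real alerting hook (Slack, email, PagerDuty, ...).
    print(f"Task failed: {context['task_instance'].task_id}")

default_args = {
    "owner": "data-engineering",
    "retries": 3,                               # retry transient failures
    "retry_delay": timedelta(minutes=5),        # back off between attempts
    "on_failure_callback": notify_on_failure,   # alert once retries exhaust
}

with DAG(
    dag_id="example_etl_pipeline",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=lambda: None)
    load = PythonOperator(task_id="load", python_callable=lambda: None)
    extract >> load   # load runs only after extract succeeds
```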

11. Support for Data Ops & Priority Issue Resolution

  • Collaborate closely with the Data Ops team to swiftly resolve high-priority issues, ensuring minimal impact on business operations.
  • Serve as the escalation point for complex data engineering challenges, providing expert guidance in troubleshooting pipeline failures.

12. Code Review & Efficient Unit Testing Practices

  • Implement and enforce best practices for code reviews, ensuring adherence to quality and security standards.
  • Promote efficient unit testing frameworks and methodologies to reduce defects, enhance reliability, and accelerate development cycles.

Requirements

1. Technical Expertise

  • AWS Services: Deep hands-on experience with Redshift, Glue, S3, IAM, Lambda, and related data pipeline services.
  • ETL/ELT Tools & Orchestration: Expertise in designing and managing pipelines; proficiency in Airflow (preferred) or similar workflow orchestration tools; familiarity with scheduling, logging, alerting, and error-handling best practices.
  • Data Modeling: Strong knowledge of schema design, dimensional modeling, and normalization best practices.
  • Big Data Technologies: Experience with Spark, EMR, Hadoop, or other large-scale data processing frameworks.
  • Programming Languages: Proficiency in Python and SQL; familiarity with Java/Scala is a plus.
  • Version Control & CI/CD: Familiarity with Git, automated testing, and continuous integration practices.

2. Inbound & Outbound Integrations

  • Demonstrated experience designing and implementing external data integrations (APIs, file-based, streaming) and delivering data feeds to downstream applications or third-party services (see the sketch below).
  • Knowledge of security and compliance best practices for external data exchange (e.g., authentication, encryption, data masking).
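A hedged sketch of one common inbound pattern: a paginated HTTPS API pull with bearer-token authentication. The endpoint, query parameters, and token handling are illustrative assumptions, not a specific partner's API.

```python
# Hedged sketch: inbound API integration over TLS with bearer-token auth.
# BASE_URL and the pagination scheme are hypothetical.
import requests

BASE_URL = "https://partner.example.com/api/v1/draws"

def fetch_all(token: str) -> list[dict]:
    """Page through the partner API, authenticating every request."""
    headers = {"Authorization": f"Bearer {token}"}
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, headers=headers,
                            params={"page": page}, timeout=30)
        resp.raise_for_status()   # fail loudly on auth or server errors
        batch = resp.json()
        if not batch:             # empty page signals the end
            break
        records.extend(batch)
        page += 1
    return records

# Example usage, with the token supplied via an environment variable:
#   import os
#   fetch_all(os.environ["PARTNER_API_TOKEN"])
```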

3. Leadership & Communication

  • Demonstrated experience leading high-performing technical teams, including mentorship, performance management, and capacity planning.
  • Excellent communication skills to translate complex data requirements into actionable engineering plans and vice versa.
  • Proven ability to collaborate effectively with cross-functional stakeholders (Product, Analytics, Reporting, BI, etc.).

4. Project & Effort Management

  • Experience managing multiple data initiatives simultaneously, prioritizing tasks, and meeting deadlines.
  • Proven track record in accurately estimating project effort, mitigating risks, and staying within budget.

Benefits

  • 26 days paid leave (plus bank holidays)
  • Annual bonus scheme
  • 2 x Life Days
  • 4 x Salary Life Insurance
  • Pension: we'll match your contribution up to 8.5%
  • Single Private Health Cover
  • £500 Wellness Allowance
  • Income Protection
  • Enhanced parental leave (maternity and paternity)
  • Eye Care, Dental and Cycle To Work schemes




Job Detail

  • Job Id: JD3727305
  • Total Positions: 1
  • Job Type: Full Time
  • Employment Status: Permanent
  • Job Location: Watford, England, United Kingdom