Join Barclays as a Senior Data Engineer, where you'll be responsible for designing, building, and optimizing data pipelines and frameworks that power the Enterprise Data Platform across AWS, Azure, and on-premises environments. This role requires strong hands-on engineering skills in data ingestion, transformation, orchestration, and governance, ensuring high-quality, secure, and scalable data solutions.
To be successful, you should have:
Data Pipeline Development & Orchestration - Expertise in building robust ETL/ELT pipelines using tools such as Apache Airflow (Astronomer), dbt/PySpark, Python, AWS Glue, Lambda, Athena, Snowflake, and Databricks (a minimal pipeline sketch follows this list).
Data Transformation & Quality - Strong experience with dbt Core for transformations and testing, and in implementing data quality frameworks (e.g., dbt-expectations).
Cloud & Hybrid Data Engineering - Hands-on experience with cloud-native services (AWS, Azure) and on-premises systems, including storage, compute, and data warehousing (e.g., Snowflake, Redshift).
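For illustration only, below is a minimal sketch of the kind of orchestrated ELT workflow these skills describe: an Apache Airflow DAG (assuming Airflow 2.4+) that runs dbt Core transformations followed by dbt tests. The DAG id, schedule, and project path are hypothetical examples, not details taken from this posting.

# Illustrative sketch only: Airflow DAG orchestrating a dbt Core run and test.
# DAG id, schedule, and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_elt_pipeline",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # assumes Airflow 2.4+ 'schedule' parameter
    catchup=False,
) as dag:
    # Run dbt transformations against the warehouse (e.g. Snowflake).
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",   # hypothetical path
    )

    # Run dbt tests (including any dbt-expectations checks) after the build.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )

    dbt_run >> dbt_test

In practice the same pattern extends to AWS Glue or Databricks tasks; the point is that transformation and quality checks are sequenced and observable from a single orchestrator.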
Other highly valued skills include:
Metadata & Governance Tools - Familiarity with OpenMetadata, Alation, or similar tools for cataloging, lineage, and governance.
DevOps & CI/CD for Data - Experience using GitHub Actions or similar tools for version control, CI/CD, and infrastructure-as-code for data pipelines.
Observability & Cost Optimization - Knowledge of monitoring frameworks and FinOps practices for efficient resource utilization (a simple data-quality check of this kind is sketched after this list).
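As a small, hedged example of the quality and observability checks referred to above, the PySpark sketch below validates a curated dataset and fails loudly so the issue surfaces in pipeline monitoring. The bucket, table, and column names are hypothetical.

# Illustrative sketch only: basic data-quality/observability check in PySpark.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_check_example").getOrCreate()

# Hypothetical curated output written by an upstream ingestion job.
df = spark.read.parquet("s3://example-bucket/curated/transactions/")

total_rows = df.count()
null_ids = df.filter(F.col("transaction_id").isNull()).count()

# Fail the run if basic thresholds are breached, so the failure is visible
# to the orchestrator and any monitoring/alerting framework around it.
assert total_rows > 0, "No rows found in curated dataset"
assert null_ids == 0, f"{null_ids} rows are missing transaction_id"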
You may be assessed on the key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills.
This role is based in Glasgow.