We are hiring for one of our renowned IT clients based in Manchester, UK. If interested, please share your updated CV to discuss the role further. The role is hybrid.
Email: meenu.k@technopride.co.uk
Regards,
Meenu
We are seeking an experienced Senior Data Engineer with strong expertise in Azure-based data engineering and metadata-driven architectures. The ideal candidate will design, build, and own scalable data pipelines and frameworks using Azure Data Factory and Databricks, while ensuring high standards of data quality, automation, and operational excellence. This role requires deep hands-on technical skills, a strong DevOps understanding, and the ability to lead complex data engineering initiatives end to end.
Key Responsibilities
* Design, develop, and maintain metadata-driven data pipelines using Azure Data Factory (ADF) and Databricks.
* Build and implement end-to-end metadata frameworks that promote scalability, reusability, and standardization.
* Develop and optimize large-scale data processing workflows using PySpark, SparkSQL, and Pandas.
* Collaborate with architecture, analytics, and platform teams to integrate data solutions into enterprise data platforms.
* Implement and manage CI/CD pipelines for automated build, testing, and deployment of data engineering solutions.
* Ensure data quality, governance, security, and compliance with defined organizational standards.
* Apply best practices for observability, monitoring, logging, and alerting across data pipelines.
* Provide technical leadership and take full ownership of assigned data engineering initiatives, from design through production support.
* Troubleshoot and optimize pipeline performance, reliability, and cost efficiency.
Required Technical Skills
* Azure Data Factory (ADF): Strong expertise in designing, building, and orchestrating complex data pipelines.
* Azure Databricks: Hands-on experience with notebooks, clusters (including job and serverless clusters), job scheduling, and Databricks Asset Bundles.
* PySpark / SparkSQL: Strong knowledge of distributed data processing, performance tuning, watermarking, and incremental data processing patterns.
* Pandas: Advanced data manipulation and transformation capabilities.
* Metadata-driven architecture: Proven experience designing and implementing metadata frameworks for data ingestion and processing.
* CI/CD & DevOps: Experience using tools such as Azure DevOps, Git, and automated deployment pipelines.
* Programming: Proficiency in Python (including package management and build artifacts such as wheels) and/or Scala.
* Observability: Experience implementing monitoring, logging, and alerting for data pipelines and distributed systems.
Additional Qualifications
* Strong understanding of data protection, security, and compliance considerations in cloud-based data platforms.
* Solid grasp of DevOps practices, automation, and release management for data engineering workloads.
* Excellent analytical, problem-solving, and troubleshooting skills.
* Ability to work independently while collaborating effectively with cross-functional teams.
* Strong communication skills with the ability to explain complex technical concepts clearly.
Job Type: Temporary
Contract length: 6 months
Pay: £350.00-£400.00 per day
Benefits:
* Sabbatical
* Sick pay