This is a heavily hands-on data engineering role. You will integrate multiple data sources and build large-scale data pipelines using Hive, Hadoop, Python, SQL, Apache Spark, Kafka and Airflow. Strong skills in distributed systems, cloud and big data tools are essential.
What You Will Do
Build and maintain data pipelines and ETL workflows
Work with big data technologies like Hive, Hadoop, Spark and Kafka
Develop Python and PySpark based data solutions
Write clean and efficient SQL for data processing
Design, test and deploy scalable systems
Work with cross-functional teams on data integration
Support and improve existing data platforms
Follow Agile processes, participate in code reviews and apply best practices
Must Have Skills
Strong hands-on experience with Hive, Hadoop, Spark, Kafka and Airflow
Strong Python and SQL skills
Experience with PySpark ETL pipelines
Experience with distributed systems
Experience in data warehouse and data platform engineering
Experience building microservices in Python or Java
Strong understanding of software architecture and data structures
Good communication skills
6 to 10 years of total experience
Nice to Have
AWS or GCP cloud experience
Experience with Terraform
Experience with Tableau, Looker or QuickSight
Experience working on Finance or HR systems
Why This Role
You will work directly with the Enterprise Engineering team at Roku, a global leader in TV streaming. The role starts as a contract but is designed to convert into a full-time position for the right candidate.
How to Apply
Please submit your updated resume. Shortlisted candidates will be contacted for the first round.
Job Type: Full-time
Application question(s):
What is your current annual salary (in pounds)?
What is your expected annual salary (in pounds)?
What is your current notice period (in weeks)?
Work authorisation:
* United Kingdom (required)