We are looking for a seasoned Data Engineer to join our growing team.
Requirements
4+ years of experience in Data Engineering
Proven experience building and optimizing scalable ETL/data pipelines, preferably on Azure Databricks
Experience with Azure Data Factory, Databricks, and PySpark
Solid experience integrating multiple data sources into data lakehouse / data warehouse architectures
Strong SQL skills, including designing data models and SQL Warehouse views
Experience translating business requirements into scalable and reliable data solutions/products
Experience working with cloud-based data platforms (preferably Azure)
Strong problem-solving skills with the ability to troubleshoot and optimize data pipelines
Understanding of data quality, performance optimization, and reliability best practices
Experience working in cross-functional teams and delivering data solutions on time
Responsibilities
Design, build, and optimize scalable ETL/data pipelines on Azure Databricks
Automate data workflows using Azure Data Factory, Databricks, PySpark, and related Azure services
Integrate diverse data sources into data lakehouse and data warehouse solutions to enable advanced analytics
Design SQL Warehouse views and data models for downstream consumption and analytics use cases
Translate complex business requirements into scalable, reliable data products
Enhance, troubleshoot, and maintain cloud‑based data pipelines to ensure performance, reliability, and data quality
Collaborate with cross‑functional teams and contribute to timely delivery of data solutions
What we offer
Flexible work arrangements
20 fully paid working days per year (Non-Operational Allowance) for personal recreation
Collaborative and supportive team culture
Competitive salary
Supportive HR and management team