Design, build, and deploy advanced data engineering pipelines to address diverse business needs.
Develop and maintain comprehensive data models to enhance data governance and efficiency.
Lead data migration projects on cloud platforms, collaborating closely with clients to analyze and transition their on-premise ETL designs and Spark pipelines.
Establish data cataloging strategies and enforce rigorous data quality standards using effective frameworks such as Unity Catalog.
Implement medallion architecture principles to design scalable, high-performance data systems.
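For candidates unfamiliar with the term, a minimal conceptual sketch of the medallion (bronze/silver/gold) layering pattern follows. This illustration uses plain Python dicts rather than Spark DataFrames or Delta tables, and all names (`raw_events`, `events_feed`, the layer functions) are hypothetical, chosen only to show the idea of progressively refining data across layers.

```python
# Illustrative medallion-architecture sketch (not production code):
# bronze = raw landing, silver = cleansed/conformed, gold = business aggregates.

raw_events = [
    {"id": "1", "amount": "10.5", "region": "us"},
    {"id": "2", "amount": "bad", "region": "eu"},   # malformed record
    {"id": "3", "amount": "4.0", "region": "us"},
]

def bronze(records):
    """Bronze layer: land raw data as-is, tagging each record with its source."""
    return [dict(r, _source="events_feed") for r in records]

def silver(records):
    """Silver layer: cleanse and conform -- cast types, drop rows that fail validation."""
    out = []
    for r in records:
        try:
            out.append({"id": r["id"], "amount": float(r["amount"]), "region": r["region"]})
        except ValueError:
            continue  # skip/quarantine malformed rows
    return out

def gold(records):
    """Gold layer: business-level aggregate -- total amount per region."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

print(gold(silver(bronze(raw_events))))  # {'us': 14.5}
```

In a real Databricks deployment each layer would typically be a Delta table (often managed via Delta Live Tables), with the same progressive-refinement contract between layers.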
Qualifications
Bachelor’s degree or higher in Computer Science, Data Engineering, or a related field.
Minimum of 7 years of hands-on experience in data engineering, with a proven track record of developing complex pipelines and leading cloud-based data migration projects.
Expertise in Spark, SQL, Python, Databricks, and Delta Live Tables, along with deep knowledge of the AWS cloud platform.