Support customers in leveraging AWS services such as Athena, Glue, Lambda, S3, DynamoDB and other NoSQL databases, RDS, Amazon EMR, and Redshift.
Develop and implement AWS solutions for distributed computing across private and public clouds, including application migration and new cloud-native development.
Deliver on-site technical support, assessing customer needs and designing tailored Data & Analytics service packages.
Qualifications
Master’s or PhD in Computer Science, Physics, Engineering, or Mathematics.
Hands-on experience leading large-scale global data warehousing and analytics projects.
Expertise in Apache Hadoop and its ecosystem, including tools like Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, and Avro.
At least 8 years of IT platform implementation experience in technical and analytical roles.
At least 5 years of experience implementing Data Lake/Hadoop platforms.
At least 3 years of hands-on experience in Hadoop/Spark implementation and performance tuning.