Full-time
Hyderabad
Posted 2 weeks ago
Job description
Interview Process:
- Online test
- Technical Interview
- Client Interview
- HR Interview
Requirements:
- 5+ years of experience in developing scalable Big Data applications or solutions on distributed platforms.
- 4+ years of experience working with distributed technology tools, including Spark, Python, and Scala (a minimal PySpark sketch follows this list).
- Working knowledge of data warehousing, data modelling, governance, and data architecture.
- 3+ years of experience and proficiency working on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
- Experience working in Agile and Scrum development processes.
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms.
- 3+ years of experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components, and Lakehouse Medallion architecture), etc.
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of reporting and analytics tools such as Tableau, QuickSight, etc.
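
For illustration, a minimal PySpark sketch of the kind of work described above: reading raw data from S3, aggregating it, and writing partitioned Parquet back for downstream query engines. All bucket names, paths, and column names here are hypothetical; on EMR, S3 access would typically come from the cluster's IAM role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-event-rollup")  # hypothetical job name
    .getOrCreate()
)

# Hypothetical input path; on EMR the cluster's IAM role grants S3 access.
events = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

# Aggregate raw events into daily counts per event type
# (assumes hypothetical event_ts and event_type columns).
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partitioned columnar output, a common layout for downstream
# engines such as Presto or Snowflake external tables.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_event_counts/")
)

spark.stop()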
Responsibilities:
- Developing scalable Big Data applications or solutions on distributed platforms.
- Working proficiently on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
- Able to partner with others in solving complex problems by taking a broad perspective to identify
innovative solutions.
- Strong skills in building positive relationships across Product and Engineering.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Experience working in Agile and Scrum development processes.
- Experience working in a fast-paced, results-oriented environment.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake.
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms.
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components, and Lakehouse Medallion architecture), etc. (see the Airflow DAG sketch after this list).
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Experience working with distributed technology tools, including Spark, Python, and Scala.
- Working knowledge of data warehousing, data modelling, governance, and data architecture.
- Working knowledge of reporting and analytics tools such as Tableau, QuickSight, etc.
- Demonstrated experience in learning new technologies and skills.
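
To make the scheduling side concrete, here is a minimal Airflow DAG sketch (Airflow 2.x; all DAG ids, paths, and bucket names are hypothetical) that submits a Spark job daily. On AWS Managed Airflow this would more typically use the Amazon provider's EMR operators; a BashOperator keeps the sketch self-contained.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_event_rollup",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+ kwarg; older versions use schedule_interval
    catchup=False,
) as dag:
    # Submit the PySpark job sketched under Requirements; the script path is hypothetical.
    run_rollup = BashOperator(
        task_id="spark_submit_rollup",
        bash_command=(
            "spark-submit --deploy-mode cluster "
            "s3://example-code-bucket/jobs/daily_rollup.py"
        ),
    )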