Job Location : Chennai, Hyderabad, Pune, Noida, Kochi, Bangalore, Trivandrum
Experience : 7 Years
CTC Budget : 2800000 (INR)
Posted At : 17-Dec-2025
We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team, with a start date on or before the first week of December 2025.
Key Responsibilities:
• Design, build, and maintain scalable data pipelines using Databricks and PySpark.
• Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
• Implement data integration solutions across AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
• Collaborate with analytics, data science, and business teams to deliver clean, reliable datasets.
• Ensure data quality, performance, and reliability across workflows.
• Participate in code reviews, architecture discussions, and performance optimization.
• Support migration and modernization of legacy systems to cloud-based solutions.
Key Skills:
• Hands-on experience with Databricks, PySpark, and Python for ETL/ELT pipelines.
• Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
• Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
• Experience with data modeling, schema design, and performance optimization.
• Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
• Excellent problem-solving and communication skills.
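The SQL proficiency bullet above (CTEs and window functions) can be illustrated with a minimal sketch. The `sales` table, its values, and the region names are invented for demonstration only, and SQLite stands in for a warehouse engine such as Redshift:

```python
import sqlite3

# In-memory database with a small, hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100), ("north", 300), ("south", 200), ("south", 50)],
)

# A CTE aggregates totals per region; a window function then
# ranks regions by that total, highest first.
query = """
WITH region_totals AS (
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT region,
       total,
       RANK() OVER (ORDER BY total DESC) AS rnk
FROM region_totals
ORDER BY rnk
"""
rows = conn.execute(query).fetchall()
# rows → [("north", 400, 1), ("south", 250, 2)]
```

The same CTE-plus-window pattern carries over directly to Spark SQL on Databricks, where the query could run unchanged via `spark.sql(query)`.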