Job Location : Chennai, Pune, Noida, Kochi, Bangalore, Trivandrum
Experience : 6 Yr
CTC Budget : 2500000
Posted At : 17-Dec-2025
We are looking for an experienced Tech Lead – Data Engineering to design, lead, and deliver scalable data solutions in a modern cloud environment. The ideal candidate will have deep hands-on expertise in ETL/ELT development, data lake architecture, and data warehousing, along with strong command of AWS data services, Python, and Spark/Databricks.
The candidate will act as a technical lead and mentor, guiding a team of 3–7 engineers, ensuring delivery excellence, and aligning technical execution with architectural best practices and organizational data strategy.
________________________________________
Key Responsibilities
• Lead the end-to-end design and delivery of modern data engineering solutions, ensuring performance, scalability, and reliability.
• Architect and develop ETL/ELT pipelines using tools such as AWS Glue, DBT, and Airflow, integrating multiple structured and semi-structured data sources.
• Design and maintain data lakes and data warehouse environments on AWS (S3, Redshift, Athena, Glue).
• Build and optimize Spark/Databricks jobs for large-scale data transformation and processing.
• Define and enforce best practices in coding, version control, testing, CI/CD, and data quality management.
• Oversee infrastructure setup and automation using Terraform, Kubernetes, and Docker for data environments.
• Manage and mentor a team of 3–7 engineers, conducting technical reviews, workload planning, and skill development.
• Monitor, troubleshoot, and optimize data pipelines in production to ensure reliability and meet SLAs.
• Drive continuous improvement initiatives for pipeline automation, observability, and cost optimization.
________________________________________
Technical Skills and Tools
Core Technical Expertise:
• Programming: Python (preferred), SQL, and scripting for data transformation and automation.
• ETL/ELT & Orchestration: AWS Glue, DBT, Airflow, Step Functions.
• Cloud Platforms: AWS (S3, Glue, Lambda, Redshift, Athena, EMR), exposure to Azure Data Services a plus.
• Data Processing: Apache Spark, Databricks.
• Databases: PostgreSQL, Snowflake, MongoDB.
• CI/CD & DevOps: GitHub Actions, CircleCI, Jenkins, with automation via Terraform and Docker.
• Infrastructure Management: Kubernetes, Terraform, CloudFormation.
• Data Modeling & Warehousing: Dimensional modeling, partitioning, and schema design.
Experience Requirements:
• 7+ years in Data Engineering
• 5+ years in AWS Data Services
• 5+ years in Spark
• 3+ years in team leadership