Job Description
Zinnia is seeking a Staff Data Engineer to join their team in Pune, India. The Staff Data Engineer will be responsible for designing and maintaining scalable ETL pipelines, optimizing complex data systems, and ensuring smooth data flow across platforms. The ideal candidate will possess advanced expertise in tools and platforms such as Google BigQuery, DBT, Python, and Airflow.
The Staff Data Engineer will work collaboratively within the team to build data infrastructure that drives business insights, while mentoring junior team members and continuously improving the processes and technologies behind data processing and delivery.
What this role involves:
- Designing, developing, and optimizing complex ETL pipelines that integrate large data sets from various sources.
- Building and maintaining high-performance data models using Google BigQuery and DBT for data transformation.
- Developing Python scripts for data ingestion, transformation, and automation.
- Implementing and managing data workflows using Apache Airflow for scheduling and orchestration.
- Collaborating with data scientists, analysts, and other stakeholders to ensure data availability, reliability, and performance.
- Troubleshooting and optimizing data systems, proactively identifying and resolving issues.
- Working on cloud-based platforms, particularly AWS, to leverage scalability and storage options for data pipelines.
- Ensuring data integrity, consistency, and security across systems.
- Taking ownership of end-to-end data engineering tasks while mentoring junior team members.
- Continuously improving processes and technologies for more efficient data processing and delivery.
- Acting as a key contributor to developing and supporting complex data architectures.
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 6+ years of hands-on experience in Data Engineering or a related field, with a strong background in building and optimizing data pipelines.
- Strong proficiency in Google BigQuery, including designing and optimizing queries.
- Advanced knowledge of DBT for data transformation and model management.
- Proficiency in Python for data engineering tasks, including scripting, data manipulation, and automation.
- Solid experience with Apache Airflow for workflow orchestration and task automation.
- Extensive experience in building and maintaining ETL pipelines.
- Familiarity with cloud platforms, particularly AWS (Amazon Web Services), including services such as S3, Lambda, Redshift, and Glue.
- Java knowledge is a plus.
- Excellent problem-solving and troubleshooting abilities.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- Self-motivated, detail-oriented, and able to work with minimal supervision.
- Ability to manage multiple priorities and deadlines in a fast-paced environment.
- Experience with other cloud platforms (e.g., GCP, Azure) is a plus.
- Knowledge of data warehousing best practices and architecture.
What Zinnia offers:
- The opportunity to collaborate with smart, creative professionals.
- The chance to deliver cutting-edge technologies.
- Work that yields deeper data insights.
- A role in enhancing services that transform how insurance is done.