Data Engineer (AWS/Databricks/Pyspark)
Meyrin
Key information
- Publication date: 26 July 2025
- Workload: 100%
- Contract type: Permanent
- Place of work: Meyrin
Job summary
Talan is an international consulting group focused on tech innovation. Join us for a dynamic work environment and great benefits!
Tasks
- Develop complex data pipelines using Python and PySpark.
- Create robust solutions on Databricks for data processing.
- Collaborate in an international, data-driven team atmosphere.
Skills
- 5+ years of experience as a Data Engineer required.
- Proficient in Python and PySpark for data engineering tasks.
- Strong knowledge of AWS services and hands-on experience with Databricks.
Company Description
Talan is an international consulting and technology expertise group that accelerates the transformation of its clients through the levers of innovation, technology, and data. For over 20 years, Talan has been advising and supporting companies and public institutions in implementing their transformation and innovation projects in France and internationally.
Present on five continents and in 18 countries, the Group, certified Great Place To Work, expects to count more than 7,200 employees by the end of 2024, aims for a turnover of 850 million euros that same year, and intends to exceed the one-billion-euro mark by 2025.
Equipped with a Research and Innovation Center, Talan places innovation at the heart of its development and operates in areas of technological change such as Artificial Intelligence, Data Intelligence, and Blockchain, serving the growth of large groups and mid-sized companies through a committed and responsible approach. www.talan.com
By placing "Positive Innovation" at the heart of its strategy, the Talan Group is convinced that technology multiplies its potential for society when it is put at the service of people.
Job Description
We are looking for a senior Data Engineer to join an international company based in Geneva as part of a strategic project to modernize its data platform.
This role is set in an advanced cloud environment, with a strong culture of performance, real-time data, and automation.
🔍 Your responsibilities
You will join a multidisciplinary data team and contribute to the design, industrialization, and optimization of complex data pipelines on AWS and Databricks.
In this capacity, you will work on:
- Developing Python / PySpark processes for data ingestion, transformation, and serving (a minimal sketch follows this list)
- Implementing robust pipelines on Databricks (Delta Lake, notebooks, orchestration)
- Optimizing the performance and scalability of data flows
- Interacting with technical and business teams in an international, demanding, and data-oriented environment
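By way of illustration, the sketch below shows what a minimal ingestion-to-Delta pipeline of this kind might look like. It assumes a Databricks workspace with Delta Lake; the bucket path, column names, and target table are hypothetical placeholders, not details of the actual project.

```python
# Minimal illustrative sketch of a PySpark ingestion/transformation job.
# All paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-ingestion").getOrCreate()

# Ingest: read raw JSON events from S3 (hypothetical bucket and prefix)
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: parse the event timestamp and deduplicate on the event id
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Expose: append to a Delta table consumed by downstream teams
cleaned.write.format("delta").mode("append").saveAsTable("analytics.events")
```

In a production setting, a job like this would typically be scheduled through Databricks Workflows and would read its input incrementally rather than as a full scan.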
Qualifications
- Minimum 5 years of experience as a Data Engineer
- Expert command of Python and PySpark
- Significant experience with Databricks
- Good knowledge of the AWS environment (Glue, S3, IAM, etc.)
- Analytical mindset, rigor, autonomy
- Languages: Fluent French and Professional English