Description
We are looking for a Data Lake Cloud DevOps Engineer (f/m/x) for one of our clients in the financial industry in Vienna.
Tasks:
• Drive innovation and implementation of the cloud Data Lake platform infrastructure as the basis for data engineers to work with their data
• Optimize existing infrastructure and data transformation pipelines to improve scalability and enable self-service
• Optimize the data provisioning workflow to allow real-time replication of data from on-premises systems into the Data Lake (AWS)
• Implement new features such as provider or consumer connections as IaC (e.g. a PowerBI connection towards AWS)
• Implement cost optimizations as part of the platform
• Implement test cases as part of the development process
• Support solution architects in defining the software architecture
• Take end-to-end responsibility for changes throughout the delivery pipeline
Requirements:
• 5+ years of professional experience in developing and operating software solutions
• 3+ years of professional experience in handling large sets of data in the area of big data or data warehousing
• 2+ years of professional experience in building scalable big data solutions in the area of cloud or Hadoop
• 2+ years of experience with AWS and hands-on experience building Infrastructure as Code and services on top of it
• Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency management, and workload management
• Professional experience in designing and developing data pipelines in Python/Spark/Scala
• Practical experience with Terraform and Airflow is preferred
Place of Work: Vienna (remote)
Start: May 2021
Occupancy: 100% (long-term)
Are you interested, and do you have free capacity for a new project?
Then I look forward to receiving your CV and project list to:
Best regards,
Natalie Capek
apsa personnel concepts