Data Engineer

  • Remote, Argentina
  • Remote, Colombia
  • Rosario
  • Montevideo
  • Monterrey
Small team (1-10 people)

If you learned about this vacancy from our recruiters, please read our Personal Data Privacy Policy.

Project overview

The program is a multi-phase data migration initiative aimed at replacing legacy capital markets systems with a modern platform ecosystem. It covers the migration of critical datasets across custody, clearing and settlement, derivatives processing, and CCP operations. The project spans 18 months and follows an incremental approach with a strong emphasis on data integrity, reconciliation, and traceability.
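
To make the reconciliation emphasis concrete: a typical control-total check compares aggregates between a legacy extract and its migrated counterpart. The sketch below is purely illustrative; the legacy_trades and migrated_trades tables and their columns are hypothetical placeholders, not the program's actual schema.

```sql
-- Hypothetical reconciliation query; legacy_trades, migrated_trades,
-- and their columns are illustrative names, not the program's schema.
-- Flags any business date where row counts or notional totals diverge
-- between the legacy extract and the migrated table.
SELECT
    COALESCE(l.trade_date, m.trade_date) AS trade_date,
    l.row_count      AS legacy_rows,
    m.row_count      AS migrated_rows,
    l.total_notional AS legacy_notional,
    m.total_notional AS migrated_notional
FROM (
    SELECT trade_date,
           COUNT(*)      AS row_count,
           SUM(notional) AS total_notional
    FROM legacy_trades
    GROUP BY trade_date
) l
FULL OUTER JOIN (
    SELECT trade_date,
           COUNT(*)      AS row_count,
           SUM(notional) AS total_notional
    FROM migrated_trades
    GROUP BY trade_date
) m
  ON l.trade_date = m.trade_date
WHERE COALESCE(l.row_count, -1) <> COALESCE(m.row_count, -1)
   OR COALESCE(l.total_notional, -1) <> COALESCE(m.total_notional, -1);
```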

Team

You will work within a cross-functional team that includes data engineers, data analysts, and capital markets domain experts. The team operates under an iterative delivery model, with a strong emphasis on data validation cycles, reconciliation processes, and collaboration with business stakeholders to ensure the correctness of migrated data.

Position overview

We are looking for a Middle Data Engineer to support a complex data migration program by building, optimizing, and maintaining scalable data pipelines that extract, transform, and load large volumes of data from legacy systems into a modern data platform. You will work closely with data architects and analysts to ensure data is processed efficiently, reliably, and in line with business and regulatory requirements.

The position requires strong hands-on experience with data processing tools, attention to data quality, and the ability to work with complex datasets in a cloud environment.

Technology stack

SQL, AWS Glue, dbt, Oracle, AWS, Git

Responsibilities

  • Design, build, and maintain ETL/ELT pipelines using AWS Glue and dbt (see the sketch after this list)
  • Extract and process data from legacy systems, ensuring efficient and scalable transformations
  • Collaborate with data architects to implement target data models and transformation logic
  • Work with data analysts to ensure data availability and correctness for validation and reporting
  • Optimize data pipelines for performance, scalability, and cost efficiency
  • Ensure data quality through validation checks, logging, and monitoring mechanisms
  • Handle large volumes of structured data across multiple sources and systems
  • Implement and maintain data workflows following best practices in version control and CI/CD
  • Troubleshoot data issues and support root cause analysis across the data pipeline
  • Document data pipelines, transformations, and technical processes
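
As a rough illustration of the first bullet above, a dbt staging model in this kind of migration often just normalizes a raw legacy extract before heavier transformation. Everything here is a hypothetical sketch; the model, source, and column names are placeholders, not this project's schema.

```sql
-- models/staging/stg_trades.sql -- hypothetical dbt model; the source,
-- table, and column names are placeholders, not the project's schema.

with source as (

    -- {{ source(...) }} resolves to the raw table declared in the
    -- project's sources configuration
    select * from {{ source('legacy_oracle', 'trades') }}

),

renamed as (

    select
        trade_id,
        cast(trade_date as date)         as trade_date,
        upper(trim(counterparty_code))   as counterparty_code,
        cast(notional as numeric(18, 2)) as notional
    from source

)

select * from renamed
```

Paired with dbt's built-in unique and not_null tests on trade_id, a model like this is the kind of building block the validation and documentation bullets point at.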

Requirements

  • Strong experience with SQL for data transformation and querying
  • Hands-on experience with AWS Glue and cloud-based data processing
  • Experience building and maintaining ETL/ELT pipelines
  • Experience with dbt or similar transformation frameworks
  • Experience working with large-scale data processing and data pipelines
  • Familiarity with Git and CI/CD practices
  • Understanding of data modeling concepts and data warehouse architectures
  • Strong problem-solving skills and attention to detail

Nice to have

  • Experience working in capital markets environments
  • Experience with Oracle databases and legacy system integrations
  • Familiarity with performance optimization in data pipelines
  • Experience with monitoring, logging, and observability in data systems
  • Exposure to data migration or large-scale transformation programs
