
Senior Data Engineer

  • Belgrade
  • Kharkiv
  • Dnipro
  • Kyiv
  • Cluj-Napoca
  • Kraków
  • Larnaca
  • Łódź
  • Lublin
  • Lviv
  • Monterrey
  • Montevideo
  • Odesa
  • Remote.AR
  • Remote.Brazil
  • Remote.Bulgaria
  • Remote.Colombia
  • Remote.Poland
  • Rosario
  • Riga
  • Sofia
  • Varna
  • Warsaw
  • Wrocław
Small team (1-10 people)

If you received this job offer from our recruiters, please review our Privacy Policy.

Project overview

We are looking for a Senior Data Engineer to join a project focused on building a scalable and governed data platform in Databricks.

The role involves designing and implementing robust data pipelines, ensuring high data quality standards, and contributing to a well-structured medallion architecture (Bronze/Silver/Gold layers).

You will play a key role in building reliable, production-grade pipelines and shaping data architecture, governance, and best practices across the project.
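The medallion layering mentioned above can be pictured as a chain of progressively refined tables. The sketch below is a toy in plain Python, assuming hypothetical record fields (`id`, `amount`); in Databricks these steps would be Delta tables fed by pipelines.

```python
# Illustrative medallion flow (all names hypothetical):
# Bronze = raw as ingested, Silver = cleaned/conformed, Gold = aggregated.

def to_silver(bronze_rows):
    # Drop malformed records and normalize types.
    out = []
    for r in bronze_rows:
        if r.get("id") is None:
            continue
        out.append({"id": int(r["id"]), "amount": float(r.get("amount", 0))})
    return out

def to_gold(silver_rows):
    # Aggregate to a reporting-ready total per key.
    totals = {}
    for r in silver_rows:
        totals[r["id"]] = totals.get(r["id"], 0.0) + r["amount"]
    return totals
```

Each layer only reads from the one below it, which is what keeps lineage and data-quality enforcement tractable.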

Responsibilities

  • Design, build, and optimize data ingestion pipelines using Databricks
  • Implement and maintain Bronze, Silver, and Gold layers in a medallion architecture
  • Develop and enforce Data Quality (DQ) checks at each stage of data processing
  • Set up automated scheduling and incremental data loads (CDC) using Lakeflow
  • Configure and manage Unity Catalog, including governance policies, RBAC, and row-level security (RLS)
  • Ensure high data quality standards: ≥95% DQ pass rate, 100% schema conformity, and ≥99.99% primary key uniqueness during validation
  • Collaborate with SMEs and QA teams to align semantic data models
  • Implement monitoring and auditing pipelines for ingestion and transformation processes
  • Contribute to CI/CD processes for data pipelines and artifacts
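The DQ thresholds listed above (≥95% pass rate, 100% schema conformity, ≥99.99% primary-key uniqueness) can be sketched as a simple validation gate. This is a minimal plain-Python illustration, not a Databricks or Lakeflow API; all function and field names here are hypothetical.

```python
# Illustrative data-quality gate for a batch of records.
# Thresholds mirror the targets above; names are hypothetical.

EXPECTED_SCHEMA = {"id", "event_ts", "amount"}

def validate_batch(rows, pk="id"):
    total = len(rows)
    if total == 0:
        return {"dq_pass_rate": 1.0, "schema_conformity": 1.0, "pk_uniqueness": 1.0}
    # Schema conformity: every row has exactly the expected columns.
    conforming = sum(1 for r in rows if set(r) == EXPECTED_SCHEMA)
    # A row "passes" DQ here if its key and amount are present and non-null.
    passing = sum(1 for r in rows if r.get(pk) is not None and r.get("amount") is not None)
    keyed = sum(1 for r in rows if r.get(pk) is not None)
    distinct_keys = len({r[pk] for r in rows if r.get(pk) is not None})
    return {
        "dq_pass_rate": passing / total,
        "schema_conformity": conforming / total,
        "pk_uniqueness": distinct_keys / keyed if keyed else 1.0,
    }

def meets_thresholds(m):
    return (m["dq_pass_rate"] >= 0.95
            and m["schema_conformity"] == 1.0
            and m["pk_uniqueness"] >= 0.9999)
```

In the actual platform these checks would run as pipeline expectations at each medallion layer, quarantining batches that fail the gate.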

Requirements

  • Strong experience with Databricks and modern data platform development
  • Hands-on experience with Delta Lake, Databricks Lakeflow (Declarative Pipelines), and Unity Catalog
  • Proficiency in SQL, PySpark, and Python
  • Experience with data quality frameworks and validation techniques
  • Solid understanding of data modeling, medallion architecture, and incremental data processing (CDC)
  • Experience with CI/CD tools and Git-based workflows
  • Strong problem-solving skills and attention to detail
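Incremental (CDC-style) processing, one of the requirements above, boils down to applying a batch of change records to a target keyed by primary key, keeping the latest version of each row. The sketch below is a toy in plain Python with assumed fields (`id`, `updated_at`, `_deleted`); in Databricks this would be a Delta Lake `MERGE` or a Lakeflow declarative pipeline.

```python
# Toy CDC upsert: apply change records to a keyed target,
# honoring deletes and last-write-wins by timestamp.
# Illustrative only -- not Delta Lake's MERGE API.

def apply_changes(target, changes, key="id", ts="updated_at"):
    """target: dict mapping key -> current row; changes: list of change rows."""
    for row in changes:
        k = row[key]
        current = target.get(k)
        if row.get("_deleted"):
            # Apply the delete only if it is at least as new as the current row.
            if current is not None and row[ts] >= current[ts]:
                target.pop(k, None)
        elif current is None or row[ts] >= current[ts]:
            target[k] = row
    return target
```

The last-write-wins rule on `updated_at` is what makes re-delivered (at-least-once) change batches safe to replay.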

Nice to have

  • Experience designing enterprise data governance frameworks
  • Background in building semantic layers for BI and reporting
  • Familiarity with data observability and monitoring tools


What we offer

Days off

In accordance with local law

We care about your health

We provide a wide range of services under private medical insurance

Paid sick leave

In accordance with local law

Vacation and special days off

According to the official calendar, regardless of the client's calendar

Comfortable working conditions

Flexible working hours and help with equipping a comfortable workspace

Internal learning platform

Access to professional courses and training

Internal English language courses

Company-run training with highly qualified teachers
