
Senior Data Engineer

  • Belgrade
  • Wrocław
  • Cluj-Napoca
  • Kraków
  • Dnipro
  • Kharkiv
  • Kyiv
  • Larnaca
  • Lviv
  • Łódź
  • Lublin
  • Monterrey
  • Montevideo
  • Odesa
  • Remote.AR
  • Remote.Brazil
  • Remote.Bulgaria
  • Remote.Colombia
  • Remote.Poland
  • Riga
  • Rosario
  • Sofia
  • Varna
  • Warsaw
Small team (1–10 people)

If you received this job offer from our recruiters, please read our Privacy Notice.

Project overview

We are looking for a Senior Data Engineer to join a project focused on building a scalable and governed data platform in Databricks.

The role involves designing and implementing robust data pipelines, ensuring high data quality standards, and contributing to a well-structured medallion architecture (Bronze/Silver/Gold layers).

You will play a key role in building reliable, production-grade pipelines and shaping data architecture, governance, and best practices across the project.

Responsibilities

  • Design, build, and optimize data ingestion pipelines using Databricks
  • Implement and maintain Bronze, Silver, and Gold layers in a medallion architecture
  • Develop and enforce Data Quality (DQ) checks at each stage of data processing
  • Set up automated scheduling and incremental data loads (CDC) using Lakeflow
  • Configure and manage Unity Catalog, including governance policies, RBAC, and row-level security (RLS)
  • Ensure high data quality standards: ≥95% DQ pass rate, 100% schema conformity, and ≥99.99% primary key uniqueness during validation
  • Collaborate with SMEs and QA teams to align semantic data models
  • Implement monitoring and auditing pipelines for ingestion and transformation processes
  • Contribute to CI/CD processes for data pipelines and artifacts
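The quality targets above (≥95% DQ pass rate, 100% schema conformity, ≥99.99% primary-key uniqueness) can be sketched as a batch-level validation step. The following is a hypothetical pure-Python illustration of how such thresholds might be evaluated — the function name, record shape, and rule format are assumptions for this sketch, not the project's Databricks implementation:

```python
# Hypothetical illustration of batch-level DQ threshold checks
# (names and thresholds are assumptions matching the posting's targets).

def dq_summary(records, rules, pk_field):
    """Evaluate simple data-quality rules over a batch of records (dicts).

    records  -- list of dicts representing rows
    rules    -- list of predicates; a record passes if all rules hold
    pk_field -- field expected to be (near-)unique across the batch
    """
    total = len(records)
    passed = sum(1 for r in records if all(rule(r) for rule in rules))
    pass_rate = passed / total if total else 1.0

    # Primary-key uniqueness: distinct keys over total rows.
    pk_values = [r[pk_field] for r in records]
    pk_uniqueness = len(set(pk_values)) / total if total else 1.0

    return {
        "pass_rate": pass_rate,
        "pk_uniqueness": pk_uniqueness,
        # Targets from the posting: >=95% pass rate, >=99.99% PK uniqueness.
        "meets_targets": pass_rate >= 0.95 and pk_uniqueness >= 0.9999,
    }


# Usage: 100 clean rows pass both thresholds.
rows = [{"id": i, "amount": 10} for i in range(100)]
summary = dq_summary(rows, [lambda r: r["amount"] > 0], "id")
```

In a Databricks pipeline these checks would typically be expressed as declarative expectations on each table rather than as a standalone function, but the threshold logic is the same.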

Requirements

  • Strong experience with Databricks and modern data platform development
  • Hands-on experience with Delta Lake, Databricks Lakeflow (Declarative Pipelines), and Unity Catalog
  • Proficiency in SQL, PySpark, and Python
  • Experience with data quality frameworks and validation techniques
  • Solid understanding of data modeling, medallion architecture, and incremental data processing (CDC)
  • Experience with CI/CD tools and Git-based workflows
  • Strong problem-solving skills and attention to detail

Nice to have

  • Experience designing enterprise data governance frameworks
  • Background in building semantic layers for BI and reporting
  • Familiarity with data observability and monitoring tools


We offer

Remote work

We offer great flexibility to work from different cities and countries.

Days off to rest

All colleagues have days off to travel, rest, and spend time with their loved ones.

National holidays

According to the official calendar of each country.

Maternity and paternity leave

All colleagues enjoy days off to spend with their baby.

Paid certifications

We support the professional development and certification of our colleagues.

Internal e-learning platform

Unlimited access to courses and training.

Language classes

Virtual English classes with highly qualified teachers.

Professional communities

All colleagues can join international and regional professional communities based on their interests.

The benefits package may vary by region and contract type.