About the team
We work in cross-functional, autonomous teams.
We follow continuous delivery best practices executed on top of a modern technology stack.
Our products are built for developers, by developers.
Technological excellence is at the heart of what we do.
We are pragmatic and customer-focused.
We strive to find the right trade-offs to validate our hypotheses as early as possible, iterating on our products based on customer feedback.
We communicate transparently.
We hold weekly all-hands meetings to discuss company performance and goals.
While we are global and remote-friendly, we also operate from our offices in CDMX and São Paulo.
To accommodate time zones, we ensure we're synced up between 3 pm and 6 pm CEST.
We are backed by leading investors in Silicon Valley and Latin America, including Founders Fund, Kaszek Ventures, and Y Combinator.
Your opportunity
We're looking for a seasoned Senior Data Engineer to join our Data Platform team.
This team aims to support data understanding at scale by architecting and developing the infrastructure to build data pipelines, move and transform complex datasets from different sources, and improve data discoverability and data literacy.
The ideal candidate provides technical guidance, can own projects across the company, and ideally has experience building data infrastructure and familiarity with Data Mesh concepts.
As part of the team, you will engage with stakeholders ranging from data insights analysts to deeply technical backend product teams to help define and develop the platform.
You will have ownership of some projects and the opportunity to define new data platform products.
The current platform uses the latest technologies, such as EMR Studio and Apache Iceberg, and you will be responsible for maintaining and evolving it.
Our platform infrastructure is defined with Terraform and processes over a thousand events per second.
We run daily processes that read over 40 terabytes of data using dbt over Athena and Spark on EMR clusters, all orchestrated with Dagster.
We are moving some of our processes to streaming with Kinesis and Flink.
This position may be for you if
You have at least 2 years of experience with data engineering platforms on the AWS cloud
You're fluent in English
You're familiar with orchestrators like Dagster or Airflow
You have previous experience with dbt
You enjoy a challenge dealing with billions of events
You have experience integrating third-party APIs
You love getting things done
Amazing if
You have experience building data platform infrastructure as code in the cloud with Terraform
You have experience with dbt and Great Expectations
You have experience with Spark, either in Scala or Python
You have experience with AWS tools such as EMR, DMS, Glue, Kinesis, Redshift
You have experience with data catalogs and data lineage
Our tech stack
We are building our platform with a focus on reliability and long-term maintainability
Backend mainly uses Python with Django and asyncio
Frontend uses JavaScript, Vue.js and Sass with a design system
Infrastructure on Amazon Web Services using managed services where possible
Observability with Datadog
Continuous Integration and Continuous Delivery best practices
Our process steps
At Belvo, every hire is a team decision.
The steps typically include:
People team chat
Take-home challenge
Challenge presentation
Meet the founders
Our perks
Stock options (we are all owners and this is very important to us)
Annual company bonus linked to company performance
Flexible working hours
Remote friendly
Pet friendly
Health insurance
Paid time off on your birthday
Renew your laptop every 3 years
Training budget
Team building events
Swap bank holidays within the same month
Fitness / wellness stipends
Yearly company offsite
Fresh fruit every week, all-you-can-drink tea and coffee
Extra days off on company anniversaries
Yearly department offsite
Senior Data Engineer • Xico, Veracruz, México