6,829 Data Engineer Jobs - Brazil
Data Engineer – Data Pipelines & Modeling
Posted 11 days ago
Job Description
Join to apply for the Data Engineer – Data Pipelines & Modeling role at Ryz Labs.
This position is only for professionals based in Argentina or Uruguay.
We're looking for a data engineer to help enhance and scale the data transformation and modeling layer. The role focuses on building robust, maintainable pipelines using dbt, Snowflake, and Airflow to support analytics and downstream applications. You will work closely with data, analytics, and software engineering teams to create scalable data models, improve pipeline orchestration, and ensure high-quality data delivery.
Key Responsibilities:
- Design, implement, and optimize data pipelines that extract, transform, and load data into Snowflake from multiple sources using Airflow and AWS services (a minimal sketch follows this list).
- Build modular, well-documented dbt models with strong test coverage for business reporting, lifecycle marketing, and experimentation.
- Partner with analytics and business stakeholders to define source-to-target transformations and implement them in dbt.
- Maintain and improve our orchestration layer (Airflow/Astronomer) to ensure reliability and efficient dependency management.
- Collaborate on data model design best practices, including dimensional modeling, naming conventions, and versioning strategies.
Requirements:
- Hands-on experience developing dbt models at scale, including macros, snapshots, testing frameworks, and documentation; familiarity with dbt Cloud or CLI workflows.
- Strong SQL skills and understanding of Snowflake architecture, including query performance tuning and cost optimization.
- Experience managing Airflow DAGs, scheduling jobs, and handling retries and failures; familiarity with Astronomer is a plus.
- Proficiency in dimensional data modeling and building reusable data marts.
- Familiarity with AWS services such as DMS, Kinesis, and Firehose is a plus.
- Familiarity with event data and flows, especially related to Segment, is a plus.
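To illustrate the kind of orchestration work this posting describes, here is a minimal sketch of an Airflow DAG that runs dbt models and then their tests. This is not Ryz Labs' actual pipeline: it assumes Airflow 2.4+ and an existing dbt project configured with a Snowflake profile, and the DAG id and project path are hypothetical.
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_models",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    # Tests only run after the models build, so bad data never ships silently.
    dbt_run >> dbt_test
```
Splitting run and test into separate tasks keeps retries granular: a transient warehouse failure reruns only the failed step, and test failures are visible as their own task state.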
- Seniority level: Not Applicable
- Employment type: Full-time
- Job function: Information Technology
- Industries: Technology, Information and Internet
Data Engineer
Hoje
Trabalho visualizado
Descrição Do Trabalho
Come to one of the biggest IT Services companies in the world! Here you can transform your career!
Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development. It is the ideal setting to expand your ideas with the right tools, contributing to our success in a collaborative environment.
We are looking for a Data Engineer who wants to learn and transform their career.
For this role, you will need:
- Proficiency in PySpark, Python, and SQL, with at least 5 years of experience
- Working experience with the Palantir Foundry platform (a must)
- Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
- Proven track record of understanding and transforming customer requirements into a best-fit design and architecture.
- Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases.
- Proficient in SQL (Spark SQL preferred).
- Experience with JavaScript/HTML/CSS a plus. Experience working in a Cloud environment such as Azure or AWS is a plus.
- Experience with Scrum/Agile development methodologies.
- At least 7 years of experience working with large scale software systems.
- Bachelor's degree level or equivalent in Computer Science, Data Science or similar discipline
What can you expect from us?
• Professional development and constant evolution of your skills, always in line with your interests.
• Opportunities to work outside Brazil
• A collaborative, diverse and innovative environment that encourages teamwork.
What do we offer?
- TCS Benefits – Brazil:
- Health insurance
- Dental Plan
- Life insurance
- Transportation vouchers
- Meal/Food Voucher
- Childcare assistance
- Gympass
- TCS Cares – a free 0800 line providing psychological (24 hrs/day), legal, social, and financial assistance to associates
- Partnership with SESC
- Reimbursement of Certifications
- Free TCS Learning Portal – Online courses and live training
- International experience opportunity
- Discount Partnership with Universities and Language Schools
- Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire
- TCS Gems – Recognition for performance
- Xcelerate – Free Mentoring Career Platform
At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue a range of inclusion and social responsibility initiatives in order to build a TCS that respects individuality. Come be a TCSer!
#Buildingonbelief
Data Engineer
Yesterday
Job Description
welhome is a startup that manages properties on platforms such as Airbnb and Booking, using technology to simplify life for people who want to earn income from their properties. Our goal is to help investors and owners who want to earn more than traditional rent would bring, delivering a reliable, connected, and efficient operation.
We are looking for someone with an entrepreneurial profile who wants to roll up their sleeves and build structure, with freedom and responsibility. You will help us get our house in order, implement good practices, and scale our data infrastructure.
Your responsibilities
- Advance the build-out of our data lake on Azure
- Create robust pipelines to connect data sources and keep everything running with quality and performance
- Support strategic decisions with data and automate operational reports
- Work together with the product, marketing, sales, customer service, and planning teams
Requirements
- Experience with data engineering in cloud environments (preferably Azure)
- Command of Python and SQL
- Knowledge of streaming tools (such as Kafka) and data pipelines
- Good practices in versioning and data architecture
- A GitHub profile with relevant projects or contributions
What we value
- Experience with data modeling and governance
- Involvement in projects with early-stage startups or fast-moving companies
- Knowledge of CI/CD for data workflows
- Product vision and an understanding of how data affects operations and the business
What we offer
- Flash card (meal allowance)
- SulAmérica health plan
- TotalPass
- A dynamic, collaborative environment with plenty of autonomy
- Remote position with occasional in-person gatherings
- An office in Itaim available for anyone who prefers to work on site
Data Engineer
Yesterday
Job Description
Come to one of the biggest IT Services companies in the world! Here you can transform your career!
Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development. It is the ideal setting to expand your ideas with the right tools, contributing to our success in a collaborative environment.
We are looking for a Data Engineer (AWS, API Gateway, and PySpark) who wants to learn and transform their career.
Main skills required for this role:
- Proven experience with AWS (API Gateway, Lambda/Fargate, S3, Glue, CloudWatch)
- Strong knowledge of PySpark (partitioning, joins, optimization)
- Experience with Glue Data Catalog and queries via Athena
- Experience with data integration and manipulation in REST APIs
- Knowledge of infrastructure as code (Terraform or CloudFormation)
- Best practices in versioning (Git) and CI/CD
- Advanced proficiency in English
Even better if you stand out for:
- Experience with columnar and table formats (Parquet, Delta, Hudi, Iceberg)
- Use of Data Quality tools (Great Expectations, Soda)
- Knowledge of Step Functions, EventBridge, or Kinesis
- Good API security practices (Cognito, WAF, IAM Policies)
- Good communication and teamwork
- Proactive problem-solving
- Ability to handle agile environments and rapid changes
Key Responsibilities:
- Design, develop, and maintain APIs using AWS API Gateway
- Implement integrations with Lambda or Fargate for data ingestion
- Develop processing pipelines in PySpark (AWS Glue or EMR); a minimal sketch follows this list
- Manage data in S3 and Glue Data Catalog
- Ensure data quality and integrity (data quality checks)
- Optimize pipeline costs and performance
- Create and maintain technical documentation and development standards
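As a rough illustration of the PySpark-on-Glue work listed above, here is a minimal sketch of a Glue job that reads a table registered in the Glue Data Catalog and writes partitioned Parquet back to S3. This is not TCS's actual code; it assumes a Glue job with the Data Catalog enabled, and the database, table, column, and bucket names are hypothetical.
```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read raw API events registered in the Glue Data Catalog (queryable via Athena).
events = spark.table("raw_db.api_events")

# Deduplicate and write partitioned Parquet so Athena can prune by event_date.
(
    events.dropDuplicates(["event_id"])
    .write.mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/api_events/")
)
```
Partitioning by event date is what makes the "optimize pipeline costs" responsibility concrete: Athena and downstream Glue jobs scan only the partitions a query touches.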
Keywords: AWS, English
What do we offer?
- TCS Benefits – Brazil:
- Health insurance
- Dental Plan
- Life insurance
- Transportation vouchers
- Meal/Food Voucher
- Childcare assistance
- Gympass
- TCS Cares – a free 0800 line providing psychological (24 hrs/day), legal, social, and financial assistance to associates
- Partnership with SESC
- Reimbursement of Certifications
- Free TCS Learning Portal – Online courses and live training
- International experience opportunity
- Discount Partnership with Universities and Language Schools
- Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire
- TCS Gems – Recognition for performance
- Xcelerate – Free Mentoring Career Platform
- Tata Consultancy Services is an equal opportunity employer; our commitment to diversity and inclusion drives our efforts to provide equal opportunity to all candidates who meet our required knowledge and competency needs, irrespective of socio-economic background, race, color, national origin, religion, sex, gender identity/expression, age, marital status, disability, sexual orientation, or any other status. We encourage anyone interested in building a career at TCS to participate in our recruitment and selection process.
- At Tata Consultancy Services we promote an inclusive culture, we always work for equity. This applies to Gender, People with Disabilities, LGBTQIA+, Religion, Race, Ethnicity. All our opportunities are based on these principles. We think of different actions of inclusion and social responsibility, in order to build a TCS that respects each person. Our motto is Inclusion without exception.
#Buildingonbelief
- RGS - 10221312
Data Engineer
Posted 2 days ago
Job Description
Openings: 2 Senior Data Engineers (Cloud)
Duration: 6 months initially, with possible extension
Location: Remote, nearshore (Brazil and Mexico; candidates from other locations, please do not apply).
This position is NEARSHORE
8-13 years of experience
Must-have skills:
• Proficiency with databases (e.g., Snowflake, DB2, Redshift) and dimensional modeling.
• Hands-on experience with AWS architecture and services (Lambda, Glue, Kinesis, Firehose, Athena, S3, CloudWatch, DynamoDB, API Gateway).
• Proficient in Python, SQL, and scripting (e.g., Unix shell scripts).
• Experience building feature engineering pipelines.
• Experience with CI/CD tools such as GitHub, GitHub Actions, CodePipeline, and CloudFormation.
• Knowledge of user authentication and authorization across systems, servers, and environments.
• Experience with Tecton, SageMaker, or similar feature stores.
• Experience with NoSQL databases.
• Ability to take ownership and proactively ensure delivery timelines are met.
Good-to-have skills:
• Experience in data pipeline development using modern ETL tools, specifically Informatica PowerCenter and/or Informatica IICS.
• Experience with Airflow.
Data Engineer
Posted 3 days ago
Job Description
At Qaracter, we’re looking for a Data Engineer to join a stable, long-term project in the financial sector.
Qaracter is a consulting firm focused on growth, innovation, and continuous improvement, with a strong commitment to our clients. Our consulting services in business, technology, and operations have an international scope — we’re present in Brazil, Spain, Argentina, and Mexico, and collaborate with clients in markets such as the UK and Andorra.
What we’re looking for:
- 3+ years of experience as a Data Engineer
- Strong skills in Python, SQL, PySpark, and Databricks
- Previous experience in banking or financial projects
Nice to have:
- Experience with data platforms in financial institutions
- Familiarity with treasury or risk systems
What we offer:
- CLT contract + benefits
- Hybrid model: 3 days onsite in Santo Amaro (São Paulo)
If you're looking for a new challenge and want to work with cutting-edge technologies in a collaborative and agile environment, join our team!
At QARACTER Group, we are committed to ensuring equal opportunities and non-discrimination in all our processes, promoting an inclusive and diverse work environment. All applications are welcome regardless of sex, gender, gender identity or expression, sexual orientation, origin, ethnicity, age, disability, or any other personal or social condition, in line with our Plan de Igualdad and our commitment to equity.
Data Engineer
Posted 3 days ago
Job Description
Kamino - Financial Management Software
At Kamino, we are more than financial software. Our central goal is to transform how companies view corporate finance and operations. Our team is driven by a commitment to empowering companies to succeed through the intelligent simplification and optimization of financial processes. Beyond the software, we also offer embedded banking and credit! For more details, visit Kamino.
What we expect from you on this journey:
We are looking for a Senior Data Engineer to lead the architecture, construction, and maintenance of our data pipelines, ensuring scalability, reliability, and performance. This person will report directly to the Head of Data.
As a Senior Data Engineer, you will have a direct impact on the quality and governance of the data that drives strategic and operational decisions across the company. You will be responsible for designing efficient pipelines for near-real-time data ingestion, processing, and delivery, supporting analytics and machine learning initiatives, collaborating with product and engineering teams, and ensuring adherence to security and compliance best practices.
The ideal candidate has solid experience with large data volumes, command of tools such as Spark, BigQuery, Airflow, and relational and non-relational databases, plus prior experience in cloud environments (such as GCP, AWS, or Azure). Familiarity with event-driven architecture, scalable data modeling, and CI/CD practices is highly valued.
You will act as the technical reference point within the data team, promoting good practices, mentoring, and process standardization. Collaboration, product vision, an analytical mindset, engineering excellence, and the ability to simplify complex systems are essential for success in this role.
Your mission during the climb:
- Own the development, maintenance, and management of our entire data stack, from infrastructure and development tooling through control of the full pipeline and business metrics;
- Provide daily support to the data analysis team in building data models that meet business needs;
- Keep our data environment up and running;
- Support development teams with data integration processes outside the data environment;
- Continuously review processes and routines for optimization;
- Control the cost of our cloud service usage.
What do you bring with you to be part of this climb?
Cloud & Data:
- Google Cloud Platform (BigQuery, Cloud Storage, Cloud Functions, Dataflow, Pub/Sub, Composer, etc.);
- Modern architectures: Data Lakehouse, Event Streaming, ELT (see the sketch after this skills list);
- Orchestration tools (preferably Airflow);
- Integrations with APIs (REST, webhooks).
Infrastructure & Observability:
- Experience with infrastructure as code (e.g., Terraform or Deployment Manager);
- Creating and managing scalable, secure environments on GCP;
- Cost control and resource optimization in serverless and batch environments;
- Observability with tools such as Stackdriver, DataDog, Prometheus, and OpenTelemetry;
- Managing CI/CD environments (GitHub Actions, Cloud Build, etc.);
- Permission management, environment segregation, and data security best practices.
Engineering & Product:
- Scalable data modeling (OLAP and OLTP);
- Building resilient, modular data pipelines;
- Data versioning (dbt, Dataform, etc.).
AI & Automation:
- Using AI to speed up technical work, e.g., code generation with LLMs (Copilot, Gemini, etc.) and automations with agents (Flowise, LangChain, n8n);
- Experience in projects involving generative AI, data classification, automated enrichment, or internal copilots.
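For the ELT pattern named in the skills list above, here is a minimal sketch of the extract-and-load step on GCP: load newline-delimited JSON from Cloud Storage into a BigQuery staging table, leaving transformation to dbt or Dataform. This is not Kamino's actual stack; it assumes the google-cloud-bigquery client library is installed and credentials are configured, and the bucket, project, and table names are hypothetical.
```python
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    autodetect=True,  # schema inference is acceptable for a staging table
)

# Load raw events into staging; downstream dbt/Dataform models do the "T".
load_job = client.load_table_from_uri(
    "gs://example-raw-bucket/events/*.json",
    "example-project.staging.events",
    job_config=job_config,
)
load_job.result()  # block until the load job finishes or raises
```
Keeping raw data in a staging dataset and versioning the transformations (dbt/Dataform, as the posting asks) means any model can be rebuilt from source without re-ingesting.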
We think of everything, right?
Here is what we have prepared to keep you full of energy along the way with us:
- SulAmérica health plan, including legal dependents, with no monthly fee and no co-payment - Your health needs to be in order, right? SulAmérica covers it all!
- Odonto Mais dental plan - Your smile always in shape;
- Life insurance;
- Flash card - This one can never be missing! Everyone's favorite!
- Gympass - Shall we build some healthy habits?
- Day off during your birthday week - Of course you need to celebrate the most beautiful date of the year;
- The possibility of receiving stock options after one year under contract - This is elite!
Coming with us? LFG!
Data Engineer
Posted 3 days ago
Job Description
We are seeking a skilled and experienced Data Engineer to join our Threat Research team. The primary responsibility of this role will be to design, develop, and maintain data pipelines for threat intelligence ingestion, validation, and export automation flows.
Responsibilities:
- Design, develop, and maintain data pipelines for ingesting threat intelligence data from various sources into our data ecosystem.
- Implement data validation processes to ensure data accuracy, completeness, and consistency (a minimal sketch follows this list).
- Collaborate with threat analysts to understand data requirements and design appropriate solutions.
- Develop automation scripts and workflows for data export processes to external systems or partners.
- Optimize and enhance existing data pipelines for improved performance and scalability.
- Monitor data pipelines and troubleshoot issues as they arise, ensuring continuous data availability and integrity.
- Document technical specifications, data flows, and procedures for data pipeline maintenance and support.
- Stay updated on emerging technologies and best practices in data engineering and incorporate them into our data ecosystem.
- Provide technical guidance and support to other team members on data engineering best practices and methodologies.
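To make the validation responsibility concrete, here is a minimal sketch of a record-level check for inbound indicators. It is not this team's actual framework (which might be Great Expectations or similar); the field names, indicator types, and rules are hypothetical stand-ins.
```python
from datetime import datetime, timezone

REQUIRED_FIELDS = {"indicator", "type", "first_seen", "source"}
KNOWN_TYPES = {"ip", "domain", "url", "hash"}

def validate_indicator(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "type" in record and record["type"] not in KNOWN_TYPES:
        errors.append(f"unknown indicator type: {record['type']!r}")
    try:
        seen = datetime.fromisoformat(record["first_seen"])
        if seen.tzinfo is None:
            seen = seen.replace(tzinfo=timezone.utc)  # assume UTC when unqualified
        if seen > datetime.now(timezone.utc):
            errors.append("first_seen is in the future")
    except (KeyError, TypeError, ValueError):
        errors.append("first_seen is not a valid ISO-8601 timestamp")
    return errors

# Records that fail validation would typically be routed to a quarantine
# table for analyst review rather than dropped, preserving auditability.
```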
Requirements:
- Proven experience as a Data Engineer or similar role, with a focus on data ingest, validation, and export automation.
- Strong proficiency in Python.
- Experience with data pipeline orchestration tools such as Apache Airflow, Apache NiFi, or similar.
- Familiarity with cloud and data platforms such as Snowflake, AWS, Azure, or Google Cloud Platform.
- Experience with data validation techniques and tools for ensuring data quality.
- Experience building and deploying images using containerization technologies such as Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Data Engineer
Posted 7 days ago
Job Description
About Extern
The age of higher education is ending. Employers no longer hire based on degrees — they hire based on skills and experience. But in today’s market, getting real experience is nearly impossible. Internships are disappearing, entry-level jobs demand years of experience, and AI is reshaping the workforce faster than universities can keep up.
At Extern, we’re building the world’s first subscription to professional experience where learning never stops and every experience moves you forward. Through Externships — hands-on remote projects in AI, product strategy, data, finance, and more — students and career switchers learn by doing while working with top companies.
Our platform makes real-world experience accessible to anyone, anywhere, by providing companies with a less burdensome way to engage young talent than high-touch internship programs. At Extern, we are helping college students and early-career professionals not just land an internship or job, but continuously upskill and stay ahead in their careers through real-world experience.
Our Impact
- Strong Growth: Over 35,000 learners gained real-world experience last year (300% YoY growth). Our new B2C subscription model launched in late 2024 and is already driving fast adoption and strong unit economics.
- Top-Tier Externship Partners: PwC, Home Depot, Beats by Dre (Apple), HP, Expedia, Epic Games, Unity, Macquarie, Pfizer, and National Geographic trust Extern to connect them with emerging talent.
- Career Outcomes That Matter: 70% of externs land internships or jobs within six months.
- Strong Backers: We’re funded by Y Combinator, Jason Calacanis, Foundation Capital, Learn Capital, and University Ventures — who all believe in our vision to replace outdated education models with continuous, experience-based learning.
If you’re excited about shaping the future of education and work—and want to drive growth at startup speed — read on.
What You’ll Do
- Own our end-to-end data pipeline: ingestion, transformation, and orchestration
- Build and maintain ETL processes to move data between production systems, analytics tools (Mixpanel, Segment), and our warehouse
- Partner with product and marketing to implement tracking plans that support experimentation and reporting
- Design and optimize our data models to support analysis, dashboards, and AI use cases
- Collaborate with engineers to improve event logging and ensure data integrity across systems
- Ensure data privacy, security, and compliance with relevant standards
Who You Are
- 5+ years of experience as a data engineer or analytics engineer
- Fluent in SQL and a modern language such as Python or JavaScript, with experience working with modern data stacks (e.g., dbt, BigQuery, Fivetran)
- Strong understanding of MySQL (AWS Aurora) databases.
- Strong understanding of data modeling, schema design, and event tracking
- Comfortable working cross-functionally in a fast-paced, early-stage environment
- Bonus: experience with analytics platforms (Segment, Mixpanel), or integrating with AI/ML systems
- Bonus: experience with customer.io.
Why Join Extern?
This isn’t just another job—it’s an opportunity to redefine the way people build careers. If you’re a data-driven, growth-obsessed product leader who thrives on fast iteration, bold ideas, and real impact, this is your chance to build something that will change millions of lives.
Ready to help shape the future of experiential learning? Let’s build the new standard for lifelong career acceleration—together.
- Mission-Driven Impact: Help thousands of students and career switchers access the real-world experience they need to succeed.
- High-Growth Startup: Be part of a rapidly scaling B2C platform with strong early traction.
- Product-Led Growth Focus: Build organic, self-sustaining acquisition loops instead of just “buying” users.
- Ownership & Autonomy: Work in a high-ownership environment where decisions are made quickly, and you can see the impact of your work in real time.
- World-Class Team & Backing: Join a mission-driven team, guided by investors like Y Combinator, Foundation Capital, and more.
This role is remote and can be in any location in the world.
Data Engineer
Posted 8 days ago
Job Description
Responsibilities
• Work across the entire software development cycle, including unit and integration testing
• Deploy to environments such as Snowflake, Databricks, Hadoop, AWS, or Azure
• Work with data ingestion tools such as StreamSets, Apache NiFi, or Azure Data Factory
• Manage distributed storage (Kafka, HDFS, S3/ADLS, Elastic)
• Write, debug, and optimize SQL queries
• Develop pipelines and solutions with Python, PySpark, Spark SQL, and Spark Structured Streaming (a minimal sketch follows this list)
• Use tools such as GitHub, GitHub Actions, and Databricks Asset Bundles
• Apply SOLID design principles
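As an illustration of the Structured Streaming work listed above, here is a minimal sketch of a Kafka-to-data-lake pipeline. It is not this client's actual code; it assumes Spark 3.x with the spark-sql-kafka package available, and the broker address, topic, schema, and paths are hypothetical.
```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("ts", TimestampType()),
    StructField("payload", StringType()),
])

# Read JSON events from Kafka and parse the value column into typed fields.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the stream as Parquet; the checkpoint makes restarts exactly-once.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/events/")
    .option("checkpointLocation", "s3://example-lake/_checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```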
Requirements
• Proven experience as a Data Engineer (10 to 15 years)
• Proficiency in SQL and Python programming
• Command of Databricks, Spark, and data lakes
• Knowledge of JSON, DataFrames, and Structured Streaming
• Experience with versioning and automation tools (GitHub, GitHub Actions)
• Fluent English (required)
Desirable Qualifications
• Knowledge of SAP WITS/WITSML and Cosmos DB
• Familiarity with the Databricks CLI
• Experience in the Oil & Gas sector is a plus
• PJ (independent contractor) engagement
• 100% remote work model
• Long-term project with a multinational client
• Integration with a global team