1,874 Senior Data Engineer Jobs - Brazil

Senior Data Engineer

Tata Consultancy Services

Today


Job Description

Come to one of the biggest IT Services companies in the world! Here you can transform your career!

Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development. It is the ideal scenario for expanding ideas through the right tools, contributing to our success in a collaborative environment.

We are looking for a Senior Data Engineer who wants to learn and transform their career.

In this role, you will need:

  • Proficiency in PySpark, Python, and SQL, with at least 5 years of experience (a minimal PySpark sketch follows this list)
  • Working experience with the Palantir Foundry platform is a must
  • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
  • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture.
  • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases.
  • Proficient in SQL (Spark SQL preferred).
  • Experience with JavaScript/HTML/CSS is a plus.
  • Experience working in a cloud environment such as Azure or AWS is a plus.
  • Experience with Scrum/Agile development methodologies.
  • At least 7 years of experience working with large-scale software systems.
  • Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline
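
As a minimal sketch of the PySpark plus Spark SQL proficiency described above (the paths, schema, and column names are hypothetical assumptions, not part of the posting):

    # Minimal PySpark + Spark SQL sketch; paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily").getOrCreate()

    # Read a raw CSV extract and apply light cleansing.
    orders = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)
    clean = (orders
             .dropDuplicates(["order_id"])
             .withColumn("order_date", F.to_date("order_ts")))

    # Aggregate with Spark SQL over a temporary view.
    clean.createOrReplaceTempView("orders")
    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS n_orders, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
    """)
    daily.write.mode("overwrite").parquet("/data/curated/orders_daily")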



What can you expect from us?

• Professional development and constant evolution of your skills, always in line with your interests.

• Opportunities to work outside Brazil

• A collaborative, diverse and innovative environment that encourages teamwork.

What do we offer?

TCS Benefits – Brazil:

Health insurance

Dental Plan

Life insurance

Transportation vouchers

Meal/Food Voucher

Childcare assistance

Gympass

TCS Cares – a free 0800 hotline that provides psychological assistance (24 hrs/day), as well as legal, social, and financial assistance to associates

Partnership with SESC

Reimbursement of Certifications

Free TCS Learning Portal – Online courses and live training

International experience opportunity

Discount Partnership with Universities and Language Schools

Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire

TCS Gems – Recognition for performance

Xcelerate – Free Mentoring Career Platform

At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue a range of inclusion and social responsibility initiatives in order to build a TCS that respects individuality. Come be a TCSer!


#Buildingonbelief



Senior Data Engineer

Pride Global

Today


Job Description

Sorry, this job is not available in your region.

Senior Data Engineer

Avenue Code

Yesterday


Job Description

We are looking for a Senior Data Engineer to design and maintain scalable data pipelines on AWS, ensuring performance, quality, and security. You will collaborate with data scientists and analysts to integrate data from multiple sources and support AI/ML initiatives.


Key Responsibilities:


  • Build and optimize ETL pipelines with AWS Glue (a minimal Glue job sketch follows this list).
  • Work with AWS S3, Glue, and SageMaker for data and AI workflows.
  • Develop solutions in Python and SQL.
  • Integrate data from Salesforce and APIs.
  • Ensure data governance, documentation, and best practices.
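
A minimal sketch of a Glue job like the one described above, assuming it runs inside an AWS Glue environment; the catalog database, table, and bucket names are hypothetical:

    # Skeleton of a Glue PySpark job; all names are hypothetical.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read raw data registered in the Glue Data Catalog.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales", table_name="raw_orders")

    # Write curated Parquet back to S3.
    glue_context.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/orders/"},
        format="parquet",
    )
    job.commit()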


Tech Stack:


  • AWS (S3, Glue, SageMaker)
  • Python, SQL
  • Salesforce, APIs


Requirements:


  • Proven experience in data engineering with AWS.
  • Strong Python and SQL skills.
  • Experience with ETL, data modeling, and pipeline optimization.
  • Advanced English (international collaboration).


Avenue Code reinforces its commitment to privacy and to all the principles guaranteed by the strictest global data protection laws, such as GDPR, LGPD, CCPA, and CPRA. Candidate data shared with Avenue Code will be kept confidential and will not be transmitted to unrelated third parties, nor will it be used for purposes other than applying for open positions. As a consultancy company, Avenue Code may share your information with its clients and other companies of the CompassUol Group to which Avenue Code's consultants are allocated to perform services.


Senior Data Engineer

Covalenty

Yesterday


Job Description

About Covalenty

We are a SaaS-enabled B2B marketplace that connects independent pharmacies to their suppliers, helping the pharmaceutical supply chain become more efficient, intelligent, and integrated. Our purpose is to ensure that the ideal stock is in the right place at the right time, from industry to pharmacy.


Senior Data Engineer


At Covalenty, we are transforming the way small pharmacies buy, grow, and become more competitive through technology, data intelligence, and real solutions. If you want to lead with purpose, make an impact, and work with talented teams at a growing startup, this role is for you!


About the role

We are looking for a Senior Data Engineer to structure, scale, and maintain data pipelines for Artificial Intelligence and Machine Learning projects, while also supporting the company's strategic decisions. This person will act as the link between the Data Science, Engineering, and Product teams, building robust and efficient architectures.


Key Responsibilities

  • Design, build, and maintain scalable, high-performance data pipelines (ETL/ELT)
  • Work with modern cloud architectures (AWS, GCP, or Azure)
  • Create and maintain data lakes, feature stores, and data flows for predictive modeling
  • Develop integrations across structured, semi-structured, and unstructured data sources
  • Help build datasets for model training and inference
  • Ensure sound engineering, security, and data governance practices
  • Monitor and optimize pipelines and large-scale data environments


Desired Qualifications

  • Solid experience with data engineering in complex environments
  • Mastery of Python, SQL, and manipulation of large data volumes
  • Experience with orchestration tools (Airflow, Prefect, dbt); a minimal Airflow sketch follows this list
  • Experience with Big Data tooling (Apache Spark, Kafka, Hadoop)
  • Experience in cloud environments (AWS, GCP, or Azure)
  • Knowledge of data versioning and MLOps is a plus
  • Hands-on experience with SQL databases (PostgreSQL, MySQL) and NoSQL or analytical stores (BigQuery, DynamoDB)
  • Familiarity with pipelines geared toward Machine Learning
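
As a minimal sketch of the orchestration experience mentioned above, a two-task Airflow DAG; the dag_id and task bodies are hypothetical placeholders, assuming Airflow 2.x:

    # Minimal Airflow DAG sketch; dag_id and task bodies are placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        pass  # pull data from a source system (placeholder)

    def load():
        pass  # load the extract into the warehouse (placeholder)

    with DAG(
        dag_id="daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task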


What we expect from you

  • Experience with feature stores for AI
  • Knowledge of DataOps/MLOps and pipeline automation
  • Experience with event-driven architectures and real-time processing
  • Prior work on AI projects or recommendation systems
  • Analytical thinking focused on practical solutions
  • Good communication and collaboration with multidisciplinary teams
  • Autonomy and proactivity on complex projects
  • Commitment to quality, security, and responsible use of data


At Covalenty, you will find:

  • Technical challenges with real impact on the healthcare market
  • A collaborative team with room for innovation, autonomy, and growth
  • A culture of continuous learning, with lightweight rituals and a focus on results
  • An agile, dynamic, constantly evolving environment


Want to be part of this journey?

#VemPraCovalenty



Senior Data Engineer

Zartis

Posted 3 days ago


Job Description

The company and our mission:


Zartis is a digital solutions provider working across technology strategy, software engineering and product development.

We partner with firms across financial services, MedTech, media, logistics technology, renewable energy, EdTech, e-commerce, and more. Our engineering hubs in EMEA and LATAM are full of talented professionals delivering business success and digital improvement across application development, software architecture, CI/CD, business intelligence, QA automation, and new technology integrations.


We are looking for a Data Engineer to work on a project in the Technology industry.


The project:


Our teammates are talented people who come from a variety of backgrounds. We're committed to building an inclusive culture based on trust and innovation.

You will be part of a distributed team developing new technologies to solve real business problems. Our client empowers organizations to make smarter, faster decisions through the seamless integration of strategy, technology, and analytics. They have helped leading brands harness their marketing, advertising, and customer experience data to unlock insights, enhance performance, and drive digital transformation.

We are looking for someone with good communication skills and strong attention to detail, ideally proactive, experienced in making decisions, and used to building software from scratch.


What you will do:


  • Designing performant data pipelines for the ingestion and transformation of complex datasets into usable data products.
  • Building scalable infrastructure to support hourly, daily, and weekly update cycles.
  • Implementing automated QA checks and monitoring systems to catch data anomalies before they reach clients.
  • Re-architecting system components to improve performance or reduce costs.
  • Supporting team members through code reviews and collaborative development.
  • Building enterprise-grade batch and real-time data processing pipelines on AWS, with a focus on serverless architectures.
  • Designing and implementing automated ELT processes to integrate disparate datasets.
  • Collaborating across multiple teams to ingest, extract, and process data using Python, R, Zsh, SQL, REST, and GraphQL APIs.
  • Transforming clickstream and CRM data into meaningful metrics and segments for visualization.
  • Creating automated acceptance, QA, and reliability checks to ensure business logic and data integrity.
  • Designing appropriately normalized schemas and making informed decisions between SQL and NoSQL solutions.
  • Optimizing infrastructure and schema design for performance, scalability, and cost efficiency.
  • Defining and maintaining CI/CD and deployment pipelines for data infrastructure.
  • Containerizing and deploying solutions using Docker and AWS ECS.
  • Proactively identifying and resolving data discrepancies, and implementing safeguards to prevent recurrence.
  • Contributing to documentation, onboarding materials, and cross-team enablement efforts.
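
As a hedged sketch of the automated QA and reliability checks listed above; the column names, thresholds, and input file are illustrative assumptions, and event timestamps are assumed to be stored tz-aware in UTC:

    # Illustrative data QA check in pandas; names and thresholds are assumptions.
    import pandas as pd

    def qa_check(df: pd.DataFrame) -> list:
        failures = []
        if df.empty:
            return ["extract is empty"]
        if df["user_id"].isna().mean() > 0.01:
            failures.append("user_id null rate exceeds 1%")
        if df["event_ts"].max() < pd.Timestamp.now(tz="UTC") - pd.Timedelta(hours=2):
            failures.append("newest event is more than 2 hours old")
        return failures

    df = pd.read_parquet("clickstream_latest.parquet")  # hypothetical extract
    problems = qa_check(df)
    if problems:
        raise RuntimeError("QA failed: " + "; ".join(problems))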


What you will bring:


  • Bachelor’s degree in Computer Science, Software Engineering, or a related field; additional training in statistics, mathematics, or machine learning is a strong plus.
  • 5+ years of experience building scalable and reliable data pipelines and data products in a cloud environment (AWS preferred).
  • Deep understanding of ELT processes and data modeling best practices.
  • Strong programming skills in Python or a similar scripting language.
  • Advanced SQL skills, with intermediate to advanced experience in relational database design.
  • Familiarity with joining and analyzing large behavioral datasets, such as Adobe and GA4 clickstream data.
  • Excellent problem-solving abilities and strong attention to data accuracy and detail.
  • Proven ability to manage and prioritize multiple initiatives with minimal supervision.


Nice to have:


  • Experience working with data transformation tools such as Data Build Tool or similar technologies.
  • Familiarity with Docker containerization and orchestration.
  • Experience in API design or integration for data pipelines.
  • Development experience in a Linux or Mac environment.
  • Exposure to data QA frameworks or observability tools (e.g., Great Expectations, Monte Carlo, etc.).


What we offer:


  • 100% Remote Work
  • WFH allowance: Monthly payment as financial support for remote working.
  • Career Growth: We have established a career development program accessible to all employees, with 360° feedback that will help guide your career progression.
  • Training: For tech training at Zartis, you have time allocated during the week at your disposal. You can choose from a variety of options, such as online courses (from Pluralsight and Educative.io, for example), English classes, books, conferences, and events.
  • Mentoring Program: You can become a mentor in Zartis or you can receive mentorship, or both.
  • Zartis Wellbeing Hub (Kara Connect): A platform that provides sessions with a range of specialists, including mental health professionals, nutritionists, physiotherapists, and fitness coaches, as well as webinars with these professionals.
  • Multicultural working environment: We organize tech events, webinars, parties, and online team-building games and contests.

Senior Data Engineer

Credix (São Paulo, São Paulo)

Posted 11 days ago


Job Description

About Credix

Credix is a FinTech company dedicated to growing businesses in Latin America. Building on our expertise, we now focus on providing a tailored Buy Now, Pay Later (BNPL) solution for B2B transactions in Brazil with our platform, CrediPay. CrediPay was created to help businesses grow their sales and improve their cash flow efficiency through a seamless, risk-free credit offering. Sellers offer their buyers flexible payment terms at an attractive price point and receive upfront payments. We manage credit and fraud risk and protect our clients from it, letting them focus only on what matters: increased sales and profitability.


Why choose Credix?

  • Become part of a forward-thinking start-up where boldness and a commitment to excellence are paramount, and your personal and professional development is at the forefront.
  • Work alongside a dedicated team of bright individuals driven by an Olympian mindset to excel in every aspect of our operations. Together, we aim to build with velocity, utilizing innovative embedded finance strategies to expand business operations in Latin America.
  • Experience a close and supportive work atmosphere where collaboration thrives, wise judgment guides our decisions, and you can learn, grow, and take on meaningful responsibilities.


About the job

As a Senior Data Engineer, you will be at the heart of Credix's data strategy, designing and building scalable pipelines and infrastructure that empower teams across the company. Your work will enable the Risk team to enhance predictive modeling, streamline data consumption for other departments, and help drive contextual underwriting and data-driven decision-making. You are passionate about leveraging data to solve complex challenges and revolutionize the B2B credit market in Brazil.


Responsibilities

  • Build and Own Ingestion Pipelines: Design robust, modular pipelines to ingest structured and semi-structured data into Google Cloud Platform (GCP) environments.
  • Develop Clean, Analytics-Ready Layers: Use dbt to transform raw ingested data into curated datasets optimized for credit risk modeling and business intelligence consumption.
  • Operationalize the Data Lake: Manage the data lifecycle of our transactional data to support both real-time and historical querying needs.
  • Metrics & KPI Layer: Create a single source of truth for key business KPIs and credit risk metrics by building reliable and tested data marts.
  • Implement Data Quality Controls: Deploy automated testing frameworks (e.g., dbt tests, GCP Dataplex) to ensure 90%+ coverage and detect schema drift, nulls, and outliers (see the sketch after this list).
  • Support API & 3rd Party Integrations: Develop ingestion frameworks for external APIs to enrich risk data.
  • Collaborate Across Functions: Work closely with Credit Risk, Operations, and Product teams to understand analytical needs and translate them into scalable data solutions.
  • Contribute to Platform Scalability: Design pipelines with reusability and modularity in mind to support onboarding new data sources and future expansion across regions or products.
  • Maintain Observability: Ensure logging, monitoring, and alerting are implemented across data flows for reliability and debugging (e.g., via GCP Logging, Cloud Monitoring, or third-party tools).
  • Documentation & Demo Ownership: Create clear, user-friendly documentation and visual diagrams of the data architecture and transformation layers.


Qualifications

  • Hands-on experience building ETL/ELT pipelines with dbt (must-have) and orchestration tools like Apache Airflow, Cloud Composer, or similar.
  • Deep understanding of Google Cloud Platform services (e.g., BigQuery, Cloud Storage, Cloud Run, Dataflow).
  • Expertise in SQL and Python, with clean, well-documented coding practices.
  • Familiarity with data warehousing best practices, medallion design, and analytics engineering principles.
  • Experience working with Terraform or similar IaC tools for provisioning data infrastructure.
  • Fluent in Portuguese and English, both written and spoken.
  • Bonus: Experience with streaming data ingestion (e.g., Pub/Sub, Kafka, or Dataflow).
  • Bonus: Familiarity with financial services data (installments, receivables, delinquency, credit scoring, etc.) and regional data sources in Brazil (Serasa, Receita, CNPJ enrichment).
  • Proactive, detail-oriented, and self-motivated, with a strong commitment to quality and delivery.
  • Ability to clearly communicate data design trade-offs and mentor junior engineers or analysts in best practices.


What we offer

Our office provides a vibrant and engaging workspace where team members can connect, innovate, and grow. With access to our offices in São Paulo, you'll immerse yourself in a culture of innovation and collaboration.

But that’s just the beginning - here’s what else we offer:

  • A culture of learning and experimentation: where you are encouraged to explore new ideas and technologies
  • Competitive salary package: Your hard work deserves recognition, and we ensure you’re well-rewarded for your contributions.
  • Equity stock options plan: Be a part of our journey towards success and share in the rewards.
  • Paid holidays: Enjoy the flexibility to recharge and rejuvenate
  • Off-sites: Awesome team building, unforgettable memories, and adventures ensured during our team off-sites.

Senior Data Engineer

Kake

Posted 14 days ago


Job Description

About The Role

We are seeking experienced Data Engineers to develop and deliver robust, cost-efficient data products that power analytics, reporting and decision-making across two distinct brands.


What You’ll Do

- Build highly consumable and cost-efficient data products by synthesizing data from diverse source systems.

- Ingest raw data using Fivetran and Python, staging and enriching it in BigQuery to provide consistent, trusted dimensions and metrics for downstream workflows.

- Design, maintain, and improve workflows that ensure reliable and consistent data creation, proactively addressing data quality issues and optimizing for performance and cost.

- Develop LookML Views and Models to democratize access to data products and enable self-service analytics in Looker.

- Deliver ad hoc SQL reports and support business users with timely insights.

- (Secondary) Implement simple machine learning features into data products using tools like BQML (see the sketch after this list).

- Build and maintain Looker dashboards and reports to surface key metrics and trends.
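
As a hedged sketch of the secondary BQML item above, training a simple model from Python; the dataset, table, model, and column names are hypothetical:

    # Illustrative BQML training call; all names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()
    client.query("""
        CREATE OR REPLACE MODEL `analytics.ltv_model`
        OPTIONS (model_type = 'linear_reg', input_label_cols = ['ltv']) AS
        SELECT n_orders, days_active, ltv
        FROM `analytics.customer_features`
    """).result()  # block until training completes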


What We’re Looking For

- Proven experience building and managing data products in modern cloud environments (GCP preferred).

- Strong proficiency in Python for data ingestion and workflow development.

- Hands-on expertise with BigQuery, dbt, Airflow and Looker.

- Solid understanding of data modeling, pipeline design and data quality best practices.

- Excellent communication skills and a track record of effective collaboration across technical and non-technical teams.


Tech stack

SQL, Python, BigQuery, dbt, Fivetran, Airflow, Google Cloud Storage, LookML, Looker.


Why Join Kake?

Kake is a remote-first company with a global community — fully believing that it’s not where your table is, but what you bring to the table. We provide top-tier engineering teams to support some of the world’s most innovative companies, and we’ve built a culture where great people stay, grow, and thrive. We’re proud to be more than just a stop along the way in your career — we’re the destination.


The icing on the Kake:

Competitive Pay in USD – Work globally, get paid globally.

Fully Remote – Simply put, we trust you.

Better Me Fund – We invest in your personal growth and passions.

Compassion is Badass – Join a community that invests in social good.


Senior Data Engineer

Solvd, Inc.

Today


Job Description

Overview

Solvd Inc. is a rapidly growing AI-native consulting and technology services firm delivering enterprise transformation across cloud, data, software engineering, and artificial intelligence. We work with industry-leading organizations to design, build, and operationalize technology solutions that drive measurable business outcomes.

Following the acquisition of Tooploox, a premier AI and product development company, Solvd now offers true end-to-end delivery, from strategic advisory and solution design to custom AI development and enterprise-scale implementation. Our capability centers combine deep technical expertise, proven delivery methodologies, and sector-specific knowledge to address complex business challenges quickly and effectively.

Solvd is an AI-first advisory and digital engineering firm delivering measurable business impact through strategic digital transformation. Taking an AI-first approach, we bridge the critical gap between experimentation and real ROI, weaving artificial intelligence into everything we do and helping clients at all stages accelerate AI integration into each process layer. Our mission is to empower passionate people to thrive in the era of AI while maintaining rigorous ethical AI standards. We’re supported by a global team with offices in the USA, Poland, Ukraine and Georgia.

Responsibilities
  • Data Ingestion & Pipeline Development: Design, build, and maintain robust data pipelines to ingest new advertising data from sources such as Google Ad Manager into our Snowflake data lake (a minimal sketch follows this list).

  • Data Modeling: Transform raw, often unstructured data into clean, well-structured data marts within Snowflake. You will be responsible for creating a single source of truth that is optimized for reporting and analysis.

  • Data Quality Assurance: Implement processes and checks to ensure the accuracy and integrity of our new data sources. You'll identify and resolve data quality issues like duplicates or inconsistencies before the data is used for reporting.

  • Technical Guidance & Requirements Gathering: Partner with business stakeholders to understand their data needs and translate them into technical requirements. You'll provide expert advice on data availability and feasibility, helping to shape the strategic direction of our advertising data platform.

  • Tooling & Architecture: Contribute to the design and evolution of our data architecture, ensuring our Snowflake instance is structured efficiently and our processes align with best practices. You'll also work closely with our Tableau users to ensure data is optimized for their reporting needs.
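
A minimal sketch of the ingestion and modeling responsibilities above, using the Snowflake Python connector; the connection parameters, stage, and table names are hypothetical:

    # Illustrative Snowflake ingestion and modeling; all names are hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="...",
        warehouse="ETL_WH", database="ADS", schema="RAW",
    )
    cur = conn.cursor()
    # Load a day of ad-delivery exports from an external stage.
    cur.execute("""
        COPY INTO raw_gam_delivery
        FROM @gam_stage/2024-06-01/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    # Deduplicate into a reporting-ready mart for Tableau.
    cur.execute("""
        CREATE OR REPLACE TABLE ADS.MARTS.GAM_DELIVERY_DAILY AS
        SELECT DISTINCT ad_unit, impression_date, impressions, revenue
        FROM raw_gam_delivery
    """)
    cur.close()
    conn.close()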

Mandatory requirements
  • Significant experience with data engineering, including ingesting, cleansing, and transforming novel datasets.

  • Proficiency in Snowflake for data modeling and warehousing.

  • Expertise in Tableau for building dashboards and reporting.

  • Strong analytical skills with a proven ability to perform root cause analysis and quality assurance on data.

  • Demonstrated experience with requirements gathering and acting as a consultative partner to non-technical stakeholders.

  • Familiarity with the ad tech ecosystem, particularly with data from sources like Google Ad Manager.

Optional requirements
  • Experience working with an advertising or marketing line of business.

  • Previous experience in a fast-paced environment where data sources are constantly evolving.

  • Excellent communication skills with the ability to explain complex technical concepts to a variety of audiences.

Tech stack
  • Snowflake for data warehousing.

  • Tableau for BI and reporting.

  • Google Ad Manager integration is underway; experience with this tool is a plus.

  • Existing data lake architecture includes raw, structured, and curated zones.

  • Preference for data marts that plug into Tableau's relationship system rather than heavily SQL-based dashboards.

What you'll do
  • Shape real-world AI-driven projects across key industries, working with clients from startup innovation to enterprise transformation.

  • Be part of a global team with equal opportunities for collaboration across continents and cultures.

  • Thrive in an inclusive environment that prioritizes continuous learning, innovation, and ethical AI standards.

Ready to make an impact?

If you're excited to build things that matter, champion responsible AI, and grow with some of the industry's sharpest minds, apply today and let's innovate together.


Senior Data Engineer

Invillia

Today


Job Description

Overview

Invillia does not just transform the way companies innovate; it also connects people who are passionate about technology around the world. We are looking for a senior data engineer to help build and evolve the Feature Store platform, an internal solution that accelerates the development of machine learning and artificial intelligence models. The focus will be on developing high-performance pipelines, reusable frameworks, and data quality mechanisms, in close collaboration with the Data Science and MLOps teams.

Responsibilities
  • Develop scalable, high-performance data pipelines.
  • Design solutions focused on modularity, versioning, and reuse.
  • Create and maintain internal frameworks for process automation.
  • Implement data validation and monitoring mechanisms (a minimal PySpark sketch follows this list).
  • Support the continuous evolution of the data platform.
  • Solid experience with Spark (PySpark) and distributed processing.
  • Knowledge of CI/CD using Git and Jenkins.
  • Experience with Data Quality frameworks or custom validations.
  • Knowledge of software engineering (best practices, testing, modularization).
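
As a minimal sketch of the validation mechanisms described above, a reusable PySpark check; the rules and column names are illustrative assumptions:

    # Illustrative PySpark validation step; rules and columns are assumptions.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def validate_features(df: DataFrame, key: str, not_null: list) -> None:
        """Fail fast if the feature table violates basic contracts."""
        total = df.count()
        dupes = total - df.select(key).distinct().count()
        if dupes:
            raise ValueError(f"{dupes} duplicate '{key}' values")
        for col in not_null:
            nulls = df.filter(F.col(col).isNull()).count()
            if nulls:
                raise ValueError(f"column '{col}' has {nulls} nulls")

    spark = SparkSession.builder.getOrCreate()
    features = spark.read.parquet("/lake/feature_store/customer_v3")  # hypothetical
    validate_features(features, "customer_id", ["avg_ticket", "recency_days"])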
Nice to Have
  • Experience automating data pipelines and working on MLOps/Feature Store projects.
  • Experience with generative AI projects, especially RAG.
Requirements and Qualifications
  • A degree related to data engineering, data science, or a similar field.
  • Solid knowledge of Spark (PySpark) and distributed processing.
  • Experience with CI/CD, Git, and Jenkins.
  • Knowledge of Data Quality or data validation.
  • Software engineering principles, testing, and modularization.
About Invillia

The experience of working at Invillia is unique and global. We have a culture that connects talent, innovation, and distributed opportunities, a methodology centered on people and data, and we invest in the development of our employees.


Senior Data Engineer

buscojobs Brasil (Rio de Janeiro, Rio de Janeiro)

Posted 2 days ago


Job Description

This posting repeats the Kake "Senior Data Engineer" listing above verbatim.
