113 Spark Jobs – Brazil

Senior Data Engineer – AWS / Databricks / Spark

34019-899 Nova Lima, Minas Gerais – Petrus Software

Posted 12 days ago


Job Description

Senior Data Engineer – AWS / Databricks / Spark

Hybrid – Nova Lima (MG) | up to R$ 13,000 (subject to technical validation)


We are looking for someone who lives and breathes production data: someone who designs scalable pipelines, optimizes cost and performance, and raises the bar for quality and governance on the platform.

The challenge


  • Build and evolve ETL/ELT pipelines on AWS (S3, Lambda, DynamoDB, MSK/Kafka, Debezium), integrating the data lake and the data warehouse.
  • Write SQL and Python day to day in Databricks/Spark, owning observability (logs, metrics, alerts) and end-to-end reliability.
  • Work side by side with Product, Analytics, and Data Science to take data features from draft to production.


Your responsibilities

  • Design, version, and maintain robust pipelines (testing, code review, CI/CD with Git).
  • Model data and optimize Spark jobs and queries (partitioning, caching, AQE).
  • Monitor, debug, and reduce execution time and cost in a Linux/AWS environment.
  • Ensure data quality, security, and governance (lineage, catalogs, permissions).
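The optimization levers above (partitioning, caching, AQE) map onto a handful of Spark SQL settings. A hedged sketch follows: these are real Spark 3 configuration keys, but the values are placeholders for illustration, not recommendations.

```python
# Spark tuning knobs mentioned above, kept as a plain dict so the sketch
# runs without pyspark installed. Values are illustrative placeholders.
spark_conf = {
    # Adaptive Query Execution: re-optimizes query plans at runtime
    "spark.sql.adaptive.enabled": "true",
    # Coalesce many small shuffle partitions once runtime stats are known
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    # Split oversized partitions to mitigate skewed joins
    "spark.sql.adaptive.skewJoin.enabled": "true",
    # Baseline shuffle parallelism (placeholder value)
    "spark.sql.shuffle.partitions": "200",
}

# With a live SparkSession this would be applied roughly as:
# for key, value in spark_conf.items():
#     spark.conf.set(key, value)
```

On Databricks these settings are usually applied at the cluster level, and AQE is typically enabled by default on recent runtimes.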


Technical requirements

  • Strong SQL and Python for data work.
  • Databricks and Apache Spark (performance tuning).
  • Git and CI/CD practices.
  • AWS: S3, Lambda, DynamoDB, MSK (Kafka), Debezium.
  • Comfortable in a Linux terminal.


Soft skills we value

  • Clear, collaborative communication (squad-based work).
  • Proactivity and an ownership mindset.
  • Intermediate English (reading/writing and participation in meetings).


Interested? Send your LinkedIn/CV with the subject “Eng. Dados Sênior – Nova Lima” to the application e-mail/link.


#DataEngineering #Databricks #Spark #AWS #Kafka #Debezium #Python #SQL #ELT #DataLake #GovernançaDeDados #VagasTI #NovaLima #MinasGerais #PetrusSoftware


Big Data Expert

São Paulo, São Paulo – Cybage Software

Posted 12 days ago


Job Description

We are seeking an experienced Big Data Expert to design and implement scalable data solutions that enable data-driven decision-making across the organization. This role focuses on architecting end-to-end big data pipelines and ensuring data quality, governance, and efficient processing of large datasets. It is weighted toward data architecture (Databricks, AWS (Redshift, Dynamo, etc.)), data governance, and data management, with an emphasis on client-facing workshops, solution sessions, and capability presentations. Candidates should be strong communicators, ideally with presales/solutioning experience at an IT services provider, and based in either the USA or Brazil.


Responsibilities:

  • Big Data Architecture: Design and implement scalable, distributed data processing systems using big data technologies (e.g., Hadoop, Spark).
  • Data Pipelines: Build and optimize ETL/ELT pipelines to handle large-scale data ingestion, transformation, and storage.
  • Data Governance: Establish data governance frameworks, including policies for data security, privacy, and compliance.
  • Quality Control: Develop and enforce data quality standards, leveraging tools to monitor and ensure data accuracy and consistency.
  • Cloud Integration: Design big data solutions on cloud platforms (AWS, GCP, Azure), leveraging cloud-native tools.
  • Collaboration: Work with data engineers, analysts, and business stakeholders to align data architecture with organizational goals.
  • Innovation and Optimization: Stay updated on big data technologies and optimize systems for performance, scalability, and cost-efficiency.
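The quality-control responsibility above boils down to running declarative checks over batches of records. A minimal, library-free sketch of the idea (in practice a tool such as Apache Griffin, Talend, or Informatica would own this; the sample rows are invented):

```python
# Two basic data-quality checks: required fields present, keys unique.

def check_not_null(records, field):
    """Return the rows where `field` is missing or None."""
    return [r for r in records if r.get(field) is None]

def check_unique(records, field):
    """Return values of `field` that appear more than once."""
    seen, dupes = set(), set()
    for r in records:
        v = r.get(field)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 2, "email": "b@x.com"},
]

null_violations = check_not_null(rows, "email")  # one row fails
dupe_ids = check_unique(rows, "id")              # id 2 repeats
```

Real frameworks add the parts this sketch omits: scheduling the checks, recording results over time, and alerting on violations.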


Required Skills:

  • Big Data Expertise: Hands-on experience with Hadoop, Spark, Kafka, and other big data frameworks.
  • Data Governance: Knowledge of governance frameworks and tools like Collibra, Alation, or similar.
  • Quality Control: Proficiency in implementing data quality measures and tools (e.g., Apache Griffin, Talend, or Informatica).
  • Cloud Platforms: Experience with cloud-based data solutions (BigQuery, AWS EMR, Dataproc).
  • Programming Skills: Proficiency in Python, Java, or Scala for data processing.
  • Database Knowledge: Strong understanding of SQL and NoSQL databases.
  • Problem-solving: Strong analytical skills for troubleshooting and optimizing complex data architectures.


Preferred:

  • Certifications in big data or cloud technologies (e.g., GCP Data Engineer, AWS Big Data Specialty).
  • Experience with MLOps pipelines and integrating AI/ML workflows with big data systems.
  • Knowledge of metadata management and data lineage tools.
  • Familiarity with GDPR, CCPA, and other data privacy regulations.

Data Engineer

São Paulo, São Paulo – able.digital

Posted 9 days ago


Job Description

About the Role

We are seeking an Intermediate Data Engineer to support our data infrastructure initiatives by connecting analytics systems, managing data pipelines, and enabling our teams with clean, accessible data. This role will focus on integrating key data sources, developing efficient pipelines, and ensuring seamless data flow across platforms like Google Analytics, BigQuery, and Databricks.


Key Responsibilities

  • Data Integration:
    • Integrate Google Analytics with BigQuery to centralize and structure web analytics data.
    • Establish and optimize the connection between BigQuery and cloud Databricks environments for downstream analytics and modeling.

Pipeline Development:

  • Design, build, and maintain data pipelines within Databricks to support the sandbox team and analytical workloads.
  • Ensure pipelines are reliable, scalable, and adhere to best practices for data quality and performance.

Collaboration & Support:

  • Partner with data analysts, engineers, and product teams to understand data needs and translate them into efficient engineering solutions.
  • Assist in implementing version control, CI/CD processes, and monitoring for data workflows.


Requirements

  • 2–4 years of experience as a Data Engineer or in a similar data infrastructure role.
  • Strong proficiency in SQL and experience with BigQuery.
  • Hands-on experience working with Databricks (or similar cloud-based data platforms).
  • Understanding of data pipeline orchestration, ETL processes, and data modeling concepts.
  • Familiarity with the Google Analytics data export and schema.
  • Experience with Python, PySpark, or other data engineering languages is a plus.
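As context for the Google Analytics export requirement above: the GA4 BigQuery export stores each event's parameters as a repeated key/value record, which usually needs flattening before modeling. A hedged sketch with invented sample data (field names follow the public GA4 export schema):

```python
# Flatten one GA4-style export event into one row per event parameter.

def flatten_event(event):
    """Yield a flat row for each parameter of a GA4 export event."""
    for param in event.get("event_params", []):
        value = param.get("value", {})
        # GA4 stores each value in exactly one typed sub-field.
        typed = (value.get("string_value")
                 or value.get("int_value")
                 or value.get("double_value"))
        yield {"event_name": event["event_name"],
               "param_key": param["key"],
               "param_value": typed}

sample = {
    "event_name": "page_view",
    "event_params": [
        {"key": "page_title", "value": {"string_value": "Home"}},
        {"key": "engagement_time_msec", "value": {"int_value": 1200}},
    ],
}

rows = list(flatten_event(sample))  # two flat rows, one per parameter
```

In BigQuery itself the same flattening is typically done with `UNNEST(event_params)`; this Python version just makes the shape of the data concrete.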


Nice to Have

  • Exposure to cloud environments (GCP, AWS, or Azure).
  • Experience with Airflow, DBT, or other orchestration tools.
  • Knowledge of data governance and data security best practices.


Why Join Us

  • Be part of a growing data ecosystem that directly drives business insights.
  • Collaborate with cross-functional teams across engineering, analytics, and product.
  • Opportunity to work on modern cloud infrastructure and cutting-edge data tools.

Data Engineer

São Paulo, São Paulo – PIXIE

Posted 10 days ago


Job Description

Position Overview

As an Azure Data Engineer (with DevOps focus) , you will design, build, and maintain scalable data solutions and cloud infrastructure to support our retail applications. You’ll develop and optimize data pipelines, implement automation, and apply DevOps practices to ensure reliability, security, and performance in our Azure environment. Collaboration with cross-functional teams will be key to aligning data engineering and DevOps strategies with business goals.


Language Requirement: Proficient English (C1/C2 level is often expected, as you’ll collaborate with business and technical teams, create documentation, and conduct trainings).


Key Accountabilities

  • Collaborate with cross-functional teams to align DevOps and data engineering practices with business objectives.
  • Architect and maintain CI/CD pipelines for retail applications.
  • Deploy and manage containerized workloads using Azure Kubernetes Service (AKS) and Kubernetes.
  • Manage Azure infrastructure, integrations, and environment optimization.
  • Monitor and optimize Azure resources for performance, reliability, and cost.
  • Automate infrastructure provisioning using Terraform, ARM templates, and scripting tools (PowerShell, Bash, Python).
  • Implement alerting and observability tools (Azure Monitor, Grafana).
  • Provide support for DevOps operations and incident, problem, and change management processes.
  • Demonstrate strong SQL skills and support BI/reporting tools (SSIS, SSRS, SSAS).
  • Ensure digital security, manage vulnerabilities, and participate in disaster recovery planning and testing.
  • Write scripts using Python and PowerShell.
  • Implement containerization strategies and troubleshoot networking issues.


Essential Skills and Experience

  • 10+ years in technology with strong DevOps and cloud experience.
  • Deep understanding of Azure cloud infrastructure and services, including AKS, VMs, load balancers, databases, storage, networking, and security.
  • Broad development experience in object-oriented programming languages (e.g., C#, Java, Python).
  • Strong relational database experience with MS SQL Server and BI/reporting tools.
  • Experience with infrastructure automation (Terraform, CloudFormation), configuration management (Ansible, Chef), and CI/CD tools (Azure DevOps, Jenkins, GitLab).
  • Proficiency in scripting languages (PowerShell, Bash) and programming (Python).
  • Strong analytical, critical thinking, collaboration, and communication skills.
  • Knowledge of modern Service Delivery methods (Site Reliability Engineering, ITIL, Product-Based delivery).
  • Understanding of cloud security best practices and implementation of security controls.

Desired Qualifications

  • Microsoft Certified: Azure DevOps Engineer Expert (validates skills in designing and implementing DevOps practices for Azure)
  • Microsoft Certified: Azure Solutions Architect Expert (validates skills in designing and implementing solutions that run on Azure)
  • Certified Kubernetes Administrator (CKA) (validates skills in managing and deploying applications on Kubernetes)
  • Experience with C&I Retail Energy (power and gas)

Data Engineer

Tata Consultancy Services

Posted 11 days ago


Job Description

Come to one of the biggest IT Services companies in the world! Here you can transform your career!

Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development: the ideal setting to expand ideas with the right tools, contributing to our success in a collaborative environment.

We are looking for a Data Engineer who wants to learn and transform their career.

In this role you will: (responsibilities)

  • Fluent English;
  • Power BI reports;
  • DataViz knowledge;
  • Dashboard development;
  • Problem resolution and creativity;
  • Agile methodologies; JIRA/Confluence;
  • Deal with data processing in Power BI;
  • Conduct interviews, surveys, or shadowing to understand user needs and behaviors;
  • Create dashboards.

Key skills: English; Power BI; DataViz

What can you expect from us?

• Professional development and constant evolution of your skills, always in line with your interests.

• Opportunities to work outside Brazil

• A collaborative, diverse and innovative environment that encourages teamwork.

What do we offer?

TCS Benefits – Brazil:

Health insurance

Dental Plan

Life insurance

Transportation vouchers

Meal/Food Voucher

Childcare assistance

Gympass

TCS Cares – free 0800 that provides psychological assistance (24 hrs/day), legal, social and financial assistance to associates

Partnership with SESC

Reimbursement of Certifications

Free TCS Learning Portal – Online courses and live training

International experience opportunity

Discount Partnership with Universities and Language Schools

Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire

TCS Gems – Recognition for performance

Xcelerate – Free Mentoring Career Platform

At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+ people, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue a range of inclusion and social-responsibility initiatives to build a TCS that respects individuality. Come be a TCSer!


#Buildingonbelief


ID:


Data Engineer

São Paulo, São Paulo – Tata Consultancy Services

Posted 12 days ago


Job Description

Come to one of the biggest IT Services companies in the world! Here you can transform your career!


Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development: the ideal setting to expand ideas with the right tools, contributing to our success in a collaborative environment.


We are looking for a Data Engineer who wants to learn and transform their career.


In this role you will:


  • Proficient in PySpark, Python, and SQL, with at least 5 years of experience.
  • Working experience with the Palantir Foundry platform is a must.
  • Experience designing and implementing data analytics solutions on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred).
  • Proven track record of understanding and transforming customer requirements into a best-fit design and architecture.
  • Demonstrated experience in end-to-end data management, data modelling, and data transformation for analytical use cases.
  • Proficient in SQL (Spark SQL preferred).
  • Experience with JavaScript/HTML/CSS a plus. Experience working in a Cloud environment such as Azure or AWS is a plus.
  • Experience with Scrum/Agile development methodologies.
  • At least 7 years of experience working with large scale software systems.
  • Bachelor's degree or equivalent in Computer Science, Data Science, or a similar discipline.



What can you expect from us?


• Professional development and constant evolution of your skills, always in line with your interests.

• Opportunities to work outside Brazil

• A collaborative, diverse and innovative environment that encourages teamwork.


What do we offer?


  • TCS Benefits – Brazil:
  • Health insurance
  • Dental Plan
  • Life insurance
  • Transportation vouchers
  • Meal/Food Voucher
  • Childcare assistance
  • Gympass
  • TCS Cares – free 0800 that provides psychological assistance (24 hrs/day), legal, social and financial assistance to associates
  • Partnership with SESC
  • Reimbursement of Certifications
  • Free TCS Learning Portal – Online courses and live training
  • International experience opportunity
  • Discount Partnership with Universities and Language Schools
  • Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire
  • TCS Gems – Recognition for performance
  • Xcelerate – Free Mentoring Career Platform


At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+ people, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue a range of inclusion and social-responsibility initiatives to build a TCS that respects individuality. Come be a TCSer!


#Buildingonbelief


Data Engineer

São Paulo, São Paulo – Elios Talent

Posted 12 days ago


Job Description

Data Engineer


Highlights

Build and maintain pipelines that power data-driven insights

Hybrid or remote flexibility

High-growth opportunities in cloud and big data technologies


Role Summary

We are seeking a Data Engineer to architect and maintain data infrastructure that supports analytics and machine learning. This role ensures that high-quality data is available, secure, and scalable across the business.


Key Responsibilities

• Design and implement scalable data pipelines and workflows

• Build ETL/ELT processes for structured and unstructured data

• Partner with analysts and scientists to meet data needs

• Optimize databases and data storage systems

• Ensure data quality, security, and compliance standards


Requirements

• Proficiency in SQL and strong data modeling skills

• Experience with big data tools (Spark, Hadoop, Kafka)

• Skilled in Python, Scala, or Java for data engineering

• Familiarity with cloud data warehouses (Snowflake, BigQuery, Redshift)

• Knowledge of APIs and data integration practices
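The requirements above center on ETL/ELT. As a toy illustration of the extract → transform → load shape those pipelines share (all names and data invented, with a plain list standing in for a warehouse table):

```python
# Three tiny stages: pull raw records, clean and derive fields, load.

def extract():
    """Stand-in for reading from a source system or API."""
    return [{"order_id": "A1", "amount": "10.50"},
            {"order_id": "A2", "amount": "3.25"}]

def transform(rows):
    """Cast string amounts to numbers and derive an integer-cents field."""
    return [{"order_id": r["order_id"],
             "amount": float(r["amount"]),
             "amount_cents": int(round(float(r["amount"]) * 100))}
            for r in rows]

def load(rows, table):
    """Stand-in for writing to a warehouse table; returns rows loaded."""
    table.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Tools like Spark or a cloud warehouse change the scale and the APIs, but the stages and the data-contract concerns (types, derived fields) stay the same.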


Why Join Us

As a Data Engineer, you’ll build the backbone of our data-driven operations. You’ll help ensure that insights and models are powered by reliable, high-quality data.


About Us

We are a software company dedicated to turning data into actionable intelligence. Our teams deliver solutions that fuel smarter decision-making and innovation.


Data Engineer

Tata Consultancy Services

Posted 12 days ago


Job Description

Come to one of the biggest IT Services companies in the world! Here you can transform your career!


Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development: the ideal setting to expand ideas with the right tools, contributing to our success in a collaborative environment.


We are looking for a "Data Engineer" (remote) who wants to learn and transform their career.


In this role you will: (responsibilities)


  • Snowflake, DBT, SQL
  • Proficient in English
  • Agile Methodologies;
  • Operational Monitoring: Proactively monitor data jobs and pipelines to ensure smooth execution and timely delivery of datasets. Respond to alerts and resolve issues with minimal downtime.
  • Pipeline Maintenance: Maintain and enhance DBT models and SQL scripts to support evolving business needs and ensure data accuracy.
  • Warehouse Operations: Oversee Snowflake operations including user access, query performance, and resource utilization.
  • Incident Response: Act as a first responder for data job failures, conducting root cause analysis and implementing preventive measures.
  • Collaboration: Work closely with data engineers, analysts, and business stakeholders to support operational data needs and troubleshoot issues.
  • Process Optimization: Identify opportunities to automate manual tasks, improve pipeline efficiency, and reduce operational overhead.
  • Documentation & Reporting: Maintain clear documentation of operational procedures, job schedules, and incident logs. Provide regular updates to stakeholders on system health and performance.
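The operational-monitoring and incident-response bullets above amount to triaging job-run metadata. A minimal sketch of that triage (field names are hypothetical, not a Snowflake or DBT API; real alerting would hang off the scheduler):

```python
# Separate failed runs from successful-but-slow runs against an SLA.

def triage(runs, sla_minutes=30):
    """Return failed jobs and successful jobs that blew the SLA."""
    failures = [r["job"] for r in runs if r["status"] == "failed"]
    slow = [r["job"] for r in runs
            if r["status"] == "success" and r["minutes"] > sla_minutes]
    return {"failures": failures, "slow": slow}

runs = [
    {"job": "dbt_daily", "status": "success", "minutes": 12},
    {"job": "ingest_orders", "status": "failed", "minutes": 5},
    {"job": "rebuild_marts", "status": "success", "minutes": 45},
]

report = triage(runs)  # one failure, one SLA breach
```

The failures list would feed incident response (root cause analysis), while the slow list feeds the process-optimization work the posting describes.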


What can you expect from us?


• Professional development and constant evolution of your skills, always in line with your interests.

• Opportunities to work outside Brazil

• A collaborative, diverse and innovative environment that encourages teamwork.


What do we offer?


TCS Benefits – Brazil:

Health insurance

Dental Plan

Life insurance

Transportation vouchers

Meal/Food Voucher

Childcare assistance

Gympass

TCS Cares – free 0800 that provides psychological assistance (24 hrs/day), legal, social and financial assistance to associates

Partnership with SESC

Reimbursement of Certifications

Free TCS Learning Portal – Online courses and live training

International experience opportunity

Discount Partnership with Universities and Language Schools

Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire

TCS Gems – Recognition for performance

Xcelerate – Free Mentoring Career Platform


At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+ people, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue a range of inclusion and social-responsibility initiatives to build a TCS that respects individuality. Come be a TCSer!


#Buildingonbelief


ID:

