54 Hadoop Jobs - Sertãozinho
Big Data Engineer - Remote Work | REF#281320
Posted 19 days ago
Job Description
At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.
Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.
When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.
Big Data Engineer at BairesDev
Big Data Engineers face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, and IoT. They are passionate, active members of our community who enjoy sharing knowledge, challenging and being challenged by others, and who are genuinely committed to improving themselves and those around them.
What You’ll Do:
- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
- Impact the core of business by improving existing architecture or creating new ones.
- Create scalable and high-availability solutions, and contribute to the key differential of each client.
Here is what we are looking for:
- 6+ years of experience working as a Developer (Ruby, Python, Java, or JS preferred).
- 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficient in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
- Advanced English level.
- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing batch and real-time data ingestion and processing processes.
- Hands-on experience with managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time series analysis using open-source and commercial solutions.
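The Lambda/Kappa/Delta item above refers to serving queries by merging a periodically recomputed batch view with an incremental real-time (speed) view. A minimal, library-free Python sketch of that serving-layer merge (event names are invented for illustration; a production system would use tools like Spark and Kafka for the two layers):

```python
from collections import defaultdict

def batch_view(events):
    """Batch layer: recompute per-key totals from the full event history."""
    totals = defaultdict(int)
    for key, value in events:
        totals[key] += value
    return dict(totals)

def speed_view(recent_events):
    """Speed layer: incremental totals for events the batch view hasn't seen yet."""
    totals = defaultdict(int)
    for key, value in recent_events:
        totals[key] += value
    return dict(totals)

def serve(batch, speed):
    """Serving layer: answer queries from the merged batch + speed views."""
    merged = dict(batch)
    for key, value in speed.items():
        merged[key] = merged.get(key, 0) + value
    return merged

history = [("clicks", 10), ("views", 40), ("clicks", 5)]  # already processed by the batch layer
recent = [("clicks", 2), ("signups", 1)]                  # arrived since the last batch run
result = serve(batch_view(history), speed_view(recent))
# result == {"clicks": 17, "views": 40, "signups": 1}
```

The key design point is that the speed layer only ever holds the small delta since the last batch run, so it can stay fast while the batch layer guarantees eventual correctness.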
How we make your work (and your life) easier:
- 100% remote work (from anywhere).
- Excellent compensation in USD, or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.
Join a global team where your unique talents can truly thrive!
Data Engineer
Today
Job Description
welhome is a startup that manages properties on platforms such as Airbnb and Booking, using technology to simplify life for people who want to earn income from their properties. Our goal is to help investors and owners who want to earn more than traditional rent, delivering a reliable, connected, and efficient operation.
We are looking for someone with an entrepreneurial profile who wants to roll up their sleeves and build, with freedom and responsibility. You will help us get our house in order, implement best practices, and scale our data infrastructure.
Your responsibilities
- Advance the build-out of our data lake on Azure
- Create robust pipelines to connect data sources and keep everything running with quality and performance
- Support strategic decisions with data and automate operational reporting
- Work closely with the product, marketing, sales, customer service, and planning teams
Requirements
- Experience with data engineering in cloud environments (preferably Azure)
- Mastery of Python and SQL
- Knowledge of streaming tools (such as Kafka) and data pipelines
- Good practices in version control and data architecture
- A GitHub profile with relevant projects or contributions
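The streaming requirement above (tools such as Kafka) typically comes down to windowed aggregations over timestamped events. A plain-Python sketch of a tumbling-window count, standing in for what a Kafka Streams or Spark Structured Streaming job would compute (event names are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences per key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # A tumbling window is identified by its start time.
        window_start = ts - (ts % window_seconds)
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(5, "booking"), (30, "airbnb"), (65, "booking"), (70, "booking")]
print(tumbling_window_counts(events))
# {0: {'booking': 1, 'airbnb': 1}, 60: {'booking': 2}}
```

Unlike this sketch, a real streaming engine also has to handle late-arriving events and window expiry, but the per-window bucketing logic is the same.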
What we value
- Experience with data modeling and governance
- Involvement in projects with early-stage startups or fast-moving companies
- Knowledge of CI/CD for data workflows
- Product vision and an understanding of how data impacts operations and the business
What we offer
- Flash card (meal benefit)
- SulAmérica health plan
- TotalPass
- A dynamic, collaborative environment with plenty of autonomy
- A remote position with occasional in-person meetups
- An office in Itaim available for anyone who prefers to work on-site
Data Engineer
Posted 3 days ago
Job Description
Are you looking to join a highly regarded IT development firm?
Well, look no further! WebCreek is hiring a skilled Data Engineer with 3+ years of experience and a high level of English to work remotely from Latin America.
The role focuses on building and maintaining scalable data pipelines and models using Azure Databricks, Spark, Python, and SQL. The ideal candidate has strong knowledge of Azure data services, and a solid understanding of data architecture and governance.
What You'll Do
- Design, develop, and maintain data pipelines and data models using Azure Databricks and related Azure data services.
- Collaborate with data analysts and business stakeholders to understand data needs and deliver robust, high-performance solutions.
- Build and optimize data architecture to support data ingestion, processing, and analytics workloads.
- Implement best practices for data governance, security, and performance tuning in a cloud-native environment.
- Work with structured and unstructured data from various sources including APIs, files, databases, and data lakes.
- Create reusable code and components for data processing and modeling workflows.
- Monitor and troubleshoot jobs, ensuring data quality, reliability, and efficiency.
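The monitoring duty above centers on data-quality checks. As an illustration of the kind of batch check a pipeline job might run (column names are invented; a real Databricks setup might use Delta Lake constraints or a testing framework instead), a plain-Python sketch:

```python
def run_quality_checks(rows, required, unique_key):
    """Run two basic data-quality checks on a batch of records:
    no nulls in required columns, and no duplicate business keys.
    Returns a list of human-readable failures (empty list = pass)."""
    failures = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                failures.append(f"row {i}: missing required column '{col}'")
        key = row.get(unique_key)
        if key in seen:
            failures.append(f"row {i}: duplicate {unique_key}={key!r}")
        seen.add(key)
    return failures

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 1, "amount": 12.5},  # duplicate id
    {"id": 2, "amount": None},  # missing amount
]
failures = run_quality_checks(rows, required=["id", "amount"], unique_key="id")
print(failures)
```

In practice a job like this would run after each load and raise an alert (rather than just print) when the failure list is non-empty.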
What You Have
- 3+ years of experience as a Data Engineer, Big Data Engineer, or similar role.
- Excellent English communication skills (Spoken/Written).
- Experience in developing and maintaining data pipelines using Azure Databricks, Spark, and other Big Data technologies.
- Strong hands-on experience with Azure Databricks, Apache Spark, and Delta Lake.
- Proficiency in Python, SQL, and PySpark.
- Experience with Azure services: Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure Data Factory, Event Hub, or similar.
- Solid understanding of data modeling (dimensional modeling, star/snowflake schemas).
- Familiarity with CI/CD pipelines and version control (e.g., Git).
- Experience working in agile/scrum teams.
- Good problem-solving, critical thinking, and presentation skills.
- A diligent and responsible approach to work.
What You'll Gain
- A full-time position with long-term growth opportunities.
- Competitive salary with regular performance-based reviews.
- Access to top-tier benefits and professional development programs.
- In-house IT training and industry-recognized certifications.
- Flexible, remote-friendly work environment.
- Health care expense sponsorship to support your well-being.
- Fitness sponsorship to help you stay active and healthy.
- Work with top global brands and industry-leading companies.
- A collaborative international team with opportunities to work abroad.
Who We Are
WebCreek provides world-class software development teams and technical staff augmentation to Fortune 500 companies and other global industry leaders. With over 29 years of experience and a global presence spanning 20+ offices across North America, Latin America, Asia, and Europe, we deliver top-tier digital solutions to the companies that power the world.
WebCreek is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, nationality, genetics, pregnancy, disability, age, veteran status, or other characteristics.
Data Engineer
Posted 3 days ago
Job Description
Come to one of the biggest IT Services companies in the world! Here you can transform your career!
Why join TCS? At TCS, we believe people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development. It is the ideal setting to expand ideas with the right tools, contributing to our success in a collaborative environment.
We are looking for a Data Engineer (remote) who wants to learn and transform their career.
In this role, you will need:
- Snowflake, DBT, and SQL
- Proficiency in English
- Agile methodologies
Your responsibilities:
- Operational Monitoring: Proactively monitor data jobs and pipelines to ensure smooth execution and timely delivery of datasets. Respond to alerts and resolve issues with minimal downtime.
- Pipeline Maintenance: Maintain and enhance DBT models and SQL scripts to support evolving business needs and ensure data accuracy.
- Warehouse Operations: Oversee Snowflake operations including user access, query performance, and resource utilization.
- Incident Response: Act as a first responder for data job failures, conducting root cause analysis and implementing preventive measures.
- Collaboration: Work closely with data engineers, analysts, and business stakeholders to support operational data needs and troubleshoot issues.
- Process Optimization: Identify opportunities to automate manual tasks, improve pipeline efficiency, and reduce operational overhead.
- Documentation & Reporting: Maintain clear documentation of operational procedures, job schedules, and incident logs. Provide regular updates to stakeholders on system health and performance.
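The Operational Monitoring and Incident Response duties above largely amount to retrying failed jobs, alerting on each failure, and escalating when retries are exhausted. A plain-Python sketch of that first-responder loop (the flaky job is simulated; a real setup would hook `alert` to a paging or chat tool and run against Snowflake/DBT jobs):

```python
import time

def run_with_retry(job, attempts=3, backoff_seconds=0.0, alert=print):
    """Retry a data job with linear backoff, alerting on each failure
    and re-raising the last error if every attempt is exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return job()
        except Exception as exc:
            alert(f"attempt {attempt}/{attempts} failed: {exc}")
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)

calls = {"n": 0}
def flaky_load():
    """Simulated job that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("warehouse connection reset")
    return "loaded 1000 rows"

print(run_with_retry(flaky_load))  # succeeds on the third attempt
```

Logging every failed attempt (not just the final outcome) is what makes the later root-cause analysis and incident documentation possible.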
What can you expect from us?
• Professional development and constant evolution of your skills, always in line with your interests.
• Opportunities to work outside Brazil
• A collaborative, diverse and innovative environment that encourages teamwork.
What do we offer?
TCS Benefits – Brazil:
Health insurance
Dental Plan
Life insurance
Transportation vouchers
Meal/Food Voucher
Childcare assistance
Gympass
TCS Cares – a free 0800 hotline providing psychological assistance (24 hrs/day), plus legal, social, and financial assistance to associates
Partnership with SESC
Reimbursement of Certifications
Free TCS Learning Portal – Online courses and live training
International experience opportunity
Discount Partnership with Universities and Language Schools
Bring Your Buddy – By referring people you become eligible to receive a bonus for each hire
TCS Gems – Recognition for performance
Xcelerate – Free Mentoring Career Platform
At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA+, religion, race, and ethnicity. All our opportunities are based on these principles. We pursue inclusion and social responsibility initiatives in order to build a TCS that respects individuality. Come be a TCSer!
#Buildingonbelief
Data Engineer
Posted 10 days ago
Job Description
About the Product
Niche is the leader in school search. Our mission is to make researching and enrolling in schools easy, transparent, and free. With in-depth profiles on every school and college in America, 140 million reviews and ratings, and powerful search tools, we help millions of people find the right school for them. We also help thousands of schools recruit more best-fit students, by highlighting what makes them great and making it easier to visit and apply. Niche is all about finding where you belong, and that mission inspires how we operate every day. We want Niche to be a place where people truly enjoy working and can thrive professionally.
About the Role
Niche is looking for a skilled Data Engineer to join the Data Engineering team. You'll build and support data pipelines that can handle the volume and complexity of our data while ensuring scale, accuracy, availability, observability, security, and optimum performance. You'll develop and maintain data warehouse tables, views, and models for consumption by analysts and downstream applications. This is an exciting opportunity to join our team as we're building the next generation of our data platform and engineering capabilities. You'll report to the Manager, Data Engineering (Core).
What You Will Do
- Design, build, and maintain scalable, secure data pipelines that ensure data accuracy, availability, and performance.
- Develop and support data models, warehouse tables, and views for analysts and downstream applications.
- Ensure observability and quality through monitoring, lineage tracking, and alerting systems.
- Implement and maintain core data infrastructure and tooling (e.g., dbt Cloud, Airflow, RudderStack, cloud storage).
- Collaborate cross-functionally with analysts, engineers, and product teams to enable efficient data use.
- Integrate governance and security controls such as access management and cost visibility.
- Contribute to platform evolution and developer enablement through reusable frameworks, automation, and documentation.
What We Are Looking For
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
- 3-5 years of experience in data engineering.
- Demonstrated experience building and supporting large-scale data pipelines (streaming and batch processing).
- Software engineering mindset, leading with the principles of source control, infrastructure as code, testing, modularity, automation, CI/CD, and observability.
- Proficiency in Python, SQL, Snowflake, Postgres, DBT, Airflow.
- Experience working with Google Analytics, marketing, ad, and social media platforms, CRM/Salesforce, and JSON data; government datasets and geospatial data are a plus.
- Knowledge and understanding of the modern data platform, and its key components – ingestion, transformation, curation, quality, governance, and delivery.
- Knowledge of data modeling techniques (3NF, Dimensional, Vault).
- Experience with Docker, Kubernetes, Kafka will be a huge plus.
- Self-starter, analytical problem solver, highly attentive to detail, effective communicator, and obsessed with good documentation.
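The data modeling techniques listed above (dimensional modeling in particular) revolve around splitting denormalized records into a fact table and dimension tables joined by surrogate keys. A plain-Python sketch of that split (column names are invented; in this stack the real work would live in dbt/SQL models):

```python
def split_star_schema(raw_rows, dim_cols, key_name):
    """Split denormalized rows into one dimension table plus a fact
    table that references it by surrogate key."""
    dim, fact = {}, []
    for row in raw_rows:
        dim_values = tuple(row[c] for c in dim_cols)
        # Reuse the surrogate key if this dimension row was already seen.
        key = dim.setdefault(dim_values, len(dim) + 1)
        fact_row = {c: v for c, v in row.items() if c not in dim_cols}
        fact_row[key_name] = key
        fact.append(fact_row)
    dim_table = [dict(zip(dim_cols, values), **{key_name: k})
                 for values, k in dim.items()]
    return dim_table, fact

raw = [
    {"school": "Oak High", "state": "PA", "reviews": 120},
    {"school": "Elm Prep", "state": "NY", "reviews": 80},
    {"school": "Oak High", "state": "PA", "reviews": 35},
]
dims, facts = split_star_schema(raw, dim_cols=["school", "state"], key_name="school_key")
# dims has two rows (Oak High/PA, Elm Prep/NY); facts reference them by school_key
```

Deduplicating the dimension is what keeps the fact table narrow and lets repeated attributes be stored once, which is the point of the star layout.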
First Year Plan
During the 1st Month:
- Immerse yourself in the company culture, and get to know your team and key stakeholders.
- Build relationships with data engineering team members, understand the day to day operating model, and stakeholders that we interact with on a daily basis.
- Start to learn about our data platform infrastructure, data pipelines, source systems, and inter-dependencies.
- Start participating in standups, planning, and retrospective meetings.
- Start delivering on assigned sprint stories and show progress through completed tasks that contribute to team goals.
Within 3 Months:
- Start delivering on assigned data engineering tasks to support our day-to-day work and roadmap.
- Start troubleshooting production issues, and participating in on-call activities.
- Identify areas for improving data engineering processes, and share with the team.
Within 6 Months:
- Contribute consistently towards building our data platform, which includes data pipelines, and data warehouse layers.
- Start to independently own workstreams whether it is periodic data engineering activities, or work items in support of our roadmap.
- Deepen your understanding, and build subject matter expertise of our data & ecosystem.
Within 12 Months:
- Your contributions have led to us making significant progress in implementing the data platform strategy, and key data initiatives to support the company’s growth.
- You've established yourself as a key team member with subject matter expertise within data engineering.
Data Engineer
Posted 10 days ago
Job Description
Dexian, launched in 2023, has a global presence and brings nearly 30 years of experience through its legacy companies, chiefly the combination of DISYS and Signature Consultants. We started in Brazil in 2007 as DISYS and have since won more than 60 clients across different sectors of the economy. As Dexian, we have become one of the largest providers of staffing, IT solutions, and workforce augmentation. We lead the industry through a unique service delivery model that combines global scale, full-service projects, and tactical agility to modernize hiring models. We help our clients find the best way to address their talent gaps and support their digital transformation. We have offices strategically located in four major capitals, Curitiba (PR), Porto Alegre (RS), Rio de Janeiro (RJ), and São Paulo (SP), keeping us close to our clients and consultants. We are also a Minority-Owned Company and are proud to have been born in diversity.
What to expect from us:
People are at the center of our strategy and are our greatest asset. We are committed to attracting and retaining the best talent and to building real, lasting connections. That is why we act as a bridge between highly qualified professionals and consistent, prosperous opportunities. We believe that bringing the right talent, technology, and organizations together helps unlock transformative results and reach new levels of success for the clients we serve and the consultants we hire. Five core values guide us: Integrity, Transparency, Authenticity, Ingenuity, and Empathy. Dexian is committed to corporate social responsibility and invests in programs that support diversity and inclusion.
If you share these values, this is the place for you!
Project details: You will work on a global, strategic project focused on Manufacturing Solutions. You will join a multicultural team responsible for developing and maintaining a data platform that supports advanced analytics and data science workloads in the manufacturing domain.
In your day-to-day, you will:
- Design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Implement efficient processes that ensure accurate, on-time data delivery to target systems and applications.
- Ensure quality, security, and governance at every stage of the pipeline.
- Collaborate with global teams in an agile environment.
- Support Machine Learning, Artificial Intelligence, and MLOps initiatives.
What you need to have:
- Solid experience as a Data Engineer.
- Programming skills in Python and SQL.
- Experience with Databricks, Spark SQL, and PySpark.
- Knowledge of data visualization (preferably Power BI).
- Command of the Azure data engineering stack: Data Factory, Data Lake Storage, Synapse.
- Hands-on experience with Big Data.
- Experience with agile methodologies and CI/CD pipelines (Azure DevOps, GitHub).
- Knowledge of Infrastructure as Code (Terraform) is a plus.
- Understanding of data security, governance, quality, data catalog, and MLOps principles.
- Advanced/fluent English (essential for this global role).
Nice to have:
- Experience with Microsoft Fabric.
- A strong background in Data Governance and Data Quality.
- The ability to work in multicultural teams.
Where will you work?
From the comfort of your home: 100% remote.
Employment type:
CLT.
Benefits:
- Meal voucher;
- Food voucher;
- Health insurance;
- Dental insurance;
- Life insurance;
- Gym Pass;
- Pet allowance.
We also offer:
- A study incentive after completing 1 year at Dexian;
- A development and training program;
- Leadership training through our Leadership Track.