205 Data Science jobs - Belo Horizonte
Prodemge - Data Science - Belo Horizonte - MG 11628
Posted 10 days ago
Job Description
- Currently enrolled between the 3rd and 5th semester
- Basic Microsoft Office skills
- Resident in Belo Horizonte - MG.
- Build modules and/or algorithm models to extract value and information from transactional information system data, using Python and/or R and their frameworks
- Perform data ingestion, cleaning, transformation, and analysis, plus other data-related activities or other activities inherent to the information system development process (see the sketch after this list)
- Build and maintain algorithm modules or models to extract value and information from databases or data lakes (Cloudera, MySQL, Oracle, others)
- Extract information with SQL; apply extract, transform, and load practices (ETL, ELT)
- Detail requirements with UML and/or user stories
- Develop database routines, views, or functions (e.g., stored procedures in Oracle or MySQL)
- Test systems/routines and algorithm models
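As a hedged illustration of the ingestion, cleaning, and SQL-extraction tasks listed above, the sketch below pulls rows from a transactional table into pandas and applies basic cleaning. The connection string, table, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: ingest transactional data with SQL, then clean it in pandas.
# Connection string, table, and column names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost:3306/transacional")  # hypothetical DSN

# Extraction: pull only the columns needed for the analysis.
query = "SELECT id, data_evento, valor, categoria FROM eventos WHERE data_evento >= '2024-01-01'"
df = pd.read_sql(query, engine)

# Cleaning: drop duplicates, coerce types, and handle missing values.
df = df.drop_duplicates(subset="id")
df["data_evento"] = pd.to_datetime(df["data_evento"], errors="coerce")
df["valor"] = pd.to_numeric(df["valor"], errors="coerce").fillna(0.0)
df = df.dropna(subset=["data_evento"])

# Simple analysis: total value per category per month.
resumo = (
    df.groupby(["categoria", df["data_evento"].dt.to_period("M")])["valor"]
      .sum()
      .reset_index(name="valor_total")
)
print(resumo.head())
```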
Big Data Engineer - Remote Work | REF#281319
Posted 7 days ago
Job Description
At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.
Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.
When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.
Big Data Engineer at BairesDev
Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging, and being challenged by others and are genuinely committed to improving themselves and those around them.
What You’ll Do
- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
- Impact the core of business by improving existing architecture or creating new ones.
- Create scalable and high-availability solutions, and contribute to the key differential of each client.
What We Are Looking For:
- 6+ years of experience working as a Developer (Ruby, Python, Java, or JS preferred).
- 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficient in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines to handle large volumes of data, either leveraging well-known tools or custom-made ones (see the pipeline sketch after this list).
- Advanced English level.
- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing batch and real-time data ingestion and processing processes.
- Hands-on experience with managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time series analysis using open-source and commercial solutions.
Benefits:
- 100% remote work (from anywhere).
- Excellent compensation in USD, or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
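As a non-authoritative sketch of the "data pipelines to handle large volumes of data" requirement above (the pipeline sketch referenced in that item), the PySpark job below reads raw CSV files, applies a simple cleaning step, and writes partitioned Parquet. The paths and column names are illustrative assumptions and require a configured Spark environment.

```python
# Minimal batch-pipeline sketch with PySpark: raw CSV -> cleaned, partitioned Parquet.
# Input/output paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_batch_pipeline").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("/data/raw/events/")  # hypothetical landing zone
)

cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("/data/curated/events/")  # hypothetical curated zone
)

spark.stop()
```

Partitioning the output by date is one common design choice here: it keeps downstream scans incremental as daily volume grows.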
Big Data Engineer - Remote Work | REF#281321
Posted 7 days ago
Job Description
At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.
Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.
When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.
Big Data Engineer at BairesDev
Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging, and being challenged by others and are genuinely committed to improving themselves and those around them.
What You’ll Do
- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
- Impact the core of business by improving existing architecture or creating new ones.
- Create scalable and high-availability solutions, and contribute to the key differential of each client.
What We Are Looking For:
- 6+ years of experience working as a Developer (Ruby, Python, Java, or JS preferred).
- 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficient in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
- Advanced English level.
- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing batch and real-time data ingestion and processing processes.
- Hands-on experience with managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time series analysis using open-source and commercial solutions.
Benefits:
- 100% remote work (from anywhere).
- Excellent compensation in USD, or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Big Data Engineer - Remote Work | REF#281322
Posted 7 days ago
Job Description
At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.
Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.
When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.
Big Data Engineer at BairesDev
Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging, and being challenged by others and are genuinely committed to improving themselves and those around them.
What You’ll Do
- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
- Impact the core of business by improving existing architecture or creating new ones.
- Create scalable and high-availability solutions, and contribute to the key differential of each client.
What We Are Looking For:
- 6+ years of experience working as a Developer (Ruby, Python, Java, or JS preferred).
- 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficient in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
- Advanced English level.
- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing batch and real-time data ingestion and processing processes.
- Hands-on experience with managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time series analysis using open-source and commercial solutions.
Benefits:
- 100% remote work (from anywhere).
- Excellent compensation in USD, or your local currency if preferred.
- Hardware and software setup for you to work from home.
- Flexible hours: create your own schedule.
- Paid parental leaves, vacations, and national holidays.
- Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
- Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology
- Industries: IT Services and IT Consulting
Big Data Engineer - US Client Brazil (BR) - São Paulo/SP - Remote
Posted 7 days ago
Job Description
Description: Big Data Engineer
Our client is a US technology company looking for a highly motivated Data Engineer with a passion for data to build and implement data pipelines using cloud technologies, including SnapLogic and AWS.
Responsibilities:
● Develop and maintain scalable data pipelines in SnapLogic and build out new ETL and API integrations to support continuing increases in data volume and complexity.
● Develop and maintain data models for the core package application and reporting databases to describe objects and fields for support documentation and to facilitate custom application development and data integration.
● Monitor the execution and performance of daily pipelines; triage and escalate any issues.
● Collaborate with analytics and business teams to improve data models and data pipelines that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization.
● Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes (see the data-quality sketch after this list).
● Write unit/integration tests, contribute to the engineering wiki, and document work.
● Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
● Work within an AWS/Linux cloud systems environment in support of the data integration solution.
● Work closely with a team of frontend and backend engineers, product managers, and analysts.
● Teamwork: collaborate with team members, share knowledge, provide visibility into personal accomplishments, and follow directions when provided.
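A minimal, hedged sketch of the kind of data-quality monitoring described above: it checks null rates and duplicate keys on a pandas DataFrame and raises when thresholds are exceeded. The column names and thresholds are illustrative assumptions, not part of the client's stack.

```python
# Minimal data-quality check sketch: null rates and duplicate keys.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def run_quality_checks(df: pd.DataFrame, key: str = "id", max_null_rate: float = 0.01) -> None:
    """Raise ValueError if the batch fails basic quality thresholds."""
    problems = []

    # Duplicate primary keys usually indicate an upstream load issue.
    dupes = df[key].duplicated().sum()
    if dupes:
        problems.append(f"{dupes} duplicate values in '{key}'")

    # Flag any column whose null rate exceeds the allowed threshold.
    null_rates = df.isna().mean()
    for column, rate in null_rates.items():
        if rate > max_null_rate:
            problems.append(f"'{column}' null rate {rate:.2%} exceeds {max_null_rate:.2%}")

    if problems:
        raise ValueError("Data quality check failed: " + "; ".join(problems))

# Example usage on a hypothetical batch:
batch = pd.DataFrame({"id": [1, 2, 2], "amount": [10.0, None, 5.0]})
try:
    run_quality_checks(batch)
except ValueError as err:
    print(err)  # would be routed to alerting in a real pipeline
```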
Required Experience (or equivalent):
● Experience with SnapLogic including writing pipelines that include mappers, gates, logging, bulk loads, and Salesforce SOQL queries;
● Experience with AWS services including but not limited to S3, Athena, EC2, EMR, and Glue (see the Athena sketch after this list);
● Ability to solve any ongoing issues with operating the cluster;
● Experience with integration of data from multiple data sources;
● Experience with various database technologies such as SQLServer, Redshift, Postgres, RDS;
● Experience with one or more of the following data integration platforms: Pentaho Kettle, SnapLogic, Talend OpenStudio, Jitterbit, Informatica PowerCenter, or similar;
● Knowledge of best practices and IT operations in an always-up, always-available service;
● Experience with or knowledge of Agile Software Development methodologies;
● Excellent problem solving and troubleshooting skills;
● Excellent oral and written communication skills with a keen sense of customer service;
● Experience with collecting/managing/reporting on large data stores;
● Awareness of Data governance and data quality principles;
● Well versed in Business Analytics including basic metric building and troubleshooting;
● Understand Integration architecture: application integration and data flow diagrams, source-to-target mappings, data dictionary reports;
● Familiar with Web Services: XML, REST, SOAP;
● Experience with Git or similar version control software;
● Experience with integrations with and/or use of BI tools such as GoodData (preferred), Tableau, Power BI, or similar;
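To ground the AWS and SQL items above (the Athena sketch referenced there), here is a hedged example that submits an Athena query over data in S3 using boto3 and polls for the result. The bucket, database, and table names are hypothetical, AWS credentials are assumed to be configured, and error handling is kept minimal.

```python
# Minimal Athena query sketch with boto3: submit SQL, poll, print results.
# Bucket, database, and table names are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would bound retries).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print(f"Query ended in state {state}")
```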
Database experience:
● Broad experience with multiple RDBMSs: MS SQLServer, Oracle, MySQL, PostgreSQL, Redshift;
● Familiarity with SaaS/cloud data systems (e.g., Salesforce);
● Data warehouse design: star schemas, change data capture, denormalization (see the schema sketch after this list);
● SQL/DDL queries and tuning techniques such as indexing, sorting, and distribution;
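A small, hedged illustration of the star-schema and indexing points above, using SQLite purely so the DDL is runnable end to end; the table and column names are invented, and real warehouses (Redshift, SQL Server) would use distribution/sort keys or clustered indexes instead.

```python
# Star-schema DDL sketch executed against an in-memory SQLite database.
# Table and column names are invented; syntax for a real warehouse will differ.
import sqlite3

ddl = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT,
    region TEXT
);

CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- e.g. 20240131
    calendar_date TEXT,
    month INTEGER,
    year INTEGER
);

CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    quantity INTEGER,
    amount REAL
);

-- Index the foreign keys that join queries filter and group on.
CREATE INDEX idx_fact_sales_customer ON fact_sales(customer_key);
CREATE INDEX idx_fact_sales_date ON fact_sales(date_key);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
conn.close()
```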
Education, Experience, and Licensing Requirements:
● BS or MS degree in Computer Science or a related technical field;
● 3+ years of data pipeline development with tools such as SnapLogic (preferred), DataStage, or Informatica, or related experience;
● 3+ years of SQL experience (NoSQL experience is a plus);
● Experience designing, building, and maintaining data pipelines;
Where: home office (remote)
Salary: in USD
Machine Learning Engineer
Posted 7 days ago
Job Description
Nice to meet you, we're Huna!
- A Brazilian deep tech building the future of early cancer diagnosis for 99% of humanity, using artificial intelligence and routine exams!
:) We build robust, ethical, and responsible technologies to broaden access to healthcare… and we're glad you want to be part of this journey with us!
:) We are hiring for a MACHINE LEARNING ENGINEER (MLOps) position (mid-level) - applications open until March 21!
This is a full-time, remote-first position, with preference for candidates based in São Paulo (the capital and its metro area) and Rio de Janeiro.
Availability for occasional travel is important (including some events at the company's offices in both cities).
- Research and implement appropriate Machine Learning algorithms and tools to solve specific healthcare problems, such as AI-assisted diagnosis, risk prediction, treatment personalization, and medical image analysis;
- Develop Machine Learning applications according to requirements, using programming languages such as Python and libraries such as TensorFlow, PyTorch, and Scikit-learn;
- Study and turn data science prototypes into scalable, robust solutions;
- Extend existing ML libraries and frameworks to meet the applications' specific needs;
- Select appropriate datasets to train Machine Learning models, performing data cleaning, transformation, and preparation;
- Choose effective data representation methods to optimize model performance;
- Run statistical analyses and fine-tune models, using test results to improve accuracy and efficiency (see the training sketch after this list);
- Train and retrain Machine Learning systems when needed, using supervised, unsupervised, and reinforcement learning techniques;
- Optimize models to ensure performance, scalability, and efficiency in production environments.
- Degree in Computer Science, Engineering, Statistics, or related fields.
- At least 2 (two) years of experience in the role.
- Experience with programming languages such as Python and Machine Learning libraries such as TensorFlow, PyTorch, and Scikit-learn.
- Advanced English.
- Knowledge of Machine Learning algorithms such as regression, classification, clustering, and deep learning.
- Ability to process and analyze large datasets.
- Familiarity with version control tools such as Git.
- Availability for occasional travel.
- Graduate studies (Master's or PhD, completed or in progress) in related fields.
- Knowledge of natural language processing (NLP) or computer vision.
- Familiarity with cloud computing platforms, especially GCP, AWS, or Azure.
- Previous experience (academic or professional) handling clinical/medical data and/or a background in the healthcare sector.
- Market-competitive compensation.
- Flexible benefits card (Caju).
- Health and wellness benefit (Gympass).
- Education benefit.
- Opportunity to join a stock option program (S.O.P.).
- Remote-first routine (remote preferred).
- Short Fridays.
- Flexible working hours.
- Birthday day off.
- Extended long weekends around national holidays.
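As a hedged illustration of the training and fine-tuning responsibilities above (the training sketch referenced in that list), the snippet below trains and evaluates a small scikit-learn classifier with cross-validated hyperparameter search. It uses a bundled toy dataset; real clinical data would require far more careful validation and governance, and this is not the company's actual workflow.

```python
# Minimal scikit-learn training sketch: pipeline + cross-validated tuning + held-out evaluation.
# Uses a toy dataset purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Small grid search as a stand-in for the "fine-tuning" step.
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)

print("Best C:", search.best_params_["clf__C"])
print(classification_report(y_test, search.predict(X_test)))
```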
> If you think this role is a great fit for you, apply directly or by email ***.
> If you know someone who fits the profile, share it!
REMINDER: The deadline is March 21.
Only selected candidates will receive feedback for the next steps.
:)