222 Data Science Jobs - Curitiba

Big Data Engineer

Curitiba, Paraná · beBeeDataArchitect

Today

Job Description

Job Title: Data Architect

About the Role:

We are seeking a highly skilled Data Architect to design and implement data pipelines and establish a robust data reporting environment on AWS.

Key Responsibilities:

The ideal candidate will have experience designing, testing, deploying, and maintaining data warehouses and ETL pipelines. They should also be well-versed in data marts and have a strong command of SQL and the SQL Server BI stack (SSIS, SSAS, and SSRS).

Requirements:

  • 4+ years of data engineering experience building data warehouses and ETL pipelines
  • 3+ years of experience with the Snowflake data warehouse
  • Strong Python scripting experience
  • Well-versed with data marts
  • Strong command of SQL and the SQL Server BI stack (SSIS, SSAS, and SSRS)
  • Ability to work flexible hours
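As a rough illustration of the warehouse and ETL work this role centers on, here is a minimal extract-transform-load sketch in plain Python. The table name, column names, and quality rules are invented for the example, not taken from the posting.

```python
def extract(rows):
    """Simulate pulling raw records from a source system."""
    return list(rows)

def transform(records):
    """Drop invalid rows and normalize amounts to two decimals."""
    return [
        {"order_id": r["order_id"], "amount_usd": round(r["amount_usd"], 2)}
        for r in records
        if r.get("amount_usd") is not None and r["amount_usd"] >= 0
    ]

def load(warehouse, records):
    """Append transformed rows to the target table (here, a plain dict)."""
    warehouse.setdefault("orders", []).extend(records)
    return len(records)

raw = [
    {"order_id": 1, "amount_usd": 19.999},
    {"order_id": 2, "amount_usd": -5.0},  # rejected: negative amount
    {"order_id": 3, "amount_usd": None},  # rejected: missing amount
]
warehouse = {}
loaded = load(warehouse, transform(extract(raw)))
print(loaded)  # 1
```

In a real Snowflake pipeline the `load` step would be a `COPY INTO` or connector call rather than a dict append, but the shape of the work is the same.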

Nice to Have Skills:

  • Experience working within the healthcare industry, provider side highly preferred
  • Amazon Redshift database experience
  • Tableau experience
  • Experience working in an AWS environment
  • Experience with big data tools: Hadoop, Spark, Kafka

What We Offer:

Full benefits while on contract.


Big Data Engineer - Remote Work | REF#281322

Curitiba, Paraná · BairesDev

Posted 3 days ago

Job Description


At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.

Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.

When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.

Big Data Engineer at BairesDev

Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, and IoT. They are passionate, active members of our community who enjoy sharing knowledge, challenging others and being challenged in turn, and who are genuinely committed to improving themselves and those around them.

What You’ll Do

  • Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
  • Impact the core of business by improving existing architecture or creating new ones.
  • Create scalable, highly available solutions and contribute to each client’s key differentiators.

Here Is What We Are Looking For

  • 6+ years of experience working as a Developer (Ruby, Python, Java, or JavaScript preferred).
  • 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
  • Proficient in analysis, troubleshooting, and problem-solving.
  • Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
  • Advanced English level.

Desirable

  • Building Data Lakes with Lambda/Kappa/Delta architecture.
  • DataOps, particularly creating and managing batch and real-time data ingestion and processing workflows.
  • Hands-on experience with managing data loads and data quality.
  • Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
  • Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
  • Performing real-time data visualization and time series analysis using open-source and commercial solutions.
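For the pipeline experience the listing asks about, the basic move behind most batch processing is to stream a large source through fixed-size chunks rather than materializing everything in memory at once. A minimal sketch (the batch size and record shape are illustrative):

```python
from itertools import islice

def batched(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Simulate a large source with a generator so nothing is held in memory at once.
source = (i * i for i in range(10))
batches = [b for b in batched(source, 4)]
print(batches)  # [[0, 1, 4, 9], [16, 25, 36, 49], [64, 81]]
```

Frameworks like Spark apply the same idea at cluster scale, partitioning the data instead of chunking a single iterator.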

How We Make Your Work (and Your Life) Easier

  • 100% remote work (from anywhere).
  • Excellent compensation in USD, or in your local currency if preferred.
  • Hardware and software setup for you to work from home.
  • Flexible hours: create your own schedule.
  • Paid parental leave, vacation, and national holidays.
  • Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
  • Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.

Join a global team where your unique talents can truly thrive!

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting


Big Data Engineer - Remote Work | REF#281319

Curitiba, Paraná · BairesDev

Posted 3 days ago

Job Description

(Listing body identical to the BairesDev Big Data Engineer posting above, REF#281322.)

Big Data Engineer - Remote Work | REF#281321

Curitiba, Paraná · BairesDev

Posted 3 days ago

Job Description

(Listing body identical to the BairesDev Big Data Engineer posting above, REF#281322.)

Desenvolvedor(a) PL - Azure e Big Data (Spark, Python) (REMOTO)

Curitiba, Paraná · Relevo

Posted 3 days ago

Job Description

RELEVO LAB. We are a technology company with more than 6 years in the market. We live and breathe technology and work tirelessly to be an amazing place to work, where people are treated as equals, enjoy what they do, and feel at home; that is exactly where we want to be.

We are looking for committed, good-hearted people who want to grow with us. If that sounds like you, you are who we want!

Responsibilities and duties

Manage and develop content for offerings, support client enablement and development, take part in team discussions, provide solutions to problems, collaborate with cross-functional teams, run training sessions, analyze client feedback, and develop innovative solutions aligned with client needs.

Working hours: Monday to Friday, 9 a.m. to 6 p.m. (1-hour lunch break)

Position type: Home office

Contract: CLT (Brazilian permanent employment)

Engagement length: 8 months, with possible extension

Requirements and qualifications

What we want you to have:

  • Configuration Management Database (CMDB) administration
  • Configuration Item (CI) management
  • Report design and development
  • Microsoft Azure and Apache Spark
  • Python
  • .NET software development

Additional information

What we offer: a compensation and benefits package competitive with the market.

Like what you see? Visit our careers page at

Really important! If your skills match the role, that is all that matters.

Here we embrace diversity and hire people with the ability and drive to transform, regardless of location, race, color, religion, gender identity, sexual orientation, or background.


Mid-Level BI Consultant (Home Office) - Big Data Focus

Curitiba, Paraná · Somos BHS

Posted 2 days ago

Job Description

Workplace: Belo Horizonte - MG

Job type: Full-time employee

We are BHS!

With 30 years of experience in the market, we deliver modern IT solutions.

We are recognized as the BEST company to work for in Minas Gerais and in BRAZIL by GPTW (Great Place to Work), and we rank among the five best companies to work for in the LGBTQIA+ and Ethnic-Racial categories, reflecting our commitment to diversity, equity, and inclusion.

What do we expect from you?

  • The ability to build relationships with people from different areas and at every level of the hierarchy;
  • Collaboration;
  • Proactivity and a willingness to work outside your comfort zone;
  • Someone who wants to be part of amazing projects and is eager to learn, teach, and grow together with BHS.

Your activities will be:

  • Create, maintain, and support data ingestion and transformation (ETL) pipelines using SQL scripts (CTEs and macros) and Python (FastAPI, PySpark);
  • Collect data from available public or private sources using technologies such as Python, Selenium, and FastAPI, delivered as APIs packaged in Docker containers;
  • Create ETL scripts to extract, transform, and load data from one or more source databases into one or more target databases, including Big Data environments;
  • Use tools such as Apache NiFi (sending HTTP requests, viewing logs) and Hive (inserts and queries in a Big Data environment), versioning scripts with Git;
  • Run data-quality checks, ensuring the integrity and accuracy of the data across the board before deploying findings;
  • Access, manipulate, transform, group, and analyze large structured, unstructured, or semi-structured datasets;
  • Extract and load data in batch, incremental, and real-time modes;
  • Document all work performed.
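The incremental load mode mentioned above is usually driven by a high-watermark: each run extracts only the rows changed since the previous run. A minimal sketch, with invented column names:

```python
def incremental_extract(rows, watermark):
    """Return rows with updated_at strictly after the watermark, plus the new watermark."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 205},
    {"id": 3, "updated_at": 310},
]
first, wm = incremental_extract(source, watermark=0)    # first run: full load
second, wm = incremental_extract(source, watermark=wm)  # second run: nothing new
print(len(first), len(second), wm)  # 3 0 310
```

In practice the watermark would be persisted (in a control table, for instance) between pipeline runs; here it simply lives in a variable.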

We are looking for someone with experience in:

  • Building ETL pipelines (relational and multidimensional modeling);
  • Working with the company's business areas to meet their demands;
  • Technologies such as Python, FastAPI, relational databases, Git, and Docker;
  • Big Data environments;
  • Data cleaning on large structured, semi-structured, and unstructured datasets;
  • Handling data in Big Data environments;
  • Technologies such as Apache NiFi, Apache Hive, Kibana (Elasticsearch), Zeppelin (PySpark), and NoSQL databases.

What can you expect from us?

At BHS, your well-being is our number-one priority. We are here to make sure your experience is amazing, with all the support you need. Here you will find:

  • Health insurance and dental care;
  • Meal/food allowance on a Flash card;
  • Home-office allowance;
  • Extended maternity/paternity leave;
  • A range of partnerships and discounts in education, health, and leisure (universities, language schools, gyms, health clinics);
  • A culture of continuous feedback, with semiannual feedback sessions, 1:1s, IDPs (Individual Development Plans), and BHS Experience.

We work to include diversity in our day-to-day and believe that diverse teams perform better. Everyone is welcome. Come be a B.Techer!

See our rating on Glassdoor (4.5 stars, with 90% recommending us to others).

  • Step 2: Chat with the People Management team
  • Step 3: Management/client review
  • Step 4: Technical chat with management/client
  • Step 5: Awaiting feedback from the client/manager

30 years of history: building relationships with people, innovation, and clients.


Mid-Level BI Consultant (Home Office) - Big Data Focus

São José dos Pinhais, Paraná · Somos BHS

Posted 2 days ago

Job Description

(Listing body identical to the Somos BHS posting above for Curitiba.)

Big Data Engineer - US Client Brazil (BR) - São Paulo/SP - Remoto

Curitiba, Paraná · HUNT IT - Specialized IT Recruitment

Posted 3 days ago

Job Description

Description: Big Data Engineer

Our client is a US technology company looking for a highly motivated Data Engineer with a passion for data to build and implement data pipelines using cloud technologies, including SnapLogic and AWS.

Responsibilities:
● Develop and maintain scalable data pipelines in SnapLogic and build out new ETL and API integrations to support continuing increases in data volume and complexity.
● Develop and maintain data models for the core packaged application and reporting databases, describing objects and fields for support documentation and to facilitate custom application development and data integration.
● Monitor the execution and performance of daily pipelines; triage and escalate any issues.
● Collaborate with analytics and business teams to improve the data models and pipelines that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organization.
● Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes.
● Write unit and integration tests, contribute to the engineering wiki, and document work.
● Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
● Work within an AWS/Linux cloud environment in support of data integration solutions.
● Work closely with a team of frontend and backend engineers, product managers, and analysts.
● Teamwork: collaborate with team members, share knowledge, provide visibility into personal accomplishments, and follow directions when provided.

Required Experience (or equivalent):
● Experience with SnapLogic including writing pipelines that include mappers, gates, logging, bulk loads, and Salesforce SOQL queries;
● Experience with AWS services including but not limited to S3, Athena, EC2, EMR, Glue;
● Ability to solve any ongoing issues with operating the cluster;
● Experience with integration of data from multiple data sources;
● Experience with various database technologies such as SQL Server, Redshift, Postgres, RDS;
● Experience with one or more of the following data integration platforms: Pentaho Kettle, SnapLogic, Talend OpenStudio, Jitterbit, Informatica PowerCenter, or similar;
● Knowledge of best practices and IT operations in an always-up, always-available service;
● Experience with or knowledge of Agile Software Development methodologies;
● Excellent problem solving and troubleshooting skills;
● Excellent oral and written communication skills with a keen sense of customer service;
● Experience with collecting/managing/reporting on large data stores;
● Awareness of Data governance and data quality principles;
● Well versed in Business Analytics including basic metric building and troubleshooting;
● Understand Integration architecture: application integration and data flow diagrams, source-to-target mappings, data dictionary reports;
● Familiar with Web Services: XML, REST, SOAP;
● Experience with Git or similar version control software;
● Experience with integrations with and/or use of BI tools such as GoodData (preferred), Tableau, Power BI, or similar.

Database experience:
● Broad experience with multiple RDBMSs: MS SQL Server, Oracle, MySQL, PostgreSQL, Redshift;
● Familiarity with SaaS/cloud data systems (e.g., Salesforce);
● Data warehouse design: star schemas, change data capture, denormalization;
● SQL/DDL query tuning techniques such as indexing, sorting, and distribution.

Education, Experience, and Licensing Requirements:
● BS or MS degree in Computer Science or a related technical field;
● 3+ years of data pipeline development with SnapLogic (preferred), DataStage, Informatica, or related tools;
● 3+ years of SQL experience (NoSQL experience is a plus);
● Experience designing, building, and maintaining data pipelines.

Where: home office
Salary: in USD


Machine Learning Engineer

Curitiba, Paraná · Sardine

Posted 3 days ago

Job Description

Overview

We are a leader in fraud prevention and AML compliance. Our platform uses device intelligence, behavior biometrics, machine learning, and AI to stop fraud before it happens. Today, over 300 banks, retailers, and fintechs worldwide use Sardine to stop identity fraud, payment fraud, account takeovers, and social engineering scams. We have raised $145M from world-class investors, including Andreessen Horowitz, Activant, Visa, Experian, FIS, and Google Ventures.

What You'll Do
  • Design and refine backend services using Golang to process and analyze device data, ensuring robustness and scalability.
  • Collaborate closely with software engineers, product managers, and other stakeholders to integrate machine learning capabilities seamlessly into our products.
  • Develop sophisticated algorithms leveraging high-entropy signals and probabilistic matching to revolutionize device identification.
  • Dive into vast datasets to uncover insights, boosting the accuracy and reliability of our systems.
  • Apply advanced machine learning models to enhance device recognition and effectively manage uncertainties.
  • Maintain the highest standards of privacy and security, aligned with industry best practices and regulations.
  • Foster a culture of continuous learning, and document processes clearly to ensure consistency across the team.
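As a toy illustration of probabilistic matching over device signals (Sardine's actual models are not public), one can score the overlap of two attribute sets and declare a match above a threshold. The signal names and the 0.6 cutoff are invented for the sketch.

```python
def jaccard(a, b):
    """Similarity of two signal sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def same_device(sig_a, sig_b, threshold=0.6):
    """Declare a probabilistic match when signal overlap clears the threshold."""
    return jaccard(sig_a, sig_b) >= threshold

known  = {"ua:chrome", "tz:-3", "lang:pt-BR", "screen:1080p"}
seen_1 = {"ua:chrome", "tz:-3", "lang:pt-BR", "screen:1440p"}  # one signal drifted
seen_2 = {"ua:safari", "tz:+1", "lang:en-US", "screen:retina"}

print(same_device(known, seen_1), same_device(known, seen_2))  # True False
```

Production systems weight signals by entropy (a rare signal match is stronger evidence than a common one) rather than counting them equally, but the match-by-similarity idea is the same.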
What We're Looking For
  • 5+ years of experience in software engineering, with a focus on backend development; proficiency in Go or a similar language is essential.
  • Bachelor's or Master's in Computer Science, Engineering, or a related discipline.
  • Hands-on experience with applied machine learning and data-informed optimization, working with large-scale datasets using tools like PyTorch and Scikit-learn.
  • Proficient in SQL for querying and analyzing large datasets.
  • Comfortable working with both relational and non-relational databases.
  • Proficient in English - from casual chats to formal reports.
Extra Points For
  • A strong understanding of cybersecurity principles, especially in device identification and fraud prevention.
  • Experience managing cloud infrastructure (AWS, Google Cloud, or Azure).
  • Knowledge of containerization tools (Docker, Kubernetes) and CI/CD pipelines.
  • Understanding of modern browser APIs and high-entropy data collection techniques.
Compensation

Base pay range of R$330,000 - R$440,000 + Series C equity with tremendous upside potential + attractive benefits

The compensation offered for this role will depend on various factors, including the candidate's location, qualifications, work history, and interview performance, and may differ from the stated range.

Benefits We Offer
  • Generous compensation in cash and equity
  • Early exercise for all options, including pre-vested
  • Work from anywhere: Remote-first Culture
  • Flexible paid time off, year-end break, and self-care days off
  • Health insurance, dental, and vision coverage for employees and dependents - US and Canada specific
  • 4% matching in 401k / RRSP - US and Canada specific
  • MacBook Pro delivered to your door
  • One-time stipend to set up a home office — desk, chair, screen, etc.
  • Monthly meal stipend
  • Monthly social meet-up stipend
  • Annual health and wellness stipend
  • Annual Learning stipend
  • Unlimited access to expert financial advisory services

To learn more about how we process your personal information and your rights in regards to your personal information as an applicant and Sardine employee, please visit our Applicant and Worker Privacy Notice.


Machine Learning Engineer

Curitiba, Paraná deepsense.ai

Published 3 days ago


Job Description

Employment type: B2B
Operating mode: Remote
Location:

We help companies gain a competitive edge by delivering customized AI solutions. Our mission is to empower our clients to unlock the full potential of AI.

We are specialized in key technologies such as LLM & RAG, MLOps, Edge Solutions, Computer Vision, and Natural Language Processing.

Our team of 120 world-class AI experts has worked on 200+ commercial and R&D projects with companies such as Unstructured, Google, Brainly, DocPlanner, B-Yond, Zebra Technologies, Hexagon, and many more.

What we believe in
  • Team Strength – sharing and exchanging knowledge is key to our daily work
  • Accountability – we take responsibility for the tasks entrusted to us so that ultimately the client receives the best possible quality
  • Balance – we value work-life balance
  • Commitment – we want you to be fully part of the team
  • Openness – we don’t want you to be locked into one solution; we look for alternatives and explore new possibilities
Responsibilities

Join our dynamic team as a Machine Learning Engineer and embark on a journey of innovation at the intersection of data science and cloud computing. We are seeking a talented individual who is passionate about leveraging cutting-edge technologies to drive business insights and solutions. If you’re excited about pushing the boundaries of what’s possible with GenAI, we invite you to be part of our team of experts!

  • Collaborate with data scientists and software engineers to integrate machine learning solutions into cloud-based applications.
  • Continuously optimize and improve AI algorithms for performance and accuracy in a cloud environment.
  • Automate and optimize model deployment following MLOps best practices.
  • Engage in the development of cutting-edge Kubernetes-driven infrastructure.
  • Work on system reliability and backend stability, continuously identifying details to improve.
  • Share knowledge through talks and workshops (internal and external).
You must have
  • Bachelor’s or advanced degree in Computer Science or Engineering.
  • Proven experience (3+ years) in software engineering, including Python, Bash, Git, cloud services, and Linux.
  • Proven experience (1+ years) working with cloud platforms (AWS or Azure preferred).
  • Good understanding of system architecture (microservices, monoliths, REST API, DNS, caching).
  • Familiarity with Docker, Kubernetes, and cloud platforms for ML deployment.
  • Strong Python skills and familiarity with other object-oriented languages.
  • Very effective communication skills, both written and verbal.
  • Ability to solve problems and communicate complex ideas effectively.
You may have
  • Basic understanding of machine learning algorithms.
  • Keen interest in Generative AI and Large Language Models (LLMs).
  • Previous startup experience.
We offer
  • Opportunity to work on cutting-edge AI projects with a diverse range of clients and industries, driving solutions from development to production.
  • Collaborative and supportive work environment, where you can grow and learn from a team of talented professionals.
  • An opportunity to participate in conferences and workshops.
  • An opportunity to participate in Tech Talks (internal training and seminar sessions).
  • Remote work options and travel to European headquarters available.
Some of our benefits
  • Medical package.
  • Multisport cards.
  • Lunch provided.
  • Kitchens stocked with fruit and veggies twice a week.
  • Monthly integration budget.
  • Company library (online and offline).
  • Fun room.
 
