NTT America, posted 21 days ago
Nashville, TN
Computing Infrastructure Providers, Data Processing, Web Hosting, and Related Services

About the position

The position involves two main areas of focus: Software and Data Engineering, and MLOps Engineering. On the Software and Data Engineering side, you will apply your expertise in Java and Kubernetes to build and maintain robust data solutions, develop DevOps and monitoring solutions, and draw on a strong knowledge of the Software Development Life Cycle (SDLC). You will be responsible for building and managing dozens of data pipelines that source and transform data according to business requirements. On the MLOps Engineering side, you will implement and manage machine learning models in production environments, ensuring the reliability and scalability of ML workflows using tools such as MLflow, Kubeflow, and AWS SageMaker. You will also be expected to learn new technologies quickly by applying your current skills, staying ahead of industry trends, and identifying new skills to develop and adopt within a month's time.

Responsibilities

  • Utilize expertise in Java and Kubernetes to build and maintain robust data solutions.
  • Develop DevOps and monitoring solutions.
  • Build and manage data pipelines to source and transform data based on business requirements.
  • Implement and manage machine learning models in production environments.
  • Ensure the reliability and scalability of ML workflows using tools like MLflow, Kubeflow, and AWS SageMaker.
  • Quickly learn new technologies by applying current skills and staying ahead of industry trends.

Requirements

  • 5+ years of experience in AWS services such as S3, EC2, Lambda, RDS, and Redshift.
  • 5+ years of experience with advanced skills in Python for data processing, automation, and scripting.
  • 5+ years of experience with relational databases and SQL for querying and managing data.
  • 3+ years of experience with MLflow, Kubeflow, and AWS SageMaker for managing ML models in production.
  • Ability to integrate data from various sources using AWS Glue or similar ETL tools.
  • 3+ years of experience in building and maintaining scalable data pipelines using AWS services.
  • 3+ years of experience in using Terraform or CloudFormation for infrastructure as code.
  • 3+ years of experience in setting up monitoring and logging for data pipelines and AWS resources using tools like CloudWatch and the ELK stack.