QA Engineer

Medlytix
Roswell, GA

About The Position

We're seeking an innovative QA Engineer to join our team. This role goes beyond traditional testing: you'll be at the forefront of using automation frameworks and AI-powered tools to ensure the quality, reliability, and performance of our AI solutions. You'll work closely with AI Integration engineers and software developers to build robust testing strategies that address the unique challenges of AI systems, including model validation, data quality, and non-deterministic outputs.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or related field
  • 3-5 years of experience in software quality assurance and test automation
  • Proven experience building and maintaining automation frameworks from scratch
  • Experience testing cloud-based applications and APIs
  • Experience testing machine learning models, agentic data flows, and data-intensive applications
  • Strong proficiency in Python for test automation
  • Hands-on experience with testing frameworks: pytest, unittest, Selenium, or similar
  • Experience with API testing tools: Postman, REST Assured, or Python requests (see the sketch after this list)
  • Knowledge of CI/CD tools: GitLab CI, GitHub Actions
  • Familiarity with version control systems (Git)
  • Understanding of SQL and database testing, including Data Lake architectures
  • Experience with containerization (Docker) and orchestration tools
  • Experience with AI-powered testing tools or test generation platforms
  • Experience with cloud platforms: AWS, Azure, or GCP
  • Understanding of microservices architecture and distributed systems
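
For a concrete flavor of what "Python for test automation" means here, below is a minimal pytest-style API check using requests. The base URL and /health route are illustrative placeholders, not an actual Medlytix endpoint.

```python
# Minimal pytest + requests smoke test. BASE_URL and /health are
# illustrative placeholders, not a real Medlytix API.
import requests

BASE_URL = "https://api.example.com"

def test_health_endpoint_returns_ok():
    # Availability check: the service answers with 200 and a JSON status.
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```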

Responsibilities

Test Automation Development
  • Design, develop, and maintain automated test frameworks using Python and modern testing tools
  • Build end-to-end test suites for APIs, data pipelines, and AI model/MCP endpoints
  • Implement continuous testing practices integrated with CI/CD pipelines
  • Create reusable test libraries and utilities to accelerate testing across projects (see the sketch below)
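
As a sketch of the kind of reusable utility this work produces, here is a session-scoped pytest fixture that many test suites can share. The client setup and /models route are hypothetical.

```python
# Sketch of a reusable pytest fixture shared across test suites.
# The headers and /models route are hypothetical examples.
import pytest
import requests

@pytest.fixture(scope="session")
def api_client():
    # One HTTP session per test run, torn down after all tests finish.
    session = requests.Session()
    session.headers.update({"Accept": "application/json"})
    yield session
    session.close()

def test_list_models(api_client):
    resp = api_client.get("https://api.example.com/models", timeout=5)
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)
```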

AI-Assisted Testing
  • Leverage AI tools (like GitHub Copilot, ChatGPT, or dedicated test generation tools) to accelerate test case creation and code reviews
  • Explore and implement AI-powered testing solutions for test data generation, visual testing, and anomaly detection
  • Experiment with automated test scenario generation using LLMs (see the sketch after this list)
  • Use AI tools for log analysis and defect pattern recognition
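
One possible shape of LLM-driven scenario generation, assuming the openai Python package and an OPENAI_API_KEY in the environment; the prompt, model name, and review workflow are illustrative choices, not a prescribed toolchain.

```python
# Sketch of LLM-assisted edge-case generation. Assumes the openai
# package and OPENAI_API_KEY; the model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def propose_edge_cases(endpoint_description: str) -> list[str]:
    # The model proposes candidates; a human reviews before adoption.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "List five edge-case inputs, one per line, for testing: "
                       + endpoint_description,
        }],
    )
    return (resp.choices[0].message.content or "").splitlines()
```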

AI-Specific Testing
  • Develop testing strategies for LLMs, including performance, accuracy, and bias detection
  • Test data quality, feature engineering pipelines, and model training workflows
  • Validate model outputs and monitor for model drift in production (see the sketch after this list)
  • Create test datasets that cover edge cases and ensure model robustness
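
Because LLM outputs are non-deterministic, a common pattern is to assert invariants rather than exact strings; a minimal sketch follows, where generate_summary is a hypothetical stand-in for the model under test.

```python
def generate_summary(text: str) -> str:
    # Hypothetical stand-in; replace with the real model call.
    return "Patient presents with elevated blood pressure."

def test_summary_invariants():
    # Assert properties that must hold on every run, not one exact output.
    summary = generate_summary("Patient presented with elevated BP at triage.")
    assert summary, "summary must not be empty"
    assert len(summary.split()) <= 100  # length budget
    assert "blood pressure" in summary.lower() or "bp" in summary.lower()
```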

Quality Strategy & Collaboration
  • Define and implement QA best practices and quality metrics for AI products
  • Collaborate with developers to identify testability requirements early in the development cycle
  • Perform code reviews focused on test coverage and quality
  • Document test strategies, test cases, and quality reports

Performance & Security Testing
  • Conduct load and performance testing for AI inference endpoints (see the load-test sketch after this list)
  • Identify bottlenecks in data processing and model serving infrastructure
  • Participate in security testing activities focusing on data privacy and model security
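
To illustrate the load-testing side, here is a minimal Locust sketch (Locust is one common Python option; no specific tool is mandated). The /predict route and host are placeholders. Run with, e.g., locust -f loadtest.py --host https://api.example.com.

```python
# Minimal Locust load test; /predict and the host are placeholders.
from locust import HttpUser, task, between

class InferenceUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time per virtual user

    @task
    def predict(self):
        self.client.post("/predict", json={"text": "sample input"})
```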