About The Position

A systems research internship is for people who love the real-world intersection of systems engineering and research: you'll investigate a hard systems problem, build something meaningful, and measure it carefully. The goal is practical impact: making Applied Systems more efficient, more scalable, and more reliable.

OpenAI is currently recruiting candidates for a 13-week, paid, in-person internship based in our San Francisco office during Summer 2026. In some cases, the internship may be extended for an additional 13 weeks (for a total of up to 26 weeks), based on team needs, candidate interest, and performance.

In this role, you will typically focus on improving real systems in areas such as:

  • Distributed systems & storage (throughput, latency, consistency, durability)
  • Compute & scheduling (GPU/accelerator utilization, job orchestration, queuing)
  • Performance engineering (profiling, bottlenecks, scalability, capacity planning)
  • Reliability & observability (fault tolerance, monitoring, incident learning)
  • Networking & data pipelines (data movement, caching, streaming efficiency)
  • Systems for ML (training/inference performance, evaluation infrastructure, tooling)

Most projects involve some combination of the steps outlined in the Responsibilities section below.

Requirements

  • Currently pursuing a PhD in Computer Science, Computer Engineering, or a relevant technical field
  • Proficiency coding in C++, Java, Python, Rust, or similar languages
  • Ongoing research on systems topics such as DL/ML, information retrieval, systems security and cryptography, databases, networking, distributed systems, or compilers
  • Ability to move fast in an environment where problems are sometimes loosely defined and priorities or deadlines may compete

Responsibilities

  • Defining a clear hypothesis (“we think X will reduce latency by Y under Z”)
  • Instrumenting existing production systems, gathering metrics, and performing detailed analysis to validate the hypothesis
  • Building or modifying a real system (prototype or production-quality improvements when appropriate)
  • Running experiments/benchmarks and analyzing results
  • Communicating tradeoffs and recommendations clearly
  • Publishing the research work in technical journals and conferences
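
To give a flavor of the workflow, the hypothesis/measure/analyze loop above can be sketched as a minimal, self-contained microbenchmark. The baseline and candidate workloads and the 20% latency-reduction target below are purely hypothetical stand-ins for a real system under test:

```python
# Hypothetical sketch of a hypothesis-driven benchmark:
# "we think the candidate implementation reduces p50 latency by >= 20%".
import time
import statistics

def baseline(n: int = 10_000) -> int:
    # Stand-in "current" implementation: repeated string concatenation.
    s = ""
    for _ in range(n):
        s += "x"
    return len(s)

def candidate(n: int = 10_000) -> int:
    # Stand-in "improved" implementation: join-based construction.
    return len("".join("x" for _ in range(n)))

def measure(fn, trials: int = 200) -> dict:
    """Run fn `trials` times and report latency percentiles in milliseconds."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    qs = statistics.quantiles(samples, n=100)  # qs[48] is p49, qs[49] is p50, ...
    return {"p50": qs[49], "p99": qs[98]}

if __name__ == "__main__":
    base = measure(baseline)
    cand = measure(candidate)
    reduction = 1 - cand["p50"] / base["p50"]
    print(f"baseline  p50={base['p50']:.3f}ms  p99={base['p99']:.3f}ms")
    print(f"candidate p50={cand['p50']:.3f}ms  p99={cand['p99']:.3f}ms")
    print(f"p50 reduction: {reduction:.1%}  (hypothesis target: >= 20%)")
```

In a real project the workloads would be replaced by instrumented calls into a production system, and the analysis would also account for variance, warm-up effects, and tail behavior rather than just two percentiles.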