The AIML Multimodal Foundation Model Team is pioneering next-generation intelligent agent technologies that combine multimodal reasoning, tool use, and visual understanding. Our features redefine how hundreds of millions of people use their computers and mobile devices for search and information retrieval. Our universal search engine powers search capabilities across a range of Apple products, including Siri, Spotlight, Safari, Messages, and Lookup. We also develop generative AI technologies based on multimodal large language models to enable new features in both Apple's devices and cloud-based services.

As a member of this team, you will design new architectures for multimodal agents, explore advanced training paradigms, and build robust agentic capabilities such as planning, grounding, tool use, and autonomous task execution. You will collaborate closely with researchers and engineers to bring cutting-edge agent research into production, transforming Apple devices into intelligent partners that help users get things done.

DESCRIPTION

As a member of our fast-paced group, you'll have the unique and rewarding opportunity to shape upcoming products from Apple. We are looking for people with excellent applied machine learning, computer vision, multimodal LLM, and agent training experience, along with solid engineering skills.
Job Type: Full-time
Education Level: Ph.D. or professional degree
Number of Employees: 5,001-10,000 employees