Machine Learning Research at Apple

Research highlights

Vision Language Models (VLMs) enable visual understanding alongside textual inputs. They are typically built by passing visual tokens from a pretrained vision encoder to a pretrained Large Language Model (LLM) through a projection layer. By leveraging the rich visual representations of the vision encoder and the world knowledge and reasoning capabilities of the LLM, VLMs can be useful for a wide range of applications, including accessibility...
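The architecture described above can be sketched in a few lines. This is a minimal illustration, not any specific model: the dimensions, random features, and single linear projection are all assumptions standing in for a real pretrained vision encoder, projection layer, and LLM embedding table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
num_visual_tokens, d_vision = 16, 768   # vision encoder output
num_text_tokens, d_llm = 8, 1024        # LLM embedding size

# Stand-in for the pretrained vision encoder's output features.
visual_features = rng.normal(size=(num_visual_tokens, d_vision))

# The projection layer: a learned linear map from the vision
# encoder's feature space into the LLM's token-embedding space.
W_proj = rng.normal(size=(d_vision, d_llm)) * 0.02
b_proj = np.zeros(d_llm)
visual_tokens = visual_features @ W_proj + b_proj

# Stand-in for the LLM's text token embeddings.
text_tokens = rng.normal(size=(num_text_tokens, d_llm))

# The LLM then consumes the concatenated sequence of visual
# and text tokens as a single input.
llm_input = np.concatenate([visual_tokens, text_tokens], axis=0)
print(llm_input.shape)  # (24, 1024)
```

In practice the projection may be a small MLP rather than a single linear layer, but the role is the same: mapping visual tokens into the space the LLM already understands.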


Apple researchers are advancing AI and ML through fundamental research. To support the broader research community and help accelerate progress in this field, we share much of this research through publications and engagement at conferences. Next week, the International Conference on Machine Learning (ICML) will be held in Vancouver, Canada, and Apple is proud to once again participate in this important event for the...


Recent publications

Apple believes that privacy is a fundamental human right. As AI experiences become increasingly personal and a part of people's daily lives, it's important that novel privacy-preserving techniques are developed in parallel with advances in AI capabilities.
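As one concrete illustration of a privacy-preserving technique (an example of the general idea, not a description of the work above), the Laplace mechanism from differential privacy releases an aggregate statistic while bounding what can be learned about any individual:

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, rng) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so adding Laplace
    noise with scale 1/epsilon satisfies epsilon-DP.
    """
    scale = 1.0 / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)
# Smaller epsilon means more noise and stronger privacy.
noisy = laplace_count(1000, epsilon=0.5, rng=rng)
```

The released value stays useful in aggregate (its expectation is the true count) while masking any single person's contribution.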


Apple is presenting new work at the annual Interspeech conference, which takes place in person from August 17 to 21 in Rotterdam, the Netherlands. Interspeech focuses on research in the science and technology of spoken language processing.

Below is the schedule of Apple-sponsored workshops and events at Interspeech 2025.
