SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding

We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of video large language models (LLMs) offering a token-efficient solution for long-form video understanding. We incorporate the two-stream SlowFast mechanism into a streamlined training pipeline, and perform joint video-image training on a carefully curated data mixture of only publicly available datasets. Our primary focus is on highly efficient model scales (1B and 3B), demonstrating that even relatively small Video LLMs can achieve state-of-the-art performance on video understanding, meeting the demand for…
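At a high level, the two-stream SlowFast idea is a token-budgeting scheme: a Slow pathway keeps a few frames at full spatial token resolution, while a Fast pathway keeps all frames but pools their tokens aggressively, and the LLM consumes the concatenation of both streams. A minimal NumPy sketch of that budgeting is below; the frame counts, grid size, and pooling factor are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def slowfast_tokens(frame_feats, slow_every=8, fast_pool=4):
    """Split per-frame visual tokens into a Slow and a Fast stream.

    frame_feats: array of shape (T, H, W, C) -- per-frame token grids
    slow_every:  keep every `slow_every`-th frame at full resolution (Slow)
    fast_pool:   spatial pooling factor applied to all frames (Fast)
    """
    T, H, W, C = frame_feats.shape

    # Slow pathway: few frames, full H x W token grid preserved.
    slow = frame_feats[::slow_every].reshape(-1, C)

    # Fast pathway: all frames, tokens average-pooled by fast_pool x fast_pool.
    Hp, Wp = H // fast_pool, W // fast_pool
    fast = frame_feats[:, :Hp * fast_pool, :Wp * fast_pool, :]
    fast = fast.reshape(T, Hp, fast_pool, Wp, fast_pool, C).mean(axis=(2, 4))
    fast = fast.reshape(-1, C)

    # The LLM sees the concatenation of both token streams.
    return np.concatenate([slow, fast], axis=0)

# Example: 32 frames, a 24x24 token grid, 1024-dim features.
video = np.random.randn(32, 24, 24, 1024).astype(np.float32)
tokens = slowfast_tokens(video)
print(tokens.shape)  # far fewer tokens than the 32 * 24 * 24 of the raw grids
```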

Checklists Are Better Than Reward Models For Aligning Language Models

Language models must be adapted to understand and follow user instructions. Reinforcement learning is widely used to facilitate this — typically using fixed criteria such as “helpfulness” and “harmfulness”. In our work, we instead propose using flexible, instruction-specific criteria as a means of broadening the impact that reinforcement learning can have in eliciting instruction following. We propose “Reinforcement Learning from Checklist Feedback” (RLCF). From instructions, we extract checklists and evaluate how well responses satisfy each item – using both AI judges and specialized…
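Concretely, the checklist turns reward computation into an average of per-item judgments. The sketch below shows only that aggregation step; `judge_item` is a hypothetical stand-in for the AI judges and verifier programs mentioned in the abstract, and the toy keyword judge at the end is purely illustrative.

```python
from typing import Callable, List

def checklist_reward(
    response: str,
    checklist: List[str],
    judge_item: Callable[[str, str], float],
) -> float:
    """Score a response as the mean satisfaction of checklist items.

    judge_item(response, item) -> score in [0, 1]; in the paper this role is
    played by AI judges and specialized verifiers, here it is a placeholder.
    """
    if not checklist:
        return 0.0
    scores = [judge_item(response, item) for item in checklist]
    return sum(scores) / len(scores)

# Toy usage with a trivial keyword "judge" (purely illustrative).
checklist = [
    "mentions the deadline",
    "uses a polite tone",
]
toy_judge = lambda resp, item: float(item.split()[-1] in resp.lower())
print(checklist_reward("please note the friday deadline", checklist, toy_judge))
```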

The “Super Weight:” How Even a Single Parameter can Determine a Large Language Model’s Behavior

A recent paper from Apple researchers, “The Super Weight in Large Language Models,” reveals that an extremely small subset of parameters in LLMs (in some cases, a single parameter) can exert a disproportionate influence on an LLM’s overall functionality (see Figure 1). This work highlights the critical role of these “super weights” and their corresponding “super activations,” offering a new insight into LLM architecture and avenues for efficient model compression. The paper provides full technical details and experimental results; in this post, we provide a high-level overview of the key…
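A simple way to appreciate the claim is to ablate a single weight and measure how much a quality metric degrades. The toy sketch below plants one outsized weight in a random linear map and compares zeroing it against zeroing an ordinary weight; locating real super weights inside an LLM is the paper's contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single weight matrix applied to random inputs.
W = rng.normal(size=(64, 64))
W[3, 7] = 50.0                       # plant one disproportionately large weight
X = rng.normal(size=(1000, 64))
y = X @ W.T                          # reference outputs of the intact model

def quality(weights):
    """Mean squared error against the intact model's outputs (lower is better)."""
    return float(np.mean((X @ weights.T - y) ** 2))

baseline = quality(W)

# Ablate one ordinary weight vs. the planted "super" weight.
W_ordinary = W.copy(); W_ordinary[10, 10] = 0.0
W_super = W.copy(); W_super[3, 7] = 0.0

print("baseline:               ", baseline)
print("zero an ordinary weight:", quality(W_ordinary))
print("zero the large weight:  ", quality(W_super))
```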

Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution

Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional…
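The measurement idea can be sketched as follows: hold a coreference template fixed, vary only the demographic attributes of the person referred to, record the model's confidence in the correct antecedent, and compare group means across intersections of attributes. The code below is a hypothetical illustration of that disparity computation; the group labels and confidence values are made up.

```python
from statistics import mean

def disparity(confidences: dict) -> float:
    """Gap between the best- and worst-served intersectional group."""
    group_means = {group: mean(vals) for group, vals in confidences.items()}
    return max(group_means.values()) - min(group_means.values())

# Hypothetical per-example confidences in the correct coreference answer,
# keyed by an intersection of two demographic axes.
confidences = {
    ("female", "older"): [0.81, 0.78, 0.80],
    ("female", "younger"): [0.90, 0.88, 0.91],
    ("male", "older"): [0.92, 0.90, 0.89],
    ("male", "younger"): [0.93, 0.94, 0.92],
}
print("intersectional confidence disparity:", round(disparity(confidences), 3))
```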

Rethinking Non-Negative Matrix Factorization with Implicit Neural Representations

This paper was accepted at the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) 2025
Non-negative Matrix Factorization (NMF) is a powerful technique for analyzing regularly-sampled data, i.e., data that can be stored in a matrix. For audio, this has led to numerous applications using time-frequency (TF) representations like the Short-Time Fourier Transform. However, extending these applications to irregularly-spaced TF representations, like the Constant-Q transform, wavelets, or sinusoidal analysis models, has not been possible since these representations…
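For context, classical NMF approximates a non-negative matrix V, such as a magnitude spectrogram, as the product W @ H of non-negative factors, typically fit with multiplicative updates. The sketch below shows that regular-grid baseline; the paper's move of replacing the factors with implicit neural representations, so that irregularly spaced TF points can be handled, is not reproduced here.

```python
import numpy as np

def nmf(V, rank=8, iters=200, eps=1e-9):
    """Classical NMF via Lee-Seung multiplicative updates: V ~= W @ H."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # spectral templates
    H = rng.random((rank, T)) + eps   # time-varying activations
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative "spectrogram" on a regular grid.
V = np.abs(np.random.default_rng(1).normal(size=(128, 200)))
W, H = nmf(V)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```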

Optimal Corpus Aware Training for Neural Machine Translation

Corpus Aware Training (CAT) leverages valuable corpus metadata during training by injecting corpus information into each training example, and has been found effective in the literature, where it is commonly known as the “tagging” approach. Models trained with CAT inherently learn the quality, domain, and nuance of each corpus directly from data, and can easily switch between different inference behaviors. To achieve the best evaluation results, CAT models pre-define a group of high-quality data before training starts, which can be error-prone and inefficient. In this work, we propose Optimal Corpus Aware Training…
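The “tagging” flavor of CAT is easy to illustrate: prepend a corpus-identifying tag to every source sentence during training so the model learns to condition on it, then choose the desired tag at inference time. The sketch below shows that data preparation; the tag strings and toy corpora are assumptions for illustration.

```python
# Hypothetical parallel corpora with metadata; tags are illustrative.
corpora = {
    "<high_quality>": [("guten morgen", "good morning")],
    "<web_crawl>": [("wie gehts", "how are you")],
}

def tag_examples(corpora):
    """Inject the corpus tag into every source sentence (the "tagging" form of CAT)."""
    examples = []
    for tag, pairs in corpora.items():
        for src, tgt in pairs:
            examples.append((f"{tag} {src}", tgt))
    return examples

for src, tgt in tag_examples(corpora):
    print(src, "->", tgt)

# At inference time, prefixing the input with "<high_quality>" steers the
# model toward the behavior it learned from that corpus.
```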

UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback

Large language models (LLMs) struggle to consistently generate UI code that compiles and produces visually relevant designs. Existing approaches to improve generation rely on expensive human feedback or on distilling a proprietary model. In this paper, we explore the use of automated feedback (compilers and multi-modal models) to guide LLMs to generate high-quality UI code. Our method starts with an existing LLM and iteratively produces improved models by self-generating a large synthetic dataset using the original model, applying automated tools to aggressively filter, score, and de-duplicate the…
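The self-improvement loop described above (generate candidates, keep only those that pass automated checks, fine-tune on the survivors) can be sketched as control flow. In the code below, `generate`, `compiles`, `visual_score`, and `finetune` are hypothetical placeholders for the model and tooling calls; only the loop structure is meant to reflect the abstract.

```python
def improve_model(model, prompts, rounds=3, score_threshold=0.7,
                  generate=None, compiles=None, visual_score=None, finetune=None):
    """Iteratively refine a UI-code model with automated feedback.

    All four callables are placeholders: generation, a compiler check,
    a multi-modal relevance score, and a fine-tuning step.
    """
    for _ in range(rounds):
        # Self-generate candidate UI code for every prompt.
        candidates = [(p, generate(model, p)) for p in prompts]

        # Automated filtering: keep code that compiles and looks relevant.
        kept = [
            (p, code) for p, code in candidates
            if compiles(code) and visual_score(p, code) >= score_threshold
        ]

        # Scoring and de-duplication of the synthetic dataset would also
        # happen here (omitted in this sketch).
        model = finetune(model, kept)
    return model
```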

Pitch Accent Detection Improves Pretrained Automatic Speech Recognition

We show that the performance of Automatic Speech Recognition (ASR) systems that use semi-supervised speech representations can be boosted by a complementary pitch accent detection module, introduced through a joint ASR and pitch accent detection model. The pitch accent detection component of our model achieves a significant improvement on the state of the art for the task, closing the gap in F1-score by 41%. Additionally, joint training decreases the ASR word error rate (WER) by 28.3% on LibriSpeech under limited-resource fine-tuning. With these results, we show the importance of extending pretrained…
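Joint training here amounts to placing a pitch-accent classification head next to the ASR head on a shared speech encoder and optimizing a weighted sum of the two losses. Below is a minimal PyTorch-style sketch of that loss combination; the architecture, frame-level targets, and 0.3 weighting are illustrative assumptions, and a real ASR head would use CTC or a sequence-to-sequence loss rather than per-frame cross-entropy.

```python
import torch
import torch.nn as nn

class JointASRPitchAccent(nn.Module):
    """Shared encoder with an ASR head and a pitch-accent head."""

    def __init__(self, feat_dim=80, hidden=256, vocab=32, accent_classes=2):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.asr_head = nn.Linear(hidden, vocab)               # per-frame token logits
        self.accent_head = nn.Linear(hidden, accent_classes)   # per-frame accent logits

    def forward(self, feats):
        enc, _ = self.encoder(feats)
        return self.asr_head(enc), self.accent_head(enc)

model = JointASRPitchAccent()
feats = torch.randn(4, 100, 80)            # (batch, frames, features)
asr_logits, accent_logits = model(feats)

# Toy frame-level targets for both tasks.
asr_targets = torch.randint(0, 32, (4, 100))
accent_targets = torch.randint(0, 2, (4, 100))

ce = nn.CrossEntropyLoss()
loss_asr = ce(asr_logits.reshape(-1, 32), asr_targets.reshape(-1))
loss_accent = ce(accent_logits.reshape(-1, 2), accent_targets.reshape(-1))

# Joint objective: ASR loss plus a weighted pitch-accent auxiliary loss.
loss = loss_asr + 0.3 * loss_accent
loss.backward()
print(float(loss))
```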

Apple Workshop on Privacy-Preserving Machine Learning 2025

Apple believes that privacy is a fundamental human right. As AI experiences become increasingly personal and a part of people’s daily lives, it’s important that novel privacy-preserving techniques are created in parallel to advancing AI capabilities.
Apple’s fundamental research has consistently pushed the state-of-the-art in using differential privacy with machine learning, and earlier this year, we hosted the Workshop on Privacy-Preserving Machine Learning (PPML). This two-day hybrid event brought together Apple and members of the broader research community to discuss the state of the art in…

Eliciting In-context Retrieval and Reasoning for Long-Context Language Models

Recent advancements in long-context language models (LCLMs) have the potential to transform Retrieval-Augmented Generation (RAG) by simplifying pipelines. With their extended context windows, LCLMs can process entire knowledge bases and directly handle retrieval and reasoning. This capability is defined as In-Context Retrieval and Reasoning (ICR2). However, existing benchmarks like LOFT often overestimate LCLM performance because they lack sufficiently challenging contexts. To address this, we introduce ICR2, a benchmark designed for more realistic evaluation and training of LCLMs. This…
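The ICR2 setting can be made concrete with a prompt that places an entire (small) knowledge base in the context window and asks the model to first identify the supporting passages and then reason to an answer. The template below is a hypothetical illustration of that retrieve-then-reason format, not the benchmark's actual prompt.

```python
def build_icr2_style_prompt(passages, question):
    """Place the full knowledge base in context and ask for retrieve-then-reason."""
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    return (
        "You are given a knowledge base of passages.\n"
        f"{context}\n\n"
        f"Question: {question}\n"
        "First list the indices of the passages needed to answer, "
        "then reason over them and give the final answer."
    )

passages = [
    "The Eiffel Tower is in Paris.",
    "Paris is the capital of France.",
]
print(build_icr2_style_prompt(passages, "In which country is the Eiffel Tower?"))
```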