Apple has published select video recordings from its 2024 Workshop on Human-Centered Machine Learning (HCML), showcasing its research and collaboration with academic experts on responsible AI development. The talks, which are now available on the company’s Machine Learning Research blog, were originally presented at an internal event held in August 2024.
The published sessions cover a range of topics focused on the human-centric aspects of AI, including model interpretability, on-device machine learning, and accessibility. Specific talks highlight the use of AI to create better user interfaces, develop speech technology for people with disabilities, and build AI-powered augmented reality accessibility tools.
By releasing these videos, Apple is reinforcing its public commitment to a responsible approach to artificial intelligence. The company’s work in this area is guided by a set of core principles:
- Empower users with intelligent tools: We identify areas where AI can be used responsibly to create tools for addressing specific user needs. We respect how our users choose to use these tools to accomplish their goals.
- Represent our users: We build deeply personal products with the goal of representing users around the globe authentically. We work continuously to avoid perpetuating stereotypes and systemic biases across our AI tools and models.
- Design with care: We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback.
- Protect privacy: We protect our users’ privacy with powerful on-device processing and groundbreaking infrastructure like Private Cloud Compute. We do not use our users’ private personal data or user interactions when training our foundation models.
That final principle anchors Apple's broader privacy strategy: performing AI tasks on-device whenever possible, and excluding customers' private personal data and interactions from the training of its large-scale foundation models.