
Apple Invites Security Experts to Hack Its AI Servers with Bounties Up to $1 Million

By Hilary Ong

Oct 26, 2024


Apple is offering security researchers up to $1 million to uncover vulnerabilities in its Private Cloud Compute, a new server system that will handle complex generative AI tasks for its customers.

The platform, part of Apple’s broader AI initiative, Apple Intelligence, is set to debut next week. The Private Cloud Compute system is designed to process AI requests that are too complex for the on-device capabilities of an iPhone, iPad, or Mac, all while ensuring user privacy through stringent security measures.
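To make that division of labor concrete, the Swift sketch below shows the kind of routing decision the article describes. It is purely illustrative: the request types, function names, and dispatch logic are all hypothetical, since Apple has not published a public API for Apple Intelligence request routing.

```swift
import Foundation

// Hypothetical request model; Apple has not published a public API
// for Apple Intelligence request routing.
enum AIRequestComplexity {
    case simple    // short rewrites, summaries: fits on-device models
    case complex   // large generative tasks: needs server-scale models
}

struct AIRequest {
    let prompt: String
    let complexity: AIRequestComplexity
}

// The dispatch pattern the article describes: handle what the device can,
// and send only the overflow to the privacy-hardened cloud tier.
func dispatch(_ request: AIRequest) async throws -> String {
    switch request.complexity {
    case .simple:
        return try await runOnDeviceModel(request.prompt)
    case .complex:
        // In Apple's design, this hop is encrypted and the server
        // discards the request once the response is produced.
        return try await sendToPrivateCloudCompute(request.prompt)
    }
}

// Stand-in stubs so the sketch compiles on its own.
func runOnDeviceModel(_ prompt: String) async throws -> String {
    "on-device result for: \(prompt)"
}

func sendToPrivateCloudCompute(_ prompt: String) async throws -> String {
    "cloud result for: \(prompt)"
}
```

The design point this pattern captures is that only requests exceeding on-device capacity ever leave the device, which narrows the attack surface researchers are now being paid to probe.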

In a post on its security blog, Apple outlined the rewards for discovering vulnerabilities in the Private Cloud Compute system:

  • Up to $1 million for exploits that enable remote execution of malicious code on Private Cloud Compute servers.
  • Up to $250,000 for vulnerabilities that could expose sensitive user data or AI request information sent to the cloud.
  • Up to $150,000 for exploits discovered from a privileged network position that allow access to user data.

Apple stated that any significant vulnerability impacting user privacy or security would be considered for a reward, even if it falls outside the published categories.

The Private Cloud Compute system is an extension of Apple Intelligence, designed to support more complex AI operations while prioritizing user privacy. The servers delete user requests as soon as each task is completed, and end-to-end encryption ensures that Apple itself cannot access user information, even though it controls the server hardware. In other words, not even Apple can view the contents of requests made through its AI system.
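That delete-on-completion behavior can be pictured as a stateless request handler. The following Swift sketch is again hypothetical, using stand-in stubs for decryption, inference, and encryption rather than anything from Apple's published Private Cloud Compute source:

```swift
import Foundation

// Illustrative only: models the "process, respond, retain nothing"
// pattern the article attributes to Private Cloud Compute servers.
final class StatelessInferenceHandler {
    func handle(encryptedRequest: Data) -> Data {
        // 1. Decrypt only within the scope of this single request.
        var plaintext = decrypt(encryptedRequest)

        // 2. Run the model and capture the response.
        let response = runInference(on: plaintext)

        // 3. Scrub the request material before returning; nothing is
        //    logged or written to persistent storage.
        plaintext.resetBytes(in: 0..<plaintext.count)

        return encrypt(response)
    }

    private func decrypt(_ data: Data) -> Data { data }        // stub
    private func runInference(on data: Data) -> Data { data }  // stub
    private func encrypt(_ data: Data) -> Data { data }        // stub
}
```

In a pattern like this there is no persistence layer to breach: request data lives only for the duration of the call.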

Apple’s bug bounty program is a continuation of its previous efforts to enhance the security of its products. In recent years, Apple has offered specialized research tools, such as a modified iPhone for security experts, to test and improve the device’s resilience against exploits. The company is now extending this initiative to its AI cloud, inviting the security community to help verify its privacy claims and strengthen its system.

In a significant step toward transparency, Apple is providing access to the source code of Private Cloud Compute’s core components, allowing researchers to analyze its software. The company is also offering a virtual research environment for macOS that simulates the cloud platform, making it easier for security experts to test the system’s defenses. Additionally, Apple has released a comprehensive security guide, offering technical insights into the architecture of its Private Cloud Compute service.

Initially, Apple invited a select group of researchers to vet the system. However, as of Thursday, the company has opened the program to the general public, allowing any security researcher to participate in testing and evaluating Private Cloud Compute. Apple emphasized that vulnerabilities compromising the “fundamental security and privacy guarantees” of the system are of the highest interest.

“We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale, and we look forward to working with the research community to build trust in the system and make it even more secure and private over time,” the company said in its statement.


Featured Image courtesy of Jaap Arriens/NurPhoto via Getty Images

Follow us for more tech news updates.

Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
