Apple is preparing to launch its Private Cloud Compute next week and is offering up to $1 million to security researchers who can find vulnerabilities in the system. The technology giant is specifically looking for exploits that could compromise the security of its private AI cloud servers.
In a post on Apple’s security blog, the company announced that it will pay a maximum bounty of $1 million for exploits that can remotely execute malicious code on Private Cloud Compute servers. Researchers who privately report exploits capable of extracting sensitive user information or customer prompts could receive up to $250,000 in rewards.
Apple is willing to pay up to $150,000 for vulnerabilities that allow access to sensitive user information from a privileged network position. The company added that any security issue with a significant impact will be considered for a reward, even if it does not fall into one of the published categories.
According to Apple, the maximum rewards are reserved for vulnerabilities that compromise user data and inference request data beyond the trust boundary of Private Cloud Compute.
Apple’s bug bounty program is part of its efforts to improve security by rewarding hackers and researchers who report flaws that could compromise customer devices or accounts. As part of its push to strengthen iPhone security, Apple has also built a special iPhone available only to security researchers, designed for probing the device for vulnerabilities.
Additional information on the security of Private Cloud Compute, including source code and documentation, can be found in a blog post by Apple.
Private Cloud Compute is a service that extends Apple Intelligence, the company’s on-device AI, into the cloud, allowing it to handle more complex AI tasks while maintaining user privacy, according to Apple.