APL Photonics (Jan 2020)
Unclonable photonic keys hardened against machine learning attacks
Abstract
The hallmark of the information age is the ease with which information is stored, accessed, and shared throughout the globe. This is enabled, in large part, by the simplicity of duplicating digital information without error. Unfortunately, an ever-growing consequence is the global threat to security and privacy that accompanies our reliance on digital systems. Specifically, modern secure communications and authentication face formidable threats arising from the potential copying of secret keys stored in digital media. With relatively little transfer of information, an attacker can impersonate a legitimate user, publish malicious software that is automatically accepted as safe by millions of computers, or eavesdrop on countless digital exchanges. To address this vulnerability, a new class of cryptographic devices known as physical unclonable functions (PUFs) is being developed. PUFs are modern realizations of an ancient concept, the physical key, and offer an attractive alternative to digital key storage. A user derives a digital key from the PUF’s physical behavior, which is sensitive to physical idiosyncrasies that lie beyond fabrication tolerances. Thus, unlike conventional physical keys, a PUF cannot be duplicated, and only the holder can extract the digital key. However, emerging machine learning (ML) methods are remarkably adept at learning behavior from training data, and if such algorithms can learn to emulate a PUF, its security is compromised. Such attacks have proven highly successful against conventional electronic PUFs. Here, we investigate ML attacks against a nonlinear silicon photonic PUF, a novel design that leverages nonlinear optical interactions in chaotic silicon microcavities. First, we examine these devices’ resistance to cloning during fabrication and demonstrate their use as a source of large volumes of cryptographic key material. Next, we show that silicon photonic PUFs resist state-of-the-art ML attacks owing to their nonlinearity, and we validate this resistance in an encryption scenario.
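As a conceptual aside (not taken from the paper, and not a model of the silicon photonic device), the modeling-attack idea described above can be sketched in a few lines of Python: an attacker collects challenge-response pairs from a device, fits a model to them, and checks whether the model predicts responses to unseen challenges. Everything in the sketch is an illustrative assumption, including the toy_puf stand-in, the 32-bit challenge length, the sample counts, and the choice of a simple linear least-squares model as the "attack".

```python
# Illustrative sketch of an ML modeling attack on a toy challenge-response
# function standing in for a PUF. If the fitted model predicts responses to
# fresh challenges, the key material is effectively cloned.
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 32                 # length of a challenge bit string (arbitrary choice)
N_TRAIN, N_TEST = 4000, 1000

# Hidden "device" parameters the attacker never observes directly.
w_linear = rng.normal(size=N_BITS)
w_nonlin = rng.normal(size=(N_BITS, N_BITS))

def toy_puf(challenges, nonlinear):
    """Return one response bit per challenge row.
    nonlinear=False mimics a linearly separable device (representable by a
    linear model); nonlinear=True adds pairwise bit interactions that a
    linear model cannot capture."""
    x = 2.0 * challenges - 1.0                       # map bits {0,1} -> {-1,+1}
    score = x @ w_linear
    if nonlinear:
        score = score + np.einsum('ni,ij,nj->n', x, w_nonlin, x)
    return (score > 0).astype(float)

def attack_accuracy(nonlinear):
    """Fit a linear least-squares model to observed pairs, test on fresh ones."""
    C_train = rng.integers(0, 2, size=(N_TRAIN, N_BITS)).astype(float)
    C_test = rng.integers(0, 2, size=(N_TEST, N_BITS)).astype(float)
    r_train = toy_puf(C_train, nonlinear)
    r_test = toy_puf(C_test, nonlinear)
    A = np.hstack([C_train, np.ones((N_TRAIN, 1))])  # linear model with bias
    coef, *_ = np.linalg.lstsq(A, r_train, rcond=None)
    pred = (np.hstack([C_test, np.ones((N_TEST, 1))]) @ coef > 0.5).astype(float)
    return (pred == r_test).mean()

print("attack accuracy, linear toy device:   ", attack_accuracy(nonlinear=False))
print("attack accuracy, nonlinear toy device:", attack_accuracy(nonlinear=True))
```

In this toy setting, a linear model can represent the linearly separable device but not its pairwise bit interactions, which loosely mirrors the abstract's point that nonlinearity is what frustrates ML emulation; the attacks considered in the paper itself are substantially more capable than this sketch.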