Hackers use AI to plant hidden flaws in computer chips

Researchers have discovered that commercially available artificial intelligence tools can be effectively used to introduce malicious, hard-to-detect flaws into computer chips. A study from the NYU Tandon School of Engineering demonstrated that large language models can assist both experts and novices in creating so-called “hardware Trojans,” which are hidden modifications to chip designs that can compromise security. This development signals a new potential threat in the weaponization of AI, targeting the fundamental hardware that underpins all computing.

The security of the global semiconductor supply chain has long been a concern, with fears that malicious alterations could be made during the complex design and manufacturing process. Hardware Trojans are particularly insidious because they are embedded in the physical circuitry of a chip, making them difficult to find with traditional software-based security scans. The new research shows that AI significantly lowers the barrier to entry for creating these Trojans, enabling attackers to inject vulnerabilities that could leak sensitive data, grant unauthorized access, or even disable entire systems. This emerging threat complicates an already challenging security landscape, where untrusted foundries or third-party intellectual property could serve as vectors for attack.

New Research Exposes AI’s Role

A research team at NYU Tandon School of Engineering has provided concrete evidence of AI’s capability in facilitating hardware attacks. In a study published in IEEE Security & Privacy, the researchers detailed how commercial generative AI models such as ChatGPT can guide an attacker in identifying and exploiting vulnerabilities in hardware description language (HDL), the code that serves as a blueprint for computer chips. The AI can suggest where to insert malicious logic, write the code for it, and even help automate the process, making sophisticated hardware attacks more accessible than ever before.

The core issue is that these AI tools can analyze complex chip designs and understand their functionality, allowing them to pinpoint the most effective places to hide a Trojan. An attacker no longer needs to possess deep, specialized knowledge of a particular chip’s architecture. Instead, they can interact with the large language model to gain the necessary understanding and generate the malicious code required to implement a backdoor or other vulnerability.
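To make the idea concrete, consider the following Python sketch of how a rare-trigger backdoor behaves. It models the check in software rather than in actual HDL, and the trigger value, address ranges, and function are invented for illustration, not drawn from the NYU study.

```python
# Toy behavioral model of a rare-trigger hardware backdoor.
# Illustrative only: the trigger value, address ranges, and
# access-check logic are invented for this sketch.

MAGIC_TRIGGER = 0xFFFF_DEAD_BEEF_F00D  # hypothetical 64-bit trigger word

def access_allowed(address: int, privileged: bool) -> bool:
    """Model of a memory-access check with a hidden backdoor.

    Intended behavior: only privileged requests may touch the
    protected region. Trojan behavior: a request to the one magic
    address is silently allowed through regardless of privilege.
    """
    PROTECTED_BASE = 0xFFFF_0000_0000_0000

    if address == MAGIC_TRIGGER:      # Trojan: a one-in-2**64 trigger,
        return True                   # effectively invisible to random tests
    if address >= PROTECTED_BASE:     # the intended privilege check
        return privileged
    return True                       # unprotected region

# Random or directed functional tests are astronomically unlikely to
# hit the trigger, which is why such flaws can survive verification.
```

The point of the sketch is the asymmetry it captures: the design behaves correctly on every input an honest test suite is likely to exercise, while the attacker knows the single value that opens the door.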

The AI Hardware Attack Challenge

To validate their concerns, the NYU Tandon researchers established a two-year competition called the AI Hardware Attack Challenge. As part of CSAW, a major student-run cybersecurity event, participants were tasked with using generative AI to insert exploitable flaws into open-source hardware designs, including modern RISC-V processors and cryptographic accelerators. The goal was to test whether AI-assisted attacks could be carried out in practice and how effective they would be. The results were alarming, confirming that AI not only simplifies the process but can also lead to the creation of highly effective and stealthy hardware Trojans.

Success of Inexperienced Attackers

One of the most striking findings from the competition was the success of participants who had very little prior experience in hardware design. According to the researchers, two of the successful submissions came from undergraduate teams with minimal background in chip security. Despite their lack of expertise, they were able to produce vulnerabilities that were rated as medium to high severity using standard industry scoring systems. This demonstrates that AI acts as a force multiplier, enabling less-skilled actors to execute attacks that would have previously required a team of specialists.

Circumventing Built-in Safeguards

Many large language models have safety protocols in place to prevent them from being used for malicious purposes. However, the competition participants found that these safeguards were often easy to bypass. One of the winning teams successfully induced an AI model to generate a working hardware Trojan by carefully crafting prompts that framed the malicious request as a purely academic or hypothetical scenario. This highlights a significant loophole in the safety measures designed for commercial AI systems, as they can be manipulated through clever social engineering of the AI itself.

Mechanisms of AI-Generated Flaws

The types of hardware Trojans created during the NYU study varied in their function but were all designed to be damaging. Some of the AI-generated flaws included hidden backdoors that would grant an attacker unauthorized access to a computer’s memory, allowing them to steal or alter information. Other Trojans were designed to leak sensitive cryptographic keys, undermining the security of encrypted communications. A third category of flaws involved logic bombs, where the chip would appear to function normally until a specific, predetermined condition was met, at which point the Trojan would activate to crash the system. In some cases, teams were able to fully automate the process, creating tools that could analyze hardware code, identify a weak spot, and insert a custom Trojan with minimal human intervention.
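The “analyze the code and identify a weak spot” step can be illustrated, in deliberately simplified form, by the Python sketch below, which flags wide equality comparisons against literal constants in Verilog source, a telltale signature of rare-trigger logic. The same pattern analysis cuts both ways: an insertion tool uses it to find candidate sites, while an auditor uses it to find suspicious ones. The regular expression, sample snippet, and width threshold are invented for this sketch; the competition entries were considerably more sophisticated.

```python
import re

# Toy heuristic: flag wide equality comparisons against literal
# constants in Verilog source. A sketch only; real AI-assisted
# analyses in the study went far beyond pattern matching.

SUSPICIOUS = re.compile(
    r"==\s*(\d+)'[bodh][0-9a-fA-F_]+"   # e.g. == 64'hDEADBEEF_CAFEF00D
)

def flag_magic_compares(verilog_text: str, min_width: int = 32):
    """Yield (line_no, line) for comparisons against wide literals."""
    for no, line in enumerate(verilog_text.splitlines(), start=1):
        m = SUSPICIOUS.search(line)
        if m and int(m.group(1)) >= min_width:
            yield no, line.strip()

sample = """
always @(posedge clk) begin
  if (addr == 64'hDEADBEEF_CAFEF00D)  // rare trigger
    unlock <= 1'b1;
end
"""
for no, line in flag_magic_compares(sample):
    print(f"line {no}: {line}")
```

A 64-bit exact-match comparison is exactly the kind of construct a logic bomb or backdoor hides behind: wide enough that no realistic test reaches it by chance, yet only a few lines of HDL.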

Global Supply Chain Implications

The findings have serious implications for the global semiconductor supply chain, which is notoriously complex and fragmented. A single chip can be designed by one company, use third-party intellectual property from another, and be manufactured in a foundry in a different country. This distribution of labor creates multiple opportunities for a malicious actor to insert a hardware Trojan. The threat could come from a rogue employee at a design firm, an untrusted foundry, or even a government agency pressuring a company to insert a backdoor into products destined for export. The increasing reliance on third-party IP cores further expands the attack surface, as these pre-designed blocks of circuitry could potentially harbor hidden Trojans.

An Emerging Defense Front

Even as AI is being explored as a tool for offense, other researchers are harnessing it for defense. A separate line of research from institutions like Ruhr University Bochum and the Max Planck Institute for Security and Privacy is focused on using AI to detect hardware Trojans. Their approach involves developing algorithms that can analyze high-resolution photographs of microchips and compare them against the original design files to spot physical alterations. This method has shown promise, successfully detecting the majority of modifications in test chips built on various process nodes, from 40nm to 90nm. However, the technique is not yet perfect and can produce a significant number of false positives. This sets the stage for a technological arms race, with AI being developed both to create and to detect these deeply embedded hardware threats, signaling a new and challenging chapter in the field of cybersecurity.
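A rough sketch of the comparison step might look like the Python fragment below, which tiles two pre-aligned grayscale images, a rendering of the golden layout and a micrograph of the fabricated chip, and flags tiles that differ too much. Everything here, from the function name to the threshold, is illustrative; a real pipeline must additionally handle image registration, lens distortion, process variation, and far larger images.

```python
import numpy as np

# Minimal sketch of image-based Trojan screening, loosely inspired by
# the golden-layout-vs-photograph comparison described above. Assumes
# two pre-aligned grayscale images of identical shape; names, tile
# size, and threshold are invented for illustration.

def flag_altered_tiles(golden: np.ndarray, photo: np.ndarray,
                       tile: int = 32, threshold: float = 0.15):
    """Return (row, col) coordinates of tiles whose mean absolute
    difference from the reference layout exceeds the threshold."""
    g = golden.astype(np.float64) / 255.0
    p = photo.astype(np.float64) / 255.0
    hits = []
    for r in range(0, g.shape[0] - tile + 1, tile):
        for c in range(0, g.shape[1] - tile + 1, tile):
            diff = np.abs(g[r:r+tile, c:c+tile] - p[r:r+tile, c:c+tile])
            if diff.mean() > threshold:
                hits.append((r // tile, c // tile))
    return hits

# A lower threshold catches more real modifications but, as the
# researchers found, also raises more false alarms from ordinary
# manufacturing variation.
```

The threshold embodies the trade-off the researchers reported: tuned aggressively, the comparison catches most physical modifications but also flags benign manufacturing variation, which is why false positives remain the technique’s main weakness.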
