In the News: Developing AI tools that can enhance cybersecurity
Originally published by Cyberwire on November 1, 2023.

Much discussion of the Executive Order (EO) on artificial intelligence (AI) has focused on the ways in which AI poses potential threats, and on how such threats might be averted before they become realities. But the EO also discusses the ways in which AI can make a positive contribution to security.

Using AI to enhance software and network security.

The White House Fact Sheet on the EO envisions a national program to harness artificial intelligence in the service of cybersecurity. One of the EO’s goals is to “Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.”

Use AI to enhance security, but take care that it doesn’t open new vulnerabilities.

Ashley Leonard, CEO of Syxsense, agrees that this aspect of the EO hasn’t received the attention it merits. “From our perspective as an automated vulnerability and endpoint management software developer, one action that hasn’t been widely covered in the mainstream media is the ‘advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software,’” Leonard wrote. “It will be very interesting to see how this program is implemented and if those tools will be open source and voluntary or proprietary and government-mandated. Over the last 30 years, we’ve seen how code degrades over time. It’s why we have new vulnerabilities and bugs being released every day. It takes real resources – budget, time, and staff – for even the most advanced companies to keep up with vulnerabilities and bug fixes, so it makes sense to see if we could be using AI tools to find and fix these vulnerabilities. The other side of this directive, though, is whether AI can check AI. We are already seeing broad use of generative AI as part of the software development process. GenAI has enabled everyone to become a coder – but if you use AI to code, can you use it to test as well? How do software companies ensure the security of the code that’s being developed?”

One potential risk Leonard sees is the unnoticed growth of shadow AI. “This program has the possibility of growing into a bit of a beast. Just like we saw the massive growth of ‘shadow IT,’ we will absolutely see ‘shadow AI’ in use across organizations. Finance teams are using AI capabilities to generate new models faster and with more accuracy. Sales and marketing teams are already using AI to streamline several processes and tactics across their programs. And neither department needs to buy an AI tool to do so; the capability is being built into the tools they are already using – all they need to do is flip the switch to turn it on. Gartner believes that by 2027, 75% of employees will be acquiring IT outside of traditional IT buying processes, and this convergence of AI capabilities into traditional IT software is going to exponentially increase the adoption of AI. So if there’s a program being created to develop AI tools to find and fix vulnerabilities, how is it going to do that across the massive set of solutions in use across an organization?”

Read Cyberwire’s full coverage of the story, with additional commentary from Nozomi Networks, BlueVoyant, Checkmarx, and more.