In the News: Why 92% of Security Professionals Worry About Generative AI
Originally published on Techopedia on April 3, 2024.


Password manager 1Password today released a report showing that nine out of ten (92%) security professionals have concerns about generative AI.

Concerns cited in the research include employees entering sensitive data into AI tools (48%), using AI systems trained on incorrect or malicious data (44%), and falling for AI-enhanced phishing attempts (42%).

The study surveyed 1,500 North American workers, including 500 IT security professionals, to evaluate the state of enterprise security.

***

Remote Working and the Door to Shadow AI

Remote working may have given employees the freedom to work where they’re most productive or comfortable, but it has also introduced some serious security complications.

At a glance, organizations have no way of knowing whether remote employees are following cybersecurity best practices or acting negligently. Simple actions such as visiting restricted sites, using an unauthorized personal device, or failing to update software can introduce vulnerabilities the organization doesn't know exist.

Ashley Leonard, CEO of Syxsense, told Techopedia:

“Companies should be considering the increased risk of employees using genAI and entering sensitive data into non-approved systems.

“One way to approach this is to encourage the use of genAI but within specific boundaries and tools. We’ve seen employees leverage non-approved IT tools to get their jobs done, and companies still battle with Shadow IT today. By enabling the use of this new technology – within limits – you can reduce the risk.”

Read the full article on Techopedia.