16 Cybersecurity leaders predict how gen AI will improve cybersecurity in 2024



With greater AI power comes greater complexity, especially for CISOs adopting generative AI. Gen AI is the jolt cybersecurity vendors need to avoid losing the AI arms race. Meanwhile, adversaries’ evolving tradecraft and new ways of weaponizing AI alongside social engineering have humbled many of the world’s leading companies this year.

VentureBeat sat down (virtually) with 16 cybersecurity leaders from 13 companies to gain insights into their predictions for 2024. Leaders told VentureBeat that building strong collaboration between AI and cybersecurity professionals is essential.

AI needs human insight to reach its full potential against cyberattacks. MITRE MDR stress tests have provided quantified proof of that point. The combination of human insight and intelligence with AI identifies and crushes breaches before they grow, as Michael Sherwood, chief innovation and technology officer for the city of Las Vegas, told VentureBeat in a recent interview. 

Cybersecurity leaders predict gen AI’s impact on cybersecurity 


Peter Silva, Ericom, the cybersecurity unit of Cradlepoint. “Gen AI could improve security through its ability to pick up patterns (like attack patterns, an emerging CVE, or certain behaviors that indicate an attempted breach, or even predicting that the L3 DDoS attack is a distraction for the credential stuffing they are missing). I also think that AI will make it more difficult, too. Detectors can’t tell the difference between a human-generated and an AI-generated phishing attack, so they’ll get much better,” Silva said.
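Silva’s example of a volumetric attack used as cover is the kind of correlation AI-assisted tooling could surface. Here is a minimal sketch, assuming invented event shapes and thresholds (none from Ericom or any product), that flags time windows where an L3 flood coincides with a spike in failed logins:

```python
from collections import Counter

def correlated_attack_windows(events, window_s=300,
                              flood_pps=1_000_000, failed_logins=500):
    """Flag windows where an L3 flood overlaps a failed-login spike.

    events: iterable of dicts like
      {"ts": datetime, "kind": "l3_flood" | "failed_login", "pps": int}
    Field names and thresholds are hypothetical.
    """
    login_counts = Counter()
    flood_windows = set()
    for e in events:
        bucket = int(e["ts"].timestamp()) // window_s  # 5-minute bucket
        if e["kind"] == "l3_flood" and e.get("pps", 0) >= flood_pps:
            flood_windows.add(bucket)
        elif e["kind"] == "failed_login":
            login_counts[bucket] += 1
    # A flood window with an abnormal failed-login count suggests the
    # DDoS is a distraction for credential stuffing.
    return sorted(b for b in flood_windows if login_counts[b] >= failed_logins)
```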

Elia Zaitsev, CTO, CrowdStrike. Zaitsev said that “in 2024, CrowdStrike expects that threat actors will shift their attention to AI systems as the newest threat vector to target organizations through vulnerabilities in sanctioned AI deployments and blind spots from employees’ unsanctioned use of AI tools.”

Zaitsev said that security teams are still in the early stages of understanding threat models around their AI deployments and tracking unsanctioned AI tools that have been introduced to their environments by employees. “These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data,” Zaitsev said. Employees using new AI tools without oversight from their security team will force companies to grapple with new data protection risks.

“Corporate data that is inputted into AI tools isn’t just at risk of threat actors targeting vulnerabilities in these tools to extract data; the data is also at risk of being leaked or shared with unauthorized parties as part of the system’s training protocol,” Zaitsev said.
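One pragmatic control that follows from Zaitsev’s warning is screening what employees send to external AI tools in the first place. A minimal sketch, assuming illustrative regex patterns and a redaction policy rather than any CrowdStrike capability:

```python
import re

# Illustrative patterns only; a real DLP engine would use a broader,
# validated rule set.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str):
    """Redact sensitive matches before the prompt leaves the company."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

safe, hits = redact_prompt("Summarize the contract for SSN 123-45-6789.")
if hits:
    print(f"redacted {hits}: {safe}")  # feed to the audit log, then the LLM
```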

“2024 will be the year when organizations will need to look internally to understand where AI has already been introduced into their organizations (through official and unofficial channels), assess their risk posture, and be strategic in creating guidelines to ensure secure and auditable usage that minimizes company risk and spend but maximizes value,” predicts Zaitsev.

Rob Gurzeev, CEO, CyCognito. “Gen AI will be a net positive for security, but with a large caveat: It could make security teams dangerously complacent. I fear that an overreliance on AI could lead to a lack of supervision in an organization’s security operations, which could easily create gaps in the attack surface,” Gurzeev said. He warned against the assumption that once AI becomes smart enough, it requires less human insight, calling it a “slippery slope.”

Howard Ting, CEO, Cyberhaven. “Cyberhaven pulled data earlier this year that revealed that 4.7% of employees had pasted confidential data into ChatGPT, and 11% of that data was sensitive in nature. But I do think eventually the tables will turn. As LLMs/gen AI mature, security teams will be able to use them to accelerate defenses,” Ting said.

John Morello, Co-founder and CTO, Gutsy. “Gen AI has great potential to help security teams navigate the overwhelming amount of event data they currently struggle with. Legacy approaches, data lakes and basic SIEMs that simply collect data but do little to make it approachable, can be transformed into something far more usable by a conversational interface.”
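A conversational layer over event data could work by having a model translate a question into a structured filter the log store already understands. In this sketch, `ask_llm` is a hypothetical stub standing in for a real model call:

```python
import json

def ask_llm(question: str) -> str:
    # Hypothetical stand-in: a real implementation would call a model API
    # constrained to emit JSON like {"field": ..., "op": ..., "value": ...}.
    return json.dumps({"field": "event_type", "op": "eq",
                       "value": "failed_login"})

def query_events(events: list, question: str) -> list:
    """Translate a natural-language question into a structured filter."""
    spec = json.loads(ask_llm(question))
    if spec["op"] == "eq":
        return [e for e in events if e.get(spec["field"]) == spec["value"]]
    return events  # unrecognized op: fall back to everything

events = [
    {"event_type": "failed_login", "user": "alice"},
    {"event_type": "dns_query", "user": "bob"},
]
print(query_events(events, "show me failed logins this week"))
```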

Jason Urso, CTO, Honeywell Connected Enterprise. “Critical infrastructure has always been a prime target for malicious actors. Prior successful attacks involved substantial complexity beyond the capability of an average hacker.  However, gen AI lowers the bar by enabling less experienced malicious actors to generate malware, initiate sophisticated phishing attacks to gain access to systems, and perform automated penetration testing,” said Urso. 

Urso sees the threatscape evolving into AI defending against AI.

“Hence, my prediction is that gen AI will be used as a method for closed-loop OT defense – dynamically altering security configurations and firewall rules based on changes in the threat landscape and performing automated penetration testing to highlight changes in risk,” said Urso. 
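In code, the closed loop Urso describes reduces to a feedback cycle: read the threat landscape, derive a rule set, push it. The sketch below assumes a made-up feed format and rule strings; a real system would integrate with firewall and threat-intel APIs:

```python
def fetch_threat_level(feed: dict) -> str:
    # Stand-in for a real threat-intelligence feed lookup.
    return feed.get("level", "low")

def rules_for(level: str) -> list:
    """Derive a firewall rule set from the current threat level."""
    rules = ["allow established", "deny inbound by default"]
    if level in ("elevated", "critical"):
        rules.append("rate-limit inbound SYN to 100/s")
    if level == "critical":
        rules.append("deny all non-allowlisted OT protocol traffic")
    return rules

def apply_rules(rules: list) -> None:
    for rule in rules:  # a real system would push these via the firewall API
        print(f"applying: {rule}")

# One iteration of the loop: sense, decide, act.
apply_rules(rules_for(fetch_threat_level({"level": "critical"})))
```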

Srinivas Mukkamala, Chief Product Officer, Ivanti.  “2024 will spark more anxiety among workers about the impact of AI on their careers. For example, our recent research found that nearly two out of three IT workers are concerned that gen AI will take their jobs in the next five years. Business leaders need to be clear and transparent with workers on how they plan to implement AI so that they retain talented employees – because reliable AI requires human oversight,” said Mukkamala. 

Mukkamala also warned that AI will create more sophisticated social engineering attacks. “In 2024, the rising availability of AI tools will make social-engineering attacks even easier to fall for. As companies have gotten better at detecting traditional phishing emails, malicious hackers have turned to new techniques to make their lures more believable. Additionally, the misinformation created by these AI tools by threat actors and those with nefarious intentions will be a challenge and real threat for organizations, governments, and people as a whole,” Mukkamala said.

Merritt Baer, Field CISO, Lacework. “Don’t worry, the robots aren’t taking over. But I do anticipate the nature of work will change. We’ve seen folks automating repetitive tasks, but what if we can go further?” Baer said. “What if your gen AI agent can not only prompt you to write an automation (‘This is a problem/request you’ve seen X times this week; do you want to automate it?’) but also suggest the code it would take to script that remediation or to patch that asset? I anticipate that jobs will reflect what the godmother of computer programming, Ada Lovelace, foresaw: humans are essential for creative and innovative thinking; computers are good at reliable processing, deriving patterns from large datasets, and enforcing actions with mathematical accuracy.”
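Baer’s agent idea can be grounded in something as simple as frequency counting over a week’s requests. The threshold and ticket strings below are assumptions, and a fuller agent would also draft the remediation script with an LLM:

```python
from collections import Counter

def suggest_automations(tickets, threshold=3):
    """Offer to automate any request seen `threshold`+ times this week."""
    counts = Counter(tickets)
    return [
        f"You've handled '{task}' {n} times this week; automate it?"
        for task, n in counts.items() if n >= threshold
    ]

# Hypothetical week of repetitive requests (CVE ID is illustrative).
week = ["reset MFA token", "reset MFA token", "patch CVE-2023-1234",
        "reset MFA token", "patch CVE-2023-1234"]
for suggestion in suggest_automations(week):
    print(suggestion)
```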

Ankur Shah, SVP of Prisma Cloud at Palo Alto Networks. “Security teams today cannot keep up with the pace of application development, which leads to countless security risks reaching production environments. This pace isn’t slowing down as AI is expected to grow application development 10X, with developers taking advantage of the technology to write and ship new code faster than ever.  To level the playing field for security teams to keep pace, organizations will turn to AI. That said, AI is primarily a data problem, and if you don’t have robust security data to train AI, then your ability to stop risks is squandered,” predicts Shah. 

Matt Kraning, CTO of Cortex, Palo Alto Networks. “Right now, security analysts have to be this kind of unicorn, able to understand not only how the attackers might get in but also how to set up complex automation and queries that are highly performant over high volumes of data. Now gen AI will make it possible to interact with data more easily,” Kraning said.  

Christophe Van de Weyer, CEO, Telesign. “Fraudsters are using gen AI to scale up their attacks. As a result, 2023 was a record year for phishing messages, which trick people into sharing their credentials. Gen AI is used by criminals to write the messages in the victim’s language and in the style of a message from a bank, for example. That’s why, in 2024, I believe the ability of consumers to easily distinguish legitimate from fraudulent emails and texts will nearly be erased. This will accelerate the actions that businesses are taking to bolster defenses. An increased focus on account integrity will be key. Remember that phishing and other attacks are often used to take over accounts and execute more significant thefts. Companies should use AI to risk-score logins and transactions based on an ongoing analysis of fraud signals. And cybersecurity firms should expand the range of fraud signals that ML can learn from, to inform protection measures,” said Van de Weyer.
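Risk-scoring logins of the kind Van de Weyer recommends is, at its simplest, a weighted combination of fraud signals. The signals and weights below are invented for illustration; a production system would learn them from labeled fraud data:

```python
def login_risk_score(signals: dict) -> float:
    """Combine fraud signals into a 0..1 risk score (weights are invented)."""
    weights = {
        "new_device": 0.30,
        "impossible_travel": 0.40,
        "ip_on_stuffing_list": 0.25,
        "off_hours": 0.05,
    }
    return min(sum(w for k, w in weights.items() if signals.get(k)), 1.0)

attempt = {"new_device": True, "impossible_travel": True}
score = login_risk_score(attempt)
action = "step-up authentication" if score >= 0.5 else "allow"
print(f"risk={score:.2f} -> {action}")
```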

Rob Robinson, Head of Telstra Purple EMEA. “The number of data points security professionals now have responsibility for monitoring and managing is eye-wateringly high. And with the proliferation of the cloud and intelligent edge deployments, this will only increase in the coming years. Whilst trying to avoid a lot of the guff around AI, the technology is ideally suited to solving some of the security industry’s most difficult problems around threat detection, triage, and response. As a result, in 2024, we’ll see AI transform the necessary skills required of CISOs once again,” Robinson said.

Vineet Arora, CTO, WinWire. Arora predicts, “Gen AI will significantly augment human capabilities in cybersecurity. I foresee gen AI enabling a lot more automation in currently human-managed security workflows in threat intelligence, security hardening, penetration testing, and detection engineering. Many mundane tasks like log analysis, incident response, and security patching can be automated by gen AI, freeing up valuable time for security analysts to focus on more complex cybersecurity problems. At the same time, malicious human actors will leverage gen AI to create highly realistic scenarios for social engineering attacks, impersonated software as malware, and sophisticated phishing campaigns.”

Claudionor Coelho, Chief AI Officer, and Sanjay Kalra, VP, Product Management, Zscaler. “Gen AI will have a substantial and far-reaching impact on compliance in the coming year. Historically, compliance has been a time-consuming endeavor encompassing the development of regulations, the implementation of constraints, the procurement of proof, and responding to customer questions. This has primarily been focused on text and procedures, which will now be automated,” Coelho and Kalra said. 

Clint Dixon, CIO of a large global logistics organization. “This is how cybersecurity will work; it will be an AI world. Because it’s moving so fast, and the amounts of data and the models are too complex and too big, you can’t expect teams of individuals to read and interpret it all and take action from it. So that’s what’s going to drive cybersecurity going forward,” said Dixon.
