Congressional hearings focus on artificial intelligence and machine learning challenges in cybersecurity

A congressional hearing on artificial intelligence and machine learning in cyberspace quietly took place before the U.S. Senate Armed Services Committee's Subcommittee on Cyber in early May 2022. The subcommittee discussed the issue with representatives from Google, Microsoft, and Georgetown University's Center for Security and Emerging Technology (CSET). While work has begun in earnest within industry and government, it is clear that much remains to be done.

Hearing chair Senator Joe Manchin (D-WV) articulated the importance of AI and machine learning to the US military. The subcommittee also highlighted the "shortage of technically trained cybersecurity personnel across the country, both in government and industry." This perspective aligns with the Solarium Commission's report, which was subsequently published in early June 2022.

Google: 3 reasons why machine learning and AI are important for cybersecurity

In the context of the Department of Defense, Dr. Andrew Moore, director of Google Cloud Artificial Intelligence, pointed to three ways AI matters: first, using AI to defend against attacks from adversaries; second and third, using it to organize data and people. He went on to explain how AI can process millions of events per second while watching for attacks, a scale that far exceeds the processing capacity of any human analyst.

Regarding the human side of the equation, Moore emphasized how, with "emerging attacks, people cleverly come up with new methods and artificial intelligence creates new methods, so you have to learn new patterns or detect completely new types of attacks in real-time." He moved on to insider threats and highlighted the importance of AI in implementing zero trust, where, with AI, human patterns of behavior become discernible. Moore clarified that AI without data is "pretty useless." He highlighted how siloed data is the nemesis of AI, and that full sharing of disparate data sets is required for a more complete picture to evolve.
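To make Moore's point concrete, the sketch below is illustrative only: it uses the open-source scikit-learn library and invented per-session features (not anything Google described at the hearing) to show how an anomaly-detection model can learn a user's normal pattern of life and flag sessions that deviate from it, the kind of signal a zero-trust architecture can act on.

```python
# Illustrative sketch only: anomaly detection over hypothetical user-behavior
# features, in the spirit of the AI-assisted zero-trust monitoring Moore
# describes. Feature names and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [login_hour, MB_downloaded, distinct_hosts_touched]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 1000),   # logins clustered around mid-morning
    rng.normal(50, 15, 1000),  # modest data transfer
    rng.poisson(3, 1000),      # a handful of hosts per session
])

# Learn the baseline "pattern of life" from historical sessions.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session that deviates sharply from the baseline:
# 3 a.m. login, large download, many hosts touched.
suspicious = np.array([[3, 900, 40]])
print(model.predict(suspicious))           # -1 => flagged as anomalous
print(model.predict(normal_sessions[:5]))  # mostly 1 => consistent with baseline
```

A real deployment would draw on far richer telemetry and feed such flags into access decisions rather than print statements, but the basic shape, learn normal behavior and surface deviations, is the same.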

Microsoft: Cybersecurity staff shortage is problematic

Eric Horvitz, chief scientific officer of Microsoft, shared information about the company's October 2021 Digital Defense Report and highlighted Microsoft's efforts to participate in accordance with President Biden's Executive Order on Improving the Nation's Cybersecurity, EO 14028. In his opening statement, he noted, "The value of leveraging AI in cybersecurity applications is increasingly clear. Among many capabilities, AI technologies can provide automated interpretation of signals generated during attacks, effective prioritization of threat incidents, and adaptive responses to address the speed and scale of adversary actions. The methods hold great promise for rapidly analyzing and correlating patterns in billions of data points to track a wide variety of cyber threats on the order of seconds."

Horvitz emphasized that the cybersecurity staffing shortage is problematic, citing the 2021 Cybersecurity Workforce Study, which puts the number of unfilled cybersecurity positions worldwide at 2.72 million. Even when operations teams run 24/7, there are far more alerts to handle than there are staff, creating a very real threat of teams becoming overwhelmed. AI, according to Horvitz, "enables defenders to effectively scale their protection capabilities, orchestrate and automate complicated, repetitive and time-consuming response actions."
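As a rough illustration of the triage Horvitz describes, the hypothetical Python sketch below scores alerts by combining a model's confidence with asset criticality so that an overstretched team works the riskiest items first; the alert feed, fields, and weighting are assumptions made for the example, not Microsoft's method.

```python
# Minimal sketch of AI-assisted alert triage. The Alert fields and the
# priority formula are hypothetical; the point is that model output can
# rank work for scarce analysts and route the rest to automation.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    asset_criticality: int   # 1 (low) .. 5 (crown jewel)
    model_score: float       # 0..1 confidence from an upstream classifier

def priority(alert: Alert) -> float:
    # Weight model confidence by how important the affected asset is.
    return alert.model_score * alert.asset_criticality

alerts = [
    Alert("edr", 5, 0.91),
    Alert("waf", 2, 0.97),
    Alert("mail-gateway", 3, 0.40),
]

# Analysts work the queue top-down; low-priority alerts can be auto-closed
# or handed to automated playbooks instead of a human.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{a.source:14s} priority={priority(a):.2f}")
```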

According to Horvitz, the usefulness of AI on the defensive side of the equation can be divided into four groups: prevention, detection, investigation and remediation, and threat intelligence. At the same time, AI-driven cyber attacks are also a reality, with criminal and nation-state adversaries using basic automation, authentication-based attacks, and AI-driven social engineering. His discussion of "adversarial AI" served to highlight the need for continued investment in R&D to raise the robustness of systems, and he emphasized the importance of red-team exercises.
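The following toy example, built on an invented two-feature detector rather than any system discussed at the hearing, illustrates what "adversarial AI" means in practice: a small, deliberate perturbation of the input flips the classifier's verdict, which is exactly the kind of weakness robustness R&D and red-team exercises are meant to surface.

```python
# Hedged sketch of an evasion attack on a toy detector. The features and
# classifier are assumptions for illustration, not a real malware model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy detector: two hypothetical numeric features per sample.
X_benign = rng.normal([0, 0], 1.0, size=(500, 2))
X_malicious = rng.normal([3, 3], 1.0, size=(500, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

sample = np.array([[3.0, 3.0]])   # clearly scored as malicious
print(clf.predict(sample))        # [1]

# Evasion in the style of gradient-sign attacks: nudge features against the
# direction of the decision function so the same underlying behavior is
# scored as benign.
epsilon = 2.0
adversarial = sample - epsilon * np.sign(clf.coef_)
print(clf.predict(adversarial))   # likely [0] -- the detector is evaded
```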

Center for Security and Emerging Technology: Focus on AI system reliability

Georgetown CSET was represented by Dr. Andrew Lohn, Senior Fellow of the CyberAI Project at CSET. He referred to three areas of importance in AI:

  • AI promises to improve cyber defenses.
  • AI can enhance offensive cyber operations.
  • AI itself is vulnerable.

Within his opening statement, Lohn addressed the reliability of systems with: “The United States is among those deploying autonomously capable systems, but our adversaries may not wait to subvert them. There are many opportunities for interference throughout the design process. AI can be very expensive to train, so instead of starting from scratch, a system is often adapted from existing systems that may or may not be reliable. And the data used to train or adapt the systems may or may not be reliable as well.”
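A minimal sketch of the training-data risk Lohn describes follows. The dataset and poisoning rate are invented, but the mechanism, flipping labels on a slice of training records so the model learns to score similar activity as benign, is representative of the data-poisoning threat he warns about.

```python
# Illustrative sketch of training-data poisoning. Data, proportions, and the
# toy detector are assumptions for the example, not a documented incident.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

X_benign = rng.normal(0, 1, size=(500, 2))
X_malicious = rng.normal(3, 1, size=(500, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 500 + [1] * 500)

clean = LogisticRegression().fit(X, y)

# An adversary who can tamper with the training pipeline flips labels on a
# slice of malicious samples, teaching the detector that such activity is "benign".
y_poisoned = y.copy()
y_poisoned[500:550] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[2.0, 2.0]])   # borderline malicious behavior
print(clean.predict_proba(probe)[0, 1])     # fairly high malicious score
print(poisoned.predict_proba(probe)[0, 1])  # typically lower after poisoning
```

The same concern applies when a system is adapted from an existing model rather than trained from scratch: if the upstream model or its data cannot be vetted, neither can the behavior inherited from it.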

Advice to industry/government

Horvitz’s advice is to “double our attention and investments on threats and opportunities at the convergence of AI and cybersecurity. Significant investments in workforce training, monitoring, engineering, and R&D will be needed to understand, develop, and operationalize defenses for the variety of risks we can expect from AI-driven attacks.”

Moore, for his part, highlighted the need for ongoing investments in “training, technology and management.” He said that “we all have a role to play in preventing and detecting online threats. Being transparent with governments, customers and government entities when it comes to cyber attacks is one of our key principles and is critically important when responding to large-scale incidents.”

Lohn noted how, "Cyber operations remain human-intensive in both attack and defense. And there are few openly reported cases outside of a lab setting where AI algorithms were directly attacked." He went on to say that the potential for attacks directly on AI systems is no secret and that the reality may be on the horizon.

In short, CSOs, CISOs, and CIOs who are not already involved in AI cybersecurity discussions, at least at the level of understanding, should adjust course and get involved. For those who already understand and are committed, the advice and highlights from this hearing point to where knowledge and capacity need to be aligned, and to keeping the door wide open to new techniques, new experience and, above all, sustained commitment.

Copyright © 2022 IDG Communications, Inc.
