Fast forward or free fall? Mastering the rise of AI in cybersecurity

It’s only been a year and nine months since OpenAI released ChatGPT to the public, and it has already had a massive impact on our lives. While AI will undoubtedly reshape our world, the exact nature of that transformation is still unfolding. With little to no experience, security administrators can use ChatGPT to quickly create PowerShell scripts. Tools like Grammarly or Jarvis can turn average writers into confident editors. Some people have even started using AI as an alternative to traditional search engines like Google and Bing. The applications of AI are endless!

Generative AI in cybersecurity – the new gold rush?

Driven by the versatility and transformative potential of AI, a new gold rush is sweeping across industries. From healthcare and finance to manufacturing and retail, companies are trying to stake their claim in this uncharted technological territory. The adoption of generative AI in cybersecurity is accelerating, and many companies are actively adding, or have already added, these capabilities to their platforms. But this raises an important question: are we doing too much too soon?

I recently attended a think tank whose main topic was generative AI in security. The event began with a vendor and their MSP partner demonstrating how the vendor’s generative AI capabilities help the MSP optimize threat mitigation for its clients. They touted significant time savings that allowed the MSP to restructure its analyst team: instead of hiring experienced professionals, they expanded opportunities for junior analysts and leveraged AI to help train and mentor them, potentially accelerating their path to cybersecurity competency. They also boasted that they had reduced their analyst headcount from 11 to 4, and that the lower operational costs translated into savings for both the MSP and its clients. There are many pros and cons to this approach. The impact of AI on existing jobs is better left to another topic, as the full extent of its potential for job creation is still unknown.

To what extent can we trust AI?

Discussions about trust and generative AI often focus on who owns the data users provide, how that data contributes to training AI models, and the extent to which AI can share or recommend proprietary data to other users. A critical aspect that is often neglected is the significant threat posed by inaccurate data.

I recently suggested to my son that he use ChatGPT to break down the order of operations for his math homework. After a few hours, he said he still couldn’t solve the problem. I sat him down to go through the AI’s advice, and while the answer was well-crafted and beautifully worded, it was far from accurate. The poor boy was going around in circles, using a flawed method to solve a math problem. This situation immediately came to mind when the manager of the MSP explained that they rely on generative AI to guide junior security analysts.

There are two critical questions regarding generative AI: Who is responsible for the accuracy of the data, and who is liable for any consequences resulting from inaccurate results?

According to Google Gemini, data accuracy in AI is a shared responsibility involving various stakeholders:

  1. Data providers: These companies collect and provide the data used to train AI models. They are responsible for ensuring that the data they provide is accurate, complete, and unbiased.
  2. AI developers: The developers who design and train the AI models play a role in assessing the quality of the data they use. They should clean and preprocess the data to minimize errors and identify potential biases.
  3. AI users: Those who deploy and use the AI models also bear some responsibility. It is crucial to understand the limitations of the model and the data it was trained on (we need transparency in this area).

The answer on liability was not so clear-cut. There is not always a single party held responsible. Depending on the jurisdiction and specific use case, legal and regulatory requirements may dictate liability, but the legal landscape for AI liability is still taking shape and will likely continue to evolve as more incidents and case law emerge.

A look into the past to see the future

Looking at the past can often provide insights into the future. The potential of AI may share some similarities with the history of search engines. Google’s PageRank algorithm is a good example: it greatly improved the relevance of search results, and personalization and location features increased utility for users. However, personalization also led to unintended consequences such as the filter bubble, where users only encounter information that reinforces their existing beliefs. SEO manipulation and privacy concerns have also impacted the utility and relevance of search engines.

Much like search engines struggle with bias, generative AI models trained on massive datasets can reflect that bias in their results. Both technologies will be battlegrounds for misinformation, making it difficult for users to distinguish truth from lies. In both cases, users should always verify the accuracy of results. From a personal and business perspective, everyone using generative AI should create a process for verifying the information they receive. I like to ask the AI to provide reference links to the sources behind the information in its answer, and depending on the topic, I may cross-check those sources as well.
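To make that habit concrete, here is a minimal sketch of one way to bake a “cite your sources” step into an AI query and flag unsourced answers for human review. It assumes the OpenAI Python SDK and an API key in the environment; the prompt wording, the model name, and the naive URL check are illustrative placeholders, not a prescribed verification process.

```python
# A rough sketch, not a verification standard: ask the model to cite sources,
# then flag any answer that contains no URLs so a person can review it.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_sources(question: str) -> tuple[str, bool]:
    """Ask a question, request cited sources, and report whether any URLs appear."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": "Answer the question and list the URLs of the sources "
                           "you relied on. If you are unsure, say so explicitly.",
            },
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content or ""
    has_sources = bool(re.search(r"https?://", answer))
    return answer, has_sources

answer, has_sources = ask_with_sources("How do I harden RDP access on Windows Server?")
if not has_sources:
    print("No sources cited -- route this answer to a human analyst for review.")
print(answer)
```

A simple check like this only tells you that links are present, not that they are real or relevant, so the manual step of opening and reading the cited sources still matters.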

Another aspect that affects search relevance is ads. While I don’t think generative AI tied to cybersecurity platforms will include ads, I can imagine a world where these platforms upsell and cross-sell other products. Want to improve visibility? Try our new widget, or our partner’s. Another factor to consider is whether AI will be able to identify its own technology as the source of a problem, and if so, whether it will tell you.

Concluding remarks

Whether you’re using AI to create a macro-based diet plan or to manage your cybersecurity posture, it’s important to be aware of its flaws and limitations. Always think critically when evaluating AI outputs, and never base your decisions solely on the information it provides.

Living in the age of AI feels like a thrilling rollercoaster ride – exciting, full of potential, but also a bit nerve-wracking. While the future holds enormous promise, it’s important that we’re safely buckled in. Transparency from vendors and a robust regulatory framework from legislators are essential safeguards. These measures will help us navigate the ups and downs, minimize risks, and maximize the benefits of AI. But one concern remains: are we pushing the envelope too quickly? Open dialogue and collaboration between developers, users, and policymakers are critical. By working together, we can establish responsible practices and ensure AI becomes a force for positive change, not just a wild ride.

To learn more about the challenges and opportunities of generative AI with Fortra, you can read this blog by Antonio Sanchez.
