Artificial Intelligence (AI) offers unrivaled potential for innovation. To leverage the full benefits of this technology, however, organizations must be aware of the risks and adopt cybersecurity best practices starting today. Let’s focus on the immediate steps required.
ChatGPT has shaken the foundations of the AI ecosystem, spurring Big Tech players and a myriad of multinationals into a frenzy of initiatives and announcements.
Most recently, cybercriminals even began using several versions of ‘DarkGPT’ to step up their cyberattacks. These dark-web offshoots made headlines for accelerating the execution of cyberattacks: convincing scam emails, realistic fake websites, and malware.
With September behind us, along with the first few exciting weeks, we need to critically assess the cybersecurity challenges AI poses before putting the right structured response in place.
Analyzing your own organization's response
First and foremost, how does your organization deploy ChatGPT and other large language models (LLMs)? Given the spike in uptake, many of your employees probably use these technologies already. The key is to understand how they are being used and to closely monitor the related risks.
We reviewed a number of use cases, ranging from the least consequential (e.g., content translation) to the near-catastrophic (e.g., sharing client files for analytical purposes), not to mention source code being handled by software development teams.
One of your top priorities is to guarantee risk awareness among all your employees whenever they submit internal, confidential, or personal data to these tools. The most mature organizations will define authorized use cases and standardize a framework for action. Practical solutions designed to supervise use and mitigate risk are emerging. A case in point is the recently launched Bing Chat Enterprise, which claims to offer the capabilities of Microsoft’s chatbot while protecting confidential company data. Depending on your situation, rolling out this service, backed by internal communication, can certainly reduce risk while encouraging access to innovation.
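To make this concrete, here is a minimal sketch of the kind of outbound prompt filter such supervision could rely on, flagging sensitive data before it reaches an external LLM. The patterns, categories, and block/allow policy are illustrative assumptions, not a complete data loss prevention solution.

```python
# A minimal sketch of an outbound prompt filter, assuming a small set of
# illustrative regex patterns; real deployments use far richer detection.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "confidentiality marking": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this CONFIDENTIAL client file for jane.doe@example.com"
findings = check_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}")
else:
    print("Prompt allowed")
```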
Championing trust and cybersecurity in your AI workflow
Nonetheless, the above strategy is only part of the picture. Organizations are accelerating their AI initiatives, some geared towards a specific business unit and others intended for the company as a whole. Each project taps into a different model, handling different sets of data and targeting different audiences. Simply put, no all-purpose solution exists. For cybersecurity managers, the challenge ahead lies in conducting risk analyses that identify where supervision is needed and in implementing the corresponding cybersecurity management.
Large corporations often have long-established methods, but these need repurposing to become AI-ready: Which data is used for training? Which model is deployed? How much do we trust the model, and how reliable is it? What is the intended goal? Will it automate decision-making? How exposed is it to risk? Does the model require third-party use or hosting? How will it resist AI attacks such as poisoning, inference, and evasion? By asking a few basic questions, you can categorize initiatives and prioritize those that are high-risk.
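As one way to operationalize this, here is a minimal sketch that turns answers to questions like those above into a coarse triage score. The questions, weights, and thresholds are illustrative assumptions, not a formal methodology.

```python
# A minimal sketch of risk triage for AI initiatives; weights and
# thresholds are assumptions chosen for illustration only.
QUESTIONS = {
    "trains_on_sensitive_data": 3,
    "automates_decisions": 3,
    "externally_exposed": 2,
    "third_party_hosted": 2,
    "untested_against_poisoning_or_evasion": 2,
}

def risk_tier(answers: dict[str, bool]) -> str:
    """Score an initiative's answers and map it to a priority tier."""
    score = sum(weight for question, weight in QUESTIONS.items()
                if answers.get(question, False))
    if score >= 6:
        return "high risk: prioritize for full risk analysis"
    if score >= 3:
        return "medium risk: schedule a review"
    return "low risk: standard controls"

# Usage: a hypothetical client-facing chatbot hosted by a third party.
chatbot = {"trains_on_sensitive_data": True, "externally_exposed": True,
           "third_party_hosted": True}
print(risk_tier(chatbot))  # high risk: prioritize for full risk analysis
```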
Naturally, this cybersecurity management implements tried-and-tested measures (secure infrastructure, data filtering, input/output restrictions) and is supported by new AI-specific techniques, including adversarial training (teaching the model to recognize and withstand attacks crafted by AI). To date, a cluster of startups have addressed this issue, introducing a range of alternative applications.
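As an illustration, here is a minimal sketch of one common form of adversarial training, using the fast gradient sign method (FGSM) in PyTorch. The model, optimizer, batch, and epsilon value are assumptions supplied by the caller; this is a sketch of the general technique, not any particular vendor's implementation.

```python
# A minimal sketch of FGSM-based adversarial training, assuming a
# PyTorch classifier and a labeled batch (x, y) provided by the caller.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon, loss_fn):
    """Craft an FGSM adversarial example: one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that maximizes the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.05):
    loss_fn = nn.CrossEntropyLoss()
    # Generate adversarial variants of the current batch.
    x_adv = fgsm_perturb(model, x, y, epsilon, loss_fn)
    optimizer.zero_grad()
    # Train on clean and adversarial inputs so the model learns
    # to classify both correctly.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```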
Such solutions include measures to encrypt models and securely entrust them to third parties (e.g., Skyld, Zama and Cosmian), as well as AI Security Operations Centers (SOCs), such as Cranium, which detect discrepancies in model responses and datasets to pinpoint poisoning attacks. We were fortunate to tackle these issues at length through our involvement in drafting the Securing Machine Learning Algorithms report for the European Union Agency for Cybersecurity (ENISA), which outlines the main actionable security controls.
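To give a flavor of the discrepancy detection such monitoring performs, here is a minimal sketch that compares the distribution of recent model outputs against a trusted baseline using the population stability index (PSI). The bin count, alert threshold, and stand-in data are illustrative assumptions, not how any specific SOC product works.

```python
# A minimal sketch of response-drift monitoring as a poisoning signal,
# assuming access to a trusted baseline of model output scores in [0, 1].
import numpy as np

def prediction_histogram(scores, bins=10):
    """Bucket model output scores into a normalized histogram."""
    hist, _ = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    # Add a small constant to avoid division by zero in empty bins.
    return (hist + 1e-6) / (hist.sum() + 1e-6 * bins)

def population_stability_index(baseline_scores, recent_scores, bins=10):
    """PSI > 0.25 is a common rule of thumb for significant drift."""
    p = prediction_histogram(baseline_scores, bins)
    q = prediction_histogram(recent_scores, bins)
    return float(np.sum((q - p) * np.log(q / p)))

# Usage: compare this week's live outputs against the validation baseline.
baseline = np.random.beta(2, 5, size=5000)  # stand-in for trusted outputs
recent = np.random.beta(5, 2, size=1000)    # stand-in for live outputs
psi = population_stability_index(baseline, recent)
if psi > 0.25:
    print(f"Drift alert: PSI={psi:.2f}, investigate possible poisoning")
```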
All the above measures need to be discussed and assessed now, before problems arise further down the line, so they can be folded into traditional risk analysis methodologies as part of tomorrow’s cybersecurity management. The landscape is fast-moving, both in terms of use and regulation, against a background of European discussions centered on the AI Act and initiatives led in the United States. The latter include the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the White House’s efforts to regulate the AI industry.
Successfully applying these innovations to cybersecurity
Lastly, remember that your cybersecurity teams can also benefit from the innovations driven by tools such as ChatGPT. The most advanced teams are already testing specific use cases: streamlining access to security policies by querying them in natural language, verifying contractual details in third-party relationships, and classifying documentation.
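As a toy illustration of the first use case, here is a minimal sketch that answers a natural-language question by retrieving the closest security policy passage with TF-IDF similarity; a production setup would more likely pair an embedding index with an LLM. The policy snippets and the question are fabricated for illustration.

```python
# A minimal sketch of natural-language lookup over security policies,
# using TF-IDF retrieval as a simplified stand-in for an LLM pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policies = [
    "Passwords must be at least 14 characters and rotated yearly.",
    "Confidential data must not be shared with external AI services.",
    "Third-party contracts must include a security audit clause.",
]

vectorizer = TfidfVectorizer()
policy_matrix = vectorizer.fit_transform(policies)

def answer(question: str) -> str:
    """Return the policy passage most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), policy_matrix)
    return policies[scores.argmax()]

print(answer("Can I paste client data into a chatbot?"))
```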
It goes without saying that we all expect to upgrade information system (IS) detection and monitoring so as to keep pace with AI, maximize automation, and ‘upskill’ existing teams. Although much of today’s market messaging still leaves a lot to be desired, the stakes are extremely high and cybersecurity teams must pay close attention.
In the years ahead, AI will markedly change the cybersecurity landscape. It presents us with a golden opportunity, one we must grasp, securely, starting today. The time is now!