Securing tomorrow: What’s next in cybersecurity? Part I

Navigating AI implementation in classified environments

As organizations rush to embrace AI technologies, the challenge of implementing these tools in classified or confidential environments becomes increasingly complex. While the potential benefits are significant, organizations must carefully consider how to leverage AI capabilities without compromising sensitive, confidential or classified information.

In this article, we discuss the challenges surrounding AI in classified environments and look at practical approaches and solutions, all to answer the main question: how do we balance innovation and security?

Understanding the data ownership challenge

A fundamental challenge with AI implementation lies in data ownership and control. When organizations feed information into commercial AI models, that data potentially becomes part of the model's training set. "Everything you input is essentially used by the model," explains Sander Dorigo, Senior Security Architect at Fox Crypto. "While some providers like Microsoft promise that your data won't be incorporated into their larger model, the risks remain significant."

This is particularly concerning for organizations dealing with confidential information, whether it's unpublished content, customer contracts, or sensitive business data. The challenge extends beyond immediate data security: once information becomes part of an AI model, it's practically impossible to fully remove or control its future use.

Striking the balance: practical approaches to AI implementation

Does that mean organizations dealing with sensitive information should never use any AI tools or solutions? The key lies in selective implementation and careful control of data access, Sander says. "For example, you can use large language models as translation engines, by running them offline within your organization," he suggests. "This allows employees to process sensitive texts without risking exposure to external systems." Other safe applications include using AI for code completion and development support, provided the models are properly isolated from proprietary code.
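To make the translation example concrete, here is a minimal sketch of what such an offline setup could look like, using the open-source Hugging Face transformers library and a locally stored translation model. The model name and file path are illustrative assumptions, not a description of any specific deployment.

```python
# A minimal sketch (not a specific product setup): running a translation model
# fully offline so confidential text never leaves the organization.
import os

# Tell the libraries to use local files only -- no calls to the Hugging Face
# Hub, even if the machine happens to have network access.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import pipeline  # imported after the offline flags are set

# Assumes the model weights (e.g. Helsinki-NLP/opus-mt-nl-en) were downloaded
# in advance on a connected machine and copied to this local directory.
translator = pipeline("translation", model="/models/opus-mt-nl-en")

confidential_text = "Dit memo bevat vertrouwelijke prijsafspraken met de klant."
print(translator(confidential_text)[0]["translation_text"])
```

Because the model runs entirely on infrastructure the organization controls, employees can translate sensitive documents without that text ever being sent to an external provider.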

Building custom, offline AI solutions is one option for organizations handling sensitive information. Alternatively, organizations can run pre-trained models locally, or implement strict data governance policies for cloud-based AI services.

Mitigating risks through careful design

The implementation of AI in classified environments requires careful consideration of security architecture. One promising approach involves building personalized models that only access information specific to individual user permissions. "The key is ensuring the AI engine only uses the data you have access to," Sander emphasizes. "Rather than trying to instruct the AI not to reveal certain information, ensure it never has access to that information in the first place."
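As an illustration of that principle, the sketch below shows how a retrieval step might filter documents against the user's own permissions before anything reaches the model. The data model and function names are hypothetical and only meant to show the idea of scoping the context, not a reference to a particular product.

```python
# A minimal sketch of permission-scoped retrieval: the model only ever sees
# documents the requesting user is already cleared to read.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    allowed_groups: set[str]   # groups cleared to read this document

DOCUMENTS = [
    Document("Q3 forecast", "Projected revenue ...", {"board"}),
    Document("Office manual", "How to book a meeting room ...", {"board", "staff"}),
]

def build_context(question: str, user_groups: set[str]) -> str:
    """Assemble a prompt from documents the user is allowed to see.

    Filtering happens *before* the text reaches the model, so no system
    prompt or instruction is relied on to keep restricted content hidden.
    """
    visible = [d for d in DOCUMENTS if d.allowed_groups & user_groups]
    context = "\n\n".join(f"{d.title}:\n{d.body}" for d in visible)
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"

# A staff member asking about projections never gets the board-only document
# in their context, so the model cannot leak it, however the prompt is phrased.
prompt = build_context("What is the projected revenue?", {"staff"})
# 'prompt' would then be sent to the (locally hosted) language model.
print(prompt)
```

The design choice is that access control is enforced in the retrieval layer, where it can be tested and audited, rather than delegated to the model's instructions.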

This approach acknowledges that AI models aren't inherently good at keeping secrets and can be manipulated through various prompt injection techniques to reveal information they shouldn't. Therefore, the focus should be on controlling what information the AI has access to, rather than trying to control how it uses that information.

Organizations must also consider the broader implications of AI use, including the potential for unintended information disclosure through casual interactions with AI systems. For instance, if an executive uses an AI system to analyze future business projections, that information could potentially be accessed by other employees through carefully crafted prompts.

The path forward for organizations lies in treating AI as a tool for specific, controlled use cases rather than a universal solution. By carefully defining the scope of AI implementation and maintaining strict control over data access, organizations can harness the benefits of AI while protecting their sensitive information.