The Virtual Digital Stranger: What ChatGPT Means for Network Security

The advent of ChatGPT has garnered significant attention recently and has dramatically reduced the number of AI skeptics—rightly so, considering it is one of the most rapidly adopted technologies of all time. OpenAI’s ChatGPT is an inflection point in the evolution of natural language processing (NLP) and has captured the imagination of both businesses and laypersons. We are now within striking distance of passing the Turing Test, a benchmark that evaluates a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

But like any new technology, ChatGPT has fueled concerns about privacy, network security and reliability. Researchers at MIT are still trying to understand how large language models (LLMs) are able to learn new tasks without being retrained. This lack of clarity has given rise to fears of intelligent digital overlords, or of models being manipulated to produce new forms of security breaches, data theft and other cybercrimes. As with questions about nuclear technology, ChatGPT has raised concerns about its future: Will it bring mankind to the brink of self-destruction, or will it take us to the next level of automation and help society solve some of its toughest problems? I believe it’s the latter.


The Data Challenge

In addition to the fears of world destruction, these LLMs create challenges by the very nature of how they are trained and operated. For one thing, they require vast amounts of data to produce a suitable outcome, and this raises concerns about privacy and copyright protection, particularly when that data is pulled from the public internet. Even on private networks, the problem persists, given the way data is distributed across multiple third-party operators, all using different privacy and data protection platforms and policies. The result can be the unwitting disclosure of sensitive data, even in highly automated environments.

While it is clear that artificial intelligence is not the same as human (biological) intelligence, it nevertheless has the capability to make decisions in pursuit of optimizing some function, and those decisions can closely resemble human behavior. But just as we exercise caution around human strangers until they have gained our trust, we should approach these new, AI-based virtual digital strangers the same way.

How can this be done? For the enterprise, we need look no further than the security solutions and protocols in existence today, since this ‘stranger’ inhabits the digital ecosystem. After all, no organization would, or should, give unknown entities of any kind access to critical data or systems, so there is no reason AI models should get a free pass. Of course, the size and scale of the AI data footprint is so vast that data protection tools must expand their scope, as well. Fortunately, this can be done with AI.

Here, then, is a look at how two of the leading network security tools can be implemented to effectively protect the enterprise from both the intentional and unintentional harm that AI has the potential to cause:

Zero-Trust Network Security

In the zero-trust paradigm, all requests for access to information are guided by the “never trust; always verify” principle. This means you always assume the entire network is potentially hostile. Whether the traffic originates on the public internet or a closed intranet behind a firewall, every entity is blocked until it can prove its validity.

Viewed in this light, even an intelligent threat cannot easily gain access to critical data, because all elements of process and technology–everything from infrastructure to analytics and right down to the data itself–are housed within secure micro-perimeters that limit the risks of user access and privilege while at the same time augmenting detection and response through automation.

On the network, zero-trust can be implemented at key points to ensure that only minuscule portions of overall data and resource sets can be compromised at any one time. At the gateway, for example, network segmentation can be maintained with separate firewalls, threat management and encryption. At the switch level, parallel micro-perimeters can be aggregated into a unified fabric while still maintaining the segmentation created at the gateway. From there, security can be deployed at all layers of the network, including Wi-Fi access–all of which can be controlled by a centralized management platform.
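The default-deny logic behind a micro-perimeter can be sketched in a few lines. This is a minimal illustration, not a real policy engine; the segment names, the `SEGMENT_ACL` table and the `verified` flag (standing in for mTLS or token validation) are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical micro-perimeter ACL: an identity gets access to a segment
# only if it is explicitly listed. Everything else is denied by default.
SEGMENT_ACL = {
    "finance-db": {"svc-payroll"},
    "analytics": {"svc-reporting", "svc-payroll"},
}

@dataclass
class Request:
    identity: str
    segment: str
    verified: bool  # stands in for mTLS / token validation at the gateway

def authorize(req: Request) -> bool:
    """Never trust, always verify: deny unless authenticated AND authorized."""
    if not req.verified:  # unauthenticated traffic is blocked, even on the intranet
        return False
    allowed = SEGMENT_ACL.get(req.segment, set())  # unknown segment -> empty set
    return req.identity in allowed

print(authorize(Request("svc-payroll", "finance-db", True)))   # True
print(authorize(Request("svc-payroll", "finance-db", False)))  # False: not verified
print(authorize(Request("llm-agent", "finance-db", True)))     # False: not authorized
```

Note that an AI agent is treated no differently from any other entity: with no entry in the ACL, its requests fail even when its credentials check out, which is exactly the “no free pass” posture described above.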

Threat Assessment

AI models like LLMs up the ante in two major ways when it comes to network security: they increase the scale and scope at which attackers can target data and resources, and they can disguise those attacks to fool network monitoring tools. But what is good for the goose is good for the gander.

By implementing AI tools on security platforms, the enterprise can increase its threat assessment and response capabilities to provide fine-grained analysis of data patterns and usage. Security has been a game of one-upmanship since the advent of network computing; with AI tools now readily available, the only proper response is to implement better AI defenses. And this applies regardless of whether a potential threat comes from cybercrime or simply an AI algorithm performing outside of its mandate.
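At its simplest, analyzing data patterns for anomalies means scoring new observations against a learned baseline. The sketch below, with made-up byte counts and a hypothetical `traffic_zscore` helper, shows the statistical core of that idea; production systems layer far richer models on top of it.

```python
import statistics

def traffic_zscore(baseline, observed):
    """Score a new traffic reading against a learned baseline of per-flow bytes."""
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid divide-by-zero on flat data
    return (observed - mean) / stdev

# Hypothetical per-interval byte counts learned during normal operation.
baseline = [1000, 1100, 950, 1050, 990, 1020, 1080, 1010]

# An exfiltration-sized burst stands far outside the baseline distribution.
score = traffic_zscore(baseline, 40000)
print(score > 3.0)  # True: flag for investigation
```

A useful property of this approach is that it flags behavior rather than signatures, so an AI-driven attack that mimics legitimate protocols but moves unusual volumes of data can still stand out.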

Using these two tools, organizations can safely harness the full potential of ChatGPT and other platforms without compromising the safety and privacy of their customers. But the time to implement a robust management strategy is now—before business units become too dependent on intelligent systems and processes.

Like any technology, AI can do a lot of good when managed and optimized properly. But it can do a lot of harm if left to its own devices.

Network Reliability

As mobile internet connectivity becomes an economic necessity for countries, on par with water and electricity, AI will play an increasingly important role in the flow of data across ever more complex distributed networks. AI is the next step in the automation of network reliability, taking on tasks that typically require cognitive human skills and solving complex problems on par with human IT domain experts, and often faster. Large language models will be a key component of the AI-driven network of the future as the next-generation user interface, providing IT teams with instant access to decades of networking knowledge and allowing them to communicate with their networks directly. AI in the enterprise is having its moment.


Bob Friday

Bob Friday is group vice president and chief AI officer at Juniper Networks’ AI-Driven Enterprise business unit, which develops self-learning wireless networks using artificial intelligence. He was a co-founder, vice president and chief technology officer at Mist, now a part of Juniper Networks.
