Internet solutions company Cloudflare today introduced Cloudflare One for AI, its latest set of zero-trust security controls. The tools allow companies to securely use the latest generative artificial intelligence tools while protecting intellectual property and customer data. The company believes that the suite’s features will offer a simple, fast, and secure means for organizations to adopt generative AI without compromising performance or security.
“Cloudflare One gives teams of any size the ability to use the best tools available on the Internet without facing management headaches or performance challenges. Additionally, it allows organizations to audit and review the AI tools that their team members have started using,” Sam Rhea, Cloudflare’s vice president of products, told VentureBeat. “Security teams can then restrict usage to only approved tools and, within those that are approved, monitor and control how data is shared with those tools using policies built around [their organization’s] sensitive and unique data.”
Cloudflare One for AI provides enterprises with end-to-end AI security through features including visibility and measurement of AI tool usage, data loss prevention, and integration management.
Cloudflare Gateway allows organizations to track the number of employees experimenting with AI services. This provides context for budgeting and enterprise license plans. Service tokens also give administrators a clear record of API requests and control over specific services that can access AI training data.
Cloudflare Tunnel provides an outbound-only encrypted connection to the Cloudflare network, while the Data Loss Prevention (DLP) service offers protection to bridge the human gap in how employees share data.
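Cloudflare Tunnel is configured through the `cloudflared` daemon. The fragment below is an illustrative `config.yml` sketch only; the tunnel UUID, credentials path, and hostname are placeholders, not values from the article.

```yaml
# Illustrative cloudflared config.yml; UUID, credentials path,
# and hostname are placeholders.
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json
ingress:
  # Expose an internal service only through the outbound-only tunnel.
  - hostname: training-data.example.com
    service: http://localhost:8000
  # Reject anything that does not match an ingress rule.
  - service: http_status:404
```

Because the connection is outbound-only, no inbound firewall ports need to be opened for the internal service to be reachable through Cloudflare’s network.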
“AI holds incredible promise, but without proper security measures, it can create significant business risks. Cloudflare’s Zero Trust products are the first to provide security measures for AI tools so that enterprises can seize the opportunity that AI opens up and ensure only the data they want to expose is shared,” said Matthew Prince, co-founder and CEO of Cloudflare, in a written statement.
Generative AI risk mitigation through zero trust
Organizations are increasingly adopting generative AI technology to improve productivity and innovation. But the technology also poses significant security risks. For example, major companies have banned popular generative AI chat apps due to leaks of sensitive data. In a recent KPMG US survey, 81% of US executives expressed cybersecurity concerns around generative AI, while 78% expressed concerns about data privacy.
According to Cloudflare’s Rhea, customers have expressed great concern about the inputs to generative AI tools, fearing that individual users could inadvertently upload sensitive data. Organizations have also raised concerns about training these models, which risks granting overly broad access to data sets that shouldn’t leave the organization. By opening up data for these models to learn from, organizations can inadvertently compromise the security of their data.
“The number one concern CISOs and CIOs have about AI services is oversharing: the risk that individual users, understandably excited about the tools, end up accidentally leaking sensitive corporate data to those tools,” Rhea told VentureBeat. “Cloudflare One for AI gives those organizations a comprehensive filter, without slowing down users, to ensure data sharing is allowed and unauthorized use of unapproved tools is blocked.”
The company claims that Cloudflare One for AI equips teams with the necessary tools to thwart such threats. For example, by scanning the data being shared, Cloudflare One can prevent the data from being uploaded to a service.
Additionally, Cloudflare One makes it easy to create secure pathways to share data with external services, which can record and filter how that data is accessed, thereby mitigating the risk of data breaches.
“Cloudflare One for AI gives companies the ability to control every interaction their employees have with these tools or these tools have with their sensitive data. Clients can start cataloging which AI tools their employees are using effortlessly by relying on our pre-built analytics,” Rhea explained. “With just a few clicks, they can block or control what tools their team members use.”
The company claims that Cloudflare One for AI is the first to offer security measures around AI tools, so that organizations can benefit from AI while ensuring they share only the data they want to expose, without putting their intellectual property or customer data at risk.
Keep your data private
Cloudflare’s DLP service scans content for potentially sensitive data as it leaves employee devices during upload. Administrators can use predefined templates, such as Social Security or credit card numbers, or define their own sensitive data terms or expressions. When users attempt to upload data that contains one or more such instances, the Cloudflare network will block the action before the data reaches its destination.
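The pattern-matching step described above can be sketched in a few lines of Python. This is a minimal illustration only: the pattern names and regexes are simplified stand-ins for Cloudflare DLP’s pre-built profiles, which use far more robust detection.

```python
import re

# Simplified stand-ins for DLP profiles; real engines add checksums
# (e.g. Luhn for card numbers) and contextual checks to cut false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_upload(payload: str) -> list[str]:
    """Return the names of the sensitive-data profiles matched in the payload."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(payload)]

def allow_upload(payload: str) -> bool:
    """Block the upload if any sensitive-data profile matches."""
    return not scan_upload(payload)
```

In a gateway deployment, a check like `allow_upload` would run inline before the request is forwarded, so flagged content never reaches the external AI service.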
“Customers can tell Cloudflare the types of data and intellectual property they manage that can never leave their organization, and Cloudflare will scan every interaction their corporate devices have with an AI service on the Internet to filter and block that data from leaving the organization,” Rhea explained.
Rhea said that organizations are concerned that external services will access all the data they provide when an AI model needs to connect to training data. They want to make sure that the AI model is the only service that has access to the data.
“Service tokens provide sort of an authentication model for automated systems in the same way that passwords and second factors provide validation for human users,” Rhea said. “Cloudflare’s network can create service tokens that can be provided to an external service, such as an AI model, and then act as a bouncer that checks each request for access to internal training data for the presence of that service token.”
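The “bouncer” model Rhea describes can be sketched as a simple header check. This is an assumption-laden illustration, not Cloudflare’s implementation: the token store and values here are hypothetical, and the header names mirror the client-ID/secret pair style used by Cloudflare Access service tokens.

```python
import hmac

# Hypothetical token store; in a real deployment validation happens
# at Cloudflare's edge, not in the origin application.
VALID_TOKENS = {"model-client-id": "model-client-secret"}

def is_authorized(headers: dict[str, str]) -> bool:
    """Act as the 'bouncer': admit a request to the training data
    only if it carries a valid service-token credential pair."""
    client_id = headers.get("CF-Access-Client-Id", "")
    secret = headers.get("CF-Access-Client-Secret", "")
    expected = VALID_TOKENS.get(client_id)
    # Constant-time comparison avoids leaking the secret via timing.
    return expected is not None and hmac.compare_digest(secret, expected)
```

A request from the AI model would present its token pair in headers; any request without a valid pair, such as a browser or an unapproved script, is rejected before it reaches the data.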
What’s next for Cloudflare?
According to the company, Cloudflare’s Cloud Access Security Broker (CASB), a security enforcement point between a cloud service provider and its customers, will soon be able to scan the AI tools used by enterprises and detect configuration errors and misuse. The company believes that its platform approach to security will enable businesses around the world to embrace the productivity enhancements offered by evolving technology and new tools and plug-ins without creating bottlenecks. Furthermore, the platform’s approach will ensure that companies comply with the latest regulations.
“Cloudflare CASB scans the software-as-a-service (SaaS) applications where organizations store their data and complete some of their most critical business operations for potential misuse,” Rhea said. “As part of Cloudflare One for AI, we plan to create new integrations with popular AI tools to automatically scan for misuse or incorrectly configured defaults to help administrators trust that individual users are not accidentally creating open doors in their workspaces.”
He said that, like many organizations, Cloudflare anticipates learning how users will adopt these tools as they become more popular in the enterprise and is prepared to adapt to challenges as they arise.
“One area where we have seen particular concern is the retention of data from these tools in regions where data sovereignty obligations require more oversight,” Rhea said. “Cloudflare’s network of data centers in more than 285 cities around the world gives us a unique advantage in helping customers control where their data is stored and how it is transferred to external destinations.”