---
title: Concepts
description: Monitor and secure generative AI usage.
image: https://developers.cloudflare.com/cf-twitter-card.png
---


# Concepts

This learning path gives Cloudflare One users the strategy and tools to adopt generative AI securely within their organizations. It addresses the new security challenges that come with generative AI and helps mitigate risks such as shadow AI and data loss.

## Objectives

* Determine risk tolerance: Identify areas of concern and risk tolerance for AI use to establish a baseline for your organization's AI security strategy.
* Monitor AI usage: Use Cloudflare One tools, such as the Shadow IT dashboard and API-driven CASB integrations, to gain visibility into both sanctioned and unsanctioned AI application usage.
* Build security policies: Create granular security policies using Cloudflare Gateway to control AI usage, prevent data loss with DLP, and manage user behavior through actions like blocking or redirecting.
* Secure sanctioned models: Apply Zero Trust principles to sanctioned AI models and internal services like Model Context Protocol (MCP) servers to ensure secure access and protect sensitive data from being exposed.
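As a concrete illustration of the "build security policies" objective, a Gateway HTTP policy that blocks unsanctioned AI applications can be expressed as a rule payload. The sketch below is a minimal assumption-laden example: the field names mirror the general shape of Gateway rules (name, action, filters, traffic expression), but the helper function, the app IDs, and the exact expression syntax shown here are illustrative, not values to copy into production.

```python
import json

def build_block_ai_policy(app_ids):
    """Hypothetical helper: return a Gateway HTTP rule payload that
    blocks traffic matching the given application IDs.

    The payload shape (name/action/enabled/filters/traffic) follows the
    general structure of Gateway rules; the IDs and expression below are
    illustrative assumptions only.
    """
    # Build a traffic expression matching any of the listed app IDs.
    expression = " or ".join(f"any(app.ids[*] == {i})" for i in app_ids)
    return {
        "name": "Block unsanctioned generative AI",
        "action": "block",
        "enabled": True,
        "filters": ["http"],
        "traffic": expression,
    }

# Example app IDs (placeholders, not real identifiers).
payload = build_block_ai_policy([1199, 1201])
print(json.dumps(payload, indent=2))
```

In practice you would create such a rule in the Zero Trust dashboard or via the Cloudflare API, and pair the block action with DLP profiles or a redirect to a sanctioned alternative, as described in the objectives above.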
