AI Security for Apps fields
When enabled, AI Security for Apps populates the following fields:
| Field | Description |
|---|---|
| LLM PII detected<br/>`cf.llm.prompt.pii_detected`<br/>`Boolean` | Indicates whether any personally identifiable information (PII) was detected in the LLM prompt included in the request. |
| LLM PII categories<br/>`cf.llm.prompt.pii_categories`<br/>`Array<String>` | Array of string values with the personally identifiable information (PII) categories found in the LLM prompt included in the request. Refer to the category list for the possible values. |
| LLM Content detected<br/>`cf.llm.prompt.detected`<br/>`Boolean` | Indicates whether Cloudflare detected an LLM prompt in the incoming request. |
| LLM Unsafe topic detected<br/>`cf.llm.prompt.unsafe_topic_detected`<br/>`Boolean` | Indicates whether the LLM prompt in the incoming request contains any unsafe topic category. |
| LLM Unsafe topic categories<br/>`cf.llm.prompt.unsafe_topic_categories`<br/>`Array<String>` | Array of string values with the types of unsafe topics detected in the LLM prompt. Refer to the category list for the possible values. |
| LLM Injection score<br/>`cf.llm.prompt.injection_score`<br/>`Number` | A score from 1–99 indicating the likelihood that the LLM prompt in the request is attempting a prompt injection attack. Lower scores indicate higher risk. |
| LLM Token count<br/>`cf.llm.prompt.token_count`<br/>`Number` | An estimated token count for the LLM prompt in the request. Refer to Token counting for details. |
| LLM Custom topic categories<br/>`cf.llm.prompt.custom_topic_categories`<br/>`Map<Number>` | A map of custom topic labels to relevance scores (1–99). Lower scores indicate that the prompt is more relevant to that topic. Only populated when custom topics are configured. |
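These fields can be combined in rule expressions. As a minimal sketch (the threshold of `20` is an illustrative assumption, not a recommended value), a custom rule expression could match requests whose prompt contains PII or scores as a likely prompt injection:

```txt
(cf.llm.prompt.detected and cf.llm.prompt.pii_detected)
or (cf.llm.prompt.detected and cf.llm.prompt.injection_score lt 20)
```

Array fields such as `cf.llm.prompt.pii_categories` and `cf.llm.prompt.unsafe_topic_categories` can be matched with array operators (for example, `any()`), using values from the category lists referenced above.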