LLM Prompt Logger

Overview
LLM Prompt Logger allows organizations to capture and store all user prompts from supported LLM applications. Once enabled, every interaction users have with ChatGPT, Google Gemini, and Claude is logged and stored for compliance, security monitoring, and auditing purposes. This feature helps organizations track AI usage, detect potential data leaks, and maintain a comprehensive audit record of AI interactions.
Configuration
Navigate to Settings → LLM Prompt Logger to manage the following options.
Enable AI Prompt Logger
Toggle this ON to begin logging all user prompts from supported LLM applications. When disabled, no prompts are captured or stored.
Show Prompt Logging Notification
Controls whether users see a notification banner inside supported LLM applications informing them that their prompts are being logged. Available options:
Prompt Logging Notification Message
If Show Prompt Logging Notification is set to any option other than never, this is the message users will see when they open a supported LLM application. You can customize this text to reflect your organization's policy language. Default message: "LLM prompts are logged in accordance with the applicable company policy."
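Taken together, the settings above form a small configuration surface. The sketch below models them as a settings payload; the field names and the helper function are illustrative assumptions for this document, not the product's actual API:

```python
# Hypothetical model of the LLM Prompt Logger settings.
# Field names are assumptions; the product exposes these as UI toggles.
DEFAULT_NOTIFICATION_MESSAGE = (
    "LLM prompts are logged in accordance with the applicable company policy."
)

def build_logger_settings(enabled: bool,
                          show_notification: str = "never",
                          message: str = DEFAULT_NOTIFICATION_MESSAGE) -> dict:
    """Assemble a settings payload mirroring the documented options."""
    return {
        "enable_prompt_logger": enabled,
        "show_logging_notification": show_notification,
        # The message only applies when notifications are not disabled.
        "notification_message": message if show_notification != "never" else None,
    }
```

For example, enabling logging without a banner (`show_notification="never"`) leaves the notification message unset, matching the documented behavior that the message is shown only when the notification option is something other than never.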
Viewing AI Prompt Logs
Captured prompts are accessible under Logs → AI Prompt Logs.
Each log entry captures details about the interaction, including the user, the LLM service used, a preview of the prompt, and relevant metadata such as timestamps. Any sensitive content detections (such as PII, code, or non-business content) and the presence of attachments are surfaced directly in the log view, making it easy to spot activity that may require attention.
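A captured entry might look like the sketch below. The field names and the `needs_attention` helper are assumptions made for illustration; the actual log schema is defined by the product:

```python
# Hypothetical shape of an AI Prompt Log entry; all field names are assumptions.
sample_entry = {
    "user": "jdoe@example.com",
    "service": "ChatGPT",
    "prompt_preview": "Summarize the attached quarterly report",
    "timestamp": "2024-05-01T09:30:00Z",
    "detections": ["PII"],      # e.g. PII, code, non-business content
    "has_attachment": True,
}

def needs_attention(entry: dict) -> bool:
    """Flag entries that carry sensitive-content detections or attachments."""
    return bool(entry.get("detections")) or bool(entry.get("has_attachment"))
```

A filter like this mirrors what the log view surfaces visually: entries with detections or attachments stand out for review, while routine prompts do not.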