Eliminate Toxic and Unsafe Prompts
Toxic or unsafe prompts are blocked before they reach the model, and you are alerted whenever a block occurs. This protects your staff from insulting language, hate speech, harassment or abuse, profanity, violence or threats, and sexually explicit or graphic content, and it avoids unwanted surprises in LLM responses. These checks run in addition to the policy and other guardrails that model vendors already employ.
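Conceptually, this works as a category check sitting in front of every model call: classify the prompt, block and alert on a match, and only forward clean prompts. The sketch below is a minimal, hypothetical illustration of that flow; `classify_prompt`, `alert`, and the category names are assumptions for the example, not the product's actual API.

```python
# Hypothetical sketch of a pre-model toxicity gate.
# The category names and helper functions are illustrative
# assumptions, not the product's real interface.

BLOCKED_CATEGORIES = {
    "insult",
    "hate_speech",
    "harassment_or_abuse",
    "profanity",
    "violence_or_threat",
    "sexually_explicit",
}

def classify_prompt(prompt: str) -> set[str]:
    """Stand-in for a real moderation classifier; returns matched categories."""
    # A real deployment would call a trained toxicity model here,
    # not a keyword lexicon.
    lexicon = {"insult": ("idiot",), "profanity": ("damn",)}
    return {cat for cat, words in lexicon.items()
            if any(w in prompt.lower() for w in words)}

def alert(message: str) -> None:
    """Stand-in for the product's alerting channel (e.g. email, SIEM)."""
    print(f"[ALERT] {message}")

def guard_prompt(prompt: str) -> str | None:
    """Block the prompt and raise an alert if any blocked category matches."""
    hits = BLOCKED_CATEGORIES & classify_prompt(prompt)
    if hits:
        alert(f"Blocked prompt; matched categories: {sorted(hits)}")
        return None   # the prompt never reaches the LLM
    return prompt     # safe to forward to the model
```

For example, `guard_prompt("you damn idiot")` would return `None` and emit an alert, while a benign prompt passes through unchanged to the model call.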