Some LLM / AI requests are falsely triggering guardrails
Resolved
May 13, 2025 at 11:24am UTC
A fix has been implemented.
The issue was caused by human error during the deployment of new guardrail settings: the chat history was inadvertently included in all guardrail checks. Since some bot responses contain JSON characters, the guardrails incorrectly flagged those responses as inappropriate, causing them to trigger far more often than intended.
This problem affected a small subset of customers and depended on a specific combination of the model in use and the output instructions applied. Because of that narrow combination, the issue did not surface in our regression tests in the test environment.
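For illustration only, here is a minimal sketch (not the production guardrail code) assuming a naive JSON-character heuristic and hypothetical helper names. It shows how running the check over the entire chat history, rather than only the latest user message, can flag JSON-heavy bot replies:

# Minimal illustrative sketch, assuming a naive character-count heuristic.

def guardrail_check(text: str) -> bool:
    # Flag text that contains many JSON punctuation characters.
    json_chars = sum(text.count(c) for c in '{}[]":')
    return json_chars >= 6  # arbitrary threshold for this example

def check_request_buggy(chat_history: list[dict]) -> bool:
    # Bug: every turn, including earlier bot JSON responses, is fed to the check.
    combined = "\n".join(turn["content"] for turn in chat_history)
    return guardrail_check(combined)

def check_request_intended(chat_history: list[dict]) -> bool:
    # Intended behavior: only the latest user message is checked.
    latest_user = next(
        (t["content"] for t in reversed(chat_history) if t["role"] == "user"),
        "",
    )
    return guardrail_check(latest_user)

history = [
    {"role": "user", "content": "What is the status of my order?"},
    {"role": "assistant", "content": '{"order_id": 42, "status": "shipped"}'},
    {"role": "user", "content": "Thanks, when will it arrive?"},
]

print(check_request_buggy(history))     # True  -- false positive from bot JSON
print(check_request_intended(history))  # False -- latest user message is clean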
Updated
May 13, 2025 at 10:30am UTC
We have identified and fixed the issue. The team is initiating a zero-downtime hotfix deploy within the next hour and a half. Once it is deployed and verified, I will follow up with more information.
Created
May 13, 2025 at 9:24am UTC
Some requests on production are incorrectly triggering guardrails. It appears to affect only a subset of requests. The root cause is under investigation.