Degraded

Some LLM / AI requests are falsy triggering guardrails

May 13 at 11:24am CEST
Affected services
eu.platform.blits.ai
us.platform.blits.ai
uae.platform.blits.ai

Resolved
May 13 at 01:24pm CEST

A fix has been implemented.

The issue was caused by human error during the deployment of new guardrail settings. As a result, chat history was inadvertently included in all guardrail checks. Since some bot responses contain JSON characters, the guardrails incorrectly flagged these as inappropriate and triggered far more often than intended.
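For illustration only, here is a minimal Python sketch of the failure mode described above. The function names, the heuristic, and the sample messages are all hypothetical, not the actual platform code; the point is only to show how widening a guardrail check from the new user message to the full chat history can make JSON in earlier bot replies trip the filter.

```python
# Hypothetical sketch of the incident's failure mode. The guardrail
# heuristic and all names below are invented for illustration.
import re

# A naive "suspicious content" heuristic that reacts to
# structured/JSON-like characters in the checked text.
JSON_LIKE = re.compile(r'[{}\[\]":]')

def guardrail_flags(text: str) -> bool:
    """Flag text that contains many JSON-like characters."""
    return len(JSON_LIKE.findall(text)) > 5

user_message = "What is the delivery status of my order?"
chat_history = [
    'Bot: {"status": "shipped", "eta": "2024-05-14"}',  # JSON in a bot reply
    "User: Thanks!",
]

# Intended behaviour: only the new user message is checked -> no flag.
print(guardrail_flags(user_message))   # False

# Misconfigured behaviour: chat history is concatenated into the check,
# so the bot's JSON output trips the heuristic -> false positive.
combined = "\n".join(chat_history + [user_message])
print(guardrail_flags(combined))       # True
```

This also illustrates why the bug was hard to catch in regression tests: it only surfaces when the history happens to contain enough structured output, which depends on the model and output instructions in use.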

This problem affected a small subset of customers. It was dependent on a specific combination of the model in use and the output instructions applied. Because of this, the issue did not appear in our regression tests in the test environment.

Updated
May 13 at 12:30pm CEST

We identified and fixed the issue. The team is initiating a zero-downtime hotfix deployment within the next hour and a half. Once it is deployed and verified, we will follow up with more information.

Created
May 13 at 11:24am CEST

Some requests on production are triggering guardrails when they shouldn't. It appears to affect only a subset of requests. Root cause under investigation.