Incidents | Blits.ai
Incidents reported on the status page for Blits.ai
https://status.blits.ai/

Blits.ai recovered
https://status.blits.ai/
Fri, 23 Jan 2026 15:35:37 +0000

Blits.ai went down
https://status.blits.ai/
Fri, 23 Jan 2026 15:32:41 +0000

Some LLM / AI requests are falsely triggering guardrails
https://status.blits.ai/incident/562158
Tue, 13 May 2025 11:24:00 -0000
A fix has been implemented. The issue was caused by human error during the deployment of new guardrail settings: chat history was inadvertently included in all guardrail checks. Because some bot responses contain JSON characters, the guardrails incorrectly flagged those responses as inappropriate and triggered far more often than intended. The problem affected a small subset of customers and depended on a specific combination of the model in use and the output instructions applied, which is why it did not surface in our regression tests in the test environment.
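The status update does not include implementation details, so the following is only a rough illustrative sketch of the failure mode described above, with hypothetical names and thresholds (not Blits.ai internals): a naive guardrail check that scores whatever text it is given will fire far more often when its input is widened from the latest message to the full chat history, because earlier bot turns may contain JSON fragments.

```python
# Hypothetical sketch of the failure mode in the postmortem above.
# All names, patterns, and thresholds are illustrative assumptions.
import re

# Naive pattern that (wrongly) treats dense JSON punctuation as suspicious.
SUSPICIOUS_JSON = re.compile(r'[{}\[\]":]')

def guardrail_flags(text: str, threshold: float = 0.05) -> bool:
    """Flag the text if the ratio of JSON-like characters exceeds the threshold."""
    if not text:
        return False
    hits = len(SUSPICIOUS_JSON.findall(text))
    return hits / len(text) > threshold

chat_history = [
    {"role": "user", "content": "What are your opening hours?"},
    # Earlier bot turn that legitimately contains JSON for the client app.
    {"role": "assistant", "content": '{"answer": "We are open 9:00-17:00", "intent": "opening_hours"}'},
    {"role": "user", "content": "And on Saturdays?"},
]

latest_message = chat_history[-1]["content"]
full_history = "\n".join(turn["content"] for turn in chat_history)

# Intended scope: only the latest message is checked -> no false positive.
print(guardrail_flags(latest_message))   # False

# Misconfigured scope (history included in the check): the JSON-bearing
# assistant turn pushes the score over the threshold -> false positive.
print(guardrail_flags(full_history))     # True
```

This also mirrors why the issue could slip past regression tests: whether the check trips depends on what the chosen model and output instructions put into earlier turns of the conversation.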
Some LLM / AI requests are falsely triggering guardrails
https://status.blits.ai/incident/562158
Tue, 13 May 2025 10:30:00 -0000
We identified and fixed the issue. The team is initiating a zero-downtime hotfix deploy within the next hour and a half. Once it is deployed and verified, we will follow up with more information.

Some LLM / AI requests are falsely triggering guardrails
https://status.blits.ai/incident/562158
Tue, 13 May 2025 09:24:00 -0000
Some requests on production are triggering guardrails when they shouldn't. It appears to affect only a subset of requests. The root cause is under investigation.