Incidents | Blits.ai
Incidents reported on the status page for Blits.ai: https://status.blits.ai/

Blits.ai recovered (Mon, 20 Oct 2025 09:03:48 +0000)
https://status.blits.ai/

Blits.ai went down (Mon, 20 Oct 2025 07:54:41 +0000)
https://status.blits.ai/

uae.platform.blits.ai recovered (Thu, 16 Oct 2025 08:00:21 +0000)
https://status.blits.ai/

uae.platform.blits.ai went down (Thu, 16 Oct 2025 07:48:17 +0000)
https://status.blits.ai/

Some LLM / AI requests are falsely triggering guardrails (Tue, 13 May 2025 11:24:00 -0000)
https://status.blits.ai/incident/562158
A fix has been implemented. The issue was caused by human error during the deployment of new guardrail settings: chat history was inadvertently included in all guardrail checks. Because some bot responses contain JSON characters, the guardrails incorrectly flagged those responses as inappropriate and triggered far more often than intended. The problem affected a small subset of customers and depended on a specific combination of the model in use and the output instructions applied, which is why it did not appear in our regression tests in the test environment. A rough sketch of this failure mode follows the incident log below.
Some LLM / AI requests are falsely triggering guardrails (Tue, 13 May 2025 10:30:00 -0000)
https://status.blits.ai/incident/562158
We have identified and fixed the issue. The team is initiating a zero-downtime hotfix deployment within the next hour and a half. Once it is deployed and verified, I will follow up with more information.

Some LLM / AI requests are falsely triggering guardrails (Tue, 13 May 2025 09:24:00 -0000)
https://status.blits.ai/incident/562158
Some requests on production are triggering guardrails when they should not. It appears to affect only a subset of requests. The root cause is under investigation.
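
The resolved update above describes the failure mode in prose: the guardrail check was inadvertently fed the full chat history, and JSON in earlier bot responses tripped it. The sketch below is only a minimal illustration of that condition; the names (Message, check_guardrails, looks_suspicious) and the heuristic are assumptions, not Blits.ai's actual guardrail implementation.

```python
# Hypothetical sketch of the false-positive condition from the 13 May 2025
# post-mortem. Names and logic are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "user" or "bot"
    content: str

def looks_suspicious(text: str) -> bool:
    # Stand-in guardrail heuristic: flag structured / JSON-like payloads.
    return any(token in text for token in ("{", "}", '":'))

def check_guardrails(history: list[Message], new_user_input: str,
                     include_history: bool) -> bool:
    """Return True if the request should be blocked by the guardrail."""
    texts = [new_user_input]
    if include_history:
        # Misconfigured setting: bot responses (which often contain JSON)
        # are scanned as well, so the heuristic fires far too often.
        texts += [m.content for m in history]
    return any(looks_suspicious(t) for t in texts)

history = [
    Message("user", "What is my order status?"),
    Message("bot", '{"order_id": 1234, "status": "shipped"}'),  # JSON reply
]

# Intended behavior: only the new user input is checked -> not blocked.
print(check_guardrails(history, "Thanks, when will it arrive?", include_history=False))  # False

# Faulty deployment: chat history included -> the bot's JSON causes a false positive.
print(check_guardrails(history, "Thanks, when will it arrive?", include_history=True))   # True
```

Under these assumptions, the fix amounts to restoring the intended scope of the check (the `include_history=False` path), which matches the zero-downtime hotfix described in the 10:30 update.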