All Systems Operational

API: Operational (99.98% uptime over the past 90 days)
Console: Operational (99.98% uptime over the past 90 days)
Onboard: Operational (99.98% uptime over the past 90 days)
Payments: Operational (100.0% uptime over the past 90 days)
May 10, 2026

No incidents reported today.

May 9, 2026

No incidents reported.

May 8, 2026

No incidents reported.

May 7, 2026

No incidents reported.

May 6, 2026

No incidents reported.

May 5, 2026

No incidents reported.

May 4, 2026

No incidents reported.

May 3, 2026
Completed - The scheduled maintenance has been completed.
May 3, 05:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 3, 05:00 UTC
Scheduled - We will be undergoing scheduled maintenance to update our API infrastructure on 5/3/26 @ 1:00 AM EDT. During the 30-minute window from 1:00 AM EDT to 1:30 AM EDT, we expect 5 minutes of downtime for Check API, Console, and Components.
Apr 27, 23:53 UTC
May 2, 2026

No incidents reported.

May 1, 2026

No incidents reported.

Apr 30, 2026

No incidents reported.

Apr 29, 2026

No incidents reported.

Apr 28, 2026
Resolved - 4/28: Resolution Update and RCA

Impact and duration: From approximately 1:05 AM PT to 1:44 AM PT on Tuesday, April 28 (~40 minutes), API endpoints across the platform returned elevated error rates and Console pages were intermittently unavailable. Health checks returned to green at 1:44 AM PT and the system has been stable since.

What happened: Two scheduled background jobs ran simultaneously in the early-morning window and pushed our cache layer past its memory limit. Once the cache filled, normal cache writes began failing globally, which surfaced as the broad API degradation.
Today's root cause is distinct from the SEV-2 on Friday, 4/24, which was driven by an inbound webhook volume spike from a banking partner. We traced today's failures directly to cache memory pressure, not webhook volume.

Going forward: We are taking immediate steps to prevent recurrence, including reducing the load these scheduled jobs place on the cache layer and improving how the system handles cache pressure so it degrades gracefully rather than producing errors. We are also strengthening monitoring on cache health and related signals so this class of issue is caught before it impacts API availability.
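The "degrades gracefully rather than producing errors" behavior described above can be sketched as a fail-open cache wrapper: when a cache write is rejected under memory pressure, the request is still served from the origin instead of failing. This is a hypothetical illustration, not the actual implementation; the `CacheFullError` exception, the `FailOpenCache` class, and the dict-like backend are all assumptions.

```python
# Hypothetical sketch of fail-open cache handling. If the cache layer is
# under memory pressure and rejects writes, we degrade performance (more
# origin reads) rather than availability (API errors).

class CacheFullError(Exception):
    """Raised by the (hypothetical) cache client when a write is rejected."""

class FailOpenCache:
    def __init__(self, store):
        self.store = store  # dict-like cache backend (illustrative)

    def get_or_compute(self, key, compute):
        # 1. Try the cache; a miss is normal.
        if key in self.store:
            return self.store[key]
        # 2. Cache miss: compute the value from the origin (e.g. a database).
        value = compute()
        # 3. Best-effort write-back: a failed write is swallowed, so the
        #    caller still receives a correct response.
        try:
            self.store[key] = value
        except CacheFullError:
            pass  # emit a metric/log here instead of failing the request
        return value
```

Under this design, the April 28 failure mode (global cache-write failures surfacing as API errors) would instead appear as elevated origin load and latency, which the strengthened cache-health monitoring is meant to catch early.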

Apr 28, 16:01 UTC
Monitoring - The system is recovering. We are actively monitoring to ensure it remains stable. A root cause analysis will follow.
Apr 28, 08:45 UTC
Investigating - We've identified an event impacting API availability and causing inconsistent performance. Investigation is in progress.
Apr 28, 08:37 UTC
Apr 27, 2026

No incidents reported.

Apr 26, 2026

No incidents reported.