Severity Level: [High]

Our engineers and DCops team have worked through the remaining issues affecting several hypervisors and instances, and all services and instances are now reporting healthy. As such, we consider this incident resolved.

The root cause was routine UPS-related maintenance that affected power to our cooling systems, causing networking and other hardware to power off or be throttled until power was restored. The first reports of an issue came in on 3-25-25 at 11:45 PM CST. While some networking and services were partially operational during portions of this time, the entire incident lasted approximately 16 hours.

If you are unsure whether your service is still impacted by this incident, please open a support ticket and we will get back to you as soon as possible.
To be automatically notified when the status of this incident changes, please click the “Subscribe to updates” button. Thank you for your patience and understanding during this time.