CloudRepo

CloudRepo Status

Everything seems OK
[Uptime timeline covering 21/12 through 20/01; no incidents in this period.]

Latest Incidents

Resolved · API Outage

During a server upgrade, there was approximately 1-2 minutes of API downtime. This was unintentional, and we are investigating why it happened so we can avoid similar outages during future hardware upgrades.

over 2 years ago · Official incident report

Resolved · Package Repository Outage

Customer Impact: Access to our storage APIs (publishing/reading packages) returned 500 errors for some partners. This was a repeat of the May 9th outage; please refer to that incident summary for more details.
Resolution: After we were alerted to the issue, we restored functionality for all partners.
Duration: Approximately 45 minutes beginning around 11:00 CST, and another 20 minutes around 18:00 CST.
Future Mitigation: To prevent this from happening again, we will be implementing several changes:
1) Improve our monitoring to detect 500 errors as soon as they occur.
2) Increase the size of our cluster to give us more headroom in our connection pools.
3) Continue to investigate the root cause and fix anything that may be holding on to connections.

over 2 years ago · Official incident report

Resolved · Package Repository Outage

Customer Impact: Access to our storage APIs (publishing/reading packages) returned 500 errors for some partners.
Root Cause: Our servers exhausted their connections to the storage layer, and our monitoring system did not alert us to this degraded state; a partner alerted us instead.
Resolution: After we were alerted to the issue, we restored functionality for all partners.
Duration: Approximately two hours.
Future Mitigation: To prevent this from happening again, we will be implementing several changes:
1) Improve our monitoring to detect 500 errors as soon as they occur.
2) Increase the size of our cluster to give us more headroom in our connection pools.
3) Continue to investigate the root cause and fix anything that may be holding on to connections.

over 2 years ago · Official incident report
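
To illustrate the first mitigation item above (detecting 500 errors as soon as they occur), here is a minimal monitoring sketch in Python. It is not CloudRepo's actual tooling; the health-check URL, polling interval, error threshold, and alert hook are all assumed values for illustration.

import time
import urllib.request
import urllib.error

# Hypothetical values; CloudRepo's real endpoints and thresholds are not public.
HEALTH_URL = "https://example.cloudrepo.io/health"   # assumed endpoint
CHECK_INTERVAL_SECONDS = 30
ERROR_THRESHOLD = 3  # consecutive 5xx responses before alerting

def check_once(url: str) -> int:
    """Return the HTTP status code for a single GET request."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code  # 4xx/5xx raise HTTPError; the status code is on the exception
    except urllib.error.URLError:
        return 0         # network failure; treat as unhealthy

def alert(message: str) -> None:
    """Placeholder alert hook; a real setup would page on-call or post to a chat channel."""
    print(f"ALERT: {message}")

def monitor() -> None:
    consecutive_errors = 0
    while True:
        status = check_once(HEALTH_URL)
        if status >= 500 or status == 0:
            consecutive_errors += 1
            if consecutive_errors >= ERROR_THRESHOLD:
                alert(f"{HEALTH_URL} returned {status} {consecutive_errors} times in a row")
        else:
            consecutive_errors = 0
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()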

Sources

Official status page

Stats

0 incidents in the last 7 days

0 incidents in the last 30 days

Automatic Checks

Last check: 2 minutes ago

Last known issue: over 2 years ago


Don't miss another CloudRepo incident!

Ctatus aggregates status pages from services so you don't have to. Follow CloudRepo and hundreds of other services and be the first to know when something is wrong.

Get started now
14-day trial / No credit card required