
Buttondown Status

Everything seems OK
[Daily uptime chart, 21/12 to 20/01]

Latest Incidents

Resolved General availability outage

For around an hour this morning, Buttondown had significantly degraded availability.

## What happened?

New hosts refused to spin up and were "correctly" throwing 500s for around 30% of requests. (This only impacted hosts that were automatically cycling in and out, which is why it wasn't all requests.)

## Why did this happen?

I'm using an undocumented Notion API to power documentation search, and the token I was using for that API expired in a way that I was not defensively programming against. This meant that each time the server tried to restart, it would hit the API, fall over, and then pass that failure on to the client. As soon as this got widespread enough, I got an alert for it... but I was out on a run. As soon as I got back, I hit the circuit breaker for that codepath and things went back to normal.

## Why won't this happen again?

That circuit breaker is going to stay off for a little while, but I plan on moving all of that compilation to a build-time step anyway, removing the Notion codepath from the critical path of the application!

## Any questions?

Email me: [email protected]

about 2 months ago · Official incident report
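The fix described above boils down to a kill switch plus defensive handling of a third-party call. A minimal sketch of that pattern, assuming a hypothetical `fetch_notion_docs` callable and `NOTION_SEARCH_ENABLED` flag (neither name is from Buttondown's codebase):

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical kill switch; in practice this would live in settings
# or a feature-flag store, not a module-level constant.
NOTION_SEARCH_ENABLED = False

def load_search_index(fetch_notion_docs):
    """Build the docs search index, degrading gracefully if Notion fails.

    `fetch_notion_docs` is an assumed callable that hits the
    (undocumented) Notion API and may raise on an expired token.
    """
    if not NOTION_SEARCH_ENABLED:
        # Circuit breaker: skip the third-party call entirely.
        return []
    try:
        return fetch_notion_docs()
    except Exception:
        # Defensive programming: an expired token should fall back to
        # an empty index, not 500 every request on server startup.
        logger.exception("Notion docs fetch failed; serving empty index")
        return []
```

Moving the fetch to a build-time step, as the report proposes, takes even this fallback out of the request path entirely.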

Resolved General availability outage

For around five hours (the early morning of 11/13, Pacific time), Buttondown's availability was heavily degraded.

## What happened?

Around 50-70% of requests timed out. It wasn't _quite_ a complete outage, but essentially so.

## Why did this happen?

This is... actually fairly silly, as far as these things go. An old third-party log handler that Buttondown was using shut off access at the logdrain I was using. (This is a totally reasonable thing to do!) _Unfortunately_, that clobbered a huge number of the requests being served, to the point where all the active dynos on my infrastructure were busy complaining and throwing errors because they couldn't emit logs. The irony of this does not escape me.

## Well, why did it take so long to fix?

I was asleep. No, really! That's the reason. I've got two thresholds for Buttondown outages:

1. The server is down for a little while, which texts me.
2. The server is hard down for all requests, which calls and pages me.

This was an exceptionally long bout of the former, which meant I woke up to some seventy outage texts but no outright pages.

## Why won't this happen again?

First: I've upped (or lowered, depending on how you look at it) the threshold for what constitutes an outage. I set up a lot of these alerts two years ago, when Buttondown was a fraction of a fraction of its current size; thankfully, things are generally stable, but it's still time to be more alert. Any non-trivial breakage of traffic now pages little old me.

Second: to fix the _actual_ issue, I'm spending some time this weekend reworking the logging & error infrastructure Buttondown uses so that it degrades more gracefully (a sketch of that idea follows below).

Have any questions? Email me: [email protected]
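One way to get that graceful degradation, sketched under the assumption that the culprit was a network-backed `logging` handler (the actual handler isn't named in the report): wrap it so a dead logdrain can never raise into the request path. Stdlib handlers already route `emit` failures to `Handler.handleError` rather than raising, but a third-party handler may not.

```python
import logging

class FailsafeHandler(logging.Handler):
    """Wrap a network-backed log handler so that logdrain failures
    are dropped instead of raised into the request path."""

    def __init__(self, inner: logging.Handler):
        super().__init__()
        self.inner = inner

    def emit(self, record: logging.LogRecord) -> None:
        try:
            self.inner.emit(record)
        except Exception:
            # A dead logdrain is an annoyance, not an outage:
            # drop the record rather than erroring mid-request.
            pass

# Usage (the third-party handler name is hypothetical):
# drain = SomeLogdrainHandler(...)
# logging.getLogger().addHandler(FailsafeHandler(drain))
```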

Resolved Broken tracking links

I tried to switch over to SSL for our tracking links, and it looks like something went awry. I need to follow up with Cloudflare as to what the issue was (it was likely a misconfiguration on my end, in all honesty, but hard to say); in any case, for around two hours all tracked links were broken because they pointed to https and not http. I've reached out to the folks who sent outbound emails during that time, and am in the process of backfilling the transactional emails (such as subscriber confirmations) that were sent during that window!
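A cheap guard against this class of breakage (a sketch, not anything from Buttondown's actual fix): before flipping link generation over to https, smoke-test that the tracking domain actually answers over TLS, and keep emitting http links if it doesn't. `TRACKING_HOST` is a placeholder.

```python
import urllib.error
import urllib.request

# Placeholder; substitute the real tracking-link domain.
TRACKING_HOST = "links.example.com"

def tracking_scheme(timeout: float = 3.0) -> str:
    """Return "https" only if the tracking host serves TLS; else "http"."""
    try:
        urllib.request.urlopen(f"https://{TRACKING_HOST}/", timeout=timeout)
    except urllib.error.HTTPError:
        # Any HTTP response at all means the TLS handshake succeeded.
        pass
    except (urllib.error.URLError, OSError):
        # Connection or certificate trouble: stay on http for now.
        return "http"
    return "https"
```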

Resolved 500s on Frontend Application

GitHub was running into [issues](https://www.githubstatus.com/incidents/80d0cs6kpsps) and Buttondown's CI pipeline wasn't running properly, which led to a feature branch being deployed to production and caused breakages for a few minutes.

10 months ago · Official incident report
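The underlying hazard here is deploying a ref that hasn't passed CI on the default branch. One guard, sketched with GitHub's combined commit status API (the repo name and the `main`-only policy are assumptions, not Buttondown's actual pipeline):

```python
import json
import os
import sys
import urllib.request

# Hypothetical repo; Buttondown's actual pipeline differs.
REPO = "buttondown/buttondown"
REF = os.environ.get("DEPLOY_REF", "main")

def combined_status(repo: str, ref: str) -> str:
    """Fetch the combined commit status ("success", "pending",
    or "failure") from GitHub's REST API."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/commits/{ref}/status",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["state"]

if __name__ == "__main__":
    if REF != "main":
        sys.exit(f"refusing to deploy non-default ref {REF!r}")
    if combined_status(REPO, REF) != "success":
        sys.exit("refusing to deploy: CI is not green on this ref")
    print("CI green; proceeding with deploy")
```

Note that this fails closed: if GitHub's API is itself unreachable, as during the incident above, the deploy is blocked rather than shipped unverified.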

Sources

Official status page

Stats

0 incidents in the last 7 days

0 incidents in the last 30 days

Automatic Checks

Last check: 4 minutes ago

Last known issue: about 2 months ago

Don't miss another Buttondown incident!

Ctatus aggregates status pages from services so you don't have to. Follow Buttondown and hundreds of other services and be the first to know when something goes wrong.
