May 4, 2026
· 8 min read

The GitHub Exodus: Why Developers Are Walking Away in 2026
GitHub uptime has collapsed to roughly 86% in April 2026, 292 pull requests vanished from a Merge Queue bug, and Mitchell Hashimoto — user #1,299 — has packed up Ghostty and left. Here's what's actually happening, why AI agents are part of the story, and where developers are migrating.

TL;DR
- GitHub's third-party measured uptime has dropped to ~86% in April 2026 — far below its officially reported 99%+.
- On April 23, the Merge Queue silently unmerged 292 pull requests across 658 repos.
- On April 27, an Elasticsearch botnet attack took down GitHub search for hours.
- On April 28, GitHub disclosed a critical remote code execution vulnerability where `git push` could execute code on its servers.
- Mitchell Hashimoto (HashiCorp co-founder, GitHub user #1,299) migrated his 50K-star Ghostty project off the platform the same day.
- Root cause hint from GitHub's own CTO: AI agentic workflows have hammered the platform with a load profile it wasn't built for.
Why this matters
GitHub isn't just a website. It's the public record of software. If a project isn't on GitHub, for most developers it might as well not exist. Over 100 million developers depend on it for source control, issues, pull requests, CI/CD via Actions, package hosting, and — increasingly — as a professional resume.
That's a lot of weight resting on one platform. Which is exactly why, when the platform starts dropping merges and serving 500s on search, developers don't just complain. They start packing up and leaving.
How bad is it, really?
There's a strange disconnect happening between what developers experience and what GitHub reports.
| Metric | Third-party monitoring | GitHub status page |
|---|---|---|
| 2025 uptime | Below 90% | 99%+ |
| April 2026 uptime | ~86% | 99%+ |
| AWS S3 durability (for comparison) | 11 nines (99.999999999%) | — |
For context: AWS S3 promises eleven nines of durability. GitHub is currently operating at one nine. That's not a service-level agreement — that's a coin flip with extra steps.
⚠️ Warning: "Operational" on a status page often means the homepage loads. It doesn't always mean Actions are running, search returns results, or your Merge Queue isn't quietly dropping PRs in the background.
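One way around that gap is to probe the functionality you actually depend on instead of trusting the status page. A minimal sketch; the `probe_remote` helper and the target repo URL are illustrative choices, not official tooling:

```bash
#!/usr/bin/env bash
# probe_remote: check whether the Git data plane actually answers,
# independent of what any status page claims.
probe_remote() {
  local remote="$1"
  # ls-remote does a real ref advertisement round-trip, the same
  # handshake every clone/fetch/push starts with.
  if git ls-remote --exit-code "$remote" HEAD >/dev/null 2>&1; then
    echo "OK"
  else
    echo "FAILING"
  fi
}

# Any well-known public repo works as a canary.
probe_remote "https://github.com/git/git.git"
```

The same function works against any host or local path, so it doubles as a health check for whatever backup remote you mirror to.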
The April 2026 timeline of pain
The last week of April was a uniquely bad stretch — three distinct, overlapping failures in under a week.
Each of these alone is recoverable. Stacked back-to-back, they read like a platform under genuine structural stress.
The Merge Queue incident
The Merge Queue is supposed to be the boring part of GitHub — a serial pipeline that takes approved PRs, rebases them, runs CI, and merges them in order. Boring is the goal. Boring means your code lands.
On April 23, the queue silently un-merged 292 pull requests across 658 repositories. For teams using merge queues to enforce a clean main branch, this meant code that was reported as merged simply… wasn't, anymore. The fix was straightforward (re-merge), but the trust damage runs deeper than a single incident.
The search outage
Two days later, GitHub's Elasticsearch cluster — which powers code search, issue search, and most of the navigation autocomplete — was hit hard enough by a botnet that search went down for hours. If you've ever tried to navigate a large repo without working search, you already know how much of the developer workflow lives on top of that subsystem.
The RCE disclosure
On April 28, GitHub published two blog posts the same morning. One was the CTO apologizing for reliability. The other disclosed a critical remote code execution vulnerability where a crafted git push could execute code on GitHub's own servers. Patched, but the dual-post timing made the broader picture impossible to spin.
The Hashimoto departure
If you only follow one signal in this story, follow this one.
Mitchell Hashimoto is GitHub user #1,299. He joined in 2008, the year the site launched. He co-founded HashiCorp, took it public, and despite enough money to never write code again, still ships daily — currently on a terminal emulator called Ghostty that sits at over 50,000 stars.
On April 28, he published a breakup letter. The line that hit hardest:
"I want to ship software, and it doesn't want me to ship software."
He'd kept a journal for a month, marking an X on every day a GitHub outage blocked his work. Almost every day got an X. Ghostty is now leaving the platform.
This isn't a random user rage-quitting. This is a power user — one of the first power users — voting with his repository.
💡 Tip: When the people who built the developer-tools era start migrating, that's not noise. That's the leading edge of a platform shift.
Hashimoto isn't alone, either. The Zig project migrated earlier. Several other large open-source repos have started mirroring or moving outright.
Why is this happening now?
GitHub used to be reliable. After Microsoft's 2018 acquisition for $7.5 billion, the platform actually got better for a few years — Actions launched, Codespaces shipped, Copilot was integrated. So what changed?
Here's the part GitHub's own CTO put in writing: since 2025, agentic development workflows have accelerated sharply.
Translated out of corporate-speak: AI coding agents are hitting GitHub like an all-you-can-eat buffet.
A traditional developer might:
- Clone a repo a few times a week
- Push 5–20 commits a day
- Open a handful of PRs per week
- Trigger Actions on each push
A swarm of AI agents working on the same codebase will:
- Clone constantly across parallel sandboxes
- Push hundreds of commits an hour
- Open dozens of speculative PRs per session
- Trigger Actions runs for every micro-iteration
GitHub wasn't sized for this load profile. The architecture that handled human-paced development is now serving machine-paced development at the same price points and largely the same SLAs.
Important: GitHub isn't just hosting developers anymore. It's hosting the agents that work for developers. The traffic curves don't even live on the same chart.
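If you run agents yourself, one crude client-side mitigation is to put a floor on the time between Git operations. A sketch, assuming your agents shell out through a wrapper function; the `THROTTLE_GAP` variable and the state-file location are arbitrary illustrative choices:

```bash
#!/usr/bin/env bash
# throttled_git: run git with a minimum gap between invocations, so a
# swarm of agents can't hammer the remote at machine speed.
throttled_git() {
  local gap="${THROTTLE_GAP:-5}"                   # seconds between calls
  local state="${TMPDIR:-/tmp}/throttled-git.last" # last-call timestamp
  local now last delay
  now=$(date +%s)
  last=$(cat "$state" 2>/dev/null || echo 0)
  delay=$(( gap - (now - last) ))
  if [ "$delay" -gt 0 ]; then
    sleep "$delay"                                 # enforce the minimum gap
  fi
  date +%s > "$state"
  git "$@"
}

# Usage: throttled_git push origin my-branch
```

A shared timestamp file like this throttles all agents on one machine together; a per-agent state path would rate-limit them individually instead.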
The alternatives
The good news for developers: the exit doors are well-marked.
| Platform | Best for | CI/CD | Self-host? | AI features | Vibe |
|---|---|---|---|---|---|
| GitLab | Teams wanting GitHub parity | ✅ Built-in | ✅ | ✅ Duo | Reliable, slightly boring |
| Codeberg | Open source projects | ✅ via Forgejo Actions | ✅ (Forgejo) | ❌ | German nonprofit, principled |
| Sourcehut | Minimalists, mailing-list workflows | ✅ builds.sr.ht | ✅ | ❌ Explicitly none | Hacker-pure, fast |
| Gitea / Forgejo | Self-hosted small teams | ✅ Actions-compatible | ✅ | Optional | Lightweight, community-run |
| Bitbucket | Atlassian shops | ✅ Pipelines | ✅ DC | ✅ | Enterprise-flavored |
None of these have GitHub's network effect. None of them have its discoverability. But all of them are running, paid for, and accepting refugees.
💡 Tip: You don't have to migrate to hedge. Mirror critical repos to a second host with a simple cron job. If GitHub has another bad week, you flip your CI deploy targets and keep shipping.
A minimal mirror script:
```bash
#!/usr/bin/env bash
# Mirror a GitHub repo to a backup remote (GitLab, Codeberg, etc.)
set -euo pipefail                    # fail fast instead of pushing a bad mirror
REPO="$1"    # e.g. git@github.com:you/yourrepo.git
BACKUP="$2"  # e.g. git@codeberg.org:you/yourrepo.git
WORKDIR=$(mktemp -d)                 # fresh dir; avoids clobbering a fixed /tmp path
git clone --mirror "$REPO" "$WORKDIR/mirror.git"
cd "$WORKDIR/mirror.git"
git remote set-url --push origin "$BACKUP"
git push --mirror
cd / && rm -rf "$WORKDIR"
```

Breaking it down:
- `--mirror` clones every ref — branches, tags, notes — not just `main`.
- `set-url --push` rewrites the push target without touching the fetch URL.
- `git push --mirror` syncs everything in one shot.
- Drop this in a nightly cron and you have a working escape hatch.
What this means for you
You probably don't need to abandon GitHub today. The network effect is real, the integrations are deep, and Microsoft is — almost certainly — going to throw engineering hours at the reliability problem. The platform has recovered from worse before.
But the assumptions that held for the last decade need an update:
- Treat GitHub as critical infrastructure, not free magic. Have a backup plan for source, CI, and releases.
- Decouple deploys from GitHub uptime. If your production deploy pipeline can't ship when GitHub is degraded, that's a fixable architectural choice.
- Mirror your important repos. Even passively. The cost is near zero.
- Watch the load you generate with AI agents. Caching, batching, and rate limits aren't just polite — they're how you avoid being part of the problem.
- Keep an eye on the migrations. When more high-signal projects start moving, the calculus changes for everyone.
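Mirroring can be even cheaper than a cron job: Git supports multiple push URLs on a single remote, so every ordinary `git push` updates the backup too. A sketch; the `add_backup_push` helper name and the Codeberg URL in the usage line are placeholders, not standard tooling:

```bash
#!/usr/bin/env bash
# add_backup_push BACKUP_URL: make every `git push` update a backup host
# as well as origin. Run inside a clone whose origin is your primary host.
add_backup_push() {
  local backup="$1"
  local primary
  primary=$(git remote get-url origin)
  # --add --push replaces the default push URL on first use,
  # so re-add the primary explicitly before adding the backup.
  git remote set-url --add --push origin "$primary"
  git remote set-url --add --push origin "$backup"
}

# Usage: add_backup_push git@codeberg.org:you/yourrepo.git
```

Fetches still come from origin only; the extra push URL just means you can't push code without also backing it up.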
Production checklist
- Mirror every production-critical repo to at least one non-GitHub remote, weekly minimum.
- Cache CI dependencies locally or in your own object storage — don't rely on Actions runners pulling fresh from the internet on every job.
- Pin Action versions by SHA, not by tag. Tag re-pushes have caused multi-org outages before.
- Log Merge Queue events so you can detect silent un-merges quickly.
- Document a fallback runner. If GitHub Actions goes dark, can your pipeline run on self-hosted runners or a secondary CI?
- Throttle agent traffic. If you're orchestrating AI coding agents, rate-limit their Git operations and PR creation.
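The Merge Queue logging item deserves a concrete shape. If your tooling records the SHA of every merge it performs, a periodic job can verify each one is still an ancestor of the target branch. A sketch; `verify_merges` and the SHA log file are hypothetical pieces you would wire up yourself (for example from a post-merge webhook):

```bash
#!/usr/bin/env bash
# verify_merges BRANCH SHAFILE: check that every recorded merge commit
# is still reachable from BRANCH. A silently un-merged PR shows up as GONE.
verify_merges() {
  local branch="$1" shafile="$2" sha
  while read -r sha; do
    if git merge-base --is-ancestor "$sha" "$branch" 2>/dev/null; then
      echo "OK   $sha"
    else
      echo "GONE $sha"
    fi
  done < "$shafile"
}

# Usage: git fetch origin main && verify_merges origin/main merged-shas.txt
```

Run it on a schedule and alert on any `GONE` line; that turns a silent un-merge into a noisy one.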
Conclusion
I'm not packing my repos up tomorrow. GitHub is still where the contributors are, where the discovery is, where the muscle memory lives. But the last few weeks have changed how I think about it — less as a permanent home, more as a current home.
Hashimoto wrote that he wants to ship software, and the platform doesn't want him to. That's a sentence worth sitting with. The platform we depend on is straining under a load profile no one designed it for, and the people who built the developer-tools era are starting to vote with their git remote.
If you're a developer in 2026, the right move isn't panic. It's having an exit plan you never need to use, and writing your code as if the platform underneath you is just another vendor — one of many — instead of the air you breathe.
FAQ
Is GitHub really down right now?
Third-party monitoring puts GitHub uptime around 86% in April 2026, while GitHub's own status page reports above 99%. The gap is the story — partial degradations, slow Actions, broken search, and silent Merge Queue failures don't always trip GitHub's official status checks, but they absolutely block your work.
What happened with the 292 pull requests on April 23rd?
GitHub's Merge Queue silently unmerged 292 PRs across 658 repositories. The platform whose entire job is to never lose your code briefly lost it. Most teams recovered by re-merging, but it shook trust in the core merge pipeline.
Why did Mitchell Hashimoto leave GitHub?
The creator of Vagrant, Terraform, and Ghostty kept a journal for a month and marked an X on every day a GitHub outage blocked his work. Almost every day got an X. He's been on GitHub since 2008 as user #1,299, and he wrote that he wants to ship software but GitHub doesn't want him to.
What are the best alternatives to GitHub?
GitLab is the closest feature parity option with built-in CI/CD. Codeberg is a German nonprofit running Forgejo, ideal for open source. Sourcehut is minimal, fast, and explicitly AI-free. For self-hosted setups, Gitea and Forgejo are mature choices.
Are AI coding agents really to blame for GitHub's load problems?
Partly. GitHub's CTO publicly acknowledged that agentic development workflows have accelerated sharply since 2025. Agents clone, push, open PRs, and run Actions at machine speed — multiplying the load profile GitHub was originally designed for.
Should I migrate my projects off GitHub right now?
For most teams, no — not yet. The network effect (discoverability, contributors, integrations) still wins. But mirroring critical repos to a second host is now a reasonable disaster-recovery move, especially for production deployment pipelines.