Server Bandwidth Crisis — Village of Maple Bluff (2025-12-12)
Overview
During the [1], Mark identified the root cause of recurring server-wide 500 gateway errors: the Village of Maple Bluff website was consuming 131 GB/month of bandwidth due to heavy bot traffic. Because Asymmetric runs ~60 client sites on a single shared server, this bandwidth spike was starving other sites of resources at peak moments, causing gateway failures across the board.
Mark implemented a fix during or shortly before the call. Karly was asked to report any recurrence.
Incident Details
| Field | Value |
|---|---|
| Affected site (source) | Village of Maple Bluff |
| Bandwidth consumed | 131 GB/month |
| Root cause | Bot traffic generating tens of thousands of visits |
| Server-wide impact | 500 gateway errors on all co-hosted client sites |
| Status at call | Fix applied by Mark; monitoring ongoing |
| Secondary impact | Possible contributor to the duplicate-order issue on Doodla Farms (unconfirmed) |
Technical Context
The shared server hosts approximately 60 client sites. When the Village of Maple Bluff site received large bot traffic bursts, bandwidth peaked sharply. Any user or client attempting to load a page or log in during those peaks encountered 500 errors — even if their own site was healthy.
Mark noted the distinction between bandwidth (the traffic-driven problem) and storage (which was fine at 125 GB of 2 TB used). The 500 errors were a resource exhaustion issue, not a disk or code problem.
"The thing about having a server like this is we've got 60 clients on one big server. And so if one of our clients starts getting messed with, it can affect the others." — Mark Hope
Related Issues Surfaced
Asymmetric Applications Site — 404 Errors & Traffic Drop
While reviewing server logs, Mark identified a separate but critical issue on the Asymmetric Applications company website:
- 86,000 404 ("page not found") errors in server logs
- Google Search Console impressions down ~25% in the last 30 days
- Previously receiving ~9,000 impressions/day; now near zero
- Suspected causes: broken internal links, de-indexed pages, or a structural site change
This is flagged as a top priority for both Mark and Karly. See [2].
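As a starting point, the Search Console API can pinpoint the day impressions fell off. A sketch, assuming a service-account key file ("sa.json", hypothetical name) that has been granted read access to the property; the property URL below is a placeholder.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file("sa.json", scopes=SCOPES)
service = build("searchconsole", "v1", credentials=creds)

# Daily impressions for the 30 days leading up to the call.
resp = service.searchanalytics().query(
    siteUrl="https://www.asymmetricapplications.example/",  # placeholder property URL
    body={
        "startDate": "2025-11-12",
        "endDate": "2025-12-12",
        "dimensions": ["date"],
    },
).execute()

for row in resp.get("rows", []):
    print(row["keys"][0], int(row["impressions"]))
```

A sharp cliff in the date series would point to a structural change (redirects, de-indexing) on a specific day, rather than a gradual decline.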
Doodla Farms — Duplicate Orders
A possible (unconfirmed) connection exists between the bandwidth spikes and 2–3 duplicate orders on the Doodla Farms WooCommerce store. eShok investigated but found no definitive cause; as a precaution, they updated plugins and disabled caching on the checkout page.
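If the duplicates recur, a quick way to surface them is to scan an order export for same-customer, same-total orders placed close together. A minimal sketch, assuming a hypothetical CSV export with order_id, billing_email, order_total, and ISO-format date_created columns.

```python
import csv
from collections import defaultdict
from datetime import datetime

WINDOW_SECONDS = 300  # flag orders from the same customer within 5 minutes
orders = defaultdict(list)  # (email, total) -> [(timestamp, order_id), ...]

with open("orders.csv", newline="") as fh:  # hypothetical WooCommerce export
    for row in csv.DictReader(fh):
        ts = datetime.fromisoformat(row["date_created"])
        orders[(row["billing_email"], row["order_total"])].append((ts, row["order_id"]))

for (email, total), hits in orders.items():
    hits.sort()
    for (t1, id1), (t2, id2) in zip(hits, hits[1:]):
        if (t2 - t1).total_seconds() <= WINDOW_SECONDS:
            print(f"Possible duplicate: orders {id1} and {id2} ({email}, {total})")
```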
Server Health Notes (General)
Mark reviewed the server dashboard during the call and flagged several broader concerns:
- Multiple sites showing poor health scores and high cache miss rates
- Some sites generating elevated 500 error counts beyond the Maple Bluff incident
- Action needed: coordinate with eShok to audit and remediate low-health sites (a baseline sweep sketch follows below)
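A lightweight way to baseline that audit is a periodic sweep that records status code and latency for every hosted domain. A minimal sketch using only the standard library; the site list is a placeholder for the real ~60-domain inventory.

```python
import time
import urllib.error
import urllib.request

# Placeholder list; in practice, export the domain inventory from the hosting panel.
SITES = [
    "https://villageofmaplebluff.example",
    "https://doodlafarms.example",
]

for url in SITES:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code  # e.g. 500 during a bandwidth spike
    except Exception as exc:
        status = f"ERR ({exc})"
    elapsed = time.monotonic() - start
    print(f"{status}\t{elapsed:5.2f}s\t{url}")
```

Run on a schedule, this gives a simple before/after record of which sites throw 500s and when, which should make any recurrence of the Maple Bluff pattern easy to spot.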
Action Items
- [x] Mark — Apply fix for Village of Maple Bluff bot traffic bandwidth spikes (done during/before call)
- [ ] Karly — Report any recurrence of 500 gateway errors
- [ ] Mark + Karly — Investigate Asymmetric Applications 404 errors and traffic drop; fix indexing/redirects (top priority)
- [ ] Mark — Coordinate with eShok to review and remediate other low-health sites on shared server
Related Pages
- [3]
- [2]
- [4]
- [5]
Sources
- 2025 12 12 Weekly Ops Call|2025 12 12 Weekly Ops Call
- 2025 12 12 Traffic Drop Investigation|Asymmetric Applications Traffic Drop Investigation
- Index|Village Of Maple Bluff — Client Index
- 2025 12 12 Weekly Ops Call|Weekly Ops Call — 2025 12 12
- Shared Hosting Risk|Shared Hosting Risk — Single Client Impact On All Sites