I Gave Claude Code a VPS and Asked It to Build My Own VPN
The Setup
I'd been using commercial VPN services for years. At some point it started feeling odd — all my traffic routed through a company I know nothing about, trusting them not to do anything with it. After building servers for PanPanMao and spending my days in Claude Code, I figured: I run servers now. I can just host my own.
I rented a VPS — Atlas Networks, LA, 9929 routing, static residential IP. Then I handed Claude Code the SSH credentials and a single goal: build me a complete proxy service I can use myself and share with friends.
This is the story of what happened. A few things went sideways. What came out the other end was more complete than I expected.
Shadowsocks Lasted Four Hours
Claude Code's first deployment was Shadowsocks. Industry standard, proven, the obvious starting point.
It lasted four hours before the IP was fully blocked.
Not throttled — blocked. Traffic cut entirely. I contacted the hosting provider; they confirmed active blocking by the GFW.
This was my first real lesson in how GFW detection actually works. It's not a static blocklist — it's an active probing system. When it sees traffic that doesn't match known protocols, it probes the server directly: sends malformed packets, checks response behavior. Shadowsocks was once difficult to fingerprint, but the GFW has been trained on it for years. In 2025, deploying Shadowsocks from a fresh IP is essentially flagging yourself.
The core problem isn't encryption — the GFW can't read your payload. The problem is traffic fingerprinting. High-entropy ciphertext, non-standard handshakes, and response patterns that match proxy behavior are all signals. Shadowsocks hits all of them.
New VPS. Different approach.
Reality: Hiding in Plain Sight
The new strategy was sing-box with VLESS-Reality as the primary protocol.
Claude Code explained how Reality works, and the design logic stuck with me: instead of making your traffic look like nothing, you borrow the TLS fingerprint of a real website — say, microsoft.com. Your TLS handshake is indistinguishable from a browser visiting Microsoft. The GFW has to choose between letting you through and blocking everyone who connects to Microsoft. It always chooses to let Microsoft through.
This is adversarial design in the good sense — working with the constraint instead of against it. Traditional proxies try to hide by appearing as nothing; Reality hides by appearing as something real and large.
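To make the mechanism concrete, here is a sketch of what a Reality inbound looks like in sing-box config terms. The keys follow sing-box's VLESS/Reality inbound schema; the UUID, private key, short ID, and handshake target are placeholders, and the exact values on my server differ:

```json
{
  "inbounds": [
    {
      "type": "vless",
      "listen": "::",
      "listen_port": 443,
      "users": [
        { "uuid": "REPLACE-WITH-UUID", "flow": "xtls-rprx-vision" }
      ],
      "tls": {
        "enabled": true,
        "server_name": "www.microsoft.com",
        "reality": {
          "enabled": true,
          "handshake": { "server": "www.microsoft.com", "server_port": 443 },
          "private_key": "REPLACE-WITH-X25519-PRIVATE-KEY",
          "short_id": [ "0123abcd" ]
        }
      }
    }
  ]
}
```

The handshake block is the borrowed identity: any probe that isn't a valid client gets relayed to the real www.microsoft.com and sees a genuine Microsoft TLS response.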
Alongside Reality, Claude Code configured three fallback protocols: Hysteria2 and TUIC-v5 (both QUIC-based, fast but UDP-dependent), and VMess-WS (WebSocket-based, the weakest of the set, last resort only). Four protocols, four ports. Firewall rules opened exactly those ports; everything else stayed closed.
In the Clash Verge config, the auto-select proxy group only included VLESS-Reality. The other protocols exist in the config but don't participate in normal traffic routing — Reality handles everything unless it goes down.
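In Clash config terms, that routing looks roughly like the fragment below. This is an illustrative sketch, assuming a node named vless-reality; the actual group and node names in my config differ:

```yaml
proxy-groups:
  - name: auto
    type: url-test                             # latency-based auto-select
    url: http://www.gstatic.com/generate_204   # connectivity probe
    interval: 300
    proxies:
      - vless-reality   # sole member; the fallback nodes are defined
                        # under proxies: but stay out of this group
```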
Building the Stack
With the protocol choice settled, Claude Code worked through the rest systematically.
SSL certificate via acme.sh with Cloudflare DNS API validation. No port 80 required — you give acme.sh a Cloudflare API token with DNS edit permissions, it creates a TXT record to prove domain ownership, then cleans it up automatically. Clean approach; the server doesn't need to expose any additional ports.
The certificate part hit a snag. The sing-box setup script (a one-click tool by yonggekkk) runs acme.sh internally, and it cached the domain name in an email field. When I ran acme.sh separately to issue a cert for the subscription server, it reused that cache. Let's Encrypt received a certificate request with what looked like a domain name in the email field — and rejected it. The error message was ambiguous. It took three failed issuance attempts to diagnose: delete the CA cache directory entirely, re-run with an explicit --accountemail flag. That fixed it.
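For reference, the happy path and the eventual fix look roughly like this. A sketch, not the exact commands from the session; the domain and email are placeholders, and the dns_cf hook reads the Cloudflare token from the CF_Token environment variable:

```shell
# Issue via Cloudflare DNS API validation: no port 80 needed.
export CF_Token="cloudflare-api-token-with-dns-edit"
acme.sh --issue --dns dns_cf -d sub.example.com

# The fix for the stale account cache: remove the cached CA account
# data, re-register with an explicit email, then reissue.
rm -rf ~/.acme.sh/ca
acme.sh --register-account -m you@example.com
acme.sh --issue --dns dns_cf -d sub.example.com --force
```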
Firewall rules, BBR TCP optimization, and service autostart were handled in parallel. Nothing that required me to intervene.
The Subscription Server (The Part I Liked Most)
A lot of self-hosted proxy setups just give you a base64 string or a QR code to scan. You import it, it works, end of story.
I wanted something slightly more complete: a real subscription URL that serves traffic statistics to the client. Clash Verge reads a Subscription-Userinfo response header and renders a traffic bar — showing how much bandwidth you've used and how much remains. It's a small detail, but it makes a self-hosted setup feel deliberate rather than cobbled together.
Claude Code built this as a Python HTTP server running behind nginx:
- nginx handles TLS on port 443 and proxies to the Python service on localhost
- The Python service reads the subscription YAML file, calls vnstat --json to get real upload/download stats from the VPS network interface, and builds the response header dynamically
- Header format: upload=X; download=Y; total=Z; expire=T
- Clash Verge picks this up and renders the traffic bar automatically; no client-side configuration needed
The token is a random 16-character hex string stored on the server. The subscription URL is just https://your.domain.com/{token}.
One small bug surfaced here: the Python service initially didn't handle HEAD requests. Clash Verge uses HEAD to fetch just the headers (for traffic stats) without pulling the full config body. The server was returning an error on HEAD, which broke the traffic bar. Adding a do_HEAD method fixed it. The kind of thing you only discover by running it.
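The whole service fits in a short Python file. This is a minimal sketch rather than the actual code from the session: the token, config path, and helper names are illustrative, and it assumes the vnstat 2.x JSON layout:

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler

TOKEN = "0123456789abcdef"            # hypothetical 16-char hex token
CONFIG_PATH = "/etc/sub/config.yaml"  # hypothetical path to the Clash YAML

def read_vnstat_totals(iface_index=0):
    """Read the current month's tx/rx bytes from `vnstat --json` (v2 layout)."""
    raw = json.loads(subprocess.check_output(["vnstat", "--json"]))
    month = raw["interfaces"][iface_index]["traffic"]["month"][-1]
    return month["tx"], month["rx"]

def build_userinfo(upload, download, total, expire):
    """Format the Subscription-Userinfo header that Clash Verge reads."""
    return f"upload={upload}; download={download}; total={total}; expire={expire}"

class SubHandler(BaseHTTPRequestHandler):
    def _send_headers(self, body_len):
        up, down = read_vnstat_totals()
        self.send_response(200)
        self.send_header("Content-Type", "text/yaml")
        # 3 TB quota, expiry as a unix timestamp (placeholder values)
        self.send_header("Subscription-Userinfo",
                         build_userinfo(up, down, 3 * 1024**4, 1767225600))
        self.send_header("Content-Length", str(body_len))
        self.end_headers()

    def do_GET(self):
        if self.path != f"/{TOKEN}":
            self.send_error(404)
            return
        body = open(CONFIG_PATH, "rb").read()
        self._send_headers(len(body))
        self.wfile.write(body)

    def do_HEAD(self):
        # Clash Verge probes with HEAD to refresh the traffic bar;
        # answer with headers only, no body. This was the missing piece.
        if self.path != f"/{TOKEN}":
            self.send_error(404)
            return
        self._send_headers(len(open(CONFIG_PATH, "rb").read()))
```

nginx sits in front of this, terminating TLS on 443 and proxying to localhost, so the Python process never handles certificates itself.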
Sharing
Once the subscription URL was working, sharing was straightforward.
Clash Verge: paste the URL, click import, activate the profile, enable TUN mode. Done. Shadowrocket on iOS: add the URL as a subscription or scan a QR code.
Friends see the traffic bar in their client — how much of the 3TB monthly quota remains, the expiry date. It looks like a commercial subscription service. It costs about $20/month in VPS fees.
What I Learned
Looking back at the whole session, what stood out wasn't what Claude Code built — it was how it handled the failures.
Cert issuance failing three times because of a stale cache: it didn't just retry the same command. It checked acme.sh's state, found the corrupt cache entry, cleared it, re-ran. nginx proxy config intermittently not forwarding headers: it walked the full chain — client → nginx → Python service → response — and checked what each hop was and wasn't passing through.
Each failure loop was about ten to fifteen minutes. But it never tried to route around the problem — it kept looking for the actual cause.
The one exception was Clash Verge's profile management. It deletes files you've manually placed in its config directory when it restarts, only preserving profiles imported via subscription URL. This is client-specific behavior that Claude Code didn't know about. We figured it out through testing. That's the boundary: anything with clear success criteria ("this URL should return a YAML with a proxies: key") Claude Code can debug to completion on its own. Anything involving client-side behavior with undocumented edge cases needs a human to observe and reason about.
A clear division of labor, and an efficient one.
One afternoon. A fully self-hosted proxy service. Friends get a URL and they're done.