Is Go Faster Than JavaScript? (Golang vs TypeScript Video Review)
In this article we provide a detailed but structured summary and review of the YouTube video “Is TypeScript (NodeJS) Faster than Go?? | A server comparison” by @ThePrimeagen.
Is TypeScript (Node.js) Faster than Go?? — Video Summary
The following is a summary of the “Go vs TypeScript (JavaScript/Node)” backend benchmark video.
- On Twitter, JavaScript often gets hyped as the “greatest language ever.”
- Many JS libraries market themselves as “blazingly fast.”
- The creator questions whether that’s really true, so he sets up an experiment to compare TypeScript (Node.js) against Go for backend performance.
- Rust gets mentioned, but dismissed for this test (too low-level / not fun for backend chat servers).
Experiment Setup
- Server type: Chat-like system built around WebSocket connections.
- Design:
  - Sockets expose an event-emitter-style interface.
  - Clients send messages; the server broadcasts them back.
  - 20 “rooms” are simulated; clients are distributed across them.
  - Messages are JSON objects; the server increments a numeric property (e.g., from 68 to 69), then stringifies the object and sends it back.
- Implementation parity:
  - In Go: channels.
  - In TypeScript: event emitters + functions.
  - Both servers stringify once before broadcasting (to avoid unfair inefficiency).
- Fairness: Implementations were kept as close as possible, with no unfair optimizations. Source code is published for validation; a rough sketch follows this list.
- Client: Written in Rust. Every 100 ms it sends a batch of messages (each with a timestamp, a string, and an increment number).
  - Clients measure round-trip latency (send → receive the same message back).
  - Output is saved as CSV for later statistical analysis.
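The video’s source is published separately; purely as an illustration of the described design, here is a minimal TypeScript sketch using the ws package (the round-robin room assignment, port, and message field names are assumptions, not the video’s code):

```typescript
import { WebSocketServer, WebSocket } from "ws";

const ROOM_COUNT = 20; // 20 simulated rooms, as in the video
const rooms: Set<WebSocket>[] = Array.from(
  { length: ROOM_COUNT },
  () => new Set<WebSocket>()
);
let nextRoom = 0;

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (ws) => {
  // Distribute clients across rooms (round-robin is an assumption).
  const room = rooms[nextRoom++ % ROOM_COUNT];
  room.add(ws);
  ws.on("close", () => room.delete(ws));

  ws.on("message", (raw) => {
    const msg = JSON.parse(raw.toString()); // { ts, text, increment } assumed
    msg.increment += 1;                     // e.g. 68 -> 69
    const out = JSON.stringify(msg);        // stringify ONCE before fan-out
    for (const client of room) {
      if (client.readyState === WebSocket.OPEN) client.send(out);
    }
  });
});
```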
First Results — 500 Clients
- 500 WebSocket connections tested in both Go and TypeScript.
- Mean latency: Go was significantly lower.
- Even at the 75th percentile, Go beats TS by a wide margin (~20 ms better).
Scaling to 1,000 Clients
- Go: Mean latency ~66 ms.
- TypeScript: Mean latency ~2 seconds (!).
- Median latency:
  - Go: 67 ms.
  - TypeScript: 1.65 seconds.
- TypeScript’s latency variance was large (inconsistent tail behavior); a sketch for reproducing these statistics from the client CSVs follows.
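Since the client writes round-trip latencies to CSV, summary statistics like these are easy to recompute. A hedged TypeScript sketch (the file name and the one-latency-value-per-row layout are assumptions):

```typescript
import { readFileSync } from "node:fs";

// Assumes a CSV with a header row and one latency value (ms) per line.
const rows = readFileSync("latencies.csv", "utf8").trim().split("\n").slice(1);
const latencies = rows.map(Number).sort((a, b) => a - b);

const mean = latencies.reduce((sum, x) => sum + x, 0) / latencies.length;

// Nearest-rank percentile: the value below which p% of samples fall.
const pct = (p: number) =>
  latencies[
    Math.min(latencies.length - 1, Math.ceil((p / 100) * latencies.length) - 1)
  ];

console.log({ mean, p50: pct(50), p75: pct(75), p99: pct(99) });
```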
Scaling to 1,500 Clients
- TypeScript: The server becomes unstable and crashes after ~3 minutes, with memory consumption doubling rapidly.
- Go: Still stable, only slight latency increase, continues running smoothly.
Go vs TypeScript Graph Analysis
- Plots generated from averaged results (100-message chunks).
- At 500 connections: Go (red line) is consistently lower/faster than TS (blue line).
- At 1,000 connections: Go flatlines near the bottom (low latency), while TS spikes massively (up to 6 seconds at times, then drops, then spikes again).
- At 1,500 connections: TS collapses completely; Go still efficient.
Hardware / Environment
- ThePrimeagen tested the servers on Linode’s smallest instance: single core, 1 GB RAM, $5/month.
- No multicore advantage for Go. If anything, running many goroutines on a single core may have disadvantaged it slightly — yet Go still crushed TS.
Developer Experience Commentary
- Go is actually nicer to write servers in than TypeScript:
  - Opinionated formatting (all projects look the same).
  - Unified tooling (no npm vs yarn mess).
  - Cross-platform executables.
  - Channels > Node’s event-emitter pattern.
- Feels like the “best of all worlds”: not as fast as Rust, but far easier to write; roughly as easy as TypeScript.
- Increasingly popular choice because of this balance.
Limitations / Next Steps
- Current benchmark is too simple (mostly testing system calls + message passing).
- No async/await used in TS implementation (to avoid bias).
- Plans to do Part 2 with more complex server workloads.
Video Conclusion
- Go crushed TypeScript in raw backend WebSocket performance — lower latency, better scaling, more stability.
- Author argues Go may actually be simpler and quicker to learn than TypeScript, and is worth considering even for teams with many junior devs.
- TL;DR: TypeScript is fine for front-end/UI, but “a disaster on the server” compared to Go.
Go vs TypeScript Benchmarks
Here’s the blunt summary table: TypeScript gets absolutely smoked.
| Clients | Go (Mean Latency) | TypeScript (Mean Latency) | Notes |
|---|---|---|---|
| 500 | Tens of ms (<100 ms) | Noticeably higher (hundreds of ms) | Go consistently faster, tighter distribution |
| 1,000 | ~66 ms | ~2,000 ms (2 seconds) | TS latency ~30x worse; huge variance |
| 1,500 | ~70–80 ms, stable | Crashed after ~3 minutes | TS memory doubled rapidly; unstable |
TL;DR: Go barely flinches. TypeScript collapses under load.
Thoughts on the TypeScript vs Golang Performance Claims in the Video
Is Go ~30x faster than TypeScript/Node for web servers?
Short answer: no, not in general.
In the video’s specific test (a WebSocket broadcast server on a tiny 1-core VM), Go was dramatically faster and more stable. But for many everyday web APIs the gap is usually much smaller, often 2–10x on throughput or worst-case (tail) latency, and it can shrink further with the right Node setup.
Why Go Won (in that test)
- Concurrency model: Go runs many lightweight “goroutines” and schedules them efficiently. You write blocking code, but the runtime parks/unparks work under the hood.
- Node’s event loop: Node processes work in a single main loop. With thousands of messages to fan out, that loop queues up work and delays grow (see the sketch below).
- Allocations & garbage collection: Repeatedly building JSON strings for lots of clients creates many short-lived objects. That triggers GC. Go’s memory profile and libs tend to handle this churn with less overhead.
- Closer-to-the-metal I/O: Go’s networking stack + WS libraries sit nearer to the OS, adding less user-space overhead per message.
Result: at 1,000–1,500 concurrent WebSocket clients all broadcasting, Go kept latencies low; Node’s latencies spiked and stability suffered.
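A small TypeScript experiment (illustrative only, not the video’s code) makes the queuing effect visible: a timer that should fire every 100 ms drifts further and further as the single thread spends more time on per-message work each tick:

```typescript
// Simulates per-client JSON work on Node's single event loop and
// measures how far a 100 ms timer drifts as fan-out grows.
const CLIENTS = 50_000; // raise this to watch the drift grow
const payload = { ts: Date.now(), text: "hello", increment: 68 };

let last = Date.now();
setInterval(() => {
  const now = Date.now();
  console.log(`timer drift: ${now - last - 100} ms`);
  last = now;

  // Fan-out: one stringify per client (the anti-pattern under discussion).
  for (let i = 0; i < CLIENTS; i++) {
    JSON.stringify({ ...payload, client: i });
  }
}, 100);
```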
Why “30x” Isn’t a Universal Rule
- Workload sensitivity: Change the task and the gap changes. For example:
  - Use a high-performance WebSocket server in Node like uWebSockets.js (C++ core); see the sketch after this list.
  - Turn off compression, reuse buffers, and prefer binary formats (MessagePack/Protobuf) over JSON for hot paths.
  - Scale across worker_threads or multiple processes with sharding.
- Different bottlenecks: For typical HTTP APIs where time is spent in the database or external services, Go and Node may look quite similar on averages. The big wins for Go show up under heavy concurrency and in the slowest requests.
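As an example of the first point, here is a minimal uWebSockets.js pub/sub sketch (the topic name and port are placeholders; the library’s C++ core performs the actual fan-out off the JS thread):

```typescript
// Sketch of a uWebSockets.js pub/sub broadcast server.
import uWS from "uWebSockets.js";

const app = uWS.App();

app.ws("/*", {
  compression: uWS.DISABLED,           // compression off, per the tips above
  open: (ws) => ws.subscribe("room-1"),
  message: (ws, message, isBinary) => {
    // Publish once; the library delivers to every subscriber natively.
    app.publish("room-1", message, isBinary);
  },
});

app.listen(9001, (token) => {
  if (token) console.log("listening on :9001");
});
```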
Caveats in the Video Setup
- 1 core, 1 GB VM: Starves Node of ways to parallelize and makes any GC hiccup more visible.
- Broadcasting JSON to many sockets: This is a worst case for Node; switching to binary frames + prebuilt buffers helps a lot.
- WebSocket broadcast pattern: This is almost tailored to Go’s strengths (tons of concurrent I/O with small bits of CPU).
“Survivorship Bias” in the Video
- Ecosystem stories: You hear many “we switched to Go and it was faster” stories because systems that need high concurrency tend to move to Go. That doesn’t mean every server would see the same gains.
- Benchmark scope bias: A test that highlights Node’s weak spot (fan-out JSON over WebSockets) can’t be stretched into “Node is bad for servers.” It’s true for that shape of work, not all work.
Practical Guidance When Choosing Go vs Node/TypeScript
Choose Go if you:
- Need to handle very high numbers of concurrent connections (chat, pub/sub, gateways).
- Care about tight tail latencies (see glossary) and predictable performance under load.
- Want simple deploys (single static binary) and straightforward concurrency primitives (channels).
Stay with Node/TypeScript if you:
- Are building CRUD/HTTP APIs where the DB is the main cost.
- Want the NPM ecosystem, SSR frameworks, or lots of shared front-/back-end code.
- Can apply performance hygiene:
  - Prefer uWebSockets.js for WebSockets.
  - Disable compression unless required.
  - Use binary serialization for hot paths.
  - Batch work, apply backpressure (never buffer unbounded; see the sketch after this list), shard across workers/processes.
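A hedged sketch of the backpressure point, using the ws package: check bufferedAmount before sending and drop (or cap-queue) when a socket falls behind (the 1 MiB threshold is an assumption to tune):

```typescript
import type { WebSocket } from "ws";

// Refuse to buffer unbounded: skip slow consumers instead of letting
// per-socket buffers grow until the process runs out of memory.
const MAX_BUFFERED = 1 << 20; // 1 MiB per socket (tunable assumption)

function sendWithBackpressure(ws: WebSocket, frame: Buffer): boolean {
  if (ws.bufferedAmount > MAX_BUFFERED) {
    // Client can't keep up; drop the message (or close, or queue with a cap).
    return false;
  }
  ws.send(frame);
  return true;
}
```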
Realistic expectation: In many backends, Go might be 2–10x better on throughput or p95/p99 latency. In broadcast-style WebSocket loads, 30x can happen, especially on small machines and default Node stacks.
Glossary of Terms Used in These Benchmark Claims
- p50 / p95 / p99 latency: If you sort request times from fastest to slowest:
  - p50 (median): Half the requests are faster, half slower.
  - p95: 95% are faster; the slowest 5% are at or above this number.
  - p99: 99% are faster; the slowest 1% are at or above this number.
  - p99 target: A performance goal for that worst-case tail (e.g., “keep p99 < 200 ms”). Users feel p95/p99 more than the average.
- Tail latency: The slowest requests (p95/p99). They cause timeouts and bad UX.
- Event loop (Node): A single main thread that runs your JS tasks one after another. If a task takes time, everything behind it waits.
- Goroutine (Go): A super-lightweight thread managed by Go’s runtime. You can spawn thousands without much overhead.
- Backpressure: A way to stop producers from overwhelming consumers (e.g., don’t accept more messages if the socket can’t keep up).
- Sharding: Splitting traffic across multiple processes/threads/machines so no single one is overloaded (see the sketch after this list).
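A minimal sharding sketch using Node’s built-in cluster module (plain HTTP shown for brevity; long-lived WebSockets would additionally need sticky routing):

```typescript
import cluster from "node:cluster";
import { availableParallelism } from "node:os";
import http from "node:http";

if (cluster.isPrimary) {
  // Fork one worker per core; each gets a share of incoming connections.
  for (let i = 0; i < availableParallelism(); i++) cluster.fork();
} else {
  http
    .createServer((_req, res) => {
      res.end(`handled by pid ${process.pid}\n`);
    })
    .listen(8080); // workers share the port; the primary distributes connections
}
```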
Golang vs TypeScript: When Each Shines for Backend Work
Golang (Go) is a compiled, statically typed systems-and-services language with lightweight concurrency (goroutines, channels) and a unified, minimal toolchain. TypeScript is a typed superset of JavaScript that compiles to JS and typically runs on Node.js (or Deno/Bun) with a huge ecosystem and first-class full-stack ergonomics.
Here’s a quick contrast comparing Node/TypeScript/JavaScript to Go/Golang:
| Dimension | Golang | TypeScript (Node) |
|---|---|---|
| Runtime model | Precompiled binary; GC; goroutines + channels | JIT VM (V8); event loop + async I/O |
| Concurrency & scaling | Excellent at high connection counts; predictable tail latency | Great for I/O-bound tasks; event loop can struggle at very high fan-out without careful design |
| Performance | Often faster (2–10x typical; more on WS fan-out) | “Fast enough” for many APIs; can close gaps with uWebSockets.js, workers, binary protocols |
| Tooling & DX | One toolchain; opinionated fmt/test; simple deploys (single binary) | Rich ecosystem (NPM); TS types, ESLint/Prettier; many framework choices |
| Deployment | Static binaries; small images; easy cross-compile | Node runtime required; easy serverless support; wide PaaS support |
| Ecosystem focus | Services, CLIs, networking, infra, cloud tooling | Web apps, SSR, serverless, edge, anything sharing code with the frontend |
| Learning curve | Small language, strong conventions | Language is familiar to JS devs; typing discipline varies by team |
When Go is the better choice
- You need lots of concurrent connections (chat, pub/sub, gateways) and stable p95/p99 latency.
- You want simple ops: single static binary, low memory, predictable CPU.
- You’re building infra, networking services, or high-throughput APIs.
When TypeScript is the better choice
- Your app is CRUD/HTTP + DB-bound, where the database dominates latency.
- You value shared types/code across front- and back-end, or you live in the JS ecosystem.
- You want rapid iteration with familiar tooling and frameworks (Express/Fastify/Nest, Next/Nuxt, Astro).
Node Runtime and Libraries
- Node Runtime: The environment in which your JavaScript or TypeScript code executes. Node.js is built on the V8 JavaScript engine, and its performance depends on how efficiently it handles asynchronous operations, memory management, and the event loop.
- Your Libraries: Library choices can significantly impact performance. Some are optimized for speed; others add overhead.
Key Strategies for JavaScript Optimization
- Swap in uWebSockets.js
  - A highly efficient WebSocket library that can outperform traditional stacks like Socket.IO. Reduces latency and improves throughput for real-time apps.
- Use Workers
  - Run JavaScript in parallel threads to offload CPU-intensive tasks from the main event loop. Improves responsiveness and overall performance (see the sketch after this list).
- Avoid Per-Message JSON on Hot Paths
  - Serializing/deserializing JSON per message is costly at high throughput. Prefer binary protocols (MessagePack/Protobuf) and reuse buffers.
  - Hot paths are the code paths executed most frequently; optimizing them yields outsized gains.
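A hedged worker_threads sketch (assumes an ESM build where import.meta.url resolves to the running file; fib(40) stands in for any CPU-heavy task):

```typescript
import {
  Worker,
  isMainThread,
  parentPort,
  workerData,
} from "node:worker_threads";

if (isMainThread) {
  // Spawn a worker running this same file; the event loop stays free.
  const worker = new Worker(new URL(import.meta.url), { workerData: 40 });
  worker.on("message", (result) => console.log("fib(40) =", result));
} else {
  // Worker side: burn CPU off the main thread, then report back.
  const fib = (n: number): number => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort!.postMessage(fib(workerData as number));
}
```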
Caveats
- “TypeScript performance” really means Node runtime + your libraries. Swap in uWebSockets.js, use workers, avoid per-message JSON on hot paths (a sketch follows this list), and Node can narrow gaps a lot.
- “Go performance” depends on sane patterns too (reuse buffers, avoid unnecessary allocations, profile p95/p99—not just averages).
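And the per-message JSON caveat in practice: a sketch with the @msgpack/msgpack package, encoding once per broadcast and reusing the same frame (the package choice and field names are assumptions; any MessagePack or Protobuf library works):

```typescript
import { encode, decode } from "@msgpack/msgpack";

const msg = { ts: Date.now(), text: "hello", increment: 69 };

// Encode ONCE per broadcast; MessagePack frames are typically smaller
// and cheaper to produce than JSON.stringify on hot paths.
const frame: Uint8Array = encode(msg);

// Every client then gets the same pre-built frame (no per-client work), e.g.:
// for (const ws of room) ws.send(frame, { binary: true });

// Receiving side decodes the binary frame back into an object.
const roundTripped = decode(frame) as typeof msg;
console.log(roundTripped.increment); // 69
```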
Conclusion
Go isn’t universally 30x faster than TypeScript/Node. It is often much better for high-concurrency, low-latency, allocation-heavy I/O, like the video’s WebSocket broadcast test. For many everyday APIs, expect a modest multiple, not a miracle. Pick based on your workload shape, required latencies (especially p95/p99), team skills, and ecosystem.
If your workload looks like high-concurrency real-time with strict latency goals, pick Go. If it looks like feature-rich web/API work with heavy ecosystem needs and shared front-end logic, TypeScript is a strong default. Either way, measure under realistic load before you commit.
It’s fair to say Go is usually faster and more stable under load, and in certain real-time, high-connection scenarios it can be faster than Node by 10x or more. But for the majority of bread-and-butter web backends (APIs talking to databases), the gap is closer to 2–5x, and sometimes negligible.
When Go Can Be Drastically Faster (10–30x)
- Very high concurrency: thousands of open WebSockets, chat servers, pub/sub systems.
- Broadcast fan-out: sending the same message to hundreds/thousands of clients at once.
- Tail-latency-sensitive systems: where you care about the slowest 1–5% of requests (p95/p99). Example: if most requests finish in 50 ms but 5% suddenly spike to 2 seconds, users will notice.
- Small hardware / limited cores: Go’s scheduler squeezes more out of 1 core than Node’s single event loop.
For a site with thousands of concurrent users, this difference matters a lot:
- Node/TS → average (p50) latency may still look fine, but your p95/p99 requests get slower and less predictable. Users see intermittent lag or timeouts.
- Go → even with high concurrency, p95/p99 stay low, meaning nearly every user gets a consistent experience.
Here, Go can feel “an order of magnitude” faster and, more importantly, much more predictable. That consistency is what makes Go scale more gracefully.
When Go Is Only Modestly Faster (2–5x)
- Standard web APIs: CRUD endpoints, GraphQL resolvers, REST services where the DB or external API calls dominate latency. Example: if a DB query takes 40 ms, it doesn’t matter much whether Go handles its overhead in 1 ms and Node in 3 ms.
- Compute-light workloads: serving static assets, proxying, or handling light request/response cycles.
Here, Node/TS is “fast enough,” and Go’s advantage shrinks.
So instead of thinking “Go is always 10x faster,” think:
👉 Go holds better tail latency and scales more gracefully under extreme concurrency.
👉 Node/TS is “fast enough” for many apps, and wins on ecosystem and developer familiarity.