Tags: rust, engineering, backend

Why We Chose Rust for QCK's Backend

The honest story of why we built QCK in Rust—no fake benchmarks, just real engineering decisions about garbage collection, latency consistency, and what actually matters for a URL shortener.

QCK Engineering Team
January 6, 2025
8 min read


Let's skip the made-up benchmark tables. You've seen enough blog posts with suspiciously specific numbers that were clearly never measured. Instead, here's the honest story of why we chose Rust.

The Problem We Were Solving

A URL shortener has one job: receive a request, look up a mapping, return a redirect. Simple.
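The whole hot path fits in a few lines. Here is a minimal sketch of that lookup; the `resolve` function and the in-memory map are illustrative stand-ins, not QCK's real datastore:

```rust
use std::collections::HashMap;

// Core of a URL shortener: map a short code to its redirect target.
// A real service would back this with a database or cache, not a HashMap.
fn resolve<'a>(links: &'a HashMap<&str, &str>, code: &str) -> Option<&'a str> {
    links.get(code).copied()
}

fn main() {
    let mut links = HashMap::new();
    links.insert("rs", "https://www.rust-lang.org");

    // Found: respond with a redirect. Missing: respond 404.
    match resolve(&links, "rs") {
        Some(url) => println!("302 -> {url}"),
        None => println!("404"),
    }
}
```

Everything past this lookup (parsing the request, writing the response) is overhead, which is why shaving milliseconds off the surrounding machinery matters so much.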

But "simple" doesn't mean "easy to make fast." Every millisecond of redirect latency is a millisecond your users wait before seeing your actual content. When you're processing millions of redirects, those milliseconds add up—to user frustration, to bounce rates, to real money.

We needed two things:

  1. Fast median latency — most requests should be quick
  2. Consistent tail latency — the slow requests shouldn't be too slow

The second one is harder than it sounds.

Why Tail Latency Matters

If your P50 (median) is 5ms but your P99 is 500ms, one in every hundred users waits half a second. Run a million redirects and 10,000 people have a bad experience.

Worse: tail latency compounds. If a page load triggers 5 redirected resources and each has a 1% chance of being slow, there's roughly a 5% chance something on the page is slow.

We cared about P99 as much as P50.
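To make the percentile talk concrete, here is a small self-contained sketch. The sample values and the crude floor-based index are ours for illustration, not QCK's telemetry; real monitoring systems use more careful interpolation:

```rust
// Crude percentile: index = floor(p * n / 100), clamped to the last element.
// Only a sketch; production metrics pipelines use interpolating estimators.
fn percentile(sorted_ms: &[f64], p: f64) -> f64 {
    let n = sorted_ms.len();
    let idx = (p * n as f64 / 100.0) as usize;
    sorted_ms[idx.min(n - 1)]
}

fn main() {
    // 100 samples: 99 redirects at 5 ms and one outlier at 500 ms.
    let mut samples = vec![5.0_f64; 99];
    samples.push(500.0);
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());

    println!("P50 = {} ms", percentile(&samples, 50.0));
    println!("P99 = {} ms", percentile(&samples, 99.0));

    // Compounding: 5 resources, each with a 1% chance of being slow.
    let p_any_slow = 1.0 - 0.99_f64.powi(5);
    println!("chance something on the page is slow: {:.1}%", p_any_slow * 100.0);
}
```

The median looks perfect while the P99 tells the real story, and the compounding figure works out to roughly 5%, as described above.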

The Garbage Collection Problem

Most languages have a garbage collector. Python, JavaScript, Go, Java, C#—they all periodically pause your program to clean up unused memory.

For web apps serving HTML, this is fine. A 10ms GC pause during a 200ms page render is noise.

For a URL shortener, a 10ms GC pause during what should be a 3ms redirect turns it into a 13ms one, more than a 4x slowdown. And GC pauses can be unpredictable: they tend to strike when memory pressure is high, which is exactly when your server is already under load.

This is the fundamental tension: garbage collection trades predictable performance for developer convenience.

For most applications, that trade is worth it. For latency-critical infrastructure, it might not be.

Why Not Go?

Go was a serious contender. It's fast, it compiles to native code, and it has a garbage collector designed for low-latency applications.

But "designed for low latency" isn't "no latency impact." Go's GC is impressive, but it still exists. Under heavy load with lots of allocations, you'll see occasional pauses. The Go team has done heroic work minimizing this, but physics is physics—if you have a GC, it will occasionally run.

We could have made Go work. Many companies do. But we kept asking: what if we just... didn't have a GC?

The Rust Proposition

Rust's pitch is simple: memory safety without garbage collection.

Instead of a runtime cleaning up memory, Rust tracks ownership at compile time. When a value goes out of scope, its memory is freed immediately—not eventually, not when the GC gets around to it, but right then.

fn handle_redirect(code: String) -> Response {
    let link = db.get_link(&code);  // Memory allocated
    build_response(&link)
}  // Memory freed here. Deterministically.

No background threads. No stop-the-world pauses. No "hmm, memory usage is climbing, hope GC kicks in soon."
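You can watch that determinism directly with a `Drop` implementation. A sketch, with `Tracked` as our stand-in for any resource (a connection, a buffer):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A stand-in resource that records exactly when it is freed.
struct Tracked {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Tracked {
    fn drop(&mut self) {
        // Runs at the closing brace of the owning scope. No GC involved.
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _outer = Tracked { name: "outer", log: Rc::clone(&log) };
        {
            let _inner = Tracked { name: "inner", log: Rc::clone(&log) };
        } // `_inner` freed right here
        log.borrow_mut().push("still inside outer scope");
    } // `_outer` freed right here
    let order = log.borrow().clone();
    order
}

fn main() {
    // Prints: ["inner", "still inside outer scope", "outer"]
    println!("{:?}", drop_order());
}
```

The log entries land in scope order, every run, which is the whole point: cleanup happens at a place you can point to in the source, not at a time the runtime chooses.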

This sounded too good to be true. So we tried it.

The Learning Curve Is Real

Let's be honest: the first month sucked.

The borrow checker rejected code that seemed obviously correct. Lifetimes were confusing. The error messages were helpful but overwhelming. Simple things took forever.

// Why doesn't this work??
fn get_name(user: &User) -> &str {
    let name = user.name.clone();
    &name  // Error: cannot return reference to local variable
}

We spent a lot of time fighting the compiler.
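For the record, the fix to that example is either to borrow from the input, tying the returned lifetime to `user`, or to hand back an owned `String`. A sketch with a minimal stand-in `User` struct:

```rust
// Minimal stand-in for the User type from the example above.
struct User {
    name: String,
}

// Option 1: borrow from `user` itself. The returned &str lives as long as
// the input reference, so no dangling local is involved.
fn get_name(user: &User) -> &str {
    &user.name
}

// Option 2: clone into an owned String when the caller needs its own copy.
fn get_name_owned(user: &User) -> String {
    user.name.clone()
}

fn main() {
    let u = User { name: String::from("ada") };
    println!("{} / {}", get_name(&u), get_name_owned(&u));
}
```

The original error was the compiler refusing to hand out a reference to a clone that dies at the end of the function, which is exactly the dangling-pointer bug it exists to prevent.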

But here's the thing: every error the compiler caught was a real bug. Not a style issue. Not a pedantic nitpick. Actual memory bugs that would have made it to production in other languages.

By month two, we started to get it. By month three, we were productive. By month six, we were faster than we'd been in our previous stack—because we weren't spending time debugging memory issues, race conditions, or mysterious production crashes.

What We Actually Got

After running Rust in production for a while, here's what we observed:

Consistent latency: No random spikes. No "what happened at 3am?" mysteries. The performance you see in testing is the performance you get in production.

Low memory usage: Our redirect service runs in about 15MB of RAM. The equivalent Node.js version used 80MB. This matters when you're paying for memory.

Confidence in deployments: If it compiles, it probably works. We've had exactly zero memory-related crashes in production.

Actual numbers? We're not going to publish fake benchmark tables. Run your own tests. What we can say: our P99 latency is low enough that we stopped worrying about it.

The Trade-offs

Rust isn't free. Here's what it costs:

Slower initial development: The first version of anything takes longer. The compiler is demanding.

Compile times: Full rebuilds take minutes, not seconds. Incremental builds are faster but still noticeable.

Smaller ecosystem: crates.io has 140k packages vs npm's 2 million. Sometimes you write things yourself.

Hiring: Fewer Rust developers exist. You'll train people or pay a premium.

Complexity budget: Rust takes more mental energy than Python or JavaScript. You can't just slap things together.

When Rust Makes Sense

Based on our experience, Rust is worth considering when:

  • Tail latency matters — You care about P99, not just averages
  • You're building infrastructure — Code that runs for years, not experiments
  • Memory is constrained — Edge computing, embedded, or just saving on cloud bills
  • Reliability is critical — The cost of a crash is high
  • Your team can invest in learning — This isn't a weekend project

When Rust Doesn't Make Sense

Don't use Rust for:

  • MVPs and prototypes — You need to move fast and break things
  • CRUD apps — The performance difference won't matter
  • Teams without buy-in — Rust requires everyone to care about the craft
  • Tight deadlines — The learning curve is real

The Honest Take

We didn't choose Rust because it's cool or because we wanted to pad our resumes. We chose it because we needed consistent, predictable performance for a latency-critical service, and garbage collection was a fundamental obstacle to that goal.

Rust delivered on that promise. The performance is predictable. The code is reliable. The memory usage is minimal.

The cost was a steep learning curve and slower initial development. For us, that trade-off made sense. For your project, it might not.

Don't believe anyone who tells you Rust is always the right choice. Don't believe anyone who tells you it's never the right choice. Think about your actual requirements, measure what matters, and make the call.

For QCK, Rust was the right call. Your mileage may vary.


Want to see for yourself? Try QCK — create a short link and watch the redirect. It's fast.