Blog Posts

Your SaaS Has Scaling Bottlenecks – Do You Know Where?

Scaling bottlenecks choke SaaS growth.  Bottlenecks can prevent you from onboarding customers fast enough, make supporting your largest customers impossible, and even leave you saying no to giant deals.  Scaling issues impact your annual recurring revenue (ARR), net dollar retention (NDR), and customer lifetime value (CLTV).  Imagine telling paying customers that they’ve grown too big and need to move to another platform!  It is not only extremely frustrating; it weighs down all of your major metrics.

The rate at which you can onboard new customers is knowable.  So is the maximum customer size that still gets a delightful experience.  Customers don’t get too big overnight; they grow with you for years.  You can write tools to discover the system maximums.  Knowing the limits won’t prevent you from hitting them, but it will prevent you from being surprised.

Scaling bottlenecks are a form of tech debt; bottlenecks are the result of your past decisions, regardless of whether those decisions were intentional.  You can only creep up on the system’s limits by accident if you don’t know where they are in the first place.

Do you know where the limits are?  Or has it never seemed worth investigating because the system wasn’t maxed out yet?

If you don’t know, you will end up turning away customers and limiting ARR growth.  Capping customer size also caps CLTV.  Saying goodbye to long-term customers tanks your NDR and hurts your ARR.

All systems have bottlenecks.  The only question is: How do you want to find them?  You can seek them out, or you can find them in your bottom line.

Latency, Throughput, And Spherical Cows

My post about latency and throughput featured an extremely simplistic model to demonstrate that latency and throughput are independent.  An astute reader called it a spherical cow: a model so oversimplified that it is a bit ridiculous.

So, let’s deflate the cow, just a bit, and see how things hold up.  I hope you like tables and cow jokes!

(Keenan Crane; GIF by username:Nepluno, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons)


Chewing The Cud

The original model was a streaming system that receives 1 million messages a second.  Perfectly spherical.

There were two systems, one with 5s latency, one with 2s latency.

We will leave our processors completely spherical - they each process 100,000 events simultaneously.  Our pipelines then look like this:

5s Latency

| Time | New Events/s | Process Instances | Events Being Processed | Throughput | Extra Capacity |
|---|---|---|---|---|---|
| 1 | 1,000,000 | 50 | 1,000,000 | 0 | 4,000,000 |
| 2 | 1,000,000 | 50 | 2,000,000 | 0 | 3,000,000 |
| 3 | 1,000,000 | 50 | 3,000,000 | 0 | 2,000,000 |
| 4 | 1,000,000 | 50 | 4,000,000 | 0 | 1,000,000 |
| 5 | 1,000,000 | 50 | 5,000,000 | 1,000,000 | 0 |
| 6 | 1,000,000 | 50 | 5,000,000 | 1,000,000 | 0 |
| 7 | 1,000,000 | 50 | 5,000,000 | 1,000,000 | 0 |
| 8 | 1,000,000 | 50 | 5,000,000 | 1,000,000 | 0 |

2s Latency

| Time | New Events/s | Process Instances | Events Being Processed | Throughput | Extra Capacity |
|---|---|---|---|---|---|
| 1 | 1,000,000 | 20 | 1,000,000 | 0 | 1,000,000 |
| 2 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 3 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 4 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 5 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 6 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 7 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |
| 8 | 1,000,000 | 20 | 2,000,000 | 1,000,000 | 0 |

Conclusion: Same Throughput

The Throughput of the two systems is the same.

The first system, with 5s of latency, takes 3 seconds longer to warm up and needs 2.5x more instances (50 vs. 20), but it still produces the same throughput.
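The spherical model is simple enough to sketch in a few lines of Python.  This is a toy simulation, assuming each batch of events finishes exactly `latency_s` seconds after it arrives; the fleet sizes and rates are the ones from the tables above.

```python
def simulate(latency_s, instances, seconds,
             input_rate=1_000_000, per_instance=100_000):
    """Per-second throughput of a fixed-size fleet where every event
    takes exactly latency_s seconds end to end."""
    capacity = instances * per_instance            # max simultaneous events
    throughput = []
    for t in range(1, seconds + 1):
        in_flight = min(t, latency_s) * input_rate  # batches still processing
        assert in_flight <= capacity, "fleet too small for this latency"
        # the batch that arrived latency_s seconds ago finishes now
        throughput.append(input_rate if t >= latency_s else 0)
    return throughput

slow = simulate(latency_s=5, instances=50, seconds=8)
fast = simulate(latency_s=2, instances=20, seconds=8)
print(slow)  # warms up for 4 seconds, then 1,000,000/s
print(fast)  # warms up for 1 second, then 1,000,000/s
```

Once both fleets are warm, every second of output is identical - only the warm-up and the in-flight counts differ.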

What Happens If You Add Scaling?

Maybe that model is too simple.  Let’s deflate the cow a little bit, vary the input and add auto-scaling.

Let’s make it an average of 1 million messages a second, with peaks and valleys between 500,000 and 1.5 million per second.  The cycle has a 20-second period, so the rate changes +/- 100,000 messages every second.  But, we’re only deflating the cow a little bit, so the changes will be step changes at the end of the second.

We will leave our processors completely spherical - they each process 100,000 events simultaneously.  It takes 1 second to start a processor, and 1 second to shut down.  The only difference between the two is that one takes 2s to process a message and the other takes 5s.

Now our input looks like this:

5s Latency

| Time | New Events/s | Process Instances | Events Being Processed | Events Waiting to be Processed | Throughput | Extra Capacity |
|---|---|---|---|---|---|---|
| 1 | 1,000,000 | 0 | 0 | 1,000,000 | 0 | 0 |
| 2 | 1,100,000 | 10 | 1,000,000 | 1,100,000 | 0 | 0 |
| 3 | 1,200,000 | 21 | 2,100,000 | 1,200,000 | 0 | 0 |
| 4 | 1,300,000 | 33 | 3,300,000 | 1,300,000 | 0 | 0 |
| 5 | 1,400,000 | 46 | 4,600,000 | 1,400,000 | 0 | 0 |
| 6 | 1,500,000 | 60 | 6,000,000 | 1,500,000 | 1,000,000 | 0 |
| 7 | 1,400,000 | 65 | 6,500,000 | 300,000 | 1,100,000 | 0 |
| 8 | 1,300,000 | 68 | 6,800,000 | 200,000 | 1,200,000 | 0 |
| 9 | 1,200,000 | 70 | 7,000,000 | 0 | 1,300,000 | 0 |
| 10 | 1,100,000 | 70 | 6,900,000 | 0 | 1,400,000 | 1 |
| 11 | 1,000,000 | 69 | 6,500,000 | 0 | 1,500,000 | 4 |
| 12 | 900,000 | 65 | 5,900,000 | 0 | 1,400,000 | 6 |
| 13 | 800,000 | 59 | 5,300,000 | 0 | 1,300,000 | 6 |
| 14 | 700,000 | 53 | 4,700,000 | 0 | 1,200,000 | 6 |
| 15 | 600,000 | 47 | 4,100,000 | 0 | 1,100,000 | 6 |
| 16 | 500,000 | 41 | 3,500,000 | 0 | 1,000,000 | 6 |
| 17 | 600,000 | 35 | 3,100,000 | 0 | 900,000 | 4 |
| 18 | 700,000 | 31 | 2,900,000 | 0 | 800,000 | 2 |
| 19 | 800,000 | 29 | 2,900,000 | 0 | 700,000 | 0 |
| 20 | 900,000 | 29 | 2,900,000 | 200,000 | 600,000 | 0 |
| 21 | 1,000,000 | 31 | 3,100,000 | 400,000 | 500,000 | 0 |

2s Latency

| Time | New Events/s | Process Instances | Events Being Processed | Events Waiting to be Processed | Throughput | Extra Capacity |
|---|---|---|---|---|---|---|
| 1 | 1,000,000 | 0 | 0 | 1,000,000 | 0 | 0 |
| 2 | 1,100,000 | 10 | 1,000,000 | 1,100,000 | 0 | 0 |
| 3 | 1,200,000 | 21 | 2,100,000 | 1,200,000 | 1,000,000 | 0 |
| 4 | 1,300,000 | 23 | 2,300,000 | 1,300,000 | 1,100,000 | 0 |
| 5 | 1,400,000 | 25 | 2,500,000 | 1,400,000 | 1,200,000 | 0 |
| 6 | 1,500,000 | 27 | 2,700,000 | 1,500,000 | 1,300,000 | 0 |
| 7 | 1,400,000 | 29 | 2,900,000 | 1,400,000 | 1,400,000 | 0 |
| 8 | 1,300,000 | 29 | 2,900,000 | 1,300,000 | 1,500,000 | 0 |
| 9 | 1,200,000 | 29 | 2,700,000 | 0 | 1,400,000 | 2 |
| 10 | 1,100,000 | 27 | 2,500,000 | 0 | 1,300,000 | 2 |
| 11 | 1,000,000 | 25 | 2,300,000 | 0 | 1,200,000 | 2 |
| 12 | 900,000 | 23 | 2,100,000 | 0 | 1,100,000 | 2 |
| 13 | 800,000 | 21 | 1,900,000 | 0 | 1,000,000 | 2 |
| 14 | 700,000 | 19 | 1,700,000 | 0 | 900,000 | 2 |
| 15 | 600,000 | 17 | 1,500,000 | 0 | 800,000 | 2 |
| 16 | 500,000 | 15 | 1,300,000 | 0 | 700,000 | 2 |
| 17 | 600,000 | 13 | 1,100,000 | 0 | 600,000 | 2 |
| 18 | 700,000 | 11 | 1,100,000 | 100,000 | 500,000 | 0 |
| 19 | 800,000 | 12 | 1,200,000 | 300,000 | 600,000 | 0 |
| 20 | 900,000 | 15 | 1,500,000 | 300,000 | 700,000 | 0 |
| 21 | 1,000,000 | 18 | 1,800,000 | 300,000 | 800,000 | 0 |

Result - Latency Does Not Impact Throughput

Our slightly less spherical model with perfect step changes produced the same fundamental result:

You can’t increase the throughput of a streaming system to be higher than the input.

Latency has a huge impact on the amount of resources required!  The first system, with 5s latency, fluctuated between 29 and 70 instances.  The second system, with 2s latency, fluctuated between 11 and 29.

The second system’s maximum scale out was equal in size to the first system’s minimum.

And yet, neither system was able to get above 1.5 million events/s.

No matter how non-spherical the cow may be, you can’t sustain a throughput faster than the inputs.
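The same point falls out of a tiny sketch, assuming autoscaling always keeps enough instances running so that every batch completes `latency_s` seconds after it arrives.  The input pattern is the stepped 20-second cycle from the tables above.

```python
# Input: average 1M events/s, stepping +/-100k each second
# between 500k and 1.5M over a 20-second cycle (as in the tables).
deltas = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0,
          -1, -2, -3, -4, -5, -4, -3, -2, -1, 0]
rates = [1_000_000 + 100_000 * d for d in deltas]

def completions(latency_s):
    """Events finishing each second, letting the pipeline fully drain.
    Assumes capacity always scales to hold every in-flight batch."""
    done = [0] * (len(rates) + latency_s)
    for t, rate in enumerate(rates):
        done[t + latency_s] += rate   # a batch finishes latency_s later
    return done

for latency_s in (2, 5):
    done = completions(latency_s)
    print(latency_s, sum(done), max(done))
```

Both latencies complete exactly the 21,000,000 events that arrived, and neither ever finishes faster than the 1,500,000/s peak input - latency shifts *when* the work completes, not *how much*.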

The Never Rewrite Podcast, Episode One Hundred Ten: MVPs, YAGNI, and the Goldilocks Problem

Getting MVPs to actually be Minimum, Viable, and Products is surprisingly complex. In this episode, Isaac, Dustin, and I dive into the tradeoffs between simplicity and scalability, the Goldilocks Problem, and overly long development cycles.

If you've ever worked on an MVP that became a full product before it launched, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

Reducing Latency Won’t Increase Throughput Of Streaming Systems

A counter intuitive property of streaming systems is that latency has no long term impact on throughput.  Increasing or decreasing latency will give a short term change, but once the system stabilizes in its steady state, the throughput will be the same as before.

How can latency and throughput, two important performance metrics, be unrelated?

Let’s define some terms

Latency is the amount of time between when a message is sent and when it is fully processed.  This includes the time spent getting the message onto the stream, in queue waiting to process, and process time.

Throughput is the number of completions in a time period.  It could be 1 million messages a second, 5 per hour, or anything else.  Throughput doesn’t include processing time; that’s part of latency.  The million messages/s could have taken 10ms or 10 minutes each to process; so long as 1 million of them finish every second, the throughput is 1 million/s.

Steady State is when the system is fully warmed up and taking on its full load.  For a streaming system, this means that it is consuming the full stream, it is producing its maximum output, and the work in progress is being added to as rapidly as it is finished.

Example

Imagine two systems that receive 1 million events per second.  The first system takes 5s to process each message; the second system takes 2s to process the same messages.

The latency is different, the throughput is the same!

Implications beyond Latency and Throughput

Besides latency and throughput, there are 3 other notable differences between the two systems.

  1. Higher latency means more events in flight.  When it gets to steady state, the first system will be working on 5 million events at a time, the second system will only be working on 2 million.  This usually means that the first system will require more resources - bigger queues, more workers, a higher degree of parallelism, etc.
  2. Higher latency means slower startup.  It takes 5 seconds for events to start emerging from the first system, but only 2 seconds for the second system.
  3. Higher latency means slower shutdown.  At the other end of the lifecycle, systems with higher latency take longer to drain and safely shut down than systems with lower latency.
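The in-flight numbers in point 1 fall out of Little’s Law: events in flight = arrival rate × latency.  Here is a quick sketch; the 100,000-events-per-instance figure is an illustrative assumption, not something defined in this post.

```python
# Little's Law: work in progress = arrival rate x latency.

def in_flight(arrival_rate_per_s, latency_s):
    """Events being worked on at any moment in steady state."""
    return arrival_rate_per_s * latency_s

def instances_needed(arrival_rate_per_s, latency_s, per_instance=100_000):
    """Fleet size, assuming (hypothetically) 100k events per instance."""
    return in_flight(arrival_rate_per_s, latency_s) // per_instance

print(in_flight(1_000_000, 5))         # 5,000,000 events in flight
print(in_flight(1_000_000, 2))         # 2,000,000 events in flight
print(instances_needed(1_000_000, 5))  # 50 instances
print(instances_needed(1_000_000, 2))  # 20 instances
```

Same arrival rate, same throughput - but the 5s system carries 2.5x the work in progress, and pays for it in resources.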

Summary

Why doesn’t latency matter?  Because streaming systems have constrained inputs.  So long as the system has enough capacity to handle 100% of the inputs, then latency doesn’t impact throughput.

Latency still controls the system requirements; slow is expensive!

The Never Rewrite Podcast, Episode One Hundred Nine: Conway’s Law and Software Quality

Does Conway's Law apply to software quality? In this episode, Isaac, Dustin, and I explore how company culture and structure shape software.

If you've ever wondered about the forces that shape your code base, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

The Never Rewrite Podcast, Episode One Hundred Eight: Consolidating Tech Stacks – Is It Worth It?

How can you determine the merits of consolidating or diversifying your tech stack? In this episode we discuss how consolidation and diversification impact the business, engineering efficiency, and cross-team dynamics.

If you've been wondering how to go about debating your tech stack, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

The Never Rewrite Podcast, Episode One Hundred Seven: Rebuilding vs. Rewriting vs. Refactoring?

This week, Isaac and I dive deep into an Allen Holub suggestion that developers should 'rebuild' instead of 'rewrite' software. Are we all saying the same thing? Is there some nuance between rebuilding, rewriting, and refactoring?

If you've been wondering if you should even bother updating your legacy system, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

The Never Rewrite Podcast, Episode One Hundred Six: How to Stop a Rewrite in Progress

It is all well and good to say "Never Rewrite", but what do you do if you find yourself part of one?
In this episode Isaac and I discuss the steps and thinking that will help you stop a rewrite faster and more safely than waiting for it to fail.

If you're working on a rewrite and don't know what to do, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

The Never Rewrite Podcast, Episode One Hundred Five: A Core Engine Rewrite with Nick Gerace

Guest Nick Gerace discusses how he backed into a rewrite of the core engine at System Initiatives. Nick walks us through how and why his work to add plugins and package management ended with a new core engine that still lacks package management.

If you want to hear about the philosophy and tradeoffs behind a successful rewrite, this episode is for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!

The Never Rewrite Podcast, Episode One Hundred Four: Iteratively Replacing Logging Infrastructure with Guest Paul Stack

Paul Stack shares his experiences transforming his company from individual server-based logs to a unified log stream searchable with Grafana. Paul walks us through the stepwise iterations: going from single-machine logs to aggregated logs, how aggregating the logs overwhelmed the service so they brought in Kafka, how Kafka made it difficult to restart, and so on. This story is pre-cloud and years before the concept of OpenTelemetry; Paul's deep dive sheds light on some of the very difficult problems that modern observability stacks make easy.

If you've ever wondered about how aggregated logging systems evolved, this is the episode for you!

Watch on YouTube or listen to it at Spotify, Apple Podcasts, or your favorite podcast app, and let us know if you have ever been involved in a rewrite. We would love to have you on the show to discuss your experience!
