I wouldn’t have done it this way.

I confess to having said, “I wouldn’t have done it this way.”  The phrase seems like a polite way of trashing the current system architecture while implying that you know the correct design.

You might get a snicker and feel smart and clever, but you’re creating problems for yourself and your project.

It puts the original programmers on the defensive.  

You may be politely trashing their design, but you’re still trashing their design.  Instead of listening to you describe your superior solution, the original programmers are going to be thinking of arguments defending their work.

The Business People Don’t Care

The managers and executives in the room don’t care how the original programmers designed the system.  They don’t care how you would have designed it.  They want to know what you’re going to do about their business problems.

It leaves the listener questioning your intent.  

Are you changing the design because you don’t like it, or because it needs to be done?  You never want anyone wondering if you are proposing a refactor or rewrite because of style.

Your way might not be possible

I once used PostgreSQL as a NoSQL system because all the other AWS options had row size limits that were too small.  For years afterward, new developers would tell me that I should have used different technologies.  I would explain the size constraint, and more often than not, show that AWS still had constraints that would prevent us from using the technology.

Instead of trashing the original design, try these three approaches:

Talk about the business problem with the current design.

“The current design won’t scale to the levels we need.”  Maybe it won’t scale because it was a bad design, or maybe it was a massively successful MVP.  Either way, you need to replace it to move forward.

Be positive

“The current design was a great way to get started.”  If the original design was an abject failure, no one would be asking you to rebuild or expand it.  Acknowledge that, whatever its shortcomings, the original design moved the business forward.

Acknowledge your ignorance

“I don’t know what the original requirements were, but the current design isn’t a good fit for our needs.”  It’s useful to know why past choices were made so that you don’t miss requirements, and people are much more likely to tell you if you’re the first to admit you don’t know.

“I wouldn’t have done it this way” is an old developer cliché.  The developers who inherited your work are probably saying it about you right now.

It’s Not Sabotage, They’re Drowning

As you gain visibility into system problems, you’ll often experience pushback like this:

I refactored the code from untested and untestable to testable, with 40% test coverage. The senior architect is refusing to merge because the test coverage is too low.

Me: I instrumented the save process. Saving takes between 10 and 25 seconds, with the average around 16s!
Other Developer: That’s crazy! Wait! How much time does the metrics system add?
Me: About 500ms.
OD: That’s over a 3% increase! You have to turn the metrics off!

Me: Good news, I can make the system 15% faster and support jobs about 6x our current max. Bad news, it uses short-lived DB tables, which will increase disk usage by up to 1GB/client.
Different Developer: You need to find another way, we can’t have that much additional disk usage.
Me: What about all those orphaned short-lived tables lingering about due to bugs and errors? Would cleaning those up get us enough space?
DD: I don’t know, we don’t have any visibility into the magnitude of that problem. You’re going to have to find another solution.

I used to believe that this kind of push back was intentional sabotage. That the people responsible for creating the problems were threatened by exposure.

It took me many years to learn that most of the time, it’s not sabotage, it’s drowning people sinking the lifeboat in an attempt to save themselves.

When you add visibility to a system, the numbers are always bad. That’s why you’re putting in the effort to add visibility to an existing system. When the initial steps towards fixing your problems make those numbers worse, that’s when the fear of drowning sets in.

I don’t have a solution for dealing with your colleagues’ reactions. Just remember, they aren’t trying to sabotage you, they’re drowning, and you’re manning the lifeboat.

If you’ve got similar stories of metrics bringing pushback, I’d love to hear from you!

Making Link Tracking Scale – Part 1: Asynchronous Processing

Link Tracking is a core activity for Marketing and CRM SaaS companies.  It is often an early system bottleneck, one that creates a lousy user experience and frustrates your clients.  In this article I’m going to show a common synchronous design, discuss why it fails to scale, and show how to overcome the scaling issues by making the design asynchronous.

What Is Link Tracking?

Link Tracking allows you to track who clicks on a link.  This lets you measure the effectiveness of your marketing, learn what offers appeal to which clients, and generally track user engagement.  When you see a link starting with fb.me or lnkd.in, those are tracking links for Facebook and LinkedIn.

Instead of having a link go to its original target, the link is changed to a tracking link.  The system will track three pieces of data: which client, which user, and which URL, and then redirect the user’s browser to the original link.

A Simple Synchronous Design

Here’s what that looks like as a sequence diagram. 

There are two trips to the database: the first to discover what the original link is, and the second to record the click.  After all of that is done, the original link is returned to the user and their browser is redirected to the actual content they are looking for.

Best case, on a cloud host like AWS, the server + database time will be about 10ms.  That time will be dwarfed by the 50–100ms of general network latency getting to AWS, through the ELB, and to the server.

This design is simple, speedy, and works well enough for your early days.
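
As a rough sketch, here’s what the synchronous version might look like in Python.  Flask, psycopg2, and the links/clicks table names are my assumptions for illustration, not a known implementation:

# A minimal synchronous handler: both database trips happen
# while the user waits.  Table and column names are hypothetical.
from flask import Flask, redirect
import psycopg2

app = Flask(__name__)
conn = psycopg2.connect("dbname=tracking")

@app.route("/t/<link_id>")
def track(link_id):
    with conn.cursor() as cur:
        # Trip 1: look up the original URL.
        cur.execute("SELECT client_id, original_url FROM links WHERE id = %s", (link_id,))
        client_id, original_url = cur.fetchone()
        # Trip 2: record the click.  The user is still waiting.
        cur.execute(
            "INSERT INTO clicks (client_id, link_id, clicked_at) VALUES (%s, %s, NOW())",
            (client_id, link_id),
        )
    conn.commit()
    return redirect(original_url)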

Why Synchronous Processing Fails to Scale

Link Tracking events tend to be spiky – there’s an email blast, an article is published, or some tweet goes viral.  Instead of 150,000 events/day spread uniformly at about 2 events/s, your system will suddenly be hit with 100 events/s, or even 10,000/s.  Looking up the URL and recording the event will spike from 10ms to 1s or even 10s.

While your system records the event, the user waits.  And waits. Often closing the browser tab without ever seeing your content.

Upgrading the database’s hardware is an expensive way to buy time, but it’ll work for a while.  Eventually though, you’ll have to go asynchronous.

How Asynchronous Scales

With Asynchronous Processing, it becomes the Server’s responsibility to remember the Link Tracking event and process it later.  Depending on your tech stack, this can be done in a lot of different ways, including threads, callbacks, external queues, and other forms of buffering the data until it can be processed.

The important part, from a scaling perspective, is that the user is redirected to the original URL as quickly as possible.

The user doesn’t care about Link Tracking, and with Asynchronous Processing you won’t make your users wait while you write to the database.
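
As a sketch of the simplest form of that buffering, here’s the same handler with the write handed to an in-process queue drained by a background thread.  This is illustrative only – a production system would more likely hand the event to an external queue – and the writer thread gets its own connection because it runs concurrently with the handler:

# The handler buffers the event and redirects immediately;
# a background thread does the slow database write later.
import queue
import threading

import psycopg2
from flask import Flask, redirect

app = Flask(__name__)
conn = psycopg2.connect("dbname=tracking")
click_buffer = queue.Queue()

def click_writer():
    writer_conn = psycopg2.connect("dbname=tracking")
    while True:
        client_id, link_id = click_buffer.get()
        with writer_conn.cursor() as cur:
            cur.execute(
                "INSERT INTO clicks (client_id, link_id, clicked_at) VALUES (%s, %s, NOW())",
                (client_id, link_id),
            )
        writer_conn.commit()

threading.Thread(target=click_writer, daemon=True).start()

@app.route("/t/<link_id>")
def track(link_id):
    with conn.cursor() as cur:
        cur.execute("SELECT client_id, original_url FROM links WHERE id = %s", (link_id,))
        client_id, original_url = cur.fetchone()
    click_buffer.put((client_id, link_id))  # remember the event for later
    return redirect(original_url)           # the user moves on right away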

Making the event processing asynchronous is an important first step towards making a scalable system. In part 2 I will discuss how caching the URLs will improve the design further.

Your Database is not a queue – A Live Example

A while ago I wrote an article, Your Database is not a Queue, where I talked about this common SaaS scaling anti-pattern. At the time I said:

Using a database as a queue is a natural and organic part of any growing system.  It’s an expedient use of the tools you have on hand. It’s also a subtle mistake that will consume hundreds of thousands of dollars in developer time and countless headaches for the rest of your business.  Let’s walk down the easy path into this mess, and how to carve a way out.

Today I have a live example of a SaaS company, layerci.com, proudly embracing the anti-pattern. In this article I will compare my descriptions with theirs, and point out the expensive and time-consuming problems they will face down the road.

None of this is to hate on layerci.com. An expedient solution that gets your product to market is worth infinitely more than a philosophically correct solution that delays giving value to your clients. My goal is to understand how SaaS companies get themselves into this situation, and to offer paths out of the hole.

What’s the same

In my article I described a system evolving out of reporting; here is layerci’s version of the problem:

We hit it quickly at LayerCI – we needed to keep the viewers of a test run’s page and the github API notified about a run as it progressed.

I described an accidental queue, while layerci is building one explicitly:

CREATE TYPE ci_job_status AS ENUM ('new', 'initializing', 'initialized', 'running', 'success', 'error');

CREATE TABLE ci_jobs(
	id SERIAL, 
	repository varchar(256), 
	status ci_job_status, 
	status_change_time timestamp
);

/*on API call*/
INSERT INTO ci_jobs(repository, status, status_change_time) VALUES ('https://github.com/colinchartier/layerci-color-test', 'new', NOW());

I suggested that after you have an explicit, atomic queue, your next scaling problem is failures. Layerci punts on this point:

As a database, Postgres has very good persistence guarantees – It’s easy to query “dead” jobs with, e.g., SELECT * FROM ci_jobs WHERE status='initializing' AND NOW() - status_change_time > '1 hour'::interval to handle workers crashing or hanging.

What’s different

There are a couple of differences between the two scenarios. They aren’t material to my point, so I’ll give them a quick summary:

  • My system imagines multiple job types; layerci is sticking to a single process type.
  • layerci is doing some slick leveraging of PostgreSQL to alleviate the need for a Process Manager. This greatly reduces the amount of work needed to get the system running.

What’s the problem?

The main problem with layerci’s solution is the amount of developer time spent designing and building it. As a small startup, the time and effort invested in their home-grown solution would almost certainly have been better spent developing new features or talking with clients.

It’s the failures

From a technical perspective, the biggest problem is the lack of failure handling. As the quote above shows, layerci punts on retries: their answer to workers crashing or hanging is to query for “dead” jobs by hand.

Handling failures is a lot of work, and something you get for free as part of a queue.

Without retries and poison queue handling, these failures will immediately impact layerci’s clients and require manual human intervention. You can add failure support, but that’s throwing good developer time after bad. Queues give you great support out of the box.
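
For contrast, here’s roughly what retry and poison-message handling costs with SQS: a few lines of configuration. This is a sketch using boto3, and the queue names and retry count are placeholders:

# Messages that fail processing 5 times are automatically moved to the
# dead letter queue, where they can be inspected and replayed.
import json

import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="ci-jobs-dead-letter")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="ci-jobs",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)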

Monitoring should not be an afterthought

In addition to not handling failure, layerci’s solution doesn’t handle monitoring either:

Since jobs are defined in SQL, it’s easy to generate graphql and protobuf representations of them (i.e., to provide APIs that checks the run status.)

This means that initially you’ll be running blind on a solution with no retries. This is the “Our customers will tell us when there’s a problem” school of monitoring. That’s betting your client relationships on perfect software with no hiccups. I don’t like those odds.

Scaling databases is expensive

The design uses a single, ever-growing jobs table, ci_jobs, which will store a row for every job forever. The article points out PostgreSQL’s amazing ability to scale, which could keep you ahead of the curve for a long time. But database scaling is the most expensive piece in any cloud application stack.

Why pay to scale databases to support quick inserts, updates, and triggers on a million-row table? The database is your permanent record; a queue is ephemeral.

Conclusion

No judgment if you build a queue into your database to get your product to market. layerci has a clever solution, but it is incomplete, and by the time you get it working at scale in production you will have squandered tons of developer resources on a system that is more expensive to run than out-of-the-box solutions.

Do you have a queue in your database? Read my original article for suggestions on how to get out of the hole without doing a total rewrite.

Is your situation unique? I’d love to hear more about it!

What is a Queue Anyway?

Several readers wrote in response to my article Your Database is not a Queue to tell me that they don’t know what a queue is, or why they would use one.  As one reader put it, “assume I have 3 tools, a hammer, a screwdriver and a database”.  Fair enough; this week I want to talk about what a message queue is, what features it offers, and why it is a superior solution for batch processing.

To keep things as concrete as possible, I’ll use a real-world example, Nightly Report Generation, and AWS technologies.

Your customers want to know how things are going.  If your SaaS integrates with a shopping cart, how many sales did you do?  If you do marketing, how many potential customers did you reach? How many leads were generated?  Emails sent? Whatever your service, you should be letting your customers know that you’re killing it for them.

Most SaaS companies have some form of RESTful API for customers, and initially you can ask clients to help themselves and generate reports on demand.  But as you grow, on-demand reporting becomes too slow.  Code that worked for a client with 200 customer shopping carts a day may be too slow at 2,000 or 20,000 carts.  These are great problems to have!

To meet your clients’ needs, you need a system to generate reports overnight.  It’s not client-facing, so it doesn’t need to be RESTful, and it’s not driven by client activity, so it won’t run itself.

Enter queues!

For this article we will use AWS’s SQS, which stands for Simple Queue Service.  There are lots of other wonderful open source solutions like RabbitMQ, but you are probably already using AWS, and SQS is cheap and fully managed.
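
Before digging into the semantics, here’s how little code the producer side takes with boto3.  A nightly cron job drops one message per client onto the queue; the queue name and message shape are made up for this example:

# Enqueue one report job per client.  Each message is self-contained,
# so any worker can pick it up.
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="nightly-reports")["QueueUrl"]

def enqueue_nightly_reports(client_ids):
    for client_id in client_ids:
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"client_id": client_id, "report": "daily_summary"}),
        )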

What does a queue do?

  • A queue gives you a list of messages that can only be accessed in order.
  • A queue guarantees that one, and only one, consumer can see a message at a time.
  • A queue guarantees that messages do not get lost: if a consumer takes a message from the queue, it has a certain amount of time to tell the queue that the message is done, or the queue will assume something has gone wrong and make the message available again.
  • A queue offers a Dead Letter Queue: if something goes wrong too many times, the message is removed and placed on the Dead Letter Queue so that the rest of the list can be processed.

As a practical matter, all that boils down to:

  • You can run multiple instances of the report generator without worrying about missing a client, or sending the same report twice (see the worker sketch after this list).  You get to skip the early concurrency and scaling problems you’d encounter if you wrote your own code, or tried to use a database.
  • When your code has a bug in an edge case, you’ll still be able to generate all of the reports that don’t hit the edge case.
  • You can alert on failures, see which reports failed, and after you fix the bug, put them back into the queue to run again with the click of a button.
  • Because SQS is managed by AWS, you get monitoring and alerts out of the box.
  • Because SQS is managed by AWS, you don’t have to worry about the queue’s scale or performance.
  • You can add API endpoints to generate “on demand” messages and put them on the queue.  This means that the “standard” report messages, and client generated “on demand” report requests follow the same process and you don’t need to build and maintain separate reporting pipelines for internal and external requests.
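
Here’s the worker sketch the first bullet refers to (boto3 again; generate_report is a hypothetical stand-in for your report code).  The message is only deleted after the report succeeds – if the worker crashes mid-report, the visibility timeout expires and the queue hands the message to another worker:

# Each worker loops forever: receive a message, do the work, delete it.
# Skipping the delete on failure is what makes retries automatic.
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="nightly-reports")["QueueUrl"]

def run_worker(generate_report):
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            generate_report(job["client_id"])
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])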

Everything is tradeoffs, and queues don’t solve all problems well.  What kinds of problems are you going to have with SQS?

  • Priority/Re-sorting existing messages – You have 10,000 messages on the queue and an important client needs a report NOW!  Prioritized messages, or changing the order of messages on the queue is not something SQS supports. You can have a “fast lane” queue that gets high priority messages, but it’s clunky.
  • Random Access – You have 10,000 messages and want to know where in the queue a specific client’s report is?  Queues let you add at the end and take from the front, that’s it. If you need to know what’s in the middle you’ll need to maintain that information in a separate system.
  • Random Deleting – Probably not important for reports, but for cases like bulk importing, if a client changes her mind and wants to cancel a job, you can’t reach into the middle of the queue and remove the message.
  • Process order is not guaranteed – If you have a 3-part task, and you cannot start Task 2 until Task 1 finishes, you cannot add all 3 tasks onto the queue at once.  It is highly likely that a second worker will come along and start Task 2 while Task 1 is still in process.  Instead, have Task 1 add Task 2 onto the queue when it finishes (see the sketch after this list).
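
Here’s a sketch of that chaining pattern, with do_step_one and do_step_two standing in for your actual task code:

# Task 1 enqueues Task 2 only when it finishes, so process order holds
# even with many workers pulling from the same queue.
import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="jobs")["QueueUrl"]

def handle_task(msg_body, do_step_one, do_step_two):
    task = json.loads(msg_body)
    if task["step"] == 1:
        do_step_one(task)
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"step": 2, "job_id": task["job_id"]}),
        )
    elif task["step"] == 2:
        do_step_two(task)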

None of these problems will crop up in your early iterations, and they are great problems to have!  They are signs that your SaaS is growing to meet your clients’ needs; you and your clients are thriving!

To bring it full circle, what if you already have a home-grown system for running reports?  How can you get on the path to queues?  See my article about common ways into the DB-as-a-Queue mess, and some suggestions on how to get out.

What’s the first step towards fixing a terrible system?

Imagine you’ve just been hired to fix a horrible legacy system.

You’ve just been handed a giant monolith that talks to customers on the internet, accesses 4 different internal databases, handles employee and customer on-boarding, sends marketing emails, contains a proprietary task scheduling system, and has concurrency issues.  Which problem do you tackle first?

Separation of concerns?  Maybe the massive number of exceptions in the logs?  Probably none of these.

The first thing you really need to learn is why you’ve been hired to fix the system in the first place.  Ignore the technical problems; what are the business problems with the current system?  What does the business need done so badly that they’ve hired you?  Your fellow programmers care about how difficult the current system is to work with, but the business cares about how hard it is to get what they need.  Don’t confuse tech problems with business problems.

Learn the business problem that justifies your salary.  Find a way to provide that value.  You can fix or replace the legacy system over time.