Chipping Away

You have a goal, you know what it means, and what it implies.

You also know what’s blocking your progress.

It’s time to iterate!

Iterating Against the Blockers

Ask yourself:

  • What’s the first step?
  • What if all I needed was a tool?
  • How will I know if it’s working?

These three primary questions will help you chip away at the Blockers.

You want progress, not victory.  When you keep iterating away at the Blockers, eventually achieving your goals becomes easy.

Besides the questions, the main constraint to keep in mind is that each iteration needs to leave the system in a state where you can walk away from the project.

Unused functionality?  Totally fine.  

Refactored code with no new usage?  Great!

Tools that work but don’t do anything useful?  One day.

Nothing that requires keeping two pieces of code in sync.

Nothing that would prevent other developers from evolving existing code.

Whatever you do, it must go into the system with each iteration.

Example: Continuing from the Async Processing Blockers

I want to make my API asynchronous so that client size doesn’t impact the API’s responsiveness.  But, I can’t make the API asynchronous because:

  • I don’t have a system to queue the requests.
  • I don’t have a system to process the requests off of a queue.
  • I don’t have a way to make the processing visible to the client.

Attempting to do all of this work in one giant step is a recipe for a project that gets delivered 6 months to a year late.

I’m going to hand-wave and declare that we are using SQS, AWS’s native queuing system.  This makes setting up the queue trivial and reliable.

I don’t have a system to queue the requests.

What’s the first step?

Write a data model and serializer.  What am I even going to write onto this queue?

What if all I needed was a tool?

Instead of worrying about a system, create a command line tool in your existing codebase to push data to SQS.  It won’t be reliable, it won’t have logging, and it won’t have visibility.

But you’re the only one using it, so that’s fine.
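As a sketch of what that first step plus tool could look like (assuming Python and boto3; the queue name and every field in the data model are hypothetical guesses):

```python
import json
from dataclasses import dataclass, asdict

QUEUE_NAME = "async-requests"  # hypothetical queue name


@dataclass
class ProcessingRequest:
    # A first guess at the data model -- it will be wrong, and that's fine.
    client_id: str
    action: str
    payload: dict


def serialize(request: ProcessingRequest) -> str:
    """The message body that will be written onto the queue."""
    return json.dumps(asdict(request))


def push(request: ProcessingRequest) -> None:
    """Throwaway CLI helper: no retries, no logging, no visibility."""
    import boto3  # AWS SDK for Python; imported here so the model is usable without it

    sqs = boto3.client("sqs")
    url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
    sqs.send_message(QueueUrl=url, MessageBody=serialize(request))
```

The `push` helper is deliberately disposable; only the data model is meant to survive.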

How will I know it’s working?

Manually!  AWS has great observability.  You don’t need to do anything.

Combining your first step (a data model), your tool, and AWS observability, you’ll be able to push data onto a queue and view what got sent.

The data model will be wrong and the tool will not be production ready!

That’s ok because no existing functionality is blocked or broken.  Getting interrupted doesn’t create risk, which means you can work even if you only have a little time.

I don’t have a system to process the requests off of a queue.

What’s the first step?  

Write a data model and deserializer.  What data do I need to be on the queue in order to recreate the event I need to process?

What if all I needed was a tool?

Create a tool to pull the message off the queue and deserialize it.  Send the result to a data validator.  (You’re accepting customer requests from an API; you’d better have a data validator.)
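A sketch of that tool in the same vein (Python and boto3 assumed; the queue name, fields, and allowed actions are all hypothetical):

```python
import json
from dataclasses import dataclass

QUEUE_NAME = "async-requests"  # hypothetical queue name


@dataclass
class ProcessingRequest:
    client_id: str
    action: str
    payload: dict


def deserialize(body: str) -> ProcessingRequest:
    """Rebuild the event from the raw queue message body."""
    return ProcessingRequest(**json.loads(body))


def validate(request: ProcessingRequest) -> list:
    """Minimal validator; the allowed action names here are hypothetical."""
    errors = []
    if not request.client_id:
        errors.append("client_id is required")
    if request.action not in {"reprocess", "export"}:
        errors.append("unknown action: " + request.action)
    return errors


def pull_one() -> None:
    """Throwaway tool: pull one message, deserialize, validate, print."""
    import boto3

    sqs = boto3.client("sqs")
    url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
    for message in sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1).get("Messages", []):
        request = deserialize(message["Body"])
        print(request, validate(request))
```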

How will I know it’s working?

Manually!  AWS has great observability.  You don’t need to do anything.

Combining the three gets you the ability to manually pull data off the queue, deserialize it, and validate it.

You can do this before, after or during your work to get the data onto a queue.  It’s not production ready, but it also doesn’t create risk.

I don’t have a way to make the processing visible to the client.

What’s the first step? 

What does visibility look like to the client?  Where does the data go in your UI?  What would you want to know?

What if all I needed was a tool?

Make an endpoint that calls AWS and returns the data you think you need.

How will I know it’s working?

Manually!  Compare what your endpoint tells you with what AWS tells you.  Don’t start until you have tools for adding and removing events from the queue.

Combining the three gets you an endpoint that tells you about the queue.

The endpoint should be safe to deploy to production, since for now the queue is always empty.
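The core of such an endpoint might be sketched like this (boto3 assumed; the web-framework wiring is omitted, and the choice of attributes is just a first guess at what matters):

```python
def queue_status(sqs_client, queue_url: str) -> dict:
    """Return the queue facts we *think* the client will care about.

    `sqs_client` is a boto3 SQS client; taking it as a parameter keeps the
    function easy to exercise with a stub before the queue sees real traffic.
    """
    attrs = sqs_client.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=[
            "ApproximateNumberOfMessages",
            "ApproximateNumberOfMessagesNotVisible",
        ],
    )["Attributes"]
    return {
        "queued": int(attrs["ApproximateNumberOfMessages"]),
        "in_flight": int(attrs["ApproximateNumberOfMessagesNotVisible"]),
    }
```

Passing the SQS client in as a parameter keeps the guts testable with a stub, which is handy while the queue is still empty.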

Conclusion

Iterating allows you to chip away at your blockers until there’s nothing stopping you.

Apply the three questions:

  • What’s the first step?
  • What if all I needed was a tool?
  • How will I know if it’s working?

Always keep the system in a state where you can walk away from the project.

Keep iterating against your blockers, and you’ll be amazed at how soon you’ll achieve your goals!

The Implications Of Your Characteristics

Part Three of my series on iterative delivery 

You have a goal, you have characteristics, now it’s time to ponder the implications.

The implications are things that would have to be true in order for your characteristic to be achieved.  They are levers you can pull in order to make progress.

Let’s work through an example.

In part 2 I suggested that a Characteristic of a system that can support clients of any size is that API endpoints respond to requests in the same amount of time for clients of any size.

What would need to happen to an existing SaaS in order to make that true?

  • The API couldn’t do direct processing in a synchronous manner.  It would have to queue the client request for async processing.  Adding a message to a queue should be consistent regardless of how large the client, or the queue, becomes.
  • For data requests, endpoints would need to be well indexed and have constrained results.
  • Offer async requests for large, open-ended data.  An async request works by quickly returning a token, which can then be used to poll for the results.
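The last bullet can be sketched as a token-and-poll pair (a minimal in-memory sketch; a real implementation would persist results and enqueue the heavy work):

```python
import uuid

# In-memory stand-in for a real results store (sketch only).
_results = {}


def start_async_request(query: dict) -> str:
    """Quickly return a token; the heavy work happens elsewhere."""
    token = str(uuid.uuid4())
    _results[token] = {"status": "pending", "data": None}
    # ...enqueue `query` for background processing here...
    return token


def complete(token: str, data) -> None:
    """Called by the background worker when results are ready."""
    _results[token] = {"status": "complete", "data": data}


def poll(token: str) -> dict:
    """Clients poll with their token until status is 'complete'."""
    return _results.get(token, {"status": "unknown", "data": None})
```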

Implications are measurable

How much processing can be done asynchronously today?  How much could be done last month?

How many data endpoints allow requests that will perform slowly?  Is that number going up or down?

How robust is the async data request system?  Does it exist at all?

Implications are Levers

Progress on implications pushes your service towards its goal.  Sometimes a little progress will result in lots of movement, sometimes a lot of progress will barely be noticeable.

Speaking to Implications

It is important that you can speak to how the implications drive progress towards your goal.

Asynchronous processing lets your service remain responsive.  It doesn’t mean you can process the data in a timely manner yet.  It sets the stage for parallel processing and other methods of handling large loads.

Next Steps

Before continuing on, try to come up with 3 implications for your most important characteristics.

You’ll want a good selection of implications for the next part – blockers.

We will explore what’s preventing you from moving your system in the direction it needs to go.

Getting Started With Iterative Delivery

The last 4 posts have been trying to convince you that iterative, baby-step delivery is better for your clients than moonshot, giant-step delivery.

But how do you get started?  How do you shorten your stride from shooting the moon, to one small step?

The next series of posts is going to lay out my scaling iterative delivery framework.  This site is about scaling SaaS software, and this framework works best if you want an order of magnitude more of what you already offer your clients.  This isn’t a general framework, and it certainly isn’t the only way to get started with iterative delivery.

Work your way through these steps:

  1. Pick a goal – 1 sentence, highly aspirational and self explanatory.
  2. Define the characteristics of your goal – What measurable characteristics does your system need in order to achieve your goal?
  3. What are the implications? – What technical things would have to be true in order for your system to have all the characteristics you need?
  4. What are the blockers? – What is stopping you from making the implications true?
  5. What can you do to weaken the blockers? – Set aside the goal, characteristics and implications; what can you do to weaken the blockers?

Weakening the blockers is where you start delivering iteratively.  As the blockers disappear, your system becomes better for your clients and easier for you to implement your technical needs.

We will explore each step in depth in the following posts.

The Opposite of Iterative Delivery

Iterative Delivery is a uniquely powerful method for adding value to a SaaS.  Other than iterative, there are no respectably named ways to deliver features, reskins, updates and bug fixes.  Big bangs, waterfalls, and quarterly releases fill your customers’ hearts with dread.

Look at 5 antonyms for Iterative Delivery:

  • Erratic Delivery
  • Infrequent Delivery
  • Irregular Delivery
  • Overwhelming Delivery
  • Sporadic Delivery

If your customers used these terms to describe updates to your SaaS, would they be wrong?

Iterative Delivery is about delivering small pieces of value to your customers so often that they know you’re improving the Service, but so small that they barely notice the changes.

Don’t be overwhelming, erratic or infrequent – be iterative and delight your customers.

The Chestburster Antipattern

The Chestburster is an antipattern that occurs when transitioning from a monolith to services. 

The team sees an opportunity to extract a small piece of functionality from the monolith into a new service, but the monolith is the only place that handles security, permissions and composition.

Because the new service can’t face clients directly, the Chestburster hides behind the monolith, hoping to burst through at some later point.

The Chestburster begins as the inverse of the Strangler pattern: the monolith delegates to the new service, instead of the new service delegating to the monolith.

Why it’s appealing

The Chestburster’s appeal is that it gets the New Service up and running quickly.  This looks like progress!  The legacy code is extracted, possibly rewritten, and maybe better.

Why it fails

There is no business case for building the functionality the new service needs to burst through the monolith.  The functionality has been rewritten.  It’s been rewritten into a new service.  How do you go back now and ask for time to address security and the other missing pieces?  Worse, the missing pieces are usually outside of the team’s control; security is one area you want to leave to the experts.

Even if you get past all the problems on your side, you’ve created new composition complexities for the client.  Now the client has to create a new connection to the Chestburster and handle routing themselves.  Can you make your clients update?  Should you?

Remember The Strangler

If you want to break apart a monolith, it’s always a good idea to start with a Strangler.  If you can’t set up a Strangler on your existing monolith, you aren’t ready to start breaking it apart.

That doesn’t mean you’re stuck with the current functionality!

If you have the time and resources to extract the code into a new service, you have the time and resources to decouple the code inside of the monolith.  When the time comes to decompose into services, you’ll be ready.

Conclusion

The Chestburster gives the illusion of quick progress, but work quickly stalls as the team runs into problems they can’t control.  Overcoming the technical hurdles doesn’t guarantee that clients will ever update their integration.

Success in legacy system replacement comes by integrating first, and moving functionality second.  With the chestburster you move functionality first and probably never burst through.

Building Your Way Out Of A Monolith – Create A Seam

Why Build Outside The Monolith

When you have a creaky monolith the obvious first step is to build new functionality outside the monolith.  Working on a greenfield, without the monolith’s constraining design, bugs, and even programming language is highly appealing.

There is a tendency to wander those verdant green fields for months on end and forget that you need to connect that new functionality back to the monolith’s muddy brown field.

Eventually, management loses patience with the project and pushes the team to wrap up.  Integration at this point can take months!  Worse, because the new project wasn’t talking to the monolith, most of the work tends to be a duplication of what’s in the monolith.  Written much better, to be sure!  But without value to the client.

Integration is where greenfield projects die.  You have to bring two systems together: the monolith, which is difficult to work with, and the greenfield, which is intentionally unlike the monolith.  And you have to do it under pressure, while delivering value.

Questions to Ask

When I start working with a team building outside their monolith, integration is the number one issue on my mind.

I push the team to deliver new functionality for the client as early as possible.  Here are 3 starting questions I typically ask:

  1. What new functionality are you building?  Not what functionality do you need to build; which parts of it are new for the client?
  2. How are you going to integrate the new feature into the monolith’s existing workflows?
  3. What features do you need to duplicate from the monolith?  Can you change the monolith instead?  You have to work in the monolith sooner or later.

First Create the Seam

I don’t look for the smallest or easiest feature.  I look for the smallest seam in the monolith.

For the feature to get used, the monolith must use it.  The biggest blocker, the most important thing, is creating a seam in the monolith for the new feature!

A seam is where your feature will be inserted into the workflow.  It might be a new function call in a stretch of procedural code, an adapter in your OOP, or even a strangler at your load balancer.

The important part is knowing where and how your feature will fit into the seam. 
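A seam can be as small as one function that owns the call site (a Python sketch with hypothetical names; the point is the single swappable entry, not the receipt logic):

```python
USE_NEW_SERVICE = False  # flip once the external service is ready


def legacy_send_receipt(invoice: dict) -> str:
    # Stands in for the existing monolith code path.
    return "legacy receipt for %s" % invoice["id"]


def new_service_send_receipt(invoice: dict) -> str:
    # Would become an HTTP call to the extracted service.
    return "new receipt for %s" % invoice["id"]


def send_receipt(invoice: dict) -> str:
    """The seam: the one call site where the feature plugs into the workflow."""
    if USE_NEW_SERVICE:
        return new_service_send_receipt(invoice)
    return legacy_send_receipt(invoice)
```

Because the workflow only ever calls `send_receipt`, swapping the implementation later touches exactly one place in the monolith.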

Second Change The Monolith

Once you have a seam, you have a place to start modifying the monolith to support the feature.  This is critical to prevent spending time recreating existing functionality.

Instead of recreating functionality, refactor the seam to provide it to your new service.

Finally Build Outside the monolith

Now that the monolith has a spot for your feature in its workflow, and it can support the external service, building the feature is easy.  Drop it right in!

Now, the moment your external service can say “Hello World!”, it is talking to the monolith.  It is in production, and even if you don’t finish it 100%, the parts you do finish will still be adding value.  Odds are, since your team is delivering, management will be happy to let you go right on adding features and delivering value.

Conclusion

Starting with a seam lets you develop outside the monolith while still being in production with the first release.  No working in a silo for months at a time.  No recreating functionality.

It delivers faster, partially by doing less work, partially by enabling iterations.

2 Developers, A Mathematician and a Scrum Master Walk Into a Bar

And come up with “The Worst Coding Problem Ever” dun dun dun!

Imagine getting this whopper in an interview or a take home test:

The United States has been conducting a census once a decade for over 200 years.

Imagine you can iterate the data at a family level, with the family data being whatever format/object is easiest for you. 

Find the family with the longest Fibonacci sequence of children.

The most fundamental issue is that it’s not clear what the answer looks like.  In fact, the 4 of us had 3 different interpretations of what the answer would look like.

Is the question looking for children’s ages going forward?

That would be an age sequence of 0, 1, 1, 2, 3, 5, etc.

Or a newborn, a pair of 1-year-old twins, a 2-year-old, a 3-year-old, a 5-year-old, etc.

Or is it looking for children born in the sequence?  (This is the inverse of the first answer)

A 6-year-old, 5-year-old twins, a 3-year-old and a newborn

Or is it asking about the age gap between children?

In that case you’d be hunting for twins (gap of 0), a gap of 1 year, a second gap of 1 year, a gap of 2 years, etc.

There are so many ways to be the family Fibonacci.
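For what it’s worth, the age-gap interpretation is at least mechanical to check; a sketch (hypothetical helper names, Python) might be:

```python
def age_gaps(ages):
    """Gaps between consecutive children, youngest to oldest."""
    ordered = sorted(ages)
    return [b - a for a, b in zip(ordered, ordered[1:])]


def fib_prefix_len(gaps):
    """Length of the longest prefix of `gaps` matching 0, 1, 1, 2, 3, 5, ..."""
    a, b = 0, 1
    count = 0
    for gap in gaps:
        if gap != a:
            break
        count += 1
        a, b = b, a + b
    return count
```

Run it over every family and keep the maximum, for one of the three answers, anyway.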

Many Technical Problems are like this

Fairly straightforward computer problems with meaningless mathematics sprinkled on top, asked by people who don’t know the implications of any of the 3 answers.

But what’s the answer?

If you are presented with this question in an interview, the correct answer is to thank the interviewer for their time, wish them the best of luck in their search, and end the interview.

Whose Data Is It Anyway?

The tenancy line is a useful construct for separating SaaS data from client data.  When you have few clients, separating the data may not be worth the effort of having multiple data stores.  As your system grows, ensuring that client data is separated from the SaaS data becomes as critical as ensuring the clients’ data remains separate from each other.

Company data is everything operational.  Was an action successful?  Does it need to be retried?  How many jobs are running right now?  This data is extremely important to make sure that client jobs are run efficiently, but it’s not relevant to clients.  Clients care about how long a job takes to complete, not about your concurrency, load shaping, or retry rate.

While the data is nearly meaningless to your clients, it is useful to you.  It becomes more useful in aggregate.  It has synergy.  A random failure for one client becomes a pattern when you can see across all clients.  When operational data is stored in logically separated databases, you quickly lose the ability to check the data.  This is when it becomes important to separate operational data from client data.

Pull the operational data from a single client into a multi-tenant repository for the SaaS, and suddenly you can see what’s happening system wide.  Instead of only seeing what’s happening to a single client, you see the system.

Once you can see the system, you can shape it.  See this article for a discussion on how.

Other considerations

If visibility alone isn’t convincing, extracting operational data is usually its own reward.

Operational data is usually high velocity – tracking a job’s progress involves updating the status with every state change.  If your operational store is the same as the client store, tracking progress conflicts with the actual work.

DropDooms! How Binding an Unbounded Reference Table Can Kill Your UI’s Performance

This post has been a long time coming – I wrote the idea down in my first list of potential posts, and I wrote a draft way back in 2019!

It is also the first time I can say that the content has been approved by my employer since they published it on their website.

It’s a great read, and I hope you enjoy it:

Dropdooms! How Binding an Unbounded Reference Table Can Kill Your UI’s Performance