Part 2 covers how NoSQL emerged as an improvement over the classic relational database solution for User Defined Fields. NoSQL delivers speed and scalability, but at the cost of being expensive and fragile. In part 3 I’m going to cover the emerging Hybrid Database solution for User Defined Fields.
Hybrid Databases allow you to combine the best aspects of the relational and NoSQL models, while avoiding most of the downsides.
A hybrid implementation looks like this:
The hybrid model brings the data back to a single server, but without the Contact->Field relation. Instead the field data is stored as a JSON object in the Contact table itself.
No meta programming and no filters; everything is back to SQL. Hybrid databases allow you to directly query JSON fields as if they were regular columns.
You can create indexes on the JSON data. This is an improvement over both the classic and NoSQL models. It can significantly improve performance by allowing the database engine to optimize queries based on usage.
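As a minimal sketch of the hybrid pattern, SQLite’s built-in JSON functions (available in most modern builds) can validate, index, and query a JSON column; the table and field names here are illustrative, not from any real CRM schema:

```python
import sqlite3

# Sketch of the hybrid pattern using SQLite's built-in JSON functions.
# Table and field names are illustrative, not from any real CRM schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contacts (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        custom_fields TEXT NOT NULL DEFAULT '{}'
            CHECK (json_valid(custom_fields))  -- reject malformed JSON outright
    )
""")

# An expression index on one JSON path lets the engine optimize lookups.
conn.execute(
    "CREATE INDEX idx_contacts_interest "
    "ON contacts (json_extract(custom_fields, '$.interest'))"
)

conn.executemany(
    "INSERT INTO contacts (name, custom_fields) VALUES (?, ?)",
    [
        ("Ada", '{"birthday": "1815-12-10", "interest": "math"}'),
        ("Bob", '{"interest": "sales"}'),
    ],
)

# Query a JSON field as if it were a regular column.
rows = conn.execute(
    "SELECT name FROM contacts "
    "WHERE json_extract(custom_fields, '$.interest') = 'math'"
).fetchall()
print(rows)  # [('Ada',)]

# The CHECK constraint makes it difficult to poison the data.
try:
    conn.execute("INSERT INTO contacts (name, custom_fields) "
                 "VALUES ('Eve', 'not json')")
except sqlite3.IntegrityError:
    print("rejected")  # malformed JSON never reaches the table
```

PostgreSQL’s `jsonb` type and MySQL’s JSON columns offer the same pattern with richer indexing options; the SQLite version above is just the smallest runnable illustration.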
Having a single system makes things simple to set up and easier to maintain.
The database will enforce valid JSON structures, which makes it difficult to poison your data.
There’s no enforced relationship between the JSON data and your User Defined Fields. This means that data can get lost because your system no longer knows to display or delete it.
While Hybrid Databases should scale far beyond the needs of your SaaS, the scaling isn’t quite as open ended as the NoSQL model. If you out-scale the Hybrid model, congratulations, your company’s services are in high demand!
If your SaaS is implementing User Defined Fields from scratch today, go with the Hybrid model. If you already have the classic or NoSQL pattern in place, it’s a good time to start thinking about how to evolve towards a hybrid solution.
I’ll cover how to evolve your existing solution in Part 4.
In part 1, I covered the classic solution for User Defined Fields: simple but unscalable.
NoSQL emerged as a solution to relational fields in the late 2000s. Instead of having a meta table defining fields in a relational database, the User Defined data would live in NoSQL.
The structure would look like this:
This model eliminates the meta programming and the repeated joins of the same table against itself. The major new headache it creates is the difficulty of maintaining the integrity of the field data.
No complicated meta programming. Instead you write a filter/match function to run against the data in the Collection Of Fields.
No more repeated matching against the same table. Adding additional search criteria has minimal cost.
Open ended/internet level scaling. For a CRM or SaaS, the limiting factor will be the cost of storing data, not a hard limit of the technology.
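A sketch of that filter/match approach, using plain Python dicts to stand in for the NoSQL Collection Of Fields (the document shapes and field names are illustrative):

```python
# Sketch of the NoSQL pattern: the relational side keeps the contact row,
# while each contact's user defined fields live in a free-form document.
# A plain list of dicts stands in for a real document store here.
contact_fields = [
    {"contact_id": 1, "fields": {"birthday": "1990-04-01", "interest": "golf"}},
    {"contact_id": 2, "fields": {"interest": "golf", "purchases": 12}},
    {"contact_id": 3, "fields": {"interest": "chess"}},
]

def match(doc, criteria):
    """Return True if every criterion matches the document's fields."""
    return all(doc["fields"].get(k) == v for k, v in criteria.items())

# Adding another criterion is just another entry in the dict -- no extra join.
golfers = [d["contact_id"] for d in contact_fields
           if match(d, {"interest": "golf"})]
print(golfers)  # [1, 2]
```

In a real deployment the filter runs inside the document store’s query engine rather than in application code, but the shape of the logic is the same.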
Much more complicated to set up and maintain. Even with managed services, supporting two database technologies doubles the difficulty of CRUD. Multiple inserts, multiple deletes, tons of ways for things to go wrong.
Without a relational database enforcing the data structure, poisoned or unreadable data is common. Being able to store arbitrary data collections means you’ll invariably store buggy data. You’ll miss some records during upgrades and have to support multiple deserializers. You will lose customer data in the name of expediency and cost control.
It’s more expensive. You’ll pay for your relational database, NoSQL database, and software to map between the two.
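The multiple-deserializers problem can be sketched as version-aware parsing; the document shapes and version scheme below are made up for illustration:

```python
# Sketch of the multiple-deserializer problem: documents written by
# different application versions need version-aware parsing, and any
# unknown version is effectively unreadable data.
def read_fields(doc):
    version = doc.get("schema_version", 1)
    if version == 1:
        # v1 stored fields as a flat dict
        return doc.get("fields", {})
    if version == 2:
        # v2 stored fields as a list of name/value pairs
        return {f["name"]: f["value"] for f in doc.get("fields", [])}
    raise ValueError(f"unreadable document version: {version}")

v1_doc = {"fields": {"interest": "golf"}}
v2_doc = {"schema_version": 2,
          "fields": [{"name": "interest", "value": "golf"}]}
print(read_fields(v1_doc), read_fields(v2_doc))
```

Every schema migration that misses some records adds another branch to this function, and every branch is a place where customer data can quietly become unreadable.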
NoSQL systems solve the scaling problems of setting up User Defined Fields in a relational database. That scaling comes at a high price in complexity, fragility, and cost.
Reducing the complexity, fragility, and costs leads to the upcoming 3rd shift, covered in part 3.
This series covers a brief history of the 2 historic patterns for implementing User Defined Fields in a CRM, the upcoming hybrid solution that provides the best of both worlds, and how to evolve your existing CRM to the latest pattern. If you care about CRM performance, scaling, or cost, this series is for you!
What are User Defined Field Patterns?
Every CRM provides a basic set of fields for defining a customer. Every CRM’s basic field set is different depending on the CRM’s focus. So, every user of a CRM needs to expand the basic definition in some way. Birthdays, purchase history, and interests are three very common additions.
The trick is allowing users to define their own fields in ways that don’t break your CRM.
The Three Patterns
At a high level, there have been three major architectures for implementing Custom Fields. Most of the design is driven by the strengths and weaknesses of the underlying database architecture.
Pattern 1, generalized columns in a database, spanned the dawn of time until the rise of NoSQL around 2010.
Pattern 2, NoSQL, began around 2010 and continues to today.
Pattern 3, JSON in a relational database, began in the late 2010s and combines the best of the two approaches.
Pattern 1 - All in a Relational Database
Before the rise of NoSQL, there was pretty much one way to build generic user defined fields.
The setup is simple, just 3 tables. A table of field definitions, a table for contacts, and a relational table with the 2 ids and the value for that contact’s custom field.
This design is extremely simple and can be implemented by a single developer very quickly.
Basic CRUD operations are easy and efficient.
Building search queries requires complicated techniques like metaprogramming.
Every search criterion results in a join against the ContactFields table. This results in an exponential explosion in query times.
The lack of defined table columns handicaps the database’s query optimization strategies.
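The schema and the join explosion can both be sketched with an in-memory SQLite database; the table and field names are illustrative:

```python
import sqlite3

# Sketch of the classic 3-table setup: field definitions, contacts, and a
# relational table holding one row per (contact, field) value. All names
# are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fields (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE contact_fields (
        contact_id INTEGER REFERENCES contacts(id),
        field_id   INTEGER REFERENCES fields(id),
        value      TEXT,
        PRIMARY KEY (contact_id, field_id)
    );
    INSERT INTO fields VALUES (1, 'birthday'), (2, 'interest');
    INSERT INTO contacts VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO contact_fields VALUES
        (1, 1, '1815-12-10'), (1, 2, 'golf'), (2, 2, 'sales');
""")

# Basic CRUD on one field for one contact is easy and efficient.
conn.execute("UPDATE contact_fields SET value = 'math' "
             "WHERE contact_id = 1 AND field_id = 2")

# Search is the problem: every extra criterion joins contact_fields again.
rows = conn.execute("""
    SELECT c.name FROM contacts c
    JOIN contact_fields cf1 ON cf1.contact_id = c.id
        AND cf1.field_id = 1 AND cf1.value = '1815-12-10'
    JOIN contact_fields cf2 ON cf2.contact_id = c.id
        AND cf2.field_id = 2 AND cf2.value = 'math'
""").fetchall()
print(rows)  # [('Ada',)]
```

Two criteria means two self-joins; a screen with ten search filters means ten, and because every value lives in one untyped column the optimizer has little to work with.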
The classic relational database pattern is easy to set up, but has terrible scaling. This super simple example would bog down at 1,000 contacts and 50 fields.
There are lots of ways to redesign for scale, but this is a SHORT history. Suffice it to say that it takes extremely complex and finicky systems to scale past 100,000 contacts and 1,000 fields.
The solutions to the classic pattern’s scaling led to the NoSQL revolution, covered in part 2.
You become a Scaleup when your SaaS’s service offering becomes compelling and you start attracting exponentially more clients.
All at once you have a lot more clients, clients with a lot more data.
Solutions that support 1,000 clients buckle as you pass 5,000. Suddenly, 25,000 clients is only months away.
Services that support hundreds of thousands of transactions a day fall hopelessly behind as you onboard clients with millions of transactions.
You finally know what customers want. You quickly find the edges of your system. Money is rolling in from customers and VCs. You can throw money at the problems to literally buy time to find a solution.
But you’re faced with a looming question - moonshots or baby steps.
Moonshots Are About You, Baby Steps Are About Your Clients
It’s not about you or your SaaS, it’s about your client’s outcomes.
Moonshots are appealing because they take you directly to where you need to be. Your system needs to scale 10x today and 100x next year; why not go straight for 100x?
Baby steps feel like aiming low because the impact on you is small. But it’s not about you! Think about the impact on your clients.
From a technology perspective, sending emails 1% faster is ::yawn::
But for your clients, faster emails means more engagement, which means more sales.
Would your clients rather have more sales this week, compounding every week for the next year, or flat sales for a year while you build a moonshot?
Clients who churn, or go out of business, won’t get value from the moonshot. Even if you deliver greater value eventually, your clients are better off getting some value now.
Are you delivering value to your SaaS or your clients?
The Chestburster is an antipattern that occurs when transitioning from a monolith to services.
The team sees an opportunity to extract a small piece of functionality from the monolith into a new service, but the monolith is the only place that handles security, permissions, and composition.
Because the new service can’t face clients directly, the Chestburster hides behind the monolith, hoping to burst through at some later point.
The Chestburster begins as the inverse of the Strangler pattern, with the monolith delegating to the new service instead of the new service delegating to the monolith.
Why it’s appealing
The Chestburster’s appeal is that it gets the New Service up and running quickly. This looks like progress! The legacy code is extracted, possibly rewritten, and maybe better.
Why it fails
There is no business case for building the functionality the new service needs to burst through the monolith. The functionality has already been rewritten into a new service. How do you go back now and ask for time to address security and the other missing pieces? Worse, the missing pieces are usually outside of the team’s control; security is one area you want to leave to the experts.
Even if you get past all the problems on your side, you’ve created new composition complexities for the client. Now the client has to create a new connection to the Chestburster and handle routing themselves. Can you make your clients update? Should you?
Remember The Strangler
If you want to break apart a monolith, it’s always a good idea to start with a Strangler. If you can’t set up a strangle on your existing monolith, you aren’t ready to start breaking it apart.
That doesn’t mean you’re stuck with the current functionality!
If you have the time and resources to extract the code into a new service, you have the time and resources to decouple the code inside of the monolith. When the time comes to decompose into services, you’ll be ready.
The Chestburster gives the illusion of quick progress, but quickly stalls as the team runs into problems they can’t control. Overcoming the technical hurdles doesn’t guarantee that clients will ever update their integration.
Success in legacy system replacement comes from integrating first and moving functionality second. With the Chestburster you move functionality first and probably never burst through.
3 Signs Your Resource Allocation Model Is Working Against You
After 6 posts on SaaS Tenancy Models, I want to bring it back to some concrete examples. When your SaaS has a Single Tenant model, clients expect to allocate all the resources they need, whenever they want. When every client is entitled to the entire resource pool, no client gets a great customer experience.
Here are 3 signs your Resource Allocation Model is working against you:
Large clients cause small clients’ work to stall
You have to rebalance the mix of clients in a cell for stability
Run your job at night for best performance
Large clients cause small clients’ work to stall
This is a classic “noisy neighbor” problem. Each client tries to claim all the shared resources needed to do their work. This isn’t much of a problem when none of the clients need a significant percentage of the pool. When a large client comes along, it drains the pool, and leaves your small clients flopping like fish out of water.
You have to rebalance the mix of clients in a cell for stability
When having multiple large clients in a cell affects stability, the short term solution is to migrate some clients to another cell. Large clients can impact performance, but they should not be able to impact stability. Moving clients around buys you time, but it also forces you to focus on smaller, less profitable clients.
Run your job at night for best performance
This is advice that often pops up on SaaS message boards. Don’t try to run your job during the day, schedule it to run in the evening so it is ready for the morning. When clients start posting workarounds to your problems, it’s a clear sign of frustration. Your clients are noticing that performance varies by the time of day. They are building mental models of your platform and deciding you have load and scale issues. By being helpful to each other, your clients are advertising your problems.
A Jobs Service is a very common service for SaaS companies. It provides a way to run work on a schedule, on demand, and independent of human activity. Often, everything that isn’t done through the website is done by a Job Service.
I have never worked at a SaaS without some version of a Job Service, usually homegrown and built off a database instead of a queue. They usually have descriptive and funny names - Task Processor, Crons, Crontabulous, Maestro, Batch Processor and of course Polite Batch Jobs.
Starting early in the SaaS’s life, they also evolve and grow with the SaaS, creating problems as they migrate from Single Tenant to a logically shared environment.
Single Tenant Job Service
In a single tenant model, provisioning a Job Service with a pool of workers is fairly straightforward. Jobs are generated and put onto a queue (and not a database!).
The Job Service takes jobs off of the queue and fans them out to the worker pool. This is simple and works well because the Queue handles the complexities of tracking and retrying jobs.
Because the Queue is FIFO, the Job Service has no visibility into the client composition of the pending jobs, and a large client can easily starve a small one of resources by adding hundreds or thousands of jobs to the queue. The large client will see progress as the jobs are processed, but nothing happens for the small client until the large job finishes.
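The starvation effect is easy to see in a toy FIFO simulation (the client names and job counts are made up):

```python
from collections import deque

# Toy simulation of FIFO starvation: a big client enqueues a large batch
# just before a small client's single job arrives.
queue = deque()
for i in range(1000):
    queue.append(("big-client", i))
queue.append(("small-client", 0))

# Workers drain the queue strictly in insertion order.
completed_before_small = 0
while queue:
    client, _job = queue.popleft()
    if client == "small-client":
        break
    completed_before_small += 1

print(completed_before_small)  # 1000: the small client waits out the entire batch
```

No matter how fast the workers are, the small client’s wait time is proportional to the big client’s backlog, not to its own.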
Things get even worse if the Queue and Job Service are Global instead of Cell based. A global queue feeding a global worker pool that works on clients spread across multiple database clusters will naturally cause database cluster hot spots. Performance will degrade for everyone on the cluster while the workers do massive jobs for a few large clients.
You can add bandaids like limiting the number of jobs per client and moving excess work onto overflow queues. This will help smaller clients somewhat, but natural hotspots will still occur.
Cross The Tenancy Line - Become Multi-Tenant
The Job Service needs to evolve from being Logically Separated into a Multi-Tenant service.
It needs to know how many jobs each client has pending, how long the jobs are taking, and how hot the database clusters are running so that it can operate a priority queue instead of FIFO.
The Jobs Service needs to move across the Tenancy Line.
What is the Tenancy Line?
With Logically Separate infrastructure the clients share infrastructure, but the data and services all behave as if there is only one client at a time. As a result each client can regulate its own behavior, but has no visibility into the infrastructure as a whole.
To stop acting like a Single Tenant service, the Jobs Service needs to cross the line into Multi-Tenancy.
This change is conceptually simple, but has a lot of subtle implications.
The Service can control load across clients
In the original model, workloads are random, based on when jobs are added to the queue. When a hotspot emerges, there’s not much the service can do without manual intervention. When there’s a noisy neighbor, you can’t do much to stop them from starving smaller clients because you don’t know where those clients are in the queue.
With a Multi-Tenant job service, you can control resources across cells and the entire platform. Small clients can be protected by moving jobs up in priority based on how many recent jobs they have completed.
Jobs will finish faster as worker loads can be managed across cells, preventing hotspots.
Overall throughput will rise, smaller client performance will improve dramatically, and large clients will see more consistent execution times.
The Job Service Becomes a Queue
The original design used a single simple queue. Every client adds jobs directly to the queue, and the Job Service’s responsibility is to take work, pass it to a worker, and mark the job as complete. If there’s a failure, the queue will time the job out and put the work back on the queue.
A FIFO queue prioritizes by insertion order and doesn’t have any mechanism for reordering. The Job Service will have to build prioritization logic and find a way to integrate into a queuing mechanism. Do not give in to temptation and turn your database into a queue!
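One way to sketch that prioritization logic is a heap keyed on per-client load; the weighting scheme below (priority = jobs the client already has pending) is illustrative, and a real service would also factor in job durations and database cluster heat:

```python
import heapq
import itertools

# Sketch of tenant-aware prioritization: a job's priority is how many jobs
# its client already has queued, so one client's flood can't starve the
# rest. The scheme and client names are illustrative.
class TenantAwareQueue:
    def __init__(self):
        self._heap = []
        self._pending = {}                  # client -> jobs still queued
        self._counter = itertools.count()   # tie-breaker keeps FIFO order

    def push(self, client, job):
        priority = self._pending.get(client, 0)
        self._pending[client] = priority + 1
        heapq.heappush(self._heap, (priority, next(self._counter), client, job))

    def pop(self):
        _priority, _seq, client, job = heapq.heappop(self._heap)
        self._pending[client] -= 1
        return client, job

q = TenantAwareQueue()
for i in range(5):
    q.push("big-client", i)    # a big client floods the queue first
q.push("small-client", 0)      # a small client arrives afterwards

order = [q.pop()[0] for _ in range(3)]
print(order)  # ['big-client', 'small-client', 'big-client']
```

With plain FIFO the small client would have waited behind all five jobs; here it is served second, while the big client’s batch still makes steady progress.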
Pushing the Jobs Service across the Tenancy Line is a major coming of age step in the evolution of a SaaS company.
It trades significant development resources and complexity for consistent execution and a solution to the Noisy Neighbor Problem. The SaaS benefits from the synergy this creates with better resource utilization and reduced database hotspotting.
Once a SaaS has enough clients to warrant the change, making the Jobs Processor Multi-Tenant is a major step forward.
This is part 4 in a series on SaaS Tenancy Models. Parts 1, 2, and 3.
SaaS companies are often approached by potential clients who want their instance to be completely separate from any other client. Sometimes the request is driven by legal requirements (primarily healthcare and defense), sometimes it is a desire for enhanced security.
Often, running a Multi-Tenant service with a single client will satisfy the client’s needs. Clients are often willing to pay for the privilege of having their account run Single Tenant, making it a potentially lucrative option for a SaaS.
What is a Cell?
A Cell is an independent instance of a SaaS’s software setup. This is different from having software running in multiple datacenters or even multiple continents. If the services talk to each other, they are in the same cell regardless of physical location.
Cells can differ with the number and power of servers and databases. Cells can even have entirely different caching options depending on need.
The 3 most common Cell setups are Production, Staging (or Test), and Local.
Cell architecture comes with a few distinct properties:
Cell structures allow a SaaS to grow internationally and offer clients low latency and localized data policies (think GDPR). Latency from the US to Europe, Asia, and South America is noticeable and degrades the client experience.
Clients exist in 1 cell at a time. They can migrate, but they can’t exist in multiple cells.
Generally speaking, Cells cannot be part of a disaster recovery plan. Switching clients between Cells usually involves copying the database, and that can’t be done if the client’s original Cell is down.
Cell Isolation as a Single Tenant Option
In part 3 I covered the difficulties in operating in a true Single Tenant model at scale. A Cell with a single client effectively recreates the Single Tenancy experience.
Few clients want this level of isolation, but those that need it are prepared to pay for the extra infrastructure costs of an additional Cell.
For SaaS without global services, a Cell model enables a mix of clients on logically separated Multi-Tenant infrastructure and clients with effectively Single Tenant infrastructure. This allows the company to pursue clients with Single Tenant needs, and the higher price point they offer.
The catch is that Single Tenant Cells can’t exist in an architecture with global services. If there is a single service that must have access to all client data, Single Tenant Cells are out.
In the first post on SaaS Tenancy Models, I introduced the two idealized models - Single and Multi-Tenant. Many SaaS companies start off as Single Tenant by default rather than by strategy, and migrate towards increasingly multi-tenant models under the influence of 4 main factors - complexity, security, scalability, and consistent performance.
After publishing, I realized that I left out an important fifth factor, synergy.
In the context of this series, synergy is the increased value to the client as a result of mixing the client’s data with other clients. A SaaS may even become a platform if the synergies become more valuable to the clients than the original service.
Another aspect of synergy is that the clients only gain the extra value so long as they remain customers of the SaaS. When clients churn, the SaaS usually retains the extra value, even after deleting the client’s data. This organically strengthens client lock in and increases the SaaS value over time. The existing data set becomes ever more valuable, making it increasingly difficult for clients to leave.
Some types of businesses, like retargeting ad buyers, create a lot of value for their clients by mixing client data. Ad buyers increase effectiveness of their ad purchases by building larger consumer profiles. This makes the ad purchases more effective for all clients.
On the other hand, a traditional CRM, or a no-code service like Zapier, would be very hard pressed to increase client value by mixing client data. Having the same physical person in multiple client instances of a CRM doesn’t open a lot of avenues; what could you offer - tracking which clients a contact responds to? No-code services may mix client data as part of bulk operations, but that doesn’t add value to the clients.
Sometimes there might be potential synergy, like in Healthcare and Education, but it would be unethical and illegal to mix the data.
Not All Factors Are Client Facing
Two of the factors, complexity and scalability, are generally invisible to clients. When complexity and scalability problems are noticed, the reaction is negative:
Why do new features take so long to develop?
Why are bugs so difficult to resolve?
Why does the client experience get worse as usage grows?
A SaaS never wants a client asking these questions.
Security, Consistent Performance and Synergy are discussion points with clients.
Many SaaS companies can address Security concerns and Consistent Performance through configuration isolation.
Synergy is a highly marketable service differentiator and generally not negotiable.
As much as possible I’m going to treat and draw things as 2-tier systems rather than N-tier. As long as the principles are similar, I’ll default to simplified 2-tier diagrams over N-tier or microservice diagrams.
Coming up, I’ll be breaking down single to multi-tenant transformations: why a SaaS would want the transformation, what the tradeoffs are, and what the potential pitfalls are.
Please subscribe to my mailing list to make sure you don’t miss out!