Pixel Tracking is a common Marketing SaaS activity used to track page loads. Today I am going to tie several earlier posts together and show how to evolve a frustrating Pixel Tracking architecture into one that can survive database outages.
Pixel Tracking events are synchronously written to the database. A job processor uses the database as a queue to find updates, and farms out processing tasks.
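In code, the synchronous design looks roughly like this. A minimal sketch using SQLite from the standard library; the table, column, and function names are illustrative, not from any real implementation:

```python
import sqlite3

# Illustrative schema: every pixel hit is one row, and the 'processed'
# flag is what makes the table double as a job queue.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE pixel_events (
    id INTEGER PRIMARY KEY,
    page TEXT,
    visitor TEXT,
    processed INTEGER DEFAULT 0)""")

def track_pixel(page, visitor):
    """Runs inside the page-load request: the user waits on this write."""
    db.execute("INSERT INTO pixel_events (page, visitor) VALUES (?, ?)",
               (page, visitor))
    db.commit()

def process_batch(limit=100):
    """The job processor polls the database-as-queue for unprocessed rows."""
    rows = db.execute(
        "SELECT id, page, visitor FROM pixel_events "
        "WHERE processed = 0 LIMIT ?", (limit,)).fetchall()
    for event_id, page, visitor in rows:
        # ...farm out the real processing task here...
        db.execute("UPDATE pixel_events SET processed = 1 WHERE id = ?",
                   (event_id,))
    db.commit()
    return len(rows)
```

Note that every event costs an insert on the request path plus a read and an update from the processor, which is where the extra database load comes from.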
Designed to Punish Users
This design is governed by database performance. As the load ramps up, users are going to notice lagging page loads. Worse, each recorded event must also be read back by the job processor and then marked as processed, roughly tripling the database load per event.
Designed to Scale
You can relieve the pressure on the user by making your Pixel Tracking asynchronous. Moving away from using your database as a queue is more complicated, but critical for scaling. Finally, using Topics makes it easy to expand the types of processing tasks your platform supports.
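The asynchronous shape can be sketched like this, with an in-memory queue standing in for a managed broker such as SNS/SQS. All names here are illustrative; in production the topics would live in the broker, not in your process:

```python
import queue
from collections import defaultdict

# Stand-in for a managed broker: one queue per topic, so supporting a
# new type of processing task just means adding another subscriber.
topics = defaultdict(queue.Queue)

def track_pixel(page, visitor):
    """Request handler: enqueue and return immediately.
    The user never waits on the database."""
    topics["pixel.events"].put({"page": page, "visitor": visitor})

def drain(topic_name, handler, max_events=None):
    """Worker loop: pull events off a topic and hand each to a handler.
    A slow or down database only delays this side, never the user."""
    q = topics[topic_name]
    handled = 0
    while not q.empty() and (max_events is None or handled < max_events):
        handler(q.get())
        handled += 1
    return handled
```

The design point is the boundary: `track_pixel` finishes as soon as the event is durably queued, and everything database-shaped happens behind `drain`.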
Users are now completely insulated from scale and processing issues.
Dead Database Design
There is no database in the final design because it is no longer relevant to the users’ interactions with your services. The performance is the same whether your database is at 0% or 100% load.
The performance is the same if your database falls over and you have to switch to a hot standby or even restore from a backup.
With a bit of effort your SaaS could have a database fall over in the run-up to Black Friday and recover without data loss or clients noticing. If you are using SNS/SQS on AWS, a standard queue can hold well over 100,000 in-flight events and retains messages for days by default. It may take a while to chew through the backlog, but the data won’t disappear.
When your Pixel Tracking is causing your users headaches, going asynchronous is your Best Alternative to a Total Rewrite.