Why Kubernetes is a game-changer for E-commerce

When I started working at Purple managing the E-commerce stack, I inherited a single AWS EC2 instance that represented our entire infrastructure. The company was doubling in size every few months, and that exponential growth in load, combined with the issues we were already experiencing with this infrastructure, made the single instance a serious business risk.

We immediately switched to multiple compute instances behind a load balancer, but this solution didn’t scale well, remained error-prone, and was hard to recover when something went wrong. We wanted an autoscaling solution that just worked.

While undergoing a large re-platforming and code modernization project, we also took on the task of modernizing our infrastructure and DevOps strategy, including adopting containers. The question then became “what infrastructure should we use?” We knew we wanted some sort of container orchestration system.

At the time, there was some debate as to whether we should go with ECS, Docker Swarm, Kubernetes, or something else. After weighing all of the options, Kubernetes became our first choice. It was a little bit more complicated than the other options but much more powerful, and Google Kubernetes Engine made it simple to get started.

After running Kubernetes in production, most of the benefits that we had expected (and some unexpected benefits) came to fruition. I want to share the experience we had for the benefit of other E-commerce brands that may be considering a similar move.

Easy scalability during launches and BFCM

It’s no secret that for many retailers, Black Friday and Cyber Monday are the biggest sales days of the year. But the magnitude can be enormous: many retailers, including us at Purple, saw as much as 10x the traffic, sales, and infrastructure load of a normal day.

Of course, other events like product launches can also bring a sharp, quick increase in server load. We even once had a sudden traffic spike driven by a couple arguing over who would keep their Purple Mattress on a divorce-court TV show.

With Kubernetes and GKE cluster autoscaling in place, we could handle these sharp increases in load without any human intervention.
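To sketch how this works: a HorizontalPodAutoscaler adds pods as load rises, and the GKE cluster autoscaler adds nodes whenever pods can no longer be scheduled. Something like the manifest below captures the idea — the `storefront` deployment name, replica bounds, and CPU threshold are illustrative, not our actual configuration.

```yaml
# Hypothetical HPA: scale the storefront deployment between 3 and 50 pods,
# targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When a traffic spike pushes CPU above the target, the HPA scales pods out; if the cluster runs out of room, GKE’s cluster autoscaler provisions more nodes, with no human in the loop.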

One member of our executive team who had been managing large retail E-commerce for decades was surprised by the lack of engineering time spent during Thanksgiving and BFCM. In his experience, the only way to ensure reliability during these high-traffic holidays was to have an operations team actively monitoring the site, ready to fix issues immediately.

After our switch to Kubernetes, scaling during these high-traffic times became trivial. With automated monitoring, we didn’t need to constantly monitor the site, giving more time for our engineering teams to enjoy the holidays.

Fewer deployment failures

The legacy system we inherited was an older, PHP-based monolithic application deployed manually. As you can imagine, deployments failed regularly, though we were usually quick to fix them.

Moving to containers immediately decreased our deployment failure rate while increasing our developer satisfaction and productivity. When deploying to production in Kubernetes with rolling updates, the failure rate went down significantly.
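The rolling-update behavior that drove this improvement can be sketched in a Deployment spec like the fragment below (the names, image, and probe path are illustrative): new pods come up one at a time, and the readiness probe keeps traffic away from a pod until it is actually healthy, so a bad release stalls instead of taking the site down.

```yaml
# Hypothetical Deployment fragment showing a conservative rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up at most one extra pod at a time
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: web
          image: registry.example.com/storefront:v2  # placeholder image
          readinessProbe:                            # gate traffic on health
            httpGet:
              path: /healthz
              port: 80
```

With `maxUnavailable: 0`, capacity never dips during a deploy, and a release whose pods never pass the readiness probe simply never receives production traffic.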

Dynamic pull request environments

Armed with infrastructure as code, Kubernetes, and some knowledge on the build system, it’s relatively simple to create on-demand environments for each pull request. This makes sharing new feature branches with non-technical stakeholders extremely easy. Additionally, it eliminates confusion around which branch is on staging, and results in a much smoother QA process.
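One simple way to structure this — and a sketch, not our exact setup — is to have CI template a namespace and an Ingress per pull request, so every branch gets an isolated environment with a shareable URL. The PR number, service names, and `preview.example.com` domain below are all placeholders.

```yaml
# Hypothetical per-PR manifests, templated by CI with the pull request number.
apiVersion: v1
kind: Namespace
metadata:
  name: pr-123                # one isolated namespace per pull request
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
  namespace: pr-123
spec:
  rules:
    - host: pr-123.preview.example.com   # shareable URL for stakeholders
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront
                port:
                  number: 80
```

When the pull request is merged or closed, CI deletes the namespace and everything in it is cleaned up in one step.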

This is a great example of the wide possibilities that using Kubernetes opens up. This would be difficult to do well (or at all) using legacy infrastructure and is a vast improvement over other methods.


For us, using Kubernetes was absolutely the best option and helped us achieve 99.9999%+ uptime, increase load speed, decrease failure rates, and increase developer velocity.

Now working at Codefresh, I’ve talked to several brands that have seen similar benefits as they’ve switched from legacy infrastructure to Kubernetes. If you’re looking for a tool to make the migration to Kubernetes as easy as possible, consider Codefresh. Codefresh has built-in steps for any Kubernetes deployment strategy, automatic authentication, Kubernetes dashboards, and more. Plus, it can easily handle any older, non-Kubernetes workflows, serverless workflows, or nearly anything else.

