Since our inception in 2015, Episode Six (E6) has been a cloud native company. When we were an early-stage payments technology startup, we had to spin up demo instances of our software on demand while keeping costs under tight control. This was a great fit for the public clouds of the time and made us see cloud computing as a key part of our solution right from the start.
When we expanded our business to delivering a PaaS option, it was natural for us to host in the cloud. Over the last few years, public cloud providers have recognized the diverse set of compliance and security requirements of their clients and really stepped up to assist. By embracing the need for compliance with PCI DSS, SOC 2, ISO 27001, and similar standards, public clouds have made it very easy for companies like E6 to see the cloud as the go-to option for hosting their live services.
Fast forward to today. E6 is running services for live clients out of AWS regions in Asia, Europe, and North America. Gone are the days of running a global business through a single data center. We have a philosophy that we should deliver for our clients from facilities which are geographically close to those clients and their customer bases – and we’ve built a technology platform to deliver it! Deploying in the AWS cloud enables this for E6, with hosting available in a diverse set of locations which provide us with a consistent set of services.
The result: E6’s Globally Distributed Processor.
How do we do this? It starts with a guiding set of principles, well executed.
Define everything as code. Our Infrastructure as Code (IaC) approach allows for repeatable builds wherever we need to deploy, and for controlled changes to environments. All global deployments – Test or Production, Live or DR – are ultimately built from the same IaC codebase which we manage in Git. Environments refresh regularly to prevent configuration drift.
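The "one codebase, many environments" idea can be sketched as a parameterized template: a shared base specification from which every concrete environment is derived. This is an illustrative Python sketch, not E6's actual tooling; the region names, settings, and `render_environment` helper are hypothetical.

```python
# Illustrative IaC-style sketch: one shared definition, many deployments.
# All names and values here are hypothetical, not E6's real configuration.

BASE_SPEC = {
    "kubernetes_version": "1.29",
    "db_engine": "postgres",
    "availability_zones": 3,
}

def render_environment(region: str, tier: str) -> dict:
    """Derive a concrete environment from the shared base spec."""
    spec = dict(BASE_SPEC)
    spec.update({
        "region": region,
        "tier": tier,  # e.g. "test" or "production"
        # Test environments run smaller to keep costs contained.
        "node_count": 2 if tier == "test" else 6,
    })
    return spec

# Every deployment, anywhere in the world, starts from the same base,
# so core settings can never drift apart between regions.
tokyo_prod = render_environment("ap-northeast-1", "production")
dublin_test = render_environment("eu-west-1", "test")
assert tokyo_prod["kubernetes_version"] == dublin_test["kubernetes_version"]
```

Because every environment is rendered from the same source, rebuilding an environment from scratch also serves as the drift-prevention mechanism the paragraph describes.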
Use the cloud as intended. The public cloud is not a ‘colo in the sky’, so a ‘lift and shift’ of a traditional data center is not the solution. We take full advantage of managed services such as DB as a Service (DBaaS) and Kubernetes control planes. With elastic scaling of compute resources, peak demand can be met while costs are contained.
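The elastic-scaling point can be made concrete with the proportional rule that autoscalers such as Kubernetes' Horizontal Pod Autoscaler apply: size the replica count so observed utilization moves toward a target, clamped to safe bounds. A minimal sketch, with hypothetical target and bounds:

```python
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_r: int = 2, max_r: int = 20) -> int:
    """Proportional scaling rule: grow or shrink the replica count so
    utilization moves toward the target, within fixed safety bounds.
    Target and bounds here are illustrative, not E6's settings."""
    raw = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, raw))

# At peak demand, capacity grows: 4 replicas at 90% CPU -> 6 replicas.
print(desired_replicas(4, 0.9))
# Off-peak, capacity shrinks toward the floor, containing cost.
print(desired_replicas(4, 0.15))
```

The clamp is what keeps the two goals in the paragraph compatible: the ceiling contains cost, while the floor preserves the redundancy needed for availability.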
No Single Points of Failure. We engineer to avoid single points of failure. In each cloud region, we run multiple Availability Zones (AZs) active-active: our services are deployed across two or three zones, providing the high availability that keeps applications running. Our primary Disaster Recovery (DR) response to an AZ failure is to continue running in the remaining AZs without interruption; failing over to a different region is our contingency scenario. Critical connections to payment schemes such as Mastercard and Visa terminate at multiple, physically separate data centers.
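The two-tier failure response described above reduces to a simple routing decision: stay in-region on any healthy AZ, and only fall back to the contingency region when every AZ is down. A minimal sketch, with hypothetical AZ and region names:

```python
def route(az_health: dict[str, bool], contingency_region: str) -> str:
    """Pick a target for the next request: any healthy AZ in-region,
    falling back to the contingency region only when all AZs are down.
    Names are illustrative; real routing would spread load, not pick
    the first healthy zone."""
    healthy = sorted(az for az, ok in az_health.items() if ok)
    if healthy:
        # Active-active: losing one AZ needs no failover action at all.
        return healthy[0]
    return contingency_region

# One AZ down: traffic continues in-region without interruption.
print(route({"az-a": True, "az-b": True, "az-c": False}, "eu-west-1"))
# Whole region down: contingency region takes over.
print(route({"az-a": False, "az-b": False, "az-c": False}, "eu-west-1"))
```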
Strategic deployment of assets. Delivering our services uses a broad range of technical components. Some – DBaaS, managed compute, load balancers – are available in every cloud region. We deploy these close to clients. Others – crucially, payment Hardware Security Modules (HSMs) – are not yet consistently available in all public clouds. We work with providers to host these components in strategic locations and use them from our regional deployments.
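In effect, each regional deployment resolves the shared components it cannot host locally to a designated strategic location. A minimal sketch of that lookup, with entirely hypothetical region and facility names:

```python
# Hypothetical mapping from serving regions to the strategic HSM
# facilities they use; names are illustrative only.
HSM_LOCATIONS = {
    "ap-southeast-1": "hsm-apac",
    "eu-west-1": "hsm-emea",
    "us-east-1": "hsm-amer",
}

def hsm_for(region: str, default: str = "hsm-amer") -> str:
    """Resolve which shared HSM facility a regional deployment calls.
    Regions without a nearby facility fall back to a designated default."""
    return HSM_LOCATIONS.get(region, default)

# Regional components stay close to clients; scarce components are shared.
print(hsm_for("eu-west-1"))
```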
For E6, the cloud is not the only show in town. We offer two distinct pathways for our clients: licensed software, and managed services. Although many clients opt for our managed services for the ease of onboarding and managed PCI DSS compliance, we regularly sign new clients who want the extra control of an on-premises deployment.
Sustaining this dual model drives us to avoid cloud-specific requirements in our core, meaning our Tritium™ software remains infrastructure agnostic. We leverage cloud technologies ourselves, but don't impose a particular infrastructure stack on our clients. That's good both for our clients and for E6 as we consider future cloud options.