Last week, human error at Amazon Web Services (AWS) seemed to bring half the internet grinding to a halt. For people who rely on the many platforms, tools, and digital conveniences built on AWS, it made for a pretty stressful day.

One of the best reads on how to avoid this kind of pain in the future comes from Data Center Knowledge, which suggests:

> The ways to implement this include using multiple regions by the same cloud provider, using multiple cloud providers (Microsoft Azure or Google Cloud Platform in addition to AWS, for example), or using a mix of cloud services and your own data centers, either on-premise or leased from a colocation provider. You can choose to spend the time and money to set this architecture up on your own or you can outsource the task to one of the many service providers that help companies do exactly that.
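To make the multi-region idea concrete, here is a minimal sketch of client-side failover across redundant deployments. The endpoint hostnames are purely illustrative (not real services), and the `fetch` hook is injectable so the failover logic can be exercised without live network calls:

```python
import urllib.request

# Hypothetical endpoints for the same service deployed in two AWS
# regions plus a second provider. These hostnames are illustrative.
ENDPOINTS = [
    "https://api.us-east-1.example.com/health",
    "https://api.us-west-2.example.com/health",
    "https://api.other-cloud.example.net/health",
]


def fetch_with_failover(endpoints, fetch=None, timeout=2):
    """Try each endpoint in order; return the first successful response.

    `fetch` defaults to a plain HTTP GET but can be replaced, e.g.
    for testing or to add authentication.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()

    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except Exception as exc:  # network error, timeout, 5xx, etc.
            last_error = exc      # remember it, try the next region
    raise RuntimeError("all endpoints failed") from last_error


# Simulate the primary region being down: the client quietly
# fails over to the next endpoint in the list.
def fake_fetch(url):
    if "us-east-1" in url:
        raise ConnectionError("region unavailable")
    return "ok from " + url


result = fetch_with_failover(ENDPOINTS, fetch=fake_fetch)
print(result)
```

Real deployments usually push this decision into DNS (health-checked failover records) or a global load balancer rather than the client, but the principle is the same: no single region is a single point of failure.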

For us at New Continuum, there are some key takeaways to keep in mind:

  1. Cloud services are convenient, but they aren’t perfect
  2. More redundancy and resiliency brings significantly higher complexity and cost
  3. No single solution fits everyone

A solid data center can be a crucial partner in these practices. A reliable data center doesn’t lock you into any one provider’s systems, and it allows you to design and run your own cloud (a private cloud). At New Continuum, we take many of the low-level redundancy problems off your plate (power, cooling, Internet connectivity) so you can concentrate on your key infrastructure. Our connectivity through United IX, which supports Google, Cloudflare, and other networks, anchors our customers’ high-profile, high-availability network applications while leveraging the public cloud providers for customer presentation or burst capacity.

The WHIR has a good overview of the tactics that can be used to prevent single points of failure in the cloud, including multiple data centers and effective architecture. They are right on target when discussing the cost of building high-level redundancy into high-end services.