5 secrets to fast and reliable cloud services
I often come across low-cost, low-grade competitors who offer cloud and free bundled DNS services out of a single server farm or, even worse, provide DNS services hosted on the same server or the same network, sometimes even at the end of a very remote xDSL line. Under these conditions, with the entire DNS system in a single location, long latency or any temporary network failure disrupts all services for a domain, including the potential loss of email.
A good Domain Name System (DNS) and a network of globally distributed application servers are what make cloud services fast and reliable.
- When looking for always-on internet solutions, the ideal way to protect against network failures and denial-of-service (DoS) attacks is redundancy. This usually means using both redundant DNS and redundant servers hosted at different locations, possibly announced by different Autonomous Systems (AS). Downtime at major cloud and CDN providers often occurs because a single point of failure is inherent in their architecture and their systems are not designed to survive it.
- Any cloud service provider should offer their clients multiple (more than two) redundant DNS servers on different network segments and at various physical locations, with inherent redundancy covering every potential single point of failure. Anycast DNS is another, more expensive (and sometimes less effective) solution to this problem.
- Do you remember the notorious technological problems of HealthCare.gov (the Obama administration's US Health Insurance Marketplace)? When too many users access an application at the same time (i.e. any on-line application or web server), either the server itself or the network may become overloaded. The result is increased latency (slow or no response) and potential loss of data. By carefully engineering on-line applications so that the number of servers can be increased dynamically, the application load and the users can be distributed among different servers at different geographical locations, and more transactions can be handled at the same time. Conversely, when a single data centre or a single server fails, or when a router or the network is overloaded, all on-line services become slow or unreachable.
- By increasing the number of geographical locations where multiple origin servers are hosted, it is also possible to load-balance traffic and improve the reliability of the whole system. If one or more servers or network segments fail, a globally distributed load-balancing system continues to provide service as long as at least one server is online. For this reason, the most common load-balancing or passive CDN solutions that rely on a single origin server are not adequate.
- Configuration and de-configuration of servers in case of network or server problems should be fully automatic, requiring little or no human intervention after the initial set-up. A global load-balancing mechanism like WorldDirector provides a fully redundant and distributed DNS system, including content acceleration.
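The DNS redundancy guideline above can be sketched as a simple check: more than two nameservers, spread over more than one network (AS) and more than one physical location. The nameserver names, AS numbers, and locations below are hypothetical; in practice they would come from NS lookups and an IP-to-AS database.

```python
def check_dns_redundancy(nameservers, min_servers=3):
    """Verify that a domain's nameservers are spread across more than
    one network (AS) and more than one physical location."""
    asns = {ns["asn"] for ns in nameservers}
    sites = {ns["location"] for ns in nameservers}
    return (len(nameservers) >= min_servers
            and len(asns) >= 2
            and len(sites) >= 2)

# Hypothetical set-up that meets the guideline: three nameservers,
# three networks, three cities.
good = [
    {"host": "ns1.example.net", "asn": 64500, "location": "Milan"},
    {"host": "ns2.example.net", "asn": 64501, "location": "Frankfurt"},
    {"host": "ns3.example.net", "asn": 64502, "location": "New York"},
]

# The situation criticised in the article: two nameservers on the same
# network at the same site, so one failure takes the whole domain down.
bad = [
    {"host": "ns1.example.org", "asn": 64500, "location": "Milan"},
    {"host": "ns2.example.org", "asn": 64500, "location": "Milan"},
]

print(check_dns_redundancy(good))  # True
print(check_dns_redundancy(bad))   # False
```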
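The failover behaviour described above — service continues as long as at least one origin server is online — can be sketched as follows. The server names are invented, and a real load balancer would of course route by network proximity rather than by a simple hash.

```python
import zlib

# Hypothetical origin sites in three geographical locations.
SERVERS = ["eu-milan", "us-newyork", "ap-tokyo"]

def pick_server(client_id, healthy):
    """Deterministically map a client to one of the currently
    healthy servers; fail only when no server is left online."""
    if not healthy:
        raise RuntimeError("total outage: no origin server online")
    return healthy[zlib.crc32(client_id.encode()) % len(healthy)]

# All servers up: clients are spread across the three locations.
print(pick_server("client-42", SERVERS))

# One site fails: the same client is transparently served elsewhere,
# and the outage is visible only if every server goes down.
survivors = [s for s in SERVERS if s != "eu-milan"]
print(pick_server("client-42", survivors))
```

A single-origin set-up collapses this list to one entry, which is exactly why passive CDNs with one origin server cannot survive an origin failure.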
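Automatic configuration and de-configuration can be reduced to a periodic health-check pass that rebuilds the active pool without human intervention. This is only a minimal sketch: the probe is simulated with a lookup table, whereas a real system would probe each server over the network.

```python
def refresh_pool(servers, probe):
    """Return the servers that currently answer their health check."""
    return [s for s in servers if probe(s)]

# Simulated health state of three hypothetical sites.
status = {"eu-milan": True, "us-newyork": True, "ap-tokyo": False}

active = refresh_pool(list(status), lambda s: status[s])
print(active)  # ap-tokyo is de-configured automatically

status["ap-tokyo"] = True  # the server recovers...
active = refresh_pool(list(status), lambda s: status[s])
print(active)  # ...and is re-configured on the next pass
```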
See, for example, applications like Italyguides by Compart Multimedia, which run on a distributed cloud system: Italyguides appears “fast” anywhere in the world and responds quickly to end users despite the large amount of data involved.