Finding the right cloud provider: What criteria should you look for?
Evaluating a cloud provider can be a real challenge. In the following article, we have compiled a few questions that companies should ask themselves to find out which provider will best help them achieve their goals.
In March 2021, a serious incident occurred in a data center near Strasbourg: shortly after a repair, an uninterruptible power supply (UPS) unit overheated to such an extent that a fire with large amounts of smoke broke out. One of the four data centers at the site, which is operated by a large European cloud provider, was destroyed by the fire; the others were affected by smoke and soot. As a result, cloud services were unavailable for several days, and data loss could not be ruled out. The outage affected numerous customers across Europe, some of them well known.
Check cloud providers before committing "forever"
This multi-day total failure of a cloud provider showed how closely companies' IT is tied to the cloud and how important it is to choose the right provider. The effect of so-called data gravity does not allow for quick and easy changes: the more data and services a company consumes from a particular provider, the heavier the data load becomes and the slower and more cumbersome a migration to alternative offerings. This makes it all the more important for companies to clarify a few key points before committing to a cloud provider.
1. Are all workloads supported?
The supported workloads reflect the service quality and innovative strength of a provider. Particularly for digitization projects, it is crucial that the provider already delivers the important cornerstone services and has modern modules such as Big Data or IoT (Internet of Things) on its roadmap. After all, the cloud provider should never stand in the way of the digital transformation. The following important workloads should be supported:
- Big Data: These service modules help to quickly evaluate large amounts of data and to gain new insights from the data with the help of predictive models.
- Open Source: In addition to commercial platforms, the provider should ideally be able to integrate a range of open-source platforms so that the company has a free choice. In particular, platforms such as MongoDB, OpenStack and container-based environments such as Docker or Kubernetes should be covered in order to be future-proof.
- Hyperconverged infrastructures: The provider should support hyperconverged infrastructures and allow the company to connect to them so that critical data and applications can run with the highest possible availability and resilience.
- Hybrid traditional applications: To enable companies to continue operating their legacy applications, the provider should support hybrid cloud constellations.
2. How smart is the data recovery?
The case described at the beginning of this article underscores the fact that companies are ultimately responsible for their own data and security. If data is not backed up, the provider in no way guarantees that it can fully restore it after a failure. Companies should therefore always back up their cloud data themselves: the keyword here is "shared responsibility".
How can entire data sets, or only important parts of them, be recovered in an emergency? The provider should support granular recovery processes so that a company can, for example, restore a single virtual machine or individual files of a virtual application without having to download and rebuild the entire data set. This saves a lot of time and greatly reduces effort. It must also be ensured that critical applications and data can be reconstructed in a prioritized manner so that important services are quickly available again after a total failure. To ensure that everything runs as smoothly as possible in this stressful moment, the cloud provider should support the following functions in its business continuity or recovery plans:
- Automated and orchestrated recovery: This means that entire, complex multi-tier applications can be restored fully automatically at the click of a mouse.
- One-for-one orchestration: Here, an IT manager confirms each step with minimal commands so that he or she remains in full control of the process.
- Testing the recovery plan: It is important to test this disaster recovery process, as well as possible migration scenarios, in a safe manner without impacting production operations.
- Multi-vendor concept: Recovery mechanisms may need to recover applications of different types on different platforms. Therefore, it is essential to choose multi-vendor or independent disaster recovery mechanisms that can protect the data end-to-end.
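To make the idea of automated, prioritized recovery concrete, here is a minimal Python sketch of a dependency-ordered recovery plan. The service names and the plan structure are hypothetical examples, not tied to any real provider API or disaster recovery product:

```python
# Sketch: restore a multi-tier application in dependency order,
# so that each service comes back only after what it needs is up.
RECOVERY_PLAN = [
    # (service, services it depends on) -- hypothetical example tiers
    ("database", []),
    ("app-server", ["database"]),
    ("web-frontend", ["app-server"]),
]

def recovery_order(plan):
    """Return services in an order where every dependency is restored first."""
    restored, order = set(), []
    pending = list(plan)
    while pending:
        progress = False
        for service, deps in list(pending):
            if all(d in restored for d in deps):
                order.append(service)
                restored.add(service)
                pending.remove((service, deps))
                progress = True
        if not progress:
            raise ValueError("circular dependency in recovery plan")
    return order

print(recovery_order(RECOVERY_PLAN))
# ['database', 'app-server', 'web-frontend']
```

Real orchestration tools add timeouts, health checks and rollback on top of this ordering, but the core principle is the same: critical foundations first, dependent tiers afterwards.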
3. How can storage space be saved?
Many companies already use deduplication in their own backup environments to keep the size of backups as small as possible and save storage space. It would be ideal if the cloud provider also supported this form of deduplication. This helps conserve storage and bandwidth by reducing the total amount of data that needs to be stored. One option is for a backup and recovery solution to bring in this intelligence independently of the cloud provider, enabling a multi-cloud strategy.
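As a rough illustration of how deduplication conserves storage, the following Python sketch stores each data chunk only once, keyed by its content hash. Fixed-size chunking and the function names are assumptions for this example; real backup products use more sophisticated, variable-size chunking:

```python
import hashlib

def dedup_store(chunks, store=None):
    """Store only unique chunks, keyed by their SHA-256 content hash."""
    store = {} if store is None else store
    refs = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # identical content is stored once
        refs.append(digest)              # the backup keeps only references
    return refs, store

# Four chunks arrive, but only two distinct contents need to be stored.
backup = [b"header", b"payload", b"payload", b"header"]
refs, store = dedup_store(backup)
print(len(refs), len(store))  # 4 2
```

The same effect applies to bandwidth: only chunks whose hash is not yet in the store need to travel to the cloud.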
It is also important that the provider offers storage with different performance tiers. High-performance, critical applications should run on faster, more powerful storage, while less important data is stored on slower, less expensive storage services. The timeliness of the backup also plays a role in this evaluation.
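The tiering decision can be pictured as a simple policy. The thresholds and storage-class names below are purely hypothetical and would need to be mapped to a provider's actual offerings:

```python
def storage_tier(criticality, accesses_per_day):
    """Map workload characteristics to a (hypothetical) storage class."""
    if criticality == "high" or accesses_per_day > 100:
        return "premium-ssd"   # fast and expensive: critical, hot data
    if accesses_per_day > 1:
        return "standard"      # balanced price and performance
    return "archive"           # slow and cheap: cold data, old backups

print(storage_tier("high", 5))    # premium-ssd
print(storage_tier("low", 0.1))   # archive
```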
4. How do you keep track of the IT infrastructure?
Anyone who migrates their data to the cloud will very likely maintain a hybrid infrastructure architecture for a long time. Data is distributed across these different platforms on a day-to-day basis, and the platforms have certain interdependencies. It is important to understand and keep track of these dependencies: if one component fails, it may be necessary to take countermeasures immediately. The entire infrastructure, the data inventory and its health status should therefore be monitored continuously.
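Such dependency tracking can be sketched as a small graph traversal: given which service depends on which, a monitoring tool can tell which services are impacted when one component fails. The service map below is a made-up example, not a real monitoring API:

```python
from collections import deque

# Hypothetical service dependency map: service -> services it depends on
DEPENDS_ON = {
    "web":  ["auth", "api"],
    "api":  ["db"],
    "auth": ["db"],
    "db":   [],
}

def impacted_by(failed, depends_on):
    """Return every service that directly or transitively depends on `failed`."""
    # Invert the graph: service -> services that depend on it
    dependents = {s: [] for s in depends_on}
    for svc, deps in depends_on.items():
        for d in deps:
            dependents[d].append(svc)
    seen, queue = set(), deque([failed])
    while queue:
        for child in dependents[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(impacted_by("db", DEPENDS_ON)))  # ['api', 'auth', 'web']
```

A database outage thus ripples through to every dependent tier, which is exactly why monitoring must cover dependencies and not just individual components.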
In this way, critical situations such as the total failure of a cloud provider can be bridged: data and applications are backed up and, ideally, all critical services are transferred to another provider's cloud via an automated disaster recovery process. The disaster in one data center then remains limited to that data center.
Cloud providers: a destination for cybercriminals, too
Returning to the fire at the aforementioned data center: according to an IT security service provider, the fire also took out 36 percent of 140 command-and-control servers that cybercriminals were using to host their malware. That cybercriminal groups use server services from reputable cloud providers seems to be more common than generally assumed. And knowing how professionally today's cybercriminals operate, it can (unfortunately) be assumed that they, too, had the necessary backups.
Editor's tip: If you would like more in-depth information on the topic of the cloud, you can find it here: https://www.digitaleschweiz.ch/markt/cloud-finder-schweiz/