Rethinking hyper-converged infrastructure
Change has been ever-present in IT, but it has gained new speed in recent years. More striking than the rate of change, though, is its magnitude: the transformation we’ve seen in IT in the last few years would register a nine on the Richter scale of disruption.
The key driver of this disruption has been the explosion of data brought on by the rise of mobile, social and the Internet of Things. Infrastructure inefficiencies stayed well hidden throughout the 1990s and early 2000s, before data began doubling every two years, but they have since surfaced as a hurdle to overcome. As infrastructure costs climbed, it became clear to many organizations that rethinking IT was a priority: the data center needed to become more efficient.
Today’s data center has changed significantly. Compute virtualization is found everywhere, driving utilization up and costs down, and storage has grown far more capable. IBM has focused on bringing virtualization to storage through the IBM Storwize family, which can now virtualize over 400 arrays. In addition, flash has thoroughly disrupted the hard drive market for primary workloads: no more 15,000 RPM drives, short stroking or overprovisioning capacity just to meet input/output demands. In short, we have become efficient, at least in part.
The data center remains a highly complex environment composed of many different systems, and it takes a significant amount of time to deploy, assemble, integrate and configure all of these components. What would happen if a vendor put them together for you? Easier deployment, faster implementation and reduced risk: all strong reasons to look for a new approach.
Enter the hyper-converged infrastructure, fully assembled, validated and tested. VersaStack by Cisco and IBM is a good example: it provides everything needed to deploy an infrastructure quickly, like a data center in a box. Hyper-converged systems are easier and faster to implement, and they reduce risk. But do they go far enough? How can we become even more efficient?
The next logical step is to abstract the physical location: apply the same logic of commingled resources to a remote or cloud-based infrastructure, and don’t let physical location bind your thinking. With that framework in mind, the ideal IT solution looks something like this (a rough code sketch follows the list):
- Virtualized compute to allow for high server utilization and workload mobility
- Virtualized storage to allow for integration of the existing environment and increased efficiency
- Integration into a converged infrastructure that eases risk and the deployment burden while reducing time to production
- Seamless and open access to cloud-based resources from multiple cloud providers
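To make the location abstraction concrete, here is a minimal Python sketch of the idea: compute capacity on premises and in the cloud sits behind one interface, and the caller never names a physical location. Every name here (ResourcePool, OnPremPool, schedule and so on) is a hypothetical illustration, not an IBM, Cisco or VersaStack API.

```python
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    """One interface to compute capacity, wherever it physically lives."""

    @abstractmethod
    def available_vcpus(self) -> int: ...

    @abstractmethod
    def place(self, workload: str, vcpus: int) -> str: ...

class OnPremPool(ResourcePool):
    def __init__(self, vcpus: int):
        self._free = vcpus

    def available_vcpus(self) -> int:
        return self._free

    def place(self, workload: str, vcpus: int) -> str:
        self._free -= vcpus
        return f"{workload} -> on-prem ({vcpus} vCPUs)"

class CloudPool(ResourcePool):
    def available_vcpus(self) -> int:
        return 10_000  # treated as effectively elastic for this sketch

    def place(self, workload: str, vcpus: int) -> str:
        return f"{workload} -> cloud ({vcpus} vCPUs)"

def schedule(workload: str, vcpus: int, pools: list) -> str:
    # The caller never names a location; the first pool with enough
    # free capacity wins.
    for pool in pools:
        if pool.available_vcpus() >= vcpus:
            return pool.place(workload, vcpus)
    raise RuntimeError("no capacity anywhere")

# On-prem has room, so the job lands there; fill it up and the same
# call transparently lands in the cloud instead.
print(schedule("analytics-job", 16, [OnPremPool(vcpus=64), CloudPool()]))
```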
At first glance this seems like a wish list, until you consider IBM. A leader in software-defined storage, IBM has partnered with Cisco to deliver the ultimate converged infrastructure. But don’t stop there: imagine all the resources beyond the data center waiting in the cloud. Have you considered this possibility in your current IT strategy?
A day is coming, not far in the future, when leveraging cloud resources will be necessary just to remain competitive. Look for open standards that provide flexibility and the ability to land or burst a workload to the cloud with agility and without vendor lock-in. IBM is envisioning this future and building the solutions to achieve it.
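As one illustration of what bursting without lock-in could look like, here is a small, hypothetical policy sketch: workloads stay on premises until committed capacity crosses a threshold, then new work spills to whichever provider speaks the same open interface. The names, capacity figure and threshold are assumptions for illustration only.

```python
# Hypothetical bursting policy, for illustration only.
ONPREM_CAPACITY_VCPUS = 128   # assumed on-prem footprint
BURST_THRESHOLD = 0.80        # burst once 80% of it would be committed

def choose_target(onprem_used: int, vcpus_needed: int, providers: list) -> str:
    # Project utilization if this workload were placed on premises.
    projected = (onprem_used + vcpus_needed) / ONPREM_CAPACITY_VCPUS
    if projected <= BURST_THRESHOLD:
        return "on-prem"
    # Any provider speaking the same open interface will do; here we
    # simply take the first one offered, so no vendor is baked in.
    return providers[0]

print(choose_target(40, 32, ["provider-a", "provider-b"]))  # stays on-prem
print(choose_target(90, 32, ["provider-a", "provider-b"]))  # bursts to provider-a
```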
Think big. Rethink hyper-converged. Think IBM.
Please share your thoughts, and head here for more information on IBM Infrastructure. Then go here to think beyond hyper-converged with cloud-based assets.