In this final installment of our Virtualization series, we look at the future of virtualization. Let us play the visionary today and describe where the technology is headed and what impact it will have on the software and hardware of the future. The future of virtualization is upon us. Will the world be ready?
Published in this TG Daily Special: Virtualization Explored
There is very little doubt that tomorrow's corporations will construct and deploy server farms built entirely around the idea of 100% virtualized machines. Non-virtualized utilization will probably even be frowned upon for all but the most specialized needs. The hard connection between the OS and its underlying equipment will be severed. From that point forward, we'll be dealing with a hardware support base designed for one purpose, and one purpose alone: total abstraction of the machine. I find it ironic in many ways that the machine itself will be the vehicle which allows the machine to be not just the machine, but one presenting itself outwardly as a multi-purpose device. Such hardware will be impressive, adaptable, flexible and highly configurable.
There is a full-on movement away from that which we have enjoyed in the past. We're moving away from the physical and into the virtual. We're not looking just to build a better machine, but rather to take a better machine and make the software it employs do so much more that the machine acts as if it's more than the machine. Virtualization is the single biggest concept that will drive the future of computing in all areas. Cell phones, PDAs, desktop machines, servers, new appliances that haven't even been conceived of yet: all of them will be driven almost entirely by virtualization, and certainly virtualization will be at the core of their design. It will be compute needs and virtualization side by side which yield future devices. Everything else will be built around that realization.
The realities and possibilities given to us by virtualization hardware are becoming ubiquitous. Very few forward-thinking software designs today omit support for virtualization, and those that do are aimed at highly specialized tasks where virtualization is not needed.
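That hardware support is already easy to verify: Intel and AMD expose their virtualization extensions (VT-x and AMD-V) to the operating system as the `vmx` and `svm` CPU feature flags. A minimal sketch in Python, assuming Linux's `/proc/cpuinfo` format (the helper name is our own, for illustration):

```python
def virtualization_extensions(cpuinfo_text):
    """Return the hardware virtualization flags present in
    /proc/cpuinfo-style text: 'vmx' = Intel VT-x, 'svm' = AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # A flags line looks like: "flags : fpu vmx sse2 ht ..."
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

# On a real Linux host you would feed it the actual file:
#   with open("/proc/cpuinfo") as f:
#       print(virtualization_extensions(f.read()))

sample = "processor : 0\nflags : fpu vmx sse2 ht\n"
print(virtualization_extensions(sample))  # {'vmx'} -> VT-x capable
```

An empty result means the hypervisor must fall back to slower, software-only techniques such as binary translation.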
Companies and projects like VMware, Xen, Microsoft and Parallels are all feeding into this channel by providing the flexible software frameworks necessary to make new software shine. And new users everywhere are finding more ways to benefit from virtualization every day.
As the hardware becomes faster, the utility becomes greater, and the tools that let us wield our soft machines so easily are perfected, it will be hard for any software to remain immune to virtualization's draw.
If it's a compute application, there will be ways to increase performance through virtualization. If it's a resource intensive application (like graphics), there will be virtualization protocols and hardware support allowing the high-end hardware to work cooperatively with multiple virtualized OSes which are all in turn working cooperatively with one another to share that resource. And if your needs fall to interprocess cooperation where multiple OSes work together in harmony to do some work, then there will be tools which allow that as well.
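As a toy illustration of that kind of mediated sharing, here is a sketch of several guest OSes taking turns on a single high-end device. The guest names and the simple round-robin policy are our own illustration, not any real hypervisor's scheduler, which would also weigh priorities, I/O pressure and fairness:

```python
from collections import deque

def share_resource(guests, slices):
    """Hand out fixed time slices of a single device to guest VMs in
    round-robin order; returns the schedule as (slice, guest) pairs."""
    queue = deque(guests)
    schedule = []
    for t in range(slices):
        guest = queue.popleft()
        schedule.append((t, guest))
        queue.append(guest)  # back of the line until its next turn
    return schedule

print(share_resource(["linux-vm", "windows-vm", "bsd-vm"], 5))
# [(0, 'linux-vm'), (1, 'windows-vm'), (2, 'bsd-vm'),
#  (3, 'linux-vm'), (4, 'windows-vm')]
```

The point of the sketch is the shape of the problem: once the OSes no longer own the hardware, some arbiter must apportion it, and that arbiter is software.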
Many of the components the software community needs to make virtualization shine are coming. As a result, it is unlikely that the virtualized machines of tomorrow will look like the ones of today. Over time, we'll see an evolution away from the specific and into the generic. We'll see the dictums of hardware rigidity replaced by software protocols on shared platforms. Devices which today gain their high speed through direct, proprietary couplings in hardware will be supplanted by devices which gain extreme flexibility and far greater utility via software protocols operating across high-speed, open communication technologies.
In such future designs, the usefulness of the machine won't lie in its high speed alone; it will lie in what you can do with the machine. And the flexibility provided by software-based communication protocols (which do not operate at the highest possible hardware speeds, but close to them, while providing far greater inter-device cooperation) will be the key to the success of tomorrow's hardware.
Because virtualization has already taken off so powerfully, and because the immediate next-gen hardware will bring faster support resources to this market, the goals of future software development communities will change. And as those software goals change, the goals of future hardware-based virtualization technologies must change with them. We believe these two industries will begin feeding off each other, and that the key proponents driving both the software industry and the virtualization industry forward will be the same. In that sense, we could see a shared, common, community vision across disciplines.
Virtualization may also bring protocols and extensions defined to address cross-OS platform needs. Those could then mandate new hardware support to properly lock down this community sharing of memory and resources. If that technology is adopted, the software will use it in new ways requiring still more support. And in such times, the idea of any OS thinking of itself as an isolated thing, operating entirely alone in a machine, is likely to disappear entirely.
The new CPUs that support this overall infrastructure will bring to the table not just the traditional ideas of executing binary code, but the expanded ideas of providing the supportive framework needed to give software developers the tools they need to make the machine more than just a machine. The machine will help make itself more than just a machine.
There is so much that could be written about this subject; this article has barely scratched the surface. The vision I have gathered so far, however, is clear: virtualization is only in its infancy. Where it's headed is not entirely known, but the fact that it will always be with us is already a given.
When compute hardware is used for any job, the goal is always to do one thing: more. And with virtualization we're finding out that “doing more” doesn't always mean faster computing. While raw speed is still a key component that will continue to permeate all aspects of semiconductor-based computing, the reality is that the “doing more” concept is employed much more effectively in software than in hardware. The usefulness of the tool proves consistently that “doing more” means better use, not just greater speed.
The hardware resources of tomorrow will continue to grow. Memory, storage, communication, accessibility: all of it will feed into virtualized machines which may, for the first time in history, give us the ability to integrate all components of our lives via a single tool based on the fundamental principle of doing more. And, we believe, it will be the first tool that will actually live up to that phrase.
What are your thoughts? Let us know where you think virtualization is headed by posting a comment below.