“Operating system virtualization” refers to the use of software to permit system hardware to run multiple instances of different operating systems concurrently, so that applications requiring different operating systems can run on one computer system. The operating systems do not interfere with each other or with each other's applications. It is distinct from “operating system-level virtualization,” which is a type of server virtualization.
Just sixty years ago, scientists powered up the ENIAC, generally regarded as the first electronic computer. ENIAC's clock ran at five kilohertz, and the system's 17,468 vacuum tubes consumed over 150 kilowatts of power. ENIAC's software environment was primitive by today's standards. Programs consisted of sequences of register settings, entered via dials on control panels, and small modifications to the internal circuits, implemented like the connections in operator-assisted telephone switchboards.
Operating System Virtualization: AMD-V and Intel-VT

Discussion
The industry has come a long way in sixty years. First, the transistor and, later, the integrated circuit enabled the creation of inexpensive microprocessors containing hundreds of millions of transistors, running at multigigahertz frequencies, and consuming less than 100 watts. Advances in software technology enabled the productive deployment of these powerful systems.
Technological evolution both drives, and is driven by, ever-increasing levels of abstraction in hardware and software architectures. High-level programming languages like Fortran, COBOL, BASIC, C, and Java allowed programmers to implement software algorithms in a manner divorced from underlying machine architectures. Operating systems provided abstractions that freed programs from the complex and varied details of managing memory and I/O devices. Contemporary application software, swaddled within layers of middleware and dynamically linked libraries, must work overtime to determine the physical characteristics of the hardware on which it runs.
Although application packages and middleware have become blissfully unaware of the vagaries of specific hardware implementations, the operating systems that provide this isolation must themselves be totally cognizant of the hardware on which they reside. Details like MAC and IP addresses, SAN LUN assignments, physical memory configurations, processor counts, and system serial numbers become enmeshed within the OS state at the time of system installation. This stateful information locks the OS to the specific hardware on which it was installed and complicates hardware fault recovery, system upgrades, and application consolidation.
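The kind of machine-specific state described above is easy to observe from a running program. As a minimal sketch (the function name and the choice of identifiers are illustrative, not any standard inventory), Python's standard library can surface two of the identifiers mentioned here, the hostname and the MAC address, that an installed OS binds itself to:

```python
import socket
import uuid

def hardware_identity():
    """Collect a few identifiers that tie installed software state to one machine."""
    mac = uuid.getnode()  # primary interface's 48-bit MAC address, as an integer
    return {
        "hostname": socket.gethostname(),
        # Format the integer as the familiar aa:bb:cc:dd:ee:ff notation,
        # extracting one byte at a time from most to least significant.
        "mac": ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)),
    }

print(hardware_identity())
```

Clone such a system image onto different hardware and every piece of state derived from these values (DHCP leases, license keys, cluster membership) is potentially wrong, which is exactly the migration problem described above.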
Of course, the same hardware and software architects who created the abstract environments in which today's software and middleware packages reside were not about to let a few annoying details like stateful information stand in their way. They realized that if they could abstract the hardware as seen by the operating system, then they could finesse software's view of the physical configuration on which it was installed. They called their approach “virtualization,” and it turns out to be harder than you might imagine.
Operating system software likes to think it owns the hardware on which it runs, and does not like to be fooled. It is harder still, given that the x86 architecture came into existence long before notions of virtualization entered the industry, and does not lend ...
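The hardware assists named in this section's title, AMD-V and Intel-VT, advertise themselves to software through CPUID feature flags. As a minimal sketch, assuming a Linux system where the kernel exposes those flags in /proc/cpuinfo (the function name and fallback strings are illustrative):

```python
from pathlib import Path

def virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    """Report which x86 hardware-virtualization extension, if any, the kernel
    saw in CPUID: the 'vmx' flag marks Intel VT-x, 'svm' marks AMD-V."""
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        return "unknown (no cpuinfo available)"
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none detected"

print(virtualization_support())
```

On processors predating these extensions the flags are simply absent, which is the situation the text is describing: the original x86 architecture offered no such support, forcing virtualization to be done entirely in software.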