Desktop Virtualization - What It Is, and What It Isn't

Now that we have explained the two main forms of virtualization that offer complete systems, it is worth taking a look at desktop virtualization. We have noticed a lot of confusion surrounding this term, and want to clarify the technology for those who are affected by it.

A common misconception amongst people unfamiliar with desktop virtualization is that it is simply another set of changes to a standard system, somehow allowing users to work on a "virtual desktop" on their otherwise perfectly normal machine. Confusion tends to arise over what exactly needs virtualizing about something as basic as a desktop, and what purpose that would serve, so let's get this misunderstanding out of the way from the start: desktop virtualization is not just a matter of isolating your desktop from the rest of your computer system. In fact, the term "desktop virtualization" is a bit confusing altogether when put next to hypervisors and containers.

Instead, we can think of it as a way to "consolidate desktops" onto a centrally managed system, much in the spirit of terminal services, in an attempt to reduce workstation sprawl on a company network. Desktop virtualization, commonly referred to as VDI (Virtual Desktop Infrastructure), is in fact a combination of technologies that makes it possible for people to log in to their private virtual machines (be they hypervisor- or container-based) from some other place on the network. This keeps company data from spreading outside of the IT administrator's sphere of influence, and makes troubleshooting a specific problem a lot more convenient than having to run around half the building to find the culprit.
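
To make this a little more concrete, here is a minimal, purely hypothetical sketch (in Python, not taken from any vendor's product) of the role a VDI "connection broker" plays: each user is assigned a personal virtual machine somewhere in the datacenter, and all the broker really does is hand back the address of that machine once it knows who is asking. The names used (USER_VM_POOL, lookup_desktop) and the addresses are illustrative only.

    # Hypothetical sketch of a VDI connection broker: every user owns a
    # dedicated virtual machine, and the broker maps the user to the network
    # address of that VM. All names and addresses here are made up.

    USER_VM_POOL = {
        # username -> address of that user's personal virtual desktop
        "alice": "10.0.10.21",
        "bob":   "10.0.10.22",
    }

    def lookup_desktop(username):
        """Return the address of the virtual machine assigned to this user."""
        try:
            return USER_VM_POOL[username]
        except KeyError:
            raise LookupError("no virtual desktop assigned to %r" % username)

    if __name__ == "__main__":
        print(lookup_desktop("alice"))  # -> 10.0.10.21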


Generally, such a system employs a protocol much like Microsoft's Remote Desktop, allowing users to work freely on their virtual machines without ever having to interact with the actual hardware directly. Security and authentication systems may be in place, ensuring the right people get assigned to the right virtual machines and taking care of possible break-ins. The actual system used to log in remotely might be anything, ranging from a thin PC to its bloated cousin, the regular workstation.
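
As a hedged illustration of what the client side amounts to once the broker has done its job, the sketch below writes out a minimal .rdp connection file pointing at the user's assigned virtual machine; a standard RDP client such as mstsc.exe can then open it. The settings shown ("full address", "username", "screen mode id") are standard .rdp options, but the helper function and the hard-coded values are ours and purely illustrative.

    # Hedged sketch: generate a minimal .rdp file for the virtual desktop the
    # broker assigned to a user. Opening the file with a normal RDP client
    # starts a session against that VM; the local machine never runs the
    # workload itself. Values below are illustrative.

    def write_rdp_file(vm_address, username, path="my_desktop.rdp"):
        """Write a minimal RDP connection file and return its path."""
        settings = [
            "full address:s:%s" % vm_address,  # where the assigned VM lives
            "username:s:%s" % username,        # pre-filled login; password is prompted for
            "screen mode id:i:2",              # 2 = open the session full screen
        ]
        with open(path, "w") as f:
            f.write("\n".join(settings) + "\n")
        return path

    if __name__ == "__main__":
        rdp = write_rdp_file("10.0.10.21", "alice")
        print("Open %s with your RDP client to reach your virtual desktop." % rdp)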

Companies currently working hard on improving their desktop virtualization solutions include VMware and Citrix.

Comments

  • Ralphik - Wednesday, October 29, 2008 - link

    Hello everybody,

    I have installed a virtual Win98 on my computer, which is running WinXP. The problem I have is that there are no GeForce7 and higher drivers available for such old Windows platforms - has anyone got a tip or a cracked driver that I could use? It now has a completely useless S3 Virge driver installed . . .
  • Jovec - Friday, October 31, 2008 - link

    Unless I'm missing something (new), your Win98 running in your VM will not see your GeForce video card, or indeed any of the actual hardware in your computer. It just sees the virtual hardware provided by your VM software - typically an emulated basic VGA video adapter and AC'97 sound. VM software emulates an entire virtual computer on your host PC, but does not use the physical hardware natively.

    In short, you are not going to get Geforce level graphics power in your Win98 VM.
  • stmok - Wednesday, October 29, 2008 - link

    "Could it be that these two pieces of software are using related techniques for their 3D acceleration? Stay tuned, as we will definitely be looking into this in further research!"

    => Parallels took Wine's 3D acceleration component. More specifically, they took the translator that allowed one to translate OpenGL calls to DirectX and vice versa.

    There was a minor issue about this when Parallels was not compliant with the open source license of Wine, but that was settled when Parallels complied with the LGPL two weeks later.
    => http://parallelsvirtualization.blogspot.com/2007/0...
    => http://en.wikipedia.org/wiki/Parallels_Desktop_for...

    What annoys me is that they never bothered adding 3D acceleration support to the Linux version of Parallels. The only option is the very latest release of VMware Workstation. (Version 6.5 has technology carried over from their VMware Fusion product.)
  • duploxxx - Tuesday, October 28, 2008 - link

    btw, is this a teaser for the long-announced virtualization performance review?
  • Vidmo - Tuesday, October 28, 2008 - link

    I was hoping this article would get into some of the latest hardware technologies designed for better virtualization. It's still quite confusing trying to determine which hardware platforms and CPUs support VT-d for example.

    The article is a nice software overview, but seems incomplete without getting into the hardware side of the issues.
  • solusstultus - Tuesday, October 28, 2008 - link

    Hardware support for VT is not used by most/any? commercial hypervisors (VMware doesn't use it) and has been shown to actually have lower performance in many cases than binary translation:

    http://www.vmware.com/pdf/asplos235_adams.pdf
  • duploxxx - Tuesday, October 28, 2008 - link

    unfortunately your link is 2 years old.

    The current recommendation for VMware ESX is to use the hardware virtualization layer at any time when you have a 64-bit OS, and at any time when second-generation hardware virtualization is available, aka NPT from AMD (EPT when Intel launches Nehalem next year).
  • solusstultus - Wednesday, October 29, 2008 - link

    While I don't claim to be an expert, that's the most recent study that I have seen that actually lists performance results from both techniques.

    If you have seen more recent results, do you have a link? I would be interested in reading it.

    From what I have seen, NPT addresses the overhead associated with switching from the guest to the VMM during page table updates (which can occur frequently when using small pages). However, the other main source of overhead cited in the paper I referenced was traps into the VMM on system calls, which can be replaced by less expensive direct links to VMM routines in translated code. So unless the newer hardware-assisted virtualization implementations address this (they might, I haven't looked at the documentation), it seems translation could still be faster for some apps, and that an ideal implementation would make use of both in different situations.
  • Vidmo - Tuesday, October 28, 2008 - link

    Ahh I somehow missed the link to your hardware article.
    http://it.anandtech.com/IT/showdoc.aspx?i=3263&...

    Very well done. Would it be possible to update that article to reflect VT-d and possibly VT-i technologies as well?
  • LizVD - Tuesday, October 28, 2008 - link

    Thanks for the input!

    The real purpose of this article was to provide a "beginner-safe" intro to the things we have been discussing on AnandTech IT for the past couple of months, so in-depth discussion of each of the technologies is something we avoided on purpose, to keep the focus on the basic differences without getting carried away.

    Your question is an interesting one, however, and of the sort we'd like to address properly in our blogs, so keep an eye on them, as we'll be looking into it. In the meantime, a quick way to check which virtualization-related flags your CPU exposes is sketched just after the comments.
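
As a small, hedged addendum to the hardware-support discussion in the comments above: on a Linux host, the CPU's virtualization-related feature flags can be read straight out of /proc/cpuinfo. The sketch below only looks for "vmx" (Intel VT-x), "svm" (AMD-V), "ept" and "npt"; it says nothing about VT-d, which is a chipset/platform feature rather than a CPU flag.

    # Check /proc/cpuinfo on a Linux host for hardware virtualization flags.
    # "vmx"/"svm" indicate Intel VT-x / AMD-V; "ept"/"npt" indicate
    # second-generation (nested/extended page table) support.

    def cpu_virt_flags(path="/proc/cpuinfo"):
        """Return the virtualization-related flags reported for the first CPU."""
        interesting = set(["vmx", "svm", "ept", "npt"])
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return interesting & flags
        return set()

    if __name__ == "__main__":
        found = cpu_virt_flags()
        if found:
            print("Hardware virtualization flags found:", ", ".join(sorted(found)))
        else:
            print("No hardware virtualization flags reported.")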
