I read the posting on virtual computing with great interest, as I believe this is the future of deployment models. Abstracting the physical nodes from the software architecture will definitely allow more flexibility. Of course, there has been a natural progression of hardware abstraction over the years. High-level languages removed dependence on machine instructions. Virtual memory removed dependence on physical memory constraints. Virtual machines with portable libraries, like Java's, have further lifted software off the operating system and the underlying hardware.
Each of these abstractions comes at a cost, though. Making an aspect of the system more opaque allows for better portability and, hopefully, a longer life for applications. But it also means the applications are less well adapted to the physical environment where they run. Java applications have improved dramatically in performance and resource usage over the years, but they still can't match a well-written C++ program in total resource utilization. In most cases, the development efficiencies achieved by Java are far more important than the incremental resource cost, so Java is the preferred language. But it is important to recognize that achieving abstraction comes at the cost of efficiency in almost all cases.
Virtualization techniques like Xen and Solaris Containers definitely provide much-needed capabilities that will allow hardware utilization to be increased. Most applications won't be able to detect whether they are running in a single instance of the operating system on a machine or are one of many OS container instances. There are a few tricky places, though, where virtual containers will show up to confuse the developer.
Any application that is heavily biased toward hardware will potentially require careful design so it doesn't break in a virtualized world. Networking, storage, and resource-monitoring components all carry certain expectations about how they interact with the hardware. In some cases those expectations are not met, and the software gets surprised. Architecting for this can be a bit tricky with the current tools, as they present a largely opaque view of the physical resources. Going forward, it may be necessary to allow certain applications to pierce the veil of the virtual container, at least sufficiently to calibrate themselves to it. For example, if an application wants to throttle data based on processor or network utilization, the container's view of utilization may be insufficient for the application to truly avoid overrunning the available resources.
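To sketch what "calibrating to the container" might look like, the function below reconciles the processor count the OS reports (which, inside a container, may describe the whole host) with a quota-over-period CPU cap the container might impose. The parameter names and the quota/period model are assumptions for this sketch; a real application would have to obtain these numbers from whatever interface its platform actually exposes.

```python
def effective_cpu_budget(reported_cpus, quota_us=None, period_us=100_000):
    """Return the number of CPUs an application should throttle against.

    reported_cpus: the processor count the OS reports, which inside a
    container may reflect the entire host rather than this container.
    quota_us / period_us: a quota-style CPU cap, if the container
    exposes one (hypothetical parameters for this sketch).
    """
    if quota_us is None or quota_us <= 0:
        # No visible cap: the best the application can do is trust
        # that the reported processors are really available.
        return float(reported_cpus)
    # A cap is visible: the real budget is the smaller of the two views.
    return min(float(reported_cpus), quota_us / period_us)
```

An application sizing a thread pool or pacing its output would use this budget rather than the raw processor count, so that a container capped at two CPUs on a sixteen-CPU host doesn't schedule sixteen CPUs' worth of work.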
Abstracting the deployment definitely has promise, and we will refine it as we gain experience with this technology. The biggest challenge will be providing the right level of abstraction without hiding essential information from applications. Resource usage, networking topology, latency, and potentially other concrete information may be pertinent to certain classes of applications. Providing this while leveraging the advantages of virtual platforms will be critical to the overall success.