Dan Kusnetzky
The Virtual Man
April 27th, 2007

VMware IPO - Why now?

Posted by Dan Kusnetzky @ 2:46 am Categories: virtual machine software, Managing virtualized environments

EMC Corporation has announced that it intends to sell 10% of VMware in an initial public offering. The announcement itself can be seen here. My question is: why execute this move now? Here are some reasons that may have contributed to the decision.

Market Pressures 

VMware is facing pressure from Microsoft. Microsoft is in the middle of a plan to incorporate its virtual machine software into its operating systems. If history is a guide, this move is likely to place a great deal of pressure on the other competitors. I remember what happened to suppliers of TCP/IP software, such as TGV, and memory management software, such as QuarterDeck Software, when Microsoft released Windows 95, which contained both types of software. The other suppliers gradually faded away, even though their products often offered more features than the "free" software Microsoft was bundling. The word "free" is the operative word here. IT decision-makers were more than willing to adopt less functional software if it was good enough to get by and was included at no additional cost.

VMware is also facing pressure from Xen. If Simon Crosby, CTO of XenSource, is correct (see his blog entry here), Xen's commercial products outperform VMware ESX Server at a considerably lower price. As Linux distributors, such as Red Hat and Novell, simply include Xen in their Linux distributions at no additional cost, IT decision-makers who use Linux are likely to move in a Xen direction.

Another Linux virtual machine software offering, the Kernel-based Virtual Machine (KVM), is emerging as a potential threat to VMware.

VMware is facing many challenges from other suppliers offering management frameworks capable of managing an entire computing environment, not just the virtual machines that are running. The list of vendors reaching into this space includes all of the management software suppliers and most of the virtual processing software suppliers.

EMC Synergies

VMware was acquired by EMC and many were confused about the reasoning behind that move. At that time, EMC was known as a storage server powerhouse. Although EMC had a strong story in the areas of storage virtualization, storage management, data replication and integration of its storage servers with all of the major hardware platforms and operating systems, the company was not really known for a broad view of virtualization. VMware was a vendor of virtual machine software and was trying to make a case for this technology on industry standard hardware platforms. Putting the two together didn't make much sense to many analysts. The fact that EMC acquired VMware and then set it up as a separate company rather than integrating it into its regular product lines showed that the two business models were not easy to combine without losing VMware's independence and rapid growth cycle.

Why an IPO Now?

VMware is facing a situation in which competing technology is simply being included with its key target operating environments, Windows and Linux. The executives at VMware saw this coming and took the bold step of making their virtual machine player available freely on the network in the hope of making it pervasive. The plan was to shift the stream of revenue that used to come from sales of its virtual machine software to its management software and its virtual machine add-ons, such as its multiprocessor extension. The problem VMware faces now is that it can no longer charge a premium price for management software. There are simply too many competitors.

It is my conjecture that the executives at EMC and at VMware looked at the stiff competition, and at the fact that it will no longer be possible to charge a premium price for any of their software, and decided that if there was ever a time to hold an IPO, it was now. The value of the company is unlikely to be higher than it is now.

April 26th, 2007

Red Hat and Building the Open Source Virtualization Stack

Posted by Dan Kusnetzky @ 3:02 am Categories: virtualization, virtual machine software, virtual storage software, virtual processing software, clustering software, high performance computing, virtual access software, Managing virtualized environments

Since my conversation with the good folks at Red Hat (see Discussion with Red Hat’s Joel Berman and Nick Carr - 1st take) I've been thinking about the company's strategy and how it plays out in the real world. Red Hat presents the notion that a stack of open source software can address organizations' IT requirements nearly as well as, or as well as, proprietary software can.

In many cases, it can be demonstrated that that notion plays out quite well in the real world. That is, open source software is often good enough to do the job. That being said, it is also true that proprietary tools might look more polished, fit into a specific vendor's environment a bit better or offer more bells and whistles.

IT management is made up of rather conservative folks for the most part. Once they find a set of tools that does the job they want done, they stick with them. This continues to be true even after technology has marched on and new tools might do the job better, cheaper or faster than their "old reliables." I believe the time-worn quote attributed to Abraham Maslow fits really well here: "If the only tool you have is a hammer, you tend to see every problem as a nail."


The challenge Red Hat faces (along with all of the other suppliers focused on open source solutions) is helping IT decision-makers to look at what open source solutions can do for the organization first and then consider the development and support methodology that produced the software.  After all, how many people really understand how HP, IBM, Microsoft or Oracle develop and support the software they're offering to the world? I would dare say that many employees of these fine companies don't understand that process in detail either.

So, I applaud Red Hat for doing its best to stick to open source software while building a full stack of virtualization technologies. At this point, they are able to offer application virtualization through creative uses of JBoss and Tomcat, processing virtualization through Xen, LVC and many third party tools and storage virtualization through the global file system (Red Hat GFS) it acquired.

There is a way yet to go for Red Hat, but they've made a good start. I'm hoping to learn more from them in future conversations.

April 25th, 2007

Counterpoint to “Analysis: Can a Customer Be Too Small For Virtualization?”

Posted by Dan Kusnetzky @ 6:00 am Categories: virtualization, virtual machine software, virtual storage software, virtual processing software, clustering software, virtual access software

Virtualization technology is just that, a technology. An organization's requirements and budget must be considered before a specific technology is selected.  A technological choice that is perfect for the needs of one organization might just be the wrong choice for another. That being said, Small to Medium Businesses (SMB) would be well served by considering the impact of each of the many different types of virtualization that are available on industry standard systems today. It's not really necessary to relegate a specific virtualization technology to the "good" or "bad" bucket.

I was reading Analysis: Can a Customer Be Too Small For Virtualization?, an article authored by Shelley Solheim of VARBusiness and came to the conclusion that the analysis was based upon a focus that was just too narrow. Throughout the article, the term "virtualization" was equated with server virtualization which in turn was equated with the use of virtual machine software. This focus ignores access virtualization, application virtualization, storage virtualization and even management of virtualized environments, all of which could be very useful in the IT infrastructure for a small to medium business (SMB).

Here are some examples:


  • Access virtualization, such as that offered by Microsoft, Citrix Systems, ClearCube and others, would make it possible for the staff of an SMB to access business critical applications hosted on systems in an organizational data center using secure, easy-to-use thin clients rather than a more traditional personal computer. This would offer the SMB increased levels of security as well as the opportunity to reduce costs in several areas, such as PC administration, PC support and installing updates to PCs, without forcing the SMB staff to learn new applications.
  • Application virtualization, such as that offered by Softricity (now part of Microsoft), DataSynapse and others, would make it possible for critical parts of the SMB's IT infrastructure to be replicated to increase scalability and reliability while keeping management costs in line.
  • Storage virtualization, such as that offered by a whole host (pun intended) of companies including HP and Network Appliance, would make it possible for the SMB's IT organization to centralize storage for both desktop systems and servers. This would reduce the costs of typical administrative functions such as backup and allow storage to live beyond the life of any specific desktop or server.

I don't wish to belabor the point. I do, on the other hand, want organizations to first consider what they need, that is, their requirements, and then to consider what choices their budget  will allow before making the determination that a technology will or won't fit in their environment.


April 24th, 2007

SWsoft Once and Done Partitioning Software

Posted by Dan Kusnetzky @ 4:12 am Categories: virtual machine software, virtual processing software, Managing virtualized environments

An announcement from SWsoft just came to my attention. The company is offering a very simple way for small or medium sized organizations to deploy operating system partitioning or virtualization. It appears that the good folks over at SWsoft have thought long and hard about this and have come up with a new way to acquire Virtuozzo, SWsoft's operating system partitioning software. SWsoft is calling this new package the Virtuozzo Starter Pack.

Virtuozzo is not a virtual machine solution. It is something much closer to a partitioned operating system. It is more like Solaris Zones or AIX Logical Partitions (LPARs) than a virtual machine solution from VMware. SWsoft calls Virtuozzo an "OS virtualization solution." Virtuozzo creates isolated partitions on a single physical machine and operating system. Applications can then be made to consume only a subset of the resources of that machine. Many of these virtual environments (VEs) can run on the same physical machine without interfering with one another.

This approach is different from encapsulating several whole stacks of software (operating system, data management software, application framework software, and application software) and using a hypervisor to juggle everything on a single machine. Only one operating system is in use, so a smaller, less expensive machine can be made to do the work. The benefit of this approach is that switching from one "partition" or VE to another is much faster than switching from one virtual machine to another. A challenge of this approach is that all of the VEs hosted on the same machine must be running under the same operating environment. So, if the organizational requirement is to host Windows, Linux and Unix on a single machine, the organization would be best advised to look for a different solution. On the other hand, if that organization wanted to centralize all of its Windows or Linux solutions onto a single physical machine, SWsoft's approach makes a great deal of sense.
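
To make the "smaller machine" argument concrete, here is a minimal sketch, in Python, of why a single shared operating system can leave more of a machine for applications than a stack of full virtual machines. The memory figures are assumptions I picked for illustration; they are not SWsoft's numbers and this is not Virtuozzo's API.

# Illustrative assumptions only -- not SWsoft/Virtuozzo figures.
OS_OVERHEAD_MB = 512    # assumed memory cost of one complete OS image
HOST_MEMORY_MB = 4096   # assumed physical memory on the host

def free_with_os_partitioning(app_mb, guests):
    """One shared kernel; each virtual environment pays only for its applications."""
    return HOST_MEMORY_MB - OS_OVERHEAD_MB - guests * app_mb

def free_with_hypervisor(app_mb, guests):
    """Each virtual machine carries a complete operating system of its own."""
    return HOST_MEMORY_MB - guests * (OS_OVERHEAD_MB + app_mb)

for guests in (2, 4, 6):
    print(guests,
          free_with_os_partitioning(256, guests),
          free_with_hypervisor(256, guests))
# With 6 guests the partitioned host still has memory to spare,
# while the hypervisor-based host is already oversubscribed.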

Although Virtuozzo has the capabilities to support hundreds of virtual environments or partitions on a single machine, the Virtuozzo Starter Pack is limited to supporting up to four environments and running on a single or dual processor machine. This seems to be about the right configuration for an organization just starting out or for distributed sites of a larger organization.

Virtuozzo Starter Pack directly addresses one of the biggest challenges facing organizations that want to adopt some form of virtualization to improve the efficiency of their IT infrastructure, that is, complexity. Although suppliers of any type of virtualization technology claim that they're offering simplicity, a serious level of complexity lurks just beneath the surface. If the organizational requirements for virtualization are even slightly different from the vendor's "norm", the complexity leaps to the surface, waves at everyone and then makes itself at home. You just know that can't be good. No one wants a guest that will wake up, walk over to the "IT refrigerator" and feast on what it finds there.

If I understand the SWsoft announcement correctly, the strengths of this package are that it is both easy to install and comes with tools that help the IT staff abstract a physical environment and make it virtual. It offers management tools making it easy for newcomers to virtualization to see and understand what's happening. Furthermore, it appears that SWsoft understands that medium size organizations are unlikely to be willing to pay a great deal of money the first time they try this type of virtualization.

Would the greater performance and lower hardware cost of the SWsoft Virtuozzo environment outweigh the fact that it is a single-operating-system-at-a-time environment in your organization?

April 23rd, 2007

XenSource: a virtual unknown?

Posted by Dan Kusnetzky @ 2:20 am Categories: virtual machine software, Managing virtualized environments

Even though Xen, virtual machine software for industry standard systems, is well known, the company behind this software, XenSource, is not as well known.

XenSource, with the help of the open source community, has developed software that securely juggles multiple virtual machines, each running its own operating environment, on a single physical system while producing close-to-native-machine performance. Xen is open source and is released under the terms of the GNU General Public License from the Free Software Foundation.

Although Xen is rapidly becoming pervasive in the world of Linux, people don't make a connection with XenSource. If I asked 20 people at any random Linux conference to tell me about Xen, they'd certainly be able to answer. If I asked the same people who develops and supports this software, I'd surely hear answers ranging from Novell to Red Hat to IBM. It would be unlikely that I'd hear the name "XenSource."

Why do you suppose that's the case today?  What marketing, public relations or other steps could the company take that would equate Xen with XenSource rather than leading people to think about Novell, Red Hat or some other company that is using the open source version of Xen as the foundation for their product or services?

April 20th, 2007

DataSynapse and application virtualization

Posted by Dan Kusnetzky @ 6:51 am Categories: virtualization, virtual processing software, clustering software, high performance computing, Managing virtualized environments

I was offered the opportunity to speak with Kelly Vizzini, CMO of DataSynapse, the other day. She introduced me to a new member of her staff, Shayne Higdon, VP of Product Marketing. Kelly and I have been part of panel discussions at several conferences on virtualization, grid computing and the like over the years.  It was good to catch up with her and learn how DataSynapse is doing.

DataSynapse's focus is application virtualization, that is, breaking the link between applications and their underlying infrastructure to offer organizations improved scalability, higher levels of reliability, improved operational efficiency and reduced hardware and software costs.

DataSynapse would say that the goal of its products (FabricServer and GridServer) is building a datacenter in which all IT resources are shared and not isolated into application or departmental silos. This would mean, of course, that the utilization rates of these machines would be higher than what is traditionally seen in organizational datacenters. This would also mean that fewer machines would be needed to support the organization's workload.
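
As a rough illustration of that utilization argument, here is a back-of-the-envelope calculation in Python. The utilization figures are assumptions of mine, not DataSynapse data.

import math

# Illustrative assumptions, not DataSynapse figures.
siloed_servers = 20        # dedicated servers, one per application silo
avg_utilization = 0.10     # assumed average utilization of each silo
pool_ceiling = 0.60        # assumed comfortable ceiling for a shared pool

real_work = siloed_servers * avg_utilization          # 2.0 servers' worth of actual demand
pooled_servers = math.ceil(real_work / pool_ceiling)  # 4 shared servers

print(f"{siloed_servers} siloed servers -> roughly {pooled_servers} in a shared pool")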

This "next generation" datacenter, DataSynapse points out, would be managed in an automated fashion making sure that service levels are predictable and constant even though the world outside presents demands in a random or unpredictable way. The management of these IT resources would be based upon the organization's own priorities and guidelines. The company would distinguish this from infrastructure virtualization where only system services are virtualized.

I know, I know, that reads like the statements many other companies in this market are making today. DataSynapse, however, isn't just artfully reciting the best of today's industry hype. The company has a track record of success doing this very thing for their customers. When I requested one or two reference cases, Kelly and Shayne pointed out that they have dozens of them. These customer success stories can be found on the DataSynapse Web site. These customers help DataSynapse make a strong case that the company can actually do the things that they claim to be able to do out in the wild. Many others have only demonstrated this in captivity, that is, the safety of their own development center.

Are any of you DataSynapse customers?  If so, what are your experiences using this software?  Are the claims of DataSynapse true?

April 19th, 2007

Novell’s Orchestrator

Posted by Dan Kusnetzky @ 5:13 am Categories: virtualization, Managing virtualized environments

As I mentioned in my post, Overview of Novell's views on virtualization and again in my post, Conversations with Cassatt and Marathon, Thoughts on Novell and Red Hat, Novell appears to be focusing a great deal of attention on managing virtualized environments not just the tools of virtualization themselves. Along those lines, I had an opportunity to view a demo of Novell ZENworks Orchestrator presented by Richard Whitehead and Alan Murray of Novell the other day. Thanks guys for the interesting and informative session!

ZENworks Orchestrator is a Java application that uses Python as its job definition language (JDL) for creating built-in components and extensions. It can operate inside of management frameworks or as a separate tool for managing heterogeneous virtual machines. It appears that Novell has big plans for this technology and is laying a foundation for a more comprehensive set of solutions in the future.

Novell is making an attempt to address one of the issues raised by the use of virtual machine software: the replacement of physical machine sprawl with an even more difficult-to-manage virtual machine sprawl.

Novell ZENworks Orchestrator takes an inventory of both the physical and virtual machines on the local network and then applies a rules-based, policy-based approach to managing and scheduling tasks on this network. An interesting wrinkle is that Novell has made provisions for reporting on this use and for billing for it.
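
To show the general shape of a rules-based placement decision, here is a small Python sketch. It is my own simplification of the idea, not Novell's JDL or the ZENworks Orchestrator API, and every name in it is hypothetical.

def pick_host(hosts, vm, policy):
    """Return a host from the inventory that fits the VM and satisfies the policy."""
    candidates = [h for h in hosts
                  if h["free_cpus"] >= vm["cpus"]
                  and h["free_mem_mb"] >= vm["mem_mb"]
                  and policy(h, vm)]
    # Spread work by preferring the host with the most free capacity.
    return max(candidates, key=lambda h: h["free_cpus"], default=None)

inventory = [{"name": "physical-1", "free_cpus": 2, "free_mem_mb": 2048},
             {"name": "physical-2", "free_cpus": 6, "free_mem_mb": 8192}]

def keep_production_off_lab_gear(host, vm):
    return not (vm.get("tier") == "production" and host["name"] == "physical-1")

print(pick_host(inventory, {"cpus": 2, "mem_mb": 1024, "tier": "production"},
                keep_production_off_lab_gear))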

I was impressed with the simplicity of this tool and could easily envision IT administrators putting this tool to work. I was also impressed with the fact that Novell realized that large organizations and some medium sized organizations have already deployed a management framework and has designed Orchestrator to fit into those frameworks.

April 18th, 2007

Cassatt - moving virtualization up a notch

Posted by Dan Kusnetzky @ 6:19 am Categories: virtualization, virtual machine software, virtual processing software, clustering software, Managing virtualized environments

I've been following Cassatt for a number of years and have always thought that the company has taken a fresh, different view of the concept of virtualization in the implementation of its product "Collage". In an attempt to catch up with what the company is doing now, I spoke with Jay Fry and Ken Oestreich. After the conversation, I was even more convinced that the company is taking a strategic, not tactical, approach to the concept of virtualization. In bullet form, here are the high points of our conversation.

  • Virtualization monitors are best thought of as operating systems that support other operating systems. That is, they must inventory available resources (down to a very granular level), optimize the use of these resources based upon an organization's policies and automatically respond to real time events to maintain service level objectives.
  • Physical and virtual environments are created on the fly based upon templates when needed.
  • It's important to take the broadest possible view and avoid point solutions. From this vantage point, a failure of some resource must be handled in the same way as any other condition that causes the configuration to no longer meet service level objectives (a rough sketch of this idea follows the list).
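
That third point is worth making concrete. A minimal sketch of the idea, with names and thresholds that are purely my own assumptions rather than anything from Cassatt's Collage, might look like this:

import time

def meets_slo(observation, objective):
    # A dead resource and a slow one both show up as the same thing:
    # an observation that fails the service level objective.
    return observation["healthy"] and observation["response_ms"] <= objective["max_response_ms"]

def control_loop(observe, remediate, objective, interval_s=10):
    while True:
        observation = observe()
        if not meets_slo(observation, objective):
            # One remediation path, whether the cause was a crashed node,
            # a saturated host or an unplugged cable.
            remediate(observation)
        time.sleep(interval_s)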

I'm hoping to speak with some of Cassatt's customers and get their views on Cassatt, what they considered before acquiring the company's software, any tangible results they've gotten from the use of this software and what learning they would pass on to others.

Are there any other questions you'd like me to ask when I get the opportunity to speak with Cassatt's customers?

April 17th, 2007

Conversation with Marathon Technologies

Posted by Dan Kusnetzky @ 8:25 am Categories: virtualization, virtual processing software

A few days ago I had the opportunity to speak with Marathon Technologies' Michael Bilancieri, Director of Products, and Steve Keilen, Vice President of Marketing, about creating both highly available (HA) and fully fault tolerant (FT) environments using industry standard systems. It was a very interesting conversation. While at DEC (may they rest in pieces), I had many conversations with people involved in DEC's VAXft family of fault tolerant VAX systems. I was surprised to find out that some of those good folks are still working on FT and are over at Marathon.

Marathon seems focused on reducing the cost and the complexity involved with deploying FT systems. To that end, Marathon Technologies announced the v-Available™ initiative to help its partners better understand and deploy HA and FT solutions. I'm expecting to hear more interesting things from them over time.

As I thought about FT solutions of the past, they were often based upon single-vendor processors and involved a high level of expertise in industrial sorcery. Suppliers who offered this type of system, such as Tandem and DEC (both now part of HP), IBM, Stratus and others, built special-purpose systems that had been configured so that the processors ran in lock step. This approach, by the way, often required the development of custom processors having the hardware features required to support this type of processing. The primary feature of these systems was how they treated the failure of a component. If one system failed, the others would simply pick up the work and continue. Applications would not be aware of the failure. This "fail through" process took a tiny fraction of a second, making it highly desirable for applications that simply could not have down time, planned or unplanned.
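
To illustrate what "fail through" means from the application's point of view, here is a toy Python sketch. Real fault tolerant systems do this in hardware lock step; this sketch only models the visible effect and is not how Marathon, Tandem or anyone else actually implements it.

def fail_through(request, replicas):
    # Every replica executes the same request; a dead replica is simply ignored.
    results = []
    for replica in replicas:
        try:
            results.append(replica(request))
        except Exception:
            continue
    if not results:
        raise RuntimeError("all replicas failed")
    # Replicas run in lock step, so any surviving answer is the answer.
    return results[0]

def healthy_replica(request):
    return request * 2

def broken_replica(request):
    raise RuntimeError("hardware fault")

print(fail_through(21, [broken_replica, healthy_replica]))  # prints 42; the caller never sees the fault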

Fault tolerant hardware typically was more costly than general purpose systems having similar processor, memory and storage configurations because every component was duplicated at least once. If an organization stood to lose enormous amounts of revenue due to a failure, it would purchase these systems regardless of the cost of the hardware. Since the cost of these configurations was high and these systems had to be treated as a single computer, many organizations turned to other types of virtualization, such as clustered systems, when their need for constant availability was not quite as high. While clusters took longer to deal with a "state change", the systems involved could all be productive rather than being treated as merely a "hot" backup.

Marathon's everRun™ FT creates a true fault tolerant environment using general purpose industry standard systems connected by Gigabit Ethernet. This means applications hosted in an everRun environment do not see failures. Processing "fails through" to remaining resources when something fails. Marathon is supporting Windows-based applications today and will support Linux-based applications in the future.

FT solutions fit monolithic applications best, you know, the ones where the user interface, the application rules processing, the data(base) management and the storage management are all part of the same image. Distributed applications may be able to offer similar levels of availability by using other types of virtualization software and redundant systems. That being said, developers and IT managers who are not familiar with Marathon or its products might find them to offer interesting solutions to difficult availability problems.

April 16th, 2007

Fault Tolerant and Fail Over: Is There a Difference?

Posted by Dan Kusnetzky @ 5:49 am Categories: virtualization, virtual machine software, virtual storage software, virtual processing software, clustering software, virtual access software, remote access software

Fault tolerant (FT) solutions go beyond HA fail over solutions to present an environment that is never seen to fail, not merely an environment that survives a failure. Some suppliers of FT technology call this "fail through" rather than fail over. I thought that was a well known concept and was surprised to find that the distinction is still not clear to some.

While speaking with a potential client about how different forms of virtualization could address his organization's requirements, I detected that some of my comments created confusion rather than clarifying things. As an aside, it appears that I have an innate ability to make some technology appear more complex than it really needs to be.

I'd like to offer a summary of the discussion while it is still fresh in my mind.

Virtualization technology, taken broadly, offers a number of approaches to availability. Here are a few of them.


  • Access to application solutions can be virtualized.  If the back end system fails, the individual using the application is connected to another system that offers the same application.  More sophisticated access virtualization software may make this process automatic. Even more sophisticated products in this area will remember the state of the application and give the impression that nothing ever failed. Doing this last bit, however, usually involves other forms of virtualization. This process, by the way, is unlikely to be instantaneous.
  • Application frameworks may offer load balancing and failover capabilities. The application framework monitor, upon detecting either a failure to meet service level objectives or some other type of failure, would start the application on another machine. Once again, the process could be automatic or require manual intervention. If other types of virtualization are in use, the actual state of the application could be saved during the process. While this process may happen quickly, it is likely that individuals using the application would notice a pause or a slow-down.
  • Processing virtualization, which includes clustering, parallel processing and virtual machine software, may offer load balancing and fail over capabilities similar to those offered by application framework virtualization, for selected or all applications on a given system. The key difference between the levels of virtualization is that application framework virtualization only virtualizes applications running in that framework. Processing virtualization makes it possible for applications, data management products or even basic system services to fail over to another system. As with the other forms of virtualization, the fail over process can take some time (a rough sketch of this kind of fail over follows this list).
  • Virtualizing storage is often a necessity for all of the other forms of virtualization. After all, what good is moving an application over to another system if the data it was processing is no longer available? Storage virtualization could be implemented using special purpose software on general purpose systems or by moving the entire storage function to a special purpose storage server.
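
For contrast with the fault tolerant case discussed below, here is a minimal Python sketch of the fail over pattern behind the processing virtualization bullet. The function names are hypothetical placeholders, not any vendor's API.

import time

def failover_monitor(nodes, is_alive, start_workload, poll_s=5):
    active = nodes[0]
    start_workload(active)
    while True:
        if not is_alive(active):
            # Restart the workload on a surviving node. The pause that
            # users notice during fail over happens right here.
            active = next(n for n in nodes if n != active and is_alive(n))
            start_workload(active)
        time.sleep(poll_s)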

All of these approaches are well and good. What happens, however, when the requirement is that failures are never seen? This is the realm of FT systems. In this case, special purpose, redundant hardware configurations are deployed and run in lock step. If one component of the system fails, the others continue working and the application does not fail.

Historically, FT solutions were quite expensive. After all, every component of the system had to be replicated enough times to handle all expected failure scenarios. More recent solutions, offered by suppliers such as Stratus and Marathon, are based upon industry standard systems and components. The use of off-the-shelf hardware significantly reduces the price of these solutions.

Does your organization deploy truly fault tolerant solutions or do one of the other forms of virtualization offer sufficient levels of reliability and availability?

Daniel Kusnetzky is the president of Kusnetzky Group (http://www.kusnetzky.net). He is responsible for research and publications on the topics of open source software, virtualization software and system software in general. He examines emerging technology trends, vendor strategies, research and development issues, and end-user integration requirements.