This article describes the virtualization capabilities of the IBM® POWER5™ servers, provides examples that apply equally to both pSeries® p5 and eServer™ OpenPower™ systems, and shows how to set up and use the IBM Virtual I/O Server (VIO Server). The VIO Server is currently based on a subset of AIX® 5.3 that includes additional packages and services. It comes with the optional IBM packages called Advanced POWER Virtualization (APV) for pSeries p5 machines or Advanced OpenPower Virtualization (AOPV) for OpenPower machines. Because these two versions have identical features and functions, I refer to both as the VIO Server throughout this article. These packages provide a complete environment for the VIO Server, full support from IBM, and high levels of performance.
Virtualization is a hot topic in the computing industry, with many widely different technologies and solutions being recommended, developed, and used. The POWER5-based machines have inherited the know-how of the IBM mainframes to offer significant reductions in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features, firmware (also known as the Hypervisor), and hardware features to create efficient and flexible virtualization capabilities. Uniquely, these capabilities are offered from the top to the bottom of the server range -- from a powerful 64-way symmetric multiprocessor (SMP) machine down to a two-way, desk-side system. The key to this virtualization is the VIO Server.
This article:
- Explains the VIO Server concepts and how it works between logical partitions (LPARs) for disk access and networks.
- Covers the advantages of using a VIO Server and typical usage scenarios.
- Shows, by example, how to set up the IBM VIO server and VIO clients.
Since October 2001, IBM pSeries servers have allowed a machine to be divided into LPARs, with each LPAR running a different OS image -- effectively a server within a server. This is achieved by logically splitting up a large machine into smaller units, each with its own CPU, memory, and PCI adapter slot allocations.
The new POWER5 machines (pSeries p5 and OpenPower servers) can also run an LPAR with less than one whole CPU -- up to ten LPARs per CPU. On a four-CPU machine, for example, 20 LPARs can easily be running. With each LPAR needing a minimum of one SCSI adapter for disk I/O and one Ethernet adapter for networking, those 20 LPARs would require at least 40 PCI adapters in the server. This is where the VIO Server helps.
The VIO Server owns the real PCI adapters (Ethernet, SCSI, or SAN), but lets other LPARs share them remotely using the built-in Hypervisor services. These other LPARs are called virtual I/O client partitions (VIO clients). Because they don't need real physical disks or real physical Ethernet adapters to run, they can be created quickly and cheaply.
There are different VIO Server implementations:
- Both APV and AOPV versions of the VIO Server are special-purpose, single-function appliances and are not intended to run general applications.
- The Linux VIO Server for pSeries p5 or OpenPower hardware first became available with the SUSE SLES 9 distribution. Unlike the appliance-style VIO Server, this is a regular copy of the Linux operating system, so it can also run other central services such as NFS, network installation, DNS, an Apache Web site, or Samba. Take some care that these functions do not interfere with the performance of the VIO Server service. This software is also available in the Debian Linux for POWER distribution.
There are different implementations for VIO clients as well. These are just the regular operating systems, with the device drivers for running as a VIO client included:
- AIX 5.3 (only supported by the APV or AOPV VIO Server)
- Linux -- SUSE SLES 9
- Linux -- Red Hat EL 3 update 3 onwards and Red Hat EL 4
- Linux -- Debian for POWER
This article covers the VIO Server and the AIX and Linux VIO clients.
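As a quick orientation, the following commands show how the virtual devices typically appear inside a client. This is a hedged sketch: module and device names can vary with the OS level.

    # On an AIX 5.3 client, virtual adapters and disks show up as ordinary devices:
    lsdev -Cc adapter | grep -i virtual   # virtual SCSI and virtual Ethernet client adapters
    lsdev -Cc disk                        # virtual disks report as "Virtual SCSI Disk Drive"

    # On a Linux client (SLES 9, Red Hat EL, or Debian), the ibmveth and ibmvscsi
    # kernel modules provide the equivalent function:
    lsmod | grep -i ibm
    cat /proc/scsi/scsi                   # virtual disks appear as SCSI devices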
The VIO Server provides a virtual SCSI disk service, as shown in Figure 1 below.
Figure 1. Virtual SCSI disk service
Figure 1 shows a single VIO Server providing virtual SCSI services to multiple VIO client partitions. Each VIO client operates as if it had a dedicated SCSI device but, in fact, each client device is backed by a logical volume (a logical disk partition) on the VIO Server or, alternatively, by a complete disk (hdisk). The VIO Server and VIO client communicate using the internal pSeries Hypervisor firmware (PHYP), which efficiently transfers disk I/O requests between the LPARs using a message-passing protocol.
In Figure 1 above, the VIO Server has a few disks, which could be SCSI disks or Fibre Channel storage area network (SAN) disks. The disk subsystem hardware or a RAID 5 SCSI adapter can provide data protection. A VIO client uses the VIO client device driver just as it would a regular local disk device driver; the driver communicates with the matching VIO server device driver, and the VIO Server then performs the actual disk transfers on behalf of the VIO client. There is a strict client/server relationship between the VIO client and the VIO Server.
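To make this concrete, here is a minimal sketch of the commands a VIO Server administrator (the padmin user, in the server's restricted shell) might run to back a client disk with a logical volume. The names hdisk2, rootvg_clients, vdisk01, and vhost0 are illustrative and will differ on a real system:

    $ mkvg -f -vg rootvg_clients hdisk2     # create a volume group on a spare physical disk
    $ mklv -lv vdisk01 rootvg_clients 16G   # carve out a 16 GB logical volume for one client
    $ mkvdev -vdev vdisk01 -vadapter vhost0 -dev vtscsi0
                                            # map the logical volume to the client's virtual SCSI adapter
    $ lsmap -vadapter vhost0                # confirm the mapping

Once the client scans for new devices (for example, with cfgmgr on AIX), the logical volume appears there as an ordinary local SCSI disk.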
The LPARs in the machine can use the virtual Ethernet switch service (in the Hypervisor) in a number of different ways.
- Case one: Internal only networks
- You can use the Virtual Ethernet to let LPARs communicate over TCP/IP (Transmission Control Protocol/Internet Protocol), as shown in Figure 2 below. This provides high-speed data transfer without any hardware adapters, starting at roughly one Gbit per second and going much higher, especially with larger block sizes. Figure 2 also shows that there is no client/server relationship between the LPARs -- all use the Virtual Ethernet as equals. There can be many Virtual Ethernets in one machine, where groups of LPARs can communicate only within the Virtual Ethernet they're connected to, allowing fast communication and complete security without buying additional Ethernet adapters, cables, hubs, or routers.
Figure 2. Virtual Ethernet -- Private/internal only networks
- Case two: Routing to a physical LAN
- One LPAR on the Virtual Ethernet can also communicate externally with other machines over a real physical network on behalf of all the LPARs. In this case, this special LPAR routes Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network. This works well, but it involves setting up TCP/IP routes between the two networks (internal and external) and can take time to configure. Figure 3 below shows one LPAR with a real physical Ethernet adapter providing standard network routing between the two Ethernets. Note that this does not use any VIO Server features.
Figure 3. Internal Virtual Ethernet with a bridge to the external LAN
- Case three: Shared Ethernet Adapter (SEA) to a physical LAN
- Here, the VIO Server bridges Ethernet packets between the internal Virtual Ethernet and the external physical Ethernet network so that all the LPARs appear as regular machines on the physical network. This is simple to set up (a command sketch follows this list) and is the option used in the example in this article. In Figure 4, the VIO Server joins the two networks using the SEA. Strictly speaking, the adapter is not shared: it is owned and controlled by the VIO Server, but it provides shared access to the real physical network.
Figure 4. Internal Virtual Ethernet with a SEA to the external LAN
- Case four: Bridging with virtual LANs (VLANs)
- This scenario is almost the same as Case three. The only difference is the number of VLANs within the machine using Virtual Ethernet; these are connected to VLANs on the external network through a bridging LPAR and a network router that supports VLANs. This complex scenario is beyond the scope of this article, but it is fully supported, and some hints are included.
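For Case three, which this article's example uses, a minimal command sketch on the VIO Server looks like the following. Here, ent0 is assumed to be the physical adapter, ent1 the virtual Ethernet adapter, and the resulting SEA device ent2; the host name and IP details are placeholders:

    $ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
    # the command above creates a new SEA device (for example, ent2);
    # give its interface an IP address so the VIO Server itself is reachable:
    $ mktcpip -hostname vios1 -inetaddr 192.168.1.10 -interface en2 -netmask 255.255.255.0 -gateway 192.168.1.1
    $ lsmap -all -net                       # confirm the virtual-to-physical bridge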
You can use a VIO Server in any number of scenarios. Below are five typical examples that would make good use of a VIO Server.
- Small machine with limited PCI slots
You have one set of internal SCSI disks, or you can split the SCSI disks into two 4-packs on the OpenPower 720 or p5-550. That gives you at most two LPARs using the internal disks directly, so you might run a VIO Server to support the other LPARs. For example, try a VIO Server (0.5 of a CPU) with four to six clients (0.1 to 1 CPU each). Typically, clients might be small -- four to 16 GB of virtual SCSI disk each and one Virtual Ethernet for the whole machine. Figure 5 shows multiple LPARs running on a single disk pack.
Figure 5. Multiple LPARs
- Mid-range machines with extra small workloads
This might be an eight- or 16-CPU machine with large partitions for production use, but many system administrators also want a small number of extra LPARs. Rather than buying an extra machine, a VIO Server can easily host a half dozen smaller LPARs. For example, the larger production LPARs might each have one to four dedicated CPUs, dedicated disk I/O, and dedicated networks. The VIO Server is used for the "bits and bobs" LPARs: test, development, training, practice, new application trials, and so on. Typically, these VIO clients might have a couple of four GB to eight GB virtual SCSI disks and one or two Virtual Ethernets.
In Figure 6, three large production LPARs are running (they would have dedicated disks and Ethernet) alongside a few extra small VIO clients and one VIO Server that use the machine's spare capacity. This "spare" capacity can be reclaimed by the production LPARs during peaks in their workload.
Figure 6. Three large production LPARs
- Ranch or server farm style
Here, many small workloads are consolidated from smaller or older machines, or many small servers are required, but they are unlikely to peak at the same time. The machine runs lots of LPARs -- for example, 10 to 20 clients on a four-way machine, or many times that on larger machines. Each LPAR runs a small application with modest demand (0.2 or 0.5 of a CPU, up to two CPUs). This could be server consolidation or, for example, a collection of small Web servers where data isolation is important. The VIO Server has one or two CPUs and possibly RAID 5 SCSI disks or SAN disks. Typically, clients each have one or more four GB virtual SCSI disks, and different groups of LPARs might be attached to different Virtual Ethernets.
Figure 7 shows dozens of VIO clients with a medium-sized VIO Server supporting them on what might be several disk packs.
Figure 7. Different groups of LPARs
- Serious I/O setup only once (to reduce setup and management)
The VIO Server has SAN disks connected by two to four Fibre Channel adapters and two Ethernet adapters running EtherChannel for redundancy and additional bandwidth. The VIO Server handles load balancing and failover, so the VIO clients have a much simpler disk and Ethernet setup. The VIO Server could have one to three CPUs, but the VIO clients are larger too -- for example, one to eight CPUs running quite large applications. Typically, VIO clients could have hundreds of GB of virtual SCSI disks and many Virtual Ethernets. This complex setup is not covered in this article, but a hint follows below. Figure 8 shows two regular LPARs (which would have dedicated disks) and a fully configured, large VIO Server with multiple paths to disks and Ethernet, supporting some large VIO client LPARs.
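Although the full setup is beyond this article's scope, one hint: on the VIO Server, the two Ethernet adapters can first be aggregated into a single EtherChannel device and the SEA then built on top of it. The adapter names below are illustrative:

    $ mkvdev -lnagg ent0 ent1               # aggregate two physical adapters (creates, for example, ent4)
    $ mkvdev -sea ent4 -vadapter ent2 -default ent2 -defaultid 1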
Figure 8. Regular LPAR
- Serious with high availability backup
This is the same as above, but with a second VIO Server for availability and throughput. There are arguments that, for very high availability, you should spread your access to virtual SCSI and Virtual Ethernet across two VIO Servers so that the clients continue running if one VIO Server goes down. The counter-argument is that the VIO Server runs only a few device drivers, and device drivers are extremely reliable; also, anything that would crash one VIO Server could equally crash the second one. Figure 9 shows that instead of using local physical device drivers, the VIO client uses virtual resource device drivers to communicate with the VIO Server, which performs the real I/O. Apart from the virtual VIO Server device drivers and the physical resource device drivers, very little code runs on the VIO Server, so little can go wrong on the VIO Server side.
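When two VIO Servers serve the same disk, an AIX client would typically use MPIO and see a single hdisk with one path through each server. A hedged check, with illustrative device names:

    # On the AIX VIO client, list the paths to a multipath virtual disk;
    # expect one path per virtual SCSI adapter (for example, vscsi0 and vscsi1),
    # each going to a different VIO Server:
    lspath -l hdisk0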
Figure 9. VIO Server