VIOS Next Generation

VIOS Next Generation phase 1
nagger | Dec 17 2010
You may have seen comments or even presentations with a few slides on VIOS Next Generation or VIOS NextGen in the past year. I saw them at the Power Technical Universities and was very interested.
Well, the first phase has been released, but you may well have missed it as it appears as VIOS 2.2.0.11 FixPack 24 Service Pack 01 - not a catchy title, and it slipped out under my radar! It is a regular service pack for the current VIOS and includes some other features and fixes too. The first thing to note is that only a few features are available in this first release. Second, it is available for download from IBM Fix Central, or you can go to the VIOS home website and follow the links. It is 900 MB in size, so there is some serious function in there.

The basic idea behind this technology, which is now called Shared Storage Pools, is that VIOSes across machines can be clustered together and allocate disk blocks from large LUNs assigned to all of them, rather than having to do this at the SAN storage level. This uses the vSCSI interface rather than the pass-through NPIV method. It also reduces the SAN admin required for Live Partition Mobility - you make the LUNs available to all the VIOSes and they organise access from there on. It also makes cloning LPARs, disk snapshots and rapid provisioning possible. Plus thin provisioning, i.e. disk blocks are allocated only as and when required - thus saving lots of disk space. In this phase 1, the cluster is limited to one VIOS. Yes, a new definition of the word cluster :-)
I would class this as a Technology Preview with more function to follow. It will let us learn the concepts, set up very simple examples, see some of the benefits and get ready for the full-function release. To get you started:
Check the current limitations at https://www-304.ibm.com/support/docview.wss?rs=0&uid=isg400000373
Find the documentation at http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7hb1/iphb1config.htm
In this first release a VIOS has a repository disk for metadata plus the LUNs from which disk space is handed out to its client LPARs. Thin provisioning is the default. This is available on machines running this very latest VIOS release; I have not seen any limitations on the type of hardware except that the pool needs Fibre Channel LUNs.



Although only one VIOS is allowed in the current phase 1, you can have more than one VIOS running this at a time - they are just isolated single-node (VIOS) clusters.

In this release, the features are driven from the VIOS command line, or you can use smitty.

The simple cluster is built on Cluster Aware AIX technology (available since early this year on AIX, but now with extra options) using the VIOS cluster command, like:
cluster -create -clustername clusterA -repopvs hdisk8 -spname poolA -sppvs hdisk2 -hostname viosAhostname
Disks will be renamed (like hdisk8 becomes cldisk1) to give consistent names across ALL of the VIOSes in the cluster.
I think: -repopvs means repository physical volumes
References to "sp" are the storage pool and -sppvs reads storage pool physical volumes
I agree - not the prettiest syntax
Later we will be able to add other VIOSes by hostname (not in phase 1), extra disks and so on with the chcluster command.
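Purely as a sketch of where this is heading - I cannot verify these chcluster options because phase 1 refuses a second node, so treat the -add syntax below as my assumption, not gospel:

    chcluster -clustername clusterA -add -node viosBhostname    (add a second VIOS - not in phase 1)
    chcluster -clustername clusterA -add -sp -sppvs hdisk3      (grow the pool with another LUN)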




To allocate disk space for a client LPAR it uses the same mkbdsp command as for file-backed vSCSI creation (a feature we have had for years, where files on the VIOS can be used as the virtual SCSI disks for client LPARs), connecting to a particular LPAR via its virtual SCSI adapter. This can also be done in two separate commands, mkbdsp and mkvdev, if preferred.
Below, the size is 30 GB, the backing device is called vdisk100 and the client LPAR is reached via virtual adapter vhost0.
Example: mkbdsp -clustername clusterA -sp poolA 30G -bd vdisk100 -vadapter vhost0
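If you prefer the two-step approach mentioned above, I would expect the split to look roughly like this - create the backing device in the pool first, then attach it to the client's vhost adapter (my guess at the second command; I have only tested the one-liner):

    mkbdsp -clustername clusterA -sp poolA 30G -bd vdisk100
    mkbdsp -clustername clusterA -sp poolA -bd vdisk100 -vadapter vhost0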
The disk space is only assigned when the LPAR actually writes to the disk blocks, so this makes over-provisioning for all those small LPARs nice and simple - but do monitor the pool to make sure you don't actually run out of space, because at that point the LPARs will think they have faulty disks.
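To keep an eye on pool usage, something like the following should report total versus free space (lssp is the VIOS storage pool listing command; I am assuming the -clustername option works the same way here as for the other commands):

    lssp -clustername clusterA

Watch the free space - with thin provisioning the total of the client disk sizes can be far larger than the pool itself.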

I hope this has got you interested enough to give it a go or at least take a further look.
Thanks, Nigel Griffiths.
