GPFS - Quick install steps


Step 1: Verify Environment


  1. Verify nodes properly installed
    1. Check that the oslevel is supported
      On the system run oslevel
      Check the GPFS FAQ: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp
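      For example, oslevel -s shows the full service pack level (the output below is illustrative; your level will differ):
      # oslevel -s
      6100-04-03-1009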
    2. Is the installed OS level supported by GPFS? Yes / No
    3. Is there a specific GPFS patch level required for the installed OS? Yes / No
    4. If so what patch level is required? ___________
  2. Verify nodes configured properly on the network(s)
    1. Write the name of Node1: ____________
    2. Write the name of Node2: ____________
    3. From node 1 ping node 2
    4. From node 2 ping node 1
      If the pings fail, resolve the issue before continuing.
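      For example, using the node names you wrote above (AIX ping accepts -c to limit the packet count):
      node1# ping -c 2 node2
      node2# ping -c 2 node1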
  3. Verify node-to-node ssh communications (For this lab you will use ssh and scp for communications)
    1. On each node create an ssh key. Use the ssh-keygen command; if you do not specify a blank passphrase with -N, press Enter at each passphrase prompt to create a key with no passphrase. The result should look something like this:
      # ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
      Generating public/private rsa key pair.
      Created directory '/.ssh'.
      Your identification has been saved in /.ssh/id_rsa.
      Your public key has been saved in /.ssh/id_rsa.pub.
      The key fingerprint is:
      7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61 sas@perf3-c2-aix

    2. On node1 copy the /.ssh/id_rsa.pub file to /.ssh/authorized_keys
      cp /.ssh/id_rsa.pub /.ssh/authorized_keys
    3. From node1 copy the /.ssh/id_rsa.pub file from node2 to /tmp/id_rsa.pub
      scp node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub
    4. Add the public key from node2 to the authorized_keys file on node1
      cat /tmp/id_rsa.pub >> /.ssh/authorized_keys
    5. Copy the authorized key file from node1 to node2
      scp /.ssh/authorized_keys node2:/.ssh/authorized_keys
    6. To test your ssh configuration, ssh as root from each node to itself and to the other node until you are no longer prompted for a password or for an addition to the known_hosts file.
      node1# ssh node1 date
      node1# ssh node2 date
      node2# ssh node1 date
      node2# ssh node2 date
    7. Suppress ssh banners by creating a .hushlogin file in the root home directory
      touch /.hushlogin
  4. Verify the disks are available to the system
    For this lab you should have 4 disks available for use, hdiskn-hdiskt.
    1. Use lspv to verify the disks exist
    2. Ensure you see 4 disks besides hdisk0.
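    A sketch of what lspv might show (PVIDs and volume group assignments will vary):
    # lspv
    hdisk0  00c4790ae8f2a5b2  rootvg  active
    hdisk1  none              None
    hdisk2  none              None
    hdisk3  none              None
    hdisk4  none              None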

Step 2: Install the GPFS software




On node1


  1. Locate the GPFS software in /yourdir/software/base/
    cd /yourdir/software/base/
  2. Run the inutoc command to create the table of contents
    inutoc .
  3. Install the base GPFS code using the installp command
    installp -aXY -d /yourdir/software/base gpfs -f all
  4. Locate the latest GPFS patch level in /yourdir/software/PTF/
    cd /yourdir/software/PTF/
  5. Run the inutoc command to create the table of contents
    inutoc .
  6. Install the PTF GPFS code using the installp command
    installp -aXY -d /yourdir/software/PTF gpfs -f all
  7. Repeat Steps 1-6 on node2
  8. On node1 and node2 confirm GPFS is installed using lslpp
    lslpp -L gpfs.\*

    The output should look similar to this:

    # lslpp -L gpfs.\*
    Fileset         Level    State  Type  Description (Uninstaller)
    ----------------------------------------------------------------------------
    gpfs.base       3.3.0.3  A      F     GPFS File Manager
    gpfs.docs.data  3.3.0.3  A      F     GPFS Server Manpages and Documentation
    gpfs.gui        3.3.0.3  C      F     GPFS GUI
    gpfs.msg.en_US  3.3.0.1  A      F     GPFS Server Messages U.S. English

    Note: Exact GPFS versions may vary from this example; the important part is that all of the filesets are present.

  9. Confirm the GPFS binaries are in your path using the mmlscluster command
    # mmlscluster

    mmlscluster: 6027-1382 This node does not belong to a GPFS cluster.
    mmlscluster: 6027-1639 Command failed. Examine previous error messages to determine cause.

    Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin
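    If the command is not found, you can append the GPFS directory to your PATH for the current session (ksh/sh syntax; to make it permanent, add it to /etc/environment or a profile):
    export PATH=$PATH:/usr/lpp/mmfs/bin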


Step 3: Create the GPFS cluster


For this exercise the cluster is initially created with a single node. When creating the cluster, make node1 the primary configuration server and give node1 the designations quorum and manager. Use ssh and scp as the remote shell and remote file copy commands.
  • Primary Configuration server (node1): ________
  • Verify the fully qualified path to ssh and scp:
    ssh path: ____________
    scp path: ____________

  1. Use the mmcrcluster command to create the cluster
    mmcrcluster -N node1:manager-quorum -p node1 -r /usr/bin/ssh -R /usr/bin/scp
  2. Run the mmlscluster command again to see that the cluster was created
    # mmlscluster

    GPFS cluster information
    ========================

    GPFS cluster name: node1.ibm.com
    GPFS cluster id: 13882390374179224464
    GPFS UID domain: node1.ibm.com
    Remote shell command: /usr/bin/ssh
    Remote file copy command: /usr/bin/scp

    GPFS cluster configuration servers:
    -----------------------------------

    Primary server: node1.ibm.com
    Secondary server: (none)

     Node  Daemon node name          IP address  Admin node name  Designation
    -----------------------------------------------------------------------------
       1   perf3-c2-aix.bvnssg.net   10.0.0.1    node1.ibm.com    quorum-manager


  3. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
    mmchlicense server --accept -N node1
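    To check the designation afterwards you can use the mmlslicense command (available since GPFS 3.3; the output below is a sketch):
    # mmlslicense -L
     Node name      Required license  Designated license
    ----------------------------------------------------
    node1           server            server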

Step 4: Start GPFS and verify the status of all nodes


  1. Start GPFS on all the nodes in the GPFS cluster using the mmstartup command
    mmstartup -a
  2. Check the status of the cluster using the mmgetstate command
    # mmgetstate -a

     Node number  Node name  GPFS state
    ------------------------------------------
          1       node1      active



Step 5: Add the second node to the cluster


  1. On node1 use the mmaddnode command to add node2 to the cluster
    # mmaddnode -N node2
  2. Confirm the node was added to the cluster using the mmlscluster command
    # mmlscluster
  3. Use the mmchcluster command to set node2 as the secondary configuration server
    # mmchcluster -s node2
  4. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
    mmchlicense server --accept -N node2
  5. Start node2 using the mmstartup command
    # mmstartup -N node2
  6. Use the mmgetstate command to verify that both nodes are in the active state
    # mmgetstate -a
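    With both nodes active, the output should look similar to this:

     Node number  Node name  GPFS state
    ------------------------------------------
          1       node1      active
          2       node2      active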


Step 6: Collect information about the cluster


Now we will take a moment to check a few things about the cluster. Examine the cluster configuration using the mmlscluster command.

  1. What is the cluster name? ______________________
  2. What is the IP address of node2? _____________________
  3. What date was this version of GPFS "Built"? ________________
    Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
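    For example, you can search the log for the build information directly (the exact message format varies by release):
    # grep Built /var/adm/ras/mmfs.log.latest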

Step 7: Create NSDs


You will use the 4 hdisks.

  • Make sure they can all hold data and metadata.
  • Leave the storage pool column blank.
  • Leave the Primary and Backup server fields blank.
  • Sample input files are in /yourdir/samples
  1. On node 1 create directory /yourdir/data
  2. Create a disk descriptor file /yourdir/data/diskdesc.txt using the format:
    #DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
    hdiskw:::dataAndMetadata::nsd1:
    hdiskx:::dataAndMetadata::nsd2:
    hdisky:::dataAndMetadata::nsd3:
    hdiskz:::dataAndMetadata::nsd4:

    Note: hdisk numbers will vary per system.

  3. Create a backup copy of the disk descriptor file /yourdir/data/diskdesc_bak.txt (mmcrnsd rewrites the input file, so keep an unmodified copy)
    cp /yourdir/data/diskdesc.txt /yourdir/data/diskdesc_bak.txt
  4. Create the NSDs using the mmcrnsd command
    mmcrnsd -F /yourdir/data/diskdesc.txt
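    After mmcrnsd completes, diskdesc.txt is rewritten so it can be passed to mmcrfs later. Assuming the behavior of GPFS 3.x, each original line is left as a comment and replaced by a line naming the NSD, something like:
    # hdiskw:::dataAndMetadata::nsd1:
    nsd1:::dataAndMetadata:-1::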



Step 8: Collect information about the NSDs


Now collect some information about the NSDs you have created.

  1. Examine the NSD configuration using the mmlsnsd command
    1. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?) associated with an NSD? _______
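    Running mmlsnsd with no flags lists each NSD and the file system it belongs to; before a file system is created, the output should look something like this sketch:
    # mmlsnsd
     File system   Disk name    NSD servers
    ---------------------------------------------
     (free disk)   nsd1         (directly attached)
     (free disk)   nsd2         (directly attached)
     (free disk)   nsd3         (directly attached)
     (free disk)   nsd4         (directly attached)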

Step 9: Create a file system


Now that there is a GPFS cluster and some NSDs available, you can create a file system.

  • Set the file system block size to 64 KB
  • Mount the file system at /gpfs
  1. Create the file system using the mmcrfs command
    mmcrfs /gpfs fs1 -F /yourdir/data/diskdesc.txt -B 64k
  2. Verify the file system was created correctly using the mmlsfs command
    mmlsfs fs1

    Is the file system automatically mounted when GPFS starts? _______________
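    Hint: mmlsfs can query a single attribute; the automatic mount setting is reported by the -A flag (the value shown here is only an example):
    # mmlsfs fs1 -A
    flag value          description
    ---- -------------- -----------------------------------
     -A  yes            Automatic mount option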

  3. Mount the file system using the mmmount command
    mmmount all -a
  4. Verify the file system is mounted using the df command
    # df -k
    Filesystem    1024-blocks      Free %Used  Iused %Iused Mounted on
    /dev/hd4            65536      6508   91%   3375    64% /
    /dev/hd2          1769472    465416   74%  35508    24% /usr
    /dev/hd9var        131072     75660   43%    620     4% /var
    /dev/hd3           196608    192864    2%     37     1% /tmp
    /dev/hd1            65536     65144    1%     13     1% /home
    /proc                   -         -     -      -     -  /proc
    /dev/hd10opt       327680     47572   86%   7766    41% /opt
    /dev/fs1        398929107 398929000    1%      1     1% /gpfs
  5. Use the mmdf command to get information on the file system.
    mmdf fs1

    How many inodes are currently used in the file system? ______________
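    The inode counts appear near the end of the mmdf output, in a section similar to this sketch (numbers are illustrative):

    Inode Information
    -----------------
    Number of used inodes:            4038
    Number of free inodes:           95290
    Number of allocated inodes:      99328
    Maximum number of inodes:        99328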
