Friday, March 22, 2013

Solaris: Fibre Channel - device LUN cleanup on Solaris



Procedure to assist in removing Fibre Channel devices (on Solaris 10)
Gathering the device information for failing devices and formulating the appropriate commands can be cumbersome.  We have an in-house, non-destructive tool to assist in the process: it only displays the commands that you should run.  To use the tool, simply run as root:    /install/veritas/vxvm/fc_show
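
If the tool is not available, a quick manual check with cfgadm (the same command used throughout the procedure below) lists the failing paths:

cfgadm -al -o show_FCP_dev | grep failing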

 

Prerequisite for Veritas:    remove the device from VxFS/VxVM first.

    If the devices are under Veritas control (see the sketch below):
  • first unmount the affected VxFS filesystems
  • then remove the associated disk from the VxVM volume manager: run "vxdisk rm" on each device
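
A minimal sketch of that Veritas cleanup, assuming a hypothetical mount point /apps/data01 and disk group appdg; the device name is borrowed from the cfgadm example further down and should be replaced with the affected LUN:

umount /apps/data01                    # unmount the VxFS filesystem on the affected LUN
vxdg -g appdg rmdisk appdg01           # take the disk out of its disk group, if still assigned
vxdisk rm c2t50060E8004274D20d1s2      # remove the device from VxVM control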

 

General procedure to remove SAN-attached storage devices

Description
When a storage device that presents multiple LUNs to Solaris[TM] through a Storage Area Network (SAN) has some of those LUNs removed or made unavailable, Solaris device entries will still exist for those LUNs, and Solaris may report "missing" or "failing" states for them. This document explains how to clean up the device entries and thereby remove the error condition caused by Solaris trying to access the unavailable LUNs. It applies to Solaris 8 and Solaris 9 using the Sun StorEdge[TM] SAN Foundation Kit (SFK), also known as the Leadville driver stack. It is specific to SAN-attached Fibre Channel storage and does not apply to direct-attached Fibre Channel storage.
Steps to Follow
The following commands will be presented:
- cfgadm -c configure [ap_id]
- cfgadm -al -o show_FCP_dev
- cfgadm -o unusable_FCP_dev -c unconfigure [ap_id]
- devfsadm -C
- ("luxadm -e offline " may also be needed)
The following output shows a system with 4 dual-pathed luns which are SAN-attached to a Solaris host:
cfgadm -al -o show_FCP_dev
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected configured unknown
c2::50060e8004274d20,0 disk connected configured unknown
c2::50060e8004274d20,1 disk connected configured unknown
c2::50060e8004274d20,2 disk connected configured unknown
c2::50060e8004274d20,3 disk connected configured unknown
c3 fc-fabric connected configured unknown
c3::50060e8004274d30,0 disk connected configured unknown
c3::50060e8004274d30,1 disk connected configured unknown
c3::50060e8004274d30,2 disk connected configured unknown
c3::50060e8004274d30,3 disk connected configured unknown
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
(output omitted for clarity)
4. c2t50060E8004274D20d0
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,0
5. c2t50060E8004274D20d1
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,1
6. c2t50060E8004274D20d2
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,2
7. c2t50060E8004274D20d3
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,3
8. c3t50060E8004274D30d0
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,0
9. c3t50060E8004274D30d1
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,1
10. c3t50060E8004274D30d2
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,2
11. c3t50060E8004274D30d3
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,3
In this example we will remove all of the odd-numbered LUNs using the native tools of the storage device. Here the storage is a Sun StorEdge[TM] 9990, so we used Storage Navigator to remove the LUN mappings from the host.
The following output shows the same system after the LUNs have been removed:
cfgadm -al -o show_FCP_dev
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected configured unknown
c2::50060e8004274d20,0 disk connected configured unknown
c2::50060e8004274d20,1 disk connected configured failing
c2::50060e8004274d20,2 disk connected configured unknown
c2::50060e8004274d20,3 disk connected configured failing
c3 fc-fabric connected configured unknown
c3::50060e8004274d30,0 disk connected configured unknown
c3::50060e8004274d30,1 disk connected configured failing
c3::50060e8004274d30,2 disk connected configured unknown
c3::50060e8004274d30,3 disk connected configured failing
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
(output omitted for clarity)
4. c2t50060E8004274D20d0
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,0
5. c2t50060E8004274D20d1
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,1
6. c2t50060E8004274D20d2
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,2
7. c2t50060E8004274D20d3
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,3
8. c3t50060E8004274D30d0
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,0
9. c3t50060E8004274D30d1
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,1
10. c3t50060E8004274D30d2
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,2
11. c3t50060E8004274D30d3
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,3
We can now see above that "cfgadm -al -o show_FCP_dev" reports the removed LUNs in the "failing" condition, while format still lists their stale device entries.
The first step in removing these devices is to change the state shown in the cfgadm output from "failing" to "unusable". This is done with the following command:
cfgadm -c configure c2 c3
cfgadm -al -o show_FCP_dev
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected configured unknown
c2::50060e8004274d20,0 disk connected configured unknown
c2::50060e8004274d20,1 disk connected configured unusable
c2::50060e8004274d20,2 disk connected configured unknown
c2::50060e8004274d20,3 disk connected configured unusable
c3 fc-fabric connected configured unknown
c3::50060e8004274d30,0 disk connected configured unknown
c3::50060e8004274d30,1 disk connected configured unusable
c3::50060e8004274d30,2 disk connected configured unknown
c3::50060e8004274d30,3 disk connected configured unusable
Possible extra step:
If devices remain in a "failing" state according to the above output from cfgadm, and they do not move to an "unusable" state after running "cfgadm -c configure" as shown above, then the following command can also be tried:
luxadm -e offline /dev/dsk/c3t50060E8004274D30d3s2
(i.e. "luxadm -e offline )
Then re-run the previous cfgadm command (cfgadm -al -o show_FCP_dev) to check that the LUN state has changed from "failing" to "unusable". This luxadm command should then be repeated for each LUN which was previously shown in the "failing" state by cfgadm. Then carry on with the process below.
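For example, a small loop over the four LUNs shown as "failing" in this example (adjust the list to match your own cfgadm output):

for lun in c2t50060E8004274D20d1 c2t50060E8004274D20d3 \
           c3t50060E8004274D30d1 c3t50060E8004274D30d3
do
    luxadm -e offline /dev/dsk/${lun}s2
done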
--oOo--
Now that the state of the inaccessible LUNs has been changed to "unusable" in the output from cfgadm, we can remove those entries from the list with the following commands:
cfgadm -o unusable_FCP_dev -c unconfigure c2::50060e8004274d20
cfgadm -o unusable_FCP_dev -c unconfigure c3::50060e8004274d30

- If you try to remove a device and get an error like the one below (here the LUN is still in use by VxVM), re-run the unconfigure with the -f (force) option:
# cfgadm -o unusable_FCP_dev -c unconfigure c3::50060e8004274d30
cfgadm: Library error: failed to offline: /devices/scsi_vhci/ssd@g600015d00005cc00000000000000f166
                    Resource                             Information    
------------------------------------------------  -------------------------
/dev/dsk/c6t600015D00005CC00000000000000F166d0s2  Device being used by VxVM
cfgadm -f -o unusable_FCP_dev -c unconfigure c3::50060e8004274d30

cfgadm -al -o show_FCP_dev
Ap_Id Type Receptacle Occupant Condition
c2 fc-fabric connected configured unknown
c2::50060e8004274d20,0 disk connected configured unknown
c2::50060e8004274d20,2 disk connected configured unknown
c3 fc-fabric connected configured unknown
c3::50060e8004274d30,0 disk connected configured unknown
c3::50060e8004274d30,2 disk connected configured unknown
format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
(output omitted for clarity)
4. c2t50060E8004274D20d0
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,0
5. c2t50060E8004274D20d2
/pci@23c,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e8004274d20,2
6. c3t50060E8004274D30d0
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,0
7. c3t50060E8004274D30d2
/pci@23c,600000/SUNW,qlc@1,1/fp@0,0/ssd@w50060e8004274d30,2
Now we see that the LUNs are no longer displayed in the format listing.
Even though the output of the format command looks good, there are still entries for the removed devices in /dev/dsk and /dev/rdsk. These can be removed, if desired, by using the devfsadm command.
ls /dev/dsk/c2t50060E8004274D20d*
/dev/dsk/c2t50060E8004274D20d0s0 /dev/dsk/c2t50060E8004274D20d2s4
/dev/dsk/c2t50060E8004274D20d0s1 /dev/dsk/c2t50060E8004274D20d2s5
/dev/dsk/c2t50060E8004274D20d0s2 /dev/dsk/c2t50060E8004274D20d2s6
/dev/dsk/c2t50060E8004274D20d0s3 /dev/dsk/c2t50060E8004274D20d2s7
/dev/dsk/c2t50060E8004274D20d0s4 /dev/dsk/c2t50060E8004274D20d3s0
/dev/dsk/c2t50060E8004274D20d0s5 /dev/dsk/c2t50060E8004274D20d3s1
/dev/dsk/c2t50060E8004274D20d0s6 /dev/dsk/c2t50060E8004274D20d3s2
/dev/dsk/c2t50060E8004274D20d0s7 /dev/dsk/c2t50060E8004274D20d3s3
/dev/dsk/c2t50060E8004274D20d1s0 /dev/dsk/c2t50060E8004274D20d3s4
/dev/dsk/c2t50060E8004274D20d1s1 /dev/dsk/c2t50060E8004274D20d3s5
/dev/dsk/c2t50060E8004274D20d1s2 /dev/dsk/c2t50060E8004274D20d3s6
/dev/dsk/c2t50060E8004274D20d1s3 /dev/dsk/c2t50060E8004274D20d3s7
/dev/dsk/c2t50060E8004274D20d1s4 /dev/dsk/c2t50060E8004274D20d4s0
/dev/dsk/c2t50060E8004274D20d1s5 /dev/dsk/c2t50060E8004274D20d4s1
/dev/dsk/c2t50060E8004274D20d1s6 /dev/dsk/c2t50060E8004274D20d4s2
/dev/dsk/c2t50060E8004274D20d1s7 /dev/dsk/c2t50060E8004274D20d4s3
/dev/dsk/c2t50060E8004274D20d2s0 /dev/dsk/c2t50060E8004274D20d4s4
/dev/dsk/c2t50060E8004274D20d2s1 /dev/dsk/c2t50060E8004274D20d4s5
/dev/dsk/c2t50060E8004274D20d2s2 /dev/dsk/c2t50060E8004274D20d4s6
/dev/dsk/c2t50060E8004274D20d2s3 /dev/dsk/c2t50060E8004274D20d4s7
devfsadm -C
ls /dev/dsk/c2t50060E8004274D20d*
/dev/dsk/c2t50060E8004274D20d0s0 /dev/dsk/c2t50060E8004274D20d2s0
/dev/dsk/c2t50060E8004274D20d0s1 /dev/dsk/c2t50060E8004274D20d2s1
/dev/dsk/c2t50060E8004274D20d0s2 /dev/dsk/c2t50060E8004274D20d2s2
/dev/dsk/c2t50060E8004274D20d0s3 /dev/dsk/c2t50060E8004274D20d2s3
/dev/dsk/c2t50060E8004274D20d0s4 /dev/dsk/c2t50060E8004274D20d2s4
/dev/dsk/c2t50060E8004274D20d0s5 /dev/dsk/c2t50060E8004274D20d2s5
/dev/dsk/c2t50060E8004274D20d0s6 /dev/dsk/c2t50060E8004274D20d2s6
/dev/dsk/c2t50060E8004274D20d0s7 /dev/dsk/c2t50060E8004274D20d2s7



source: http://xteams.oit.ncsu.edu/iso/lun_removal

Thursday, March 21, 2013

Linux/Solaris: set a password to never expire


Solaris 10
server# passwd -s dmadmin
dmadmin   PS    01/16/13     7    84    28

server# passwd -x -1 dmadmin
passwd: password information changed for dmadmin

server# passwd -s dmadmin
dmadmin   PS

server# grep dmadmin /etc/shadow
dmadmin:aDHLqWdyzszdc:15721::::::
server#

Linux


server:~ # chage -l dmadmin
Minimum:        0
Maximum:        90
Warning:        7
Inactive:       180
Last Change:            Jan 17, 2013
Password Expires:       Apr 17, 2013
Password Inactive:      Oct 14, 2013
Account Expires:        Never

server:~ # chage -m 0 -M 99999 -I -1 -E -1 dmadmin
Aging information changed.

server:~ # chage -l dmadmin
Minimum:        0
Maximum:        99999
Warning:        7
Inactive:       -1
Last Change:            Jan 17, 2013
Password Expires:       Never
Password Inactive:      Never
Account Expires:        Never
server:~ #
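
As on Solaris, the /etc/shadow entry reflects the change: after the chage command above, the maximum age is 99999 and the inactive and expire fields are empty (the hash below is only a placeholder):

server:~ # grep dmadmin /etc/shadow
dmadmin:$1$xxxxxxxxxxxx:15722:0:99999:7:::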

Sunday, March 17, 2013

Solaris 10: zfs and nfs shares

By default, the root user on a client machine has restricted access to an NFS-mounted share (its requests are mapped to an unprivileged user, a behaviour commonly known as root squashing).




Here's how to grant local root users full access to NFS mounts:



zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 space



This gives full access for root users on any machine in the 192.168.1.0/24 subnet to the zfs dataset "space".
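
To confirm the property took effect, check it with zfs get (standard NAME/PROPERTY/VALUE/SOURCE columns):

zfs get sharenfs space
NAME   PROPERTY  VALUE                                    SOURCE
space  sharenfs  rw=@192.168.1.0/24,root=@192.168.1.0/24  local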





serverB# zfs list

NAME USED AVAIL REFER MOUNTPOINT

app 284G 157G 18K none

app/iwstoreAPAC 51.8G 48.2G 51.8G /opt/app/data/iw-store/APAC

app/iwstoreAmericas 27.6G 22.4G 27.6G /opt/app/data/iw-store/Americas

app/iwstoreEMEA 192G 57.6G 192G /opt/app/data/iw-store/EMEA

app/optapp 12.3G 87.7G 12.3G /opt/app



For a whole subnet:

zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreAPAC

zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreAmericas

zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreEMEA





Or for specific IPs:

zfs set sharenfs=rw=153.88.177.59,root=153.88.177.59 app/iwstoreAPAC

zfs set sharenfs=rw=153.88.177.59,root=153.88.177.59 app/iwstoreAmericas

zfs set sharenfs=rw=153.88.177.59,root=153.88.177.59 app/iwstoreEMEA





On serverA

mkdir /serverB_APAC

mkdir /serverB_Americas

mkdir /serverB_EMEA





serverA# dfshares serverB

RESOURCE SERVER ACCESS TRANSPORT

serverB:/opt/app/data/iw-store/Americas serverB - -

serverB:/iwserver serverB - -

serverB:/opt/app/data/iw-store/EMEA serverB - -

serverB:/opt/app/data/iw-store/APAC serverB - -

serverA# mount serverB:/opt/app/data/iw-store/Americas /serverB_Americas

serverA# mount serverB:/opt/app/data/iw-store/EMEA /serverB_EMEA

serverA# df -k
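
A quick way to confirm that root is no longer squashed on the client is to create a file as root and check its ownership; the output below is illustrative, and with the default root mapping the file would instead be owned by nobody (or the write would be denied):

serverA# touch /serverB_Americas/root_write_test
serverA# ls -l /serverB_Americas/root_write_test
-rw-r--r--   1 root     root           0 Mar 17 12:00 /serverB_Americas/root_write_test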

Thursday, February 28, 2013

Solaris: ZFS how to mirror zfs slice not whole disk


ZFS: how to mirror a ZFS slice rather than the whole disk

new disks:
c4t60060E80056F110000006F110000804Bd0
c4t60060E80056F110000006F1100006612d0  

serverA# zpool status phtoolvca1
  pool: phtoolvca1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: none requested
config:

        NAME                                                 STATE     READ WRITE CKSUM
        phtoolvca1                                           ONLINE       0     0     0
          /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0  ONLINE       0     0     0
          c4t60060E80056F110000006F110000614Ad0              ONLINE       0     0     0

errors: No known data errors
serverA# zpool status | grep c4t60060E80056F110000006F110000804Bd0
serverA# zpool attach -f phtoolvca1 /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0 c4t60060E80056F110000006F110000804Bd0
cannot attach c4t60060E80056F110000006F110000804Bd0 to /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0: no such device in pool
serverA# zpool attach -f phtoolvca1 c4t60060E80056F110000006F11000081A6d0s0 c4t60060E80056F110000006F110000804Bd0
cannot attach c4t60060E80056F110000006F110000804Bd0 to c4t60060E80056F110000006F11000081A6d0s0: no such device in pool
serverA# man zppol
serverA#
serverA# prtvtoc /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s2 | fmthard -s - /dev/rdsk/c4t60060E80056F110000006F110000804Bd0s2
fmthard:  New volume table of contents now in place.
serverA#
serverA# zdb -C phtoolvca1

MOS Configuration:
        version: 10
        name: 'phtoolvca1'
        state: 0
        txg: 1516759
        pool_guid: 17149951849739077007
        hostid: 2238627050
        hostname: 'serverA'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 17149951849739077007
            children[0]:
                type: 'disk'
                id: 0
                guid: 8225298048714506169
                path: '/dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0'
                devid: 'id1,ssd@n60060e80056f110000006f11000081a6/a,raw'
                phys_path: '/scsi_vhci/ssd@g60060e80056f110000006f11000081a6:a,raw'
                whole_disk: 1
                metaslab_array: 14
                metaslab_shift: 28
                ashift: 9
                asize: 53674246144
                is_log: 0
                DTL: 104
            children[1]:
                type: 'disk'
                id: 1
                guid: 6410631449240489329
                path: '/dev/dsk/c4t60060E80056F110000006F110000614Ad0s0'
                devid: 'id1,ssd@n60060e80056f110000006f110000614a/a'
                phys_path: '/scsi_vhci/ssd@g60060e80056f110000006f110000614a:a'
                whole_disk: 1
                metaslab_array: 61
                metaslab_shift: 27
                ashift: 9
                asize: 16092758016
                is_log: 0
                DTL: 103
serverA#
serverA# zpool status phtoolvca1
  pool: phtoolvca1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: none requested
config:

        NAME                                                 STATE     READ WRITE CKSUM
        phtoolvca1                                           ONLINE       0     0     0
          /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0  ONLINE       0     0     0
          c4t60060E80056F110000006F110000614Ad0              ONLINE       0     0     0

errors: No known data errors
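The two attach attempts above fail with "no such device in pool" because zpool cannot match the existing slice vdev by its c4t...d0s0 name (it is recorded in the pool config by its raw /dev/rdsk path). The workaround used below is to address the existing vdev by its GUID instead, which the zdb -C output above already lists:
serverA# zdb -C phtoolvca1 | grep guid
        pool_guid: 17149951849739077007
            guid: 17149951849739077007
                guid: 8225298048714506169
                guid: 6410631449240489329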
serverA# zpool attach -f phtoolvca1 8225298048714506169 c4t60060E80056F110000006F110000804Bd0s0
serverA# zpool status phtoolvca1
  pool: phtoolvca1
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Sat Feb 16 05:34:08 2013
    1.56G scanned out of 10.7G at 133M/s, 0h1m to go
    1.55G resilvered, 14.59% done
config:

        NAME                                                   STATE     READ WRITE CKSUM
        phtoolvca1                                             ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0  ONLINE       0     0     0
            c4t60060E80056F110000006F110000804Bd0s0            ONLINE       0     0     0  (resilvering)
          c4t60060E80056F110000006F110000614Ad0                ONLINE       0     0     0
serverA# zpool status | grep c4t60060E80056F110000006F1100006612d0
serverA# zpool attach -f phtoolvca1 c4t60060E80056F110000006F110000614Ad0 c4t60060E80056F110000006F1100006612d0
serverA# zpool status phtoolvca1
  pool: phtoolvca1
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Sat Feb 16 05:36:48 2013
    5.71G scanned out of 10.7G at 487M/s, 0h0m to go
    1.20G resilvered, 53.37% done
config:

        NAME                                                   STATE     READ WRITE CKSUM
        phtoolvca1                                             ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0  ONLINE       0     0     0
            c4t60060E80056F110000006F110000804Bd0s0            ONLINE       0     0     0
          mirror-1                                             ONLINE       0     0     0
            c4t60060E80056F110000006F110000614Ad0              ONLINE       0     0     0
            c4t60060E80056F110000006F1100006612d0              ONLINE       0     0     0  (resilvering)

errors: No known data errors
serverA# zpool status phtoolvca1
  pool: phtoolvca1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: resilvered 2.71G in 0h0m with 0 errors on Sat Feb 16 05:37:21 2013
config:

        NAME                                                   STATE     READ WRITE CKSUM
        phtoolvca1                                             ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            /dev/rdsk/c4t60060E80056F110000006F11000081A6d0s0  ONLINE       0     0     0
            c4t60060E80056F110000006F110000804Bd0s0            ONLINE       0     0     0
          mirror-1                                             ONLINE       0     0     0
            c4t60060E80056F110000006F110000614Ad0              ONLINE       0     0     0
            c4t60060E80056F110000006F1100006612d0              ONLINE       0     0     0

Wednesday, January 23, 2013

Solaris: copy new disk label


Duplicate the label's content from the boot disk to the mirror disk:

root# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

root# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
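
To double-check the copy, compare the two slice tables while ignoring the prtvtoc comment header (lines starting with "*", which contain the device name) and the mount-directory column, since those legitimately differ between the disks:

root# prtvtoc /dev/rdsk/c0t0d0s2 | grep -v '^\*' | awk '{print $1,$2,$3,$4,$5,$6}' > /tmp/boot.vtoc
root# prtvtoc /dev/rdsk/c1t0d0s2 | grep -v '^\*' | awk '{print $1,$2,$3,$4,$5,$6}' > /tmp/mirror.vtoc
root# diff /tmp/boot.vtoc /tmp/mirror.vtoc && echo "slice tables match"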

Wednesday, January 9, 2013

Solaris: move or add a new SDS device to a server


    Look at the source server's /etc/lvm/md.tab file for the metadevice configuration entry and add it (edited accordingly) to the new server's md.tab, then run "metainit d1000", where d1000 is the metadevice (see the example below).


    old server

    d457 1 1 /dev/dsk/c3t60060E8015320C000001320C00006092d0s0

    new server ( with new metadevice name )

    d1000 1 1 /dev/dsk/c3t60060E8015320C000001320C00006092d0s0
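
    Then initialize the new metadevice on the new server. A minimal sketch of what that looks like (the "newserver#" prompt is illustrative; the confirmation message may vary with the metadevice type):

    newserver# metainit d1000
    d1000: Concat/Stripe is setup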

    Reference

    md.tab File Options

      The following md.tab file options are supported:
      metadevice-name
      When the metainit command is run with a metadevice-name as its only argument, it searches the /etc/lvm/md.tab file to find that name and its corresponding entry. The order in which entries appear in the md.tab file is unimportant. For example, consider the following md.tab entry:

      d0 2 1 c1t0d0s0 1 c2t1d0s0
      When you run the command metainit d0, it configures metadevice d0 based on the configuration information found in the md.tab file.
      -a
      Activates all metadevices defined in the md.tab file.
      metainit does not maintain the state of the volumes that would have been created when metainit is run with both the -a and -n flags. If a device d0 is created in the first line of the md.tab file, and a later line in md.tab assumes the existence of d0, the later line fails when metainit -an runs (even if it would succeed with metainit -a).
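
      For example, a dry run followed by the real activation of everything in md.tab (as described above, -n only checks what would be done without creating anything):

      metainit -a -n       # check the md.tab entries without setting up the metadevices
      metainit -a          # create all metadevices defined in md.tab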

UNIX: How to print columns nicely using printf

[user@hostfwnms1-oam tmp]# cat b.sh
printf "%-26s %-19s %-8s %-8s %-s %-s\n" HOSTNAME IP PING SNMPWALK 0-ok 1-fail
for i in `cat n...