Wednesday, November 23, 2011

Solaris: calculate filesystem size difference during a transfer

#! /bin/sh
# measure how much data is transferred in one minute by comparing used-space figures
value1=`df -k /app/iwstoreEMEA_snap | grep snap | awk '{print $3}'`
value2=`df -k /server_EMEA | grep _EMEA | awk '{print $3}'`
exprans=`expr $value1 - $value2`
echo "$exprans KB left"
sleep 60
value1=`df -k /app/iwstoreEMEA_snap | grep snap | awk '{print $3}'`
value2=`df -k /server_EMEA | grep _EMEA | awk '{print $3}'`
exprans1=`expr $value1 - $value2`
transfer=`expr $exprans - $exprans1`
echo "$transfer KB transferred"



$ sh a.sh
42318487 KB left
14891 KB transferred



Solaris: ssh tunnel and nc file transfer

You pipe the file to a listening socket on the server machine in the same way as before. It is assumed that an SSH server runs on this machine too.

$ cat backup.iso | nc -l 3333
On the client machine connect to the listening socket through an SSH tunnel:

$ ssh -f -L 23333:127.0.0.1:3333 me@192.168.0.1 sleep 10; \
        nc 127.0.0.1 23333 | pv -b > backup.iso

This way of creating and using the SSH tunnel has the advantage that the tunnel is automagically closed after file transfer finishes. For more information and explanation about it please read my article about auto-closing SSH tunnels.

Solaris: ssh and tar

How to transfer directories into remote server using tar and ssh combined.
$ tar cf - mydir/ | ssh gate 'cd /tmp && tar xpvf -'
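The same combination works in the other direction; a hedged sketch for pulling a remote directory down to the local machine, reusing the hypothetical host "gate" and path from the example above:

```shell
# pack the directory on the remote side, unpack it locally
ssh gate 'cd /tmp && tar cf - mydir/' | tar xpvf -
```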

Monday, November 21, 2011

Sendmail: how to masquerade from oracle@host.domain.com to oracle@domain.com

root@wbitdb1:/etc/mail # diff sendmail.cf sendmail.cf.21012008
986,987c986
< #R$* < @ *LOCAL* > $* $: $1 < @ $j . > $2
< R$* < @ *LOCAL* > $*  $: $1 < @ $M . > $2
---
> R$* < @ *LOCAL* > $*  $: $1 < @ $j . > $2
root@wbitdb1:/etc/mail #


add the line shown below
--------------
###################################################################
###  Ruleset 94 -- convert envelope names to masqueraded form   ###
###################################################################

SMasqEnv=94
#R$* < @ *LOCAL* > $*   $: $1 < @ $j . > $2 <--------------- this line commented
R$* < @ *LOCAL* > $*    $: $1 < @ $M . > $2 <--------------- this line added


----------
Testing masquerading
sendmail's address test mode makes it easy to test masquerading.
====================================================

# sendmail -bt
/tryflags HS (to test the header sender address; other tryflags values would be ES, HR, and ER, for envelope sender, header recipient, and envelope recipient, respectively)
/try esmtp email_address_to_test

Example:
sendmail -bt
> /tryflags ES
> /try esmtp user@host.domain.com
Trying envelope sender address user@host.domain.com for mailer esmtp

(many lines omitted)

final            returns: user @ domain . com
Rcode = 0, addr = user@domain.com

Saturday, November 19, 2011

Solaris: Other tunnelling tricks

Connect to port 42 of host.example.com via an HTTP proxy at 10.2.3.4, port 8080. 

This example could also be used by ssh(1); see the ProxyCommand directive in ssh_config(5) for more information.

$ nc -x10.2.3.4:8080 -Xconnect host.example.com 42
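As the note above says, the same proxy hop can be wired into ssh itself via ProxyCommand. A sketch of the corresponding ssh_config entry, assuming the same proxy address and target host:

```shell
# ~/.ssh/config -- route ssh to this host through the HTTP proxy (sketch)
Host host.example.com
    ProxyCommand nc -x10.2.3.4:8080 -Xconnect %h %p
```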

Solaris: simple port scanning

PORT SCANNING

It may be useful to know which ports are open and running services on a target machine. The -z flag can be used to tell nc to report open ports, rather than initiate a connection. For example:

$ nc -z host.example.com 20-30
Connection to host.example.com 22 port [tcp/ssh] succeeded!
Connection to host.example.com 25 port [tcp/smtp] succeeded!

The port range was specified to limit the search to ports 20 - 30.

Alternatively, it might be useful to know which server software is running, and which versions. This information is often contained within the greeting banners. In order to retrieve these, it is necessary to first make a connection, and then break the connection when the banner has been retrieved. This can be accomplished by specifying a small timeout with the -w flag, or perhaps by issuing a "QUIT" command to the server:

$ echo "QUIT" | nc host.example.com 20-30
SSH-1.99-OpenSSH_3.6.1p2
Protocol mismatch.
220 host.example.com IMS SMTP Receiver Version 0.84 Ready
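The -w timeout mentioned above can also be used on a single port to grab one banner and then give up; a sketch:

```shell
# connect to the ssh port only, give up after 2 seconds of idle time
echo "QUIT" | nc -w 2 host.example.com 22
```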

Solaris: using nc to transfer file

DATA TRANSFER

The example in the previous section can be expanded to build a basic data transfer model. Any information input into one end of the connection will be output to the other end, and input and output can be easily captured in order to emulate file transfer.
 
     Start by using nc to listen on a specific port, with output captured into
     a file:

           $ nc -l 1234 > filename.out

     Using a second machine, connect to the listening nc process, feeding it
     the file which is to be transferred:

           $ nc host.example.com 1234 < filename.in

     After the file has been transferred, the connection will close
     automatically.
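nc performs no integrity check of its own; one hedged way to confirm the copy, using the hypothetical filenames above, is to compare checksums on each side:

```shell
# on the sending machine
cksum filename.in
# on the receiving machine -- the checksum and byte count should match the sender's
cksum filename.out
```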

Solaris: using nc and tar to transfer file between hosts

If you can't find a way for the controllers to talk to each other (as others have mentioned), you can try doing this:
On your destination server, run the following command:
destination-server# nc -l 9999 | tar xvzf -
Then, on your source server, run the following command:
source-server# tar cvzf - /path/to/dir | nc destination-server-ip 9999
The advantage to this is it avoids any encryption overhead that SSH/rsync gives, so you'll get a bit of a speed boost. This also compresses and decompresses on the source and destination servers in-stream, so it speeds up the transfer process at the expense of some CPU cycles.
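To watch the transfer as it runs, pv can be spliced into the receiving pipeline (assuming pv is installed; the port and paths are the hypothetical ones above):

```shell
# destination: count bytes with pv as they arrive, then decompress and unpack
nc -l 9999 | pv -b | tar xvzf -
# source: pack, compress, and send as before
tar cvzf - /path/to/dir | nc destination-server-ip 9999
```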

CLIENT/SERVER MODEL

It is quite simple to build a very basic client/server model using nc.
     On one console, start nc listening on a specific port for a connection.
     For example:

           $ nc -l 1234

     nc is now listening on port 1234 for a connection.  On a second console
     (or a second machine), connect to the machine and port being listened on:

           $ nc 127.0.0.1 1234

     There should now be a connection between the ports.  Anything typed at
     the second console will be concatenated to the first, and vice-versa.
     After the connection has been set up, nc does not really care which side
     is being used as a `server' and which side is being used as a `client'.
     The connection may be terminated using an EOF (`^D').

   

Solaris: rsync in parallel


This does increase the amount of CPU and I/O that both your sending and receiving side use, but I’ve been able to run ~25 parallel instances without remotely degrading the rest of the system or slowing down the other RSYNC instances.


The key is to use the --include and --exclude command line switches to create selection criteria.


Example


drwxr-xr-x   2 root     root         179 Jul 19 16:22 directory_a
drwxr-xr-x   2 root     root         179 Aug 12 00:08 directory_b
If directory_a has 2,000,000 files underneath it and directory_b also has 2,000,000 files, use the following idea to split them up. The --exclude option says in essence to "exclude everything that is not explicitly included".


#!/bin/bash
rsync -av --include="/directory_a*" --exclude="/*" --progress remote::/ /localdir/ >  /tmp/myoutputa.log &
rsync -av --include="/directory_b*" --exclude="/*" --progress remote::/ /localdir/ >  /tmp/myoutputb.log &
The following will take about twice the amount of time gathering files as the above:


#!/bin/bash
rsync -av --progress remote::/ /localdir/ > /tmp/myoutput.log &
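The parallel script further above backgrounds both transfers and returns immediately; a hedged refinement is to block until both finish before declaring the sync done (same hypothetical module and paths):

```shell
#!/bin/bash
rsync -av --include="/directory_a*" --exclude="/*" remote::/ /localdir/ > /tmp/myoutputa.log &
rsync -av --include="/directory_b*" --exclude="/*" remote::/ /localdir/ > /tmp/myoutputb.log &
wait    # returns only when every background job has exited
echo "both rsync instances finished"
```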

Unix: copy to remote via cpio example

cpio code for transferring files from one system to another:
# /dir.txt contains the list of ACL directories to copy
# "remotehost" below is a placeholder for the destination system
for dir in $(cat /dir.txt)
do
cd $dir
find . -depth -print | cpio -omPv | rsh remotehost "cd $dir && cpio -idumvP"
done

Solaris: Using rsync to transfer a dir between two hosts


Transfer data from hostp3 to hosta4 via rsync and NFS
hostp3% more copyemea.sh
#! /bin/sh
date> copy.log
/opt/csw/bin/rsync -ar --stats /app/iwstoreEMEA_snap/ /hosta4_EMEA/ >>copy.log 2>&1
date>> copy.log
hostp3% df -h /hosta4_EMEA
Filesystem             size   used  avail capacity  Mounted on
hosta4:/opt/app/data/iw-store/EMEA
                       250G   134G   116G    54%    /hosta4_EMEA
hostp3%
In this case, on hosta4:
hosta4# zfs set sharenfs=rw=153.88.177.59,root=153.88.177.59 app/iwstoreEMEA
where

153.88.177.59 is the IP address of hostp3
On hostp3:
hostp3# mount hosta4:/opt/app/data/iw-store/EMEA /hosta4_EMEA
The destination directory on hosta4 must initially be empty; if it is not, unexpected results may occur, such as the final transferred size being larger than the source directory on hostp3.

Solaris: ZFS snapshot between two hosts

Send the initial snapshot and create the filesystem on the remote host

  1. make sure that the receiving host is running Solaris updated after October 2008
  2. make sure the filesystem you are replicating does not exist already on the receiving host
  3. zfs snapshot export/upload@zrep-00001 on the origin host
  4. zfs send export/upload@zrep-00001 | ssh otherservername "cat > /export/save/upload@zrep-00001"
  5. wait several days depending on the size of the dataset
  6. cat /export/save/upload@zrep-00001 | zfs recv export/upload


Statistics: able to complete a transfer of 6.1T in 10 days. (From ms1 (Sun Fire X4500) to ms5 (Sun Fire X4540), both hosts in the same rack.)

Send incremental covering the days that the first step took

  1. zfs snapshot export/upload@zrep-00002 on the origin host
  2. zfs send -i export/upload@zrep-00001 export/upload@zrep-00002 | ssh otherservername "cat > /export/save/upload@zrep-00002"
  3. cat /export/save/upload@zrep-00002 | zfs recv export/upload
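If the intermediate file on the receiving side is not needed, the incremental steps can be collapsed into one pipeline; a sketch, assuming the same snapshot names and working ssh to otherservername:

```shell
#!/bin/sh
# take the new snapshot, then send only the delta since the previous one
zfs snapshot export/upload@zrep-00002
zfs send -i export/upload@zrep-00001 export/upload@zrep-00002 | \
    ssh otherservername zfs recv export/upload
```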

Solaris 10: How to mount lofs from localzone


On global zone, mounted file system looks like this

/dev/md/dsk/d113 530063064 65560 524696880 1% /zones/myzone/ftp/data2


run this zonecfg command
zonecfg -z myzone
> add fs
> set dir=/ftp/data2
> set special=/zones/myzone/ftp/data2
> set type=lofs
> end
> verify
> commit
> exit

run below command to confirm
#zonecfg -z myzone info
....
fs:
dir: /ftp/data2
special: /zones/myzone/ftp/data2
raw not specified
type: lofs
options: []


....

root@myzone # mount -F lofs /ftp/data2 /ftp/data2

root@myzone # df -k /ftp/data2
Filesystem kbytes used avail capacity Mounted on
/ftp/data2 530063064 65560 524696880 1% /ftp/data2

HPUX/LINUX: NFS mounting

Exporting and mounting file systems on Linux and HP-UX

If you want to use a shared file system on Linux® and HP-UX systems, you must export it from the system where it is located and mount it on every system on which you want to access it.
You must be logged on as root.
To export and mount file systems on Linux and HP-UX systems, complete these steps:
  1. Export a file system on Linux and HP-UX systems.
    1. Add the file system that you want to export to the file /etc/exports.
    2. Export all entries in the file /etc/exports by entering the command /usr/sbin/exportfs -a.
    3. Verify that the file system is exported by entering the command /usr/sbin/exportfs.
    All shared file systems are displayed.
  2. Mount a file system on HP-UX and Linux systems.
    1. If the file system that you want to mount is remote, ensure you have the permission to mount it by entering the command /usr/sbin/showmount -e hostname
      where hostname is the name of the remote system.
    2. Choose an empty directory that serves as the mount point for the file system that you want to mount. If an empty directory does not exist, create it by entering the command mkdir /destinationdir
      where /destinationdir is the name of the local mount point.
    3. Mount the file system on your local system by entering the corresponding command.
      • On HP-UX, enter the command /usr/sbin/mount -F nfs sourcehost:/sourcedir /destinationdir
      • On Linux systems, enter the command /bin/mount -t nfs sourcehost:/sourcedir /destinationdir
      where:
       sourcehost is the name of the remote system
       /sourcedir is the name of the remote file system
       /destinationdir is the name of the local mount point
To mount the remote file system after each reboot, add it to the /etc/fstab file. For a description of the file format of /etc/fstab, enter the command man fstab.

example fstab:

server:/mnt /mnt nfs rw,hard 0 0 #mount from server

Friday, November 18, 2011

Solaris NFS troubleshooting

# showmount -a
     All mount points on local host:
     edcert20.ucs.indiana.edu:/home
     edcert21.ucs.indiana.edu:/usr/local

     # showmount -d
     Directories on local host:
     /home
     /usr/local

     # showmount -e
     Export list on local host
     /home           edcert21.ucs.indiana.edu edcert20.ucs.indiana.edu     
     /usr/local      edcert21.ucs.indiana.edu
# df -F nfs
     Filesystem                      Type  blocks     use   avail %use  Mounted on
     edcert21.ucs.indiana.edu:/home  nfs    68510   55804   12706  81%  /usr/share/help
Use the command nfsstat -s to display NFS activity on the server side. For example:
# nfsstat -s

     Server RPC:
     calls      badcalls   nullrecv   badlen     xdrcall    duphits    dupage
     50852      0          0          0          0          0          0.00    

     Server NFS:
     calls        badcalls     
     50852        0            
     null         getattr      setattr      root         lookup       readlink  
     1  0%        233  0%      0  0%        0  0%        1041  2%     0  0%  
     read         wrcache      write        create       remove       rename    
     49498 97%    0  0%        0  0%        0  0%        0  0%        0  0% 
     link         symlink      mkdir        rmdir        readdir      fsstat 
     0  0%        0  0%        0  0%        0  0%        75  0%       4  0%     
The output may be interpreted using the following guidelines.
  • badcalls > 0 - RPC requests are being rejected by the server. This could indicate authentication problems caused by having a user in too many groups, attempts to access exported file systems as root, or an improper Secure RPC configuration.
  • nullrecv > 0 - NFS requests are not arriving fast enough to keep all of the nfsd daemons busy. Reduce the number of NFS server daemons until nullrecv is not incremented.
  • symlink > 10% - Clients are making excessive use of symbolic links that are on file systems exported by the server. Replace the symbolic link with a directory, and mount both the underlying file system and the link's target on the client.
  • getattr > 60% - Check for non-default attribute caching (noac mount option) on NFS clients.
On the client side use the command nfsstat -c to display the client statistics. For example:
# nfsstat -c

     Client RPC:
     calls      badcalls   retrans    badxid     timeout    wait       newcred
     369003     62         1998       43         2053       0          0 

     Client NFS:
     calls        badcalls     nclget       nclsleep     
     368948       0            368948       0            
     null         getattr      setattr      root         lookup       readlink  
     0  0%        51732 14%    680  0%      0  0%        95069 25%    542  0% 
     read         wrcache      write        create       remove       rename 
     210187 56%   0  0%        2259  0%     1117  0%     805  0%      337  0%   
     link         symlink      mkdir        rmdir        readdir      fsstat    
     120  0%      0  0%        7  0%        0  0%        5510  1%     583  0% 
This output may be interpreted using the guidelines given below.
  • timeout > 5% - The client's RPC requests are timing out before the server can answer them, or the requests are not reaching the server. Check badxid to determine the problem.
  • badxid ~ timeout - RPC requests are being handled by the server, but too slowly. Increase timeo parameter value for this mount, or tune the server to reduce the average request service time.
  • badxid ~ 0 - With timeouts greater than 3%, this indicates that packets to and from the server are getting lost on the network. Reduce the read and write block sizes (mount parameters rsize and wsize) for this mount.
  • badxid > 0 - RPC calls on soft-mounted file systems are timing out. If the server is running, and badcalls is growing, then soft mounted file systems should use a larger timeo or retrans value.



Solaris vxvm move a disk group to another system

Move a disk group to another system
1. Unmount and stop all volumes in the disk group on the first system:
umount /mntdir
vxvol -g <diskgroup> stopall
2. Deport (disable all local access to) the disk group to be moved with this command:
vxdg deport <diskgroup>
3. Import (enable local access to) the disk group and its disks from the second system with:
vxdg import <diskgroup>
4. After the disk group is imported, start all volumes in the disk group with this command:
vxrecover -g <diskgroup> -sb
The options here indicate that VERITAS Volume Manager will start all the disabled volumes (-s) in the background (-b).
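Put end to end, with a hypothetical disk group "datadg" holding one volume "datavol" mounted at /data, the move looks like:

```shell
# first system: stop all use of the disk group, then release it
umount /data
vxvol -g datadg stopall
vxdg deport datadg
# second system: claim the group and restart its volumes
vxdg import datadg
vxrecover -g datadg -sb
mount -F vxfs /dev/vx/dsk/datadg/datavol /data
```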

Solaris : vxfs large file

How to check for/enable largefile support on vxfs
To check if largefiles are enabled:
/usr/lib/fs/vxfs/fsadm /mount/point
(the output reports either "largefiles" or "nolargefiles")
Unlike ufs, you can enable vxfs largefile support on the fly with fsadm:
/usr/lib/fs/vxfs/fsadm -o largefiles /mount/point

Remove a mirror from a volume
Dissociate and remove a mirror (plex) from a volume:
vxplex [-g diskgroup] -o rm dis plex
This command will remove the mirror (plex) and all associated subdisks.

HPUX LVM mirror


System Administration Guide for HP-UX 10.20
Part 4 - LVM Disk Mirroring

This note describes how to integrate a second disk into the system volume group and configure it as an alternative boot device, thereby providing LVM mirrored backup for the primary boot device.
Introduction
This note describes how to configure LVM mirroring of a system disk. In this particular example, the HP server is STSRV1, the primary boot device is SCSI=6 (/dev/dsk/c2t6d0) and the alternative mirrored boot device is SCSI=5 (/dev/dsk/c2t5d0). The following commands may be found in /sbin and /usr/sbin and must be run as root.
Procedure - Create a System Mirror Disk
This procedure assumes that the HPUX-10.## operating system and the HPUX LVM mirroring product have already been installed.

# ioscan -fnC disk                                         (identify mirror disk)
# pvcreate -Bf /dev/rdsk/c2t5d0                            (make a bootable physical volume)
# mkboot -l /dev/rdsk/c2t5d0                               (create LVM disk layout)
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c2t5d0  (-lq = switch off quorum)
# vgextend /dev/vg00 /dev/dsk/c2t5d0


# for P in 1 2 3 4 5 6 7 8 9 10
> do
> lvextend -m 1 /dev/vg00/lvol$P /dev/dsk/c2t5d0
> sleep 1
> done
Following the mirroring procedure, it is now essential to setup the critical partitions concerning root, swap and boot. It is useful to confirm the partition layout using the commands bdf and lvlnboot.

# bdf -l
# lvlnboot -v
Following changes seen under HPUX-10.20, the / (root) partition will appear first in the listings as /dev/vg00/lvol3 and the /stand (boot) partition will probably be reported as "PV Name" and /dev/vg00/lvol1. The first command below is destructive, in that it removes the "PV Name" boot entry. It should therefore be reinserted using the lvlnboot -b command below. Exercise extreme care with the following commands.

# lvlnboot -r /dev/vg00/lvol3                      (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2                      (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1                      (prepare a boot LVM logical volume)
# vgcfgbackup vg00
# lifls -C /dev/rdsk/c2t5d0                        (confirms as a boot device)



Disk Crash - D-Class Procedure (Fast Recovery - Hot-Swap Disk)
The example below assumes that the system disk (/dev/dsk/c0t5d0) has crashed and has been replaced by a hot-swap disk (ie. It is not necessary to halt or boot the server). The procedure would be just the same for the mirrored disk as follows :

# pvcreate -Bf /dev/rdsk/c0t5d0
# vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t5d0
# vgchange -a y /dev/vg00
# pvcreate -Bf /dev/rdsk/c2t5d0                            (make a bootable physical volume)
# mkboot -l /dev/rdsk/c2t5d0                               (create LVM disk layout)
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c2t5d0  (-lq = switch off quorum)
# vgsync /dev/vg00
[NB. It will only be necessary to run the mkboot commands above if it is a system disk replacement.]

When the synchronisation of the logical volumes is complete, reconfirm the following :

# lvlnboot -r /dev/vg00/lvol3                      (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2                      (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1                      (prepare a boot LVM logical volume)
# vgcfgbackup vg00



Disk Crash - Night Procedure (Fast Recovery - No Disk Replacement)
The example below assumes that the system disk (/dev/dsk/c2t6d0) has crashed and has NOT been replaced. The procedure, however, would be just the same for a mirrored disk crash with the exception of the change in device name. Due to the quorum philosophy, in that half the disk space is now no longer available, it is necessary to boot up in single-user mode with the quorum argument unset as follows :
1.Escape from boot sequence
2. Choose mirrored disk (ie. P0 = /dev/dsk/c2t5d0)
3. Boot up in single user mode, without quorum, as follows :

... Select from menu : b P0 isl

ISL > hpux -is -lq (;0)/stand/vmunix


# init 4



Disk Crash - Day Procedure (Slow Recovery - Internal Disk Replacement)
The example below assumes that the system disk (/dev/dsk/c2t6d0) has crashed and been replaced. The procedure, however, would be just the same for a mirrored disk crash with the exception of the change in device name. Due to the quorum philosophy, in that half the disk space is now not available, it is necessary to boot up in single-user mode with the quorum argument unset as follows :
1. Escape from boot sequence
2. Choose mirrored disk (ie. P0 = /dev/dsk/c2t5d0)
3. Boot up in single user mode, without quorum, as follows :

... Select from menu : b p0 isl

ISL > hpux -is -lq (;0)/stand/vmunix


# PATH=$PATH:/sbin:/usr/sbin
# mount -a
# pvcreate -Bf /dev/rdsk/c2t6d0
# mkboot -l /dev/rdsk/c2t6d0
# mkboot -a "hpux (;0) /stand/vmunix" /dev/rdsk/c2t6d0
# vgcfgrestore -n /dev/vg00 /dev/rdsk/c2t6d0
# vgchange -a y /dev/vg00
# lvlnboot -r /dev/vg00/lvol3                      (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2                      (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1                      (prepare a boot LVM logical volume)
# init 4

The machine will now boot up correctly and the disks will synchronize automatically. The replaced system disk will now mirror automatically from the original mirrored disk. There will be considerable disk activity at this time and the progress of the mirroring may be confirmed with :

# lvdisplay -v /dev/vg00/lvol1
This will probably show that the first volume is "current" and therefore successfully mirrored.

# lvdisplay -v /dev/vg00/lvol8
This will almost certainly show that the volume is "stale" and therefore not yet mirrored. When the disks are synchronized, reboot the machine, thereby ensuring that the future is secure and the original "primary boot path" is valid.

Procedure - Remove a Mirrored System Disk
Essentially the reverse of the procedure above.


# for P in 1 2 3 4 5 6 7 8 9 10
> do
> lvreduce -m 0 /dev/vg00/lvol$P /dev/dsk/c2t5d0
> sleep 1
> done
# vgreduce /dev/vg00 /dev/dsk/c2t5d0

HPUX: Mirroring boot disks

PA-RISC Mirroring procedures

  1. Use ioscan to identify a suitable alternate boot disk. The disk should be about the same size as the primary boot disk and, preferably, on a separate I/O channel. When I do this, I typically set up variables for the character and block devices. The example commands that follow use that syntax.
  2. Create the boot partitions on the disk. The args to ignore quorum are used on the hpux command. If you're booting off of the alternate boot disk, there's a better than even chance your primary isn't there anymore...
    1. pvcreate -B ${rdsk}
    2. mkboot ${rdsk}
    3. mkboot -a "hpux -lq" ${rdsk}
  3. Add the disk to vg00 - or where ever your OS is
    vgextend vg00 ${dsk}
  4. Extend the logical volumes. I typically use an inline function for this:
    for lv in $(vgdisplay -v vg00 | grep -i 'lv name' | awk '{print $NF}')
    do
        echo "#################################"
        echo ${lv}
        lvextend -m 1 ${lv} ${dsk}
    done

    NOTE: Each of the lvs can take awhile particularly if they're sizeable. I will usually create a function in another window that tells me how many logical extents are stale. Rerunning the function will hopefully show a decreasing number of extents.
    hm()
    {  lvdisplay -v $1 | sed -n -e '/Logical extents/,$p' | grep stale | wc -l
    }
    
  5. Verification:
    1. Use setboot to verify the alternate boot or HA alternate boot, as appropriate, is set:
      # setboot
      Primary bootpath : 0/1/1/0.1.0
      HA Alternate bootpath : 0/1/1/0.0.0
      Alternate bootpath : 0/1/1/0.1.0
      
      Autoboot is ON (enabled)
      
    2. Use lvlnboot to verify the alternate boot disk is configured. Use lvlnboot -R /dev/vg00 to update.
      # lvlnboot -v vg00
      Boot Definitions for Volume Group /dev/vg00:
      Physical Volumes belonging in Root Volume Group:
              /dev/dsk/c2t1d0s2 (0/1/1/0.1.0) -- Boot Disk
              /dev/dsk/c2t0d0s2 (0/1/1/0.0.0) -- Boot Disk
      Boot: lvol1     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Root: lvol3     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Swap: lvol2     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Dump: lvol2     on:     /dev/dsk/c2t1d0s2, 0
      

Itanium mirroring procedure

As stated previously, the ia64 mirroring procedure is quite different. Using the old disk naming schemes, you will end up creating disk partitions, ${rdsk}s2, for instance. The new naming conventions create partitions named /dev/rdisk/disk#_p#. The directions below demonstrate the legacy naming convention. The commands are the same for the new naming convention.
  1. Use ioscan to identify a suitable alternate boot disk. The disk should be about the same size as the primary boot disk and, preferably, on a separate I/O channel. When I do this, I typically set up variables for the character and block devices. The example commands that follow use that syntax.
  2. Create the boot partition on the disk and generate the partition device files:
    1. Create the partition template:
       cat > /tmp/part << eof
      > 3
      > EFI 500MB
      > HPUX 100%
      > HPSP 400MB
      > eof
      
    2. idisk -f /tmp/part -w ${rdsk}
    3. Verify the partitions: idisk ${rdsk}
    4. Create the device files: insf -eH ${hw_path}
  3. Create and populate the boot partitions:
    1. pvcreate -B ${rdsk}s2 NOTE: the s2 partition
    2. mkboot -e -l ${rdsk}
    3. mkboot -a "boot vmunix -lq" ${dsk} NOTE: *not* rdsk
  4. vgextend vg00 ${dsk}s2 #note the s2 partition again
  5. Extend the logical volumes. I typically use an inline function for this:
    for lv in $(vgdisplay -v vg00 | grep -i 'lv name' | awk '{print $NF}')
    do
        echo "#################################"
        echo ${lv}
        lvextend -m 1 ${lv} ${dsk}s2 # note the s2 partition again
    done

    NOTE: Each of the lvs can take awhile particularly if they're sizeable. I will usually create a function in another window that tells me how many logical extents are stale. Rerunning the function will hopefully show a decreasing number of extents.
    hm()
    {  lvdisplay -v $1 | sed -n -e '/Logical extents/,$p' | grep stale | wc -l
    }
    
  6. Verification:
    1. Use setboot to verify the alternate boot or HA alternate boot, as appropriate, is set:
      # setboot
      Primary bootpath : 0/1/1/0.1.0
      HA Alternate bootpath : 0/1/1/0.0.0
      Alternate bootpath : 0/1/1/0.1.0
      
      Autoboot is ON (enabled)
      
    2. Use lvlnboot to verify the alternate boot disk is configured. Use lvlnboot -R vg00 to update.
      # lvlnboot -v vg00
      Boot Definitions for Volume Group /dev/vg00:
      Physical Volumes belonging in Root Volume Group:
              /dev/dsk/c2t1d0s2 (0/1/1/0.1.0) -- Boot Disk
              /dev/dsk/c2t0d0s2 (0/1/1/0.0.0) -- Boot Disk
      Boot: lvol1     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Root: lvol3     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Swap: lvol2     on:     /dev/dsk/c2t1d0s2
                              /dev/dsk/c2t0d0s2
      Dump: lvol2     on:     /dev/dsk/c2t1d0s2, 0
      
  7. Update the boot loader. This part is still a bit confusing for me. The alternate boot option in the EFI boot menu points to the right disk; however, it doesn't know about the hpux.efi boot command, so it doesn't work. You'd figure HP would make that a bit more automatic.
    So, note the hardware paths for the primary and alternate disks, then boot to the efi menu. Boot configuration -> Add a boot option,
    Identify your new mirror disk. When it says (pun1,lun0), that's going to be c#t1d0, (pun0,lun0) will be c#t0d0, etc. Highlight the suspected mirror disk and press [enter]
    Assuming you got the right disk, you should be looking at something that looks like an old norton commander menu. Highlight EFI, press [enter], then highlight HPUX, press [enter], and finally, hpux.efi and (you guessed it), press [enter]. Enter a good description for the mirror disk.
    If you want, move the mirror disk entry to the top using the other options in the boot options menu. Once done, select the mirror disk entry to attempt to boot from it.
    Troubleshoot as necessary
Taken from: http://olearycomputers.com/ll/hpux_mirror.html

Solaris zfs pool command example

# zpool history
History for 'app':
2010-05-18.05:01:14 zpool create app c3t60060E80056F110000006F110000669Cd0
2010-05-18.05:04:04 zfs set mountpoint=/opt/app app
2010-05-18.05:05:36 zfs set mountpoint=/opt/app app
2010-05-18.05:09:55 zfs create app/iwstoreAPAC
2010-05-18.05:10:00 zfs create app/iwstoreEMEA
2010-05-18.05:10:07 zfs create app/iwstoreAmericas
2010-05-18.05:10:44 zfs set mountpoint=/opt/app/data/iw-store/APAC app/iwstoreAPAC
2010-05-18.05:10:59 zfs set mountpoint=/opt/app/data/iw-store/EMEA app/iwstoreEMEA
2010-05-18.05:11:19 zfs set mountpoint=/opt/app/data/iw-store/Americas app/iwstoreAmericas
2010-05-18.05:12:44 zfs set quota=50GB app/iwstoreAPAC
2010-05-18.05:12:55 zfs set quota=100GB app/iwstoreEMEA
2010-05-18.05:13:05 zfs set quota=200GB app/iwstoreAmericas
2010-05-18.05:14:02 zfs set quota=100GB app
2010-05-18.05:15:05 zfs destroy app/iwstoreAPAC
2010-05-18.05:15:11 zfs destroy app/iwstoreEMEA
2010-05-18.05:15:18 zfs destroy app/iwstoreAmericas
2010-05-18.05:15:53 zfs set quota=none app
2010-05-18.05:16:14 zfs create app/opt
2010-05-18.05:18:04 zfs destroy app/opt
2010-05-18.05:18:32 zfs create app/optapp
2010-05-18.05:18:49 zfs set mountpoint=/opt/app app/optapp
2010-05-18.05:19:25 zfs set mountpoint=/opt/app app/optapp
2010-05-18.05:20:02 zfs set mountpoint=none app
2010-05-18.05:20:09 zfs set mountpoint=/opt/app app/optapp
2010-05-18.05:22:31 zfs create app/optapp/iwstoreAPAC
2010-05-18.05:22:36 zfs create app/optapp/iwstoreEMEA
2010-05-18.05:22:42 zfs create app/optapp/iwstoreAmericas
2010-05-18.05:23:04 zfs set mountpoint=/opt/app/data/iw-store/APAC app/optapp/iwstoreAPAC
2010-05-18.05:23:16 zfs set mountpoint=/opt/app/data/iw-store/EMEA app/optapp/iwstoreEMEA
2010-05-18.05:23:34 zfs set mountpoint=/opt/app/data/iw-store/Americas app/optapp/iwstoreAmericas
2010-05-18.05:24:06 zfs set quota=50GB app/optapp/iwstoreAPAC
2010-05-18.05:24:18 zfs set quota=100GB app/optapp/iwstoreEMEA
2010-05-18.05:24:31 zfs set quota=200GB app/optapp/iwstoreAmericas
2010-05-18.05:24:56 zfs set quota=100GB app/optapp
2010-05-18.05:25:24 zfs set quota=200GB app/optapp/iwstoreAmericas
2010-05-18.05:26:05 zfs set quota=200GB app/optapp
2010-05-18.05:26:12 zfs set quota=200GB app/optapp/iwstoreAmericas
2010-05-18.05:29:52 zfs rename app/optapp/iwstoreAPAC app/iwstoreAPAC
2010-05-18.05:30:18 zfs rename app/optapp/iwstoreEMEA app/iwstoreEMEA
2010-05-18.05:30:42 zfs rename app/optapp/iwstoreAmericas app/iwstoreAmericas
2010-05-18.05:31:07 zfs set quota=100GB app/optapp
2010-08-02.16:11:29 zfs set quota=250G app/iwstoreEMEA
2010-08-30.14:34:21 zfs set quota=50gb app/iwstoreAmericas
2010-08-30.14:34:37 zfs set quota=100gb app/iwstoreAPAC
2011-10-27.12:35:41 zfs set sharenfs=rw=@153.88.177.59,root=@153.88.177.59 app/iwstoreAPAC
2011-10-27.12:47:57 zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreAPAC
2011-10-27.12:54:22 zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreAmericas
2011-10-27.12:54:37 zfs set sharenfs=rw=@153.88.177.0/24,root=@153.88.177.0/24 app/iwstoreEMEA



Solaris: Sending a ZFS Snapshot

Sending a ZFS Snapshot

You can use the zfs send command to send a copy of a snapshot stream and receive it in another pool on the same system, or in a pool on a different system that is used to store backup data. For example, to send the snapshot stream to a different pool on the same system, use syntax similar to the following:
# zfs send tank/dana@snap1 | zfs recv spool/ds01
You can use zfs recv as an alias for the zfs receive command.
If you are sending the snapshot stream to a different system, pipe the zfs send output through the ssh command. For example:
host1# zfs send tank/dana@snap1 | ssh host2 zfs recv newtank/dana
When you send a full stream, the destination file system must not exist.
You can send incremental data by using the zfs send -i option. For example:
host1# zfs send -i tank/dana@snap1 tank/dana@snap2 | ssh host2 zfs recv newtank/dana
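Putting the two together, a typical replication cycle looks like the sketch below. This is only an outline; the pool, dataset, and host names are the ones assumed in the examples above, and snap1 must still exist on both sides before the incremental send.

```shell
# Initial full replication: take a snapshot, send the whole stream.
# The destination file system must not exist yet.
zfs snapshot tank/dana@snap1
zfs send tank/dana@snap1 | ssh host2 zfs recv newtank/dana

# Later: take a new snapshot and send only the blocks that changed
# between snap1 and snap2.
zfs snapshot tank/dana@snap2
zfs send -i tank/dana@snap1 tank/dana@snap2 | ssh host2 zfs recv newtank/dana
```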

Solaris: Remote Replication of ZFS Data

Remote Replication of ZFS Data

You can use the zfs send and zfs recv commands to remotely copy a snapshot stream representation from one system to another system. For example:

This command sends the tank/cindy@today snapshot data, receives it into the sandbox/restfs file system, and also creates a restfs@today snapshot on the newsys system. In this example, the user has been configured to use ssh on the remote system.


# zfs send tank/cindy@today | ssh newsys zfs recv sandbox/restfs@today

Solaris: ZFS clone 2

# zfs snapshot mypool/myfs@now
# zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs
# anotherfs is created automatically; it is now copying all the data (1.5 TB).

1) machine0$ zfs send mypool/myfs@now | ssh machine1 zfs receive anotherpool/anotherfs@anothersnap
2) machine0$ zfs send mypool/myfs@now | zfs receive anotherpool/anotherfs@anothersnap

Solaris: zfs snapshot

ZFS is a great filesystem. Amongst its many features, it has snapshots. Let’s see how to use them.

What are snapshots ?

Snapshots are instant copies of your filesystem at a fixed point in the past. They are designed to be lightning fast to create, and they use a minimal amount of disk space thanks to the COW (Copy On Write) optimization.
Basically, when you take a snapshot of a ZFS filesystem, only the filesystem structure is copied, not the file data blocks. Afterwards, any time the data of a file is about to be modified, the data block is copied first, and the snapshot references the copy instead of the current data block. This is Copy-On-Write.
The volume occupied by the snapshot is merely the volume of the data blocks which have been modified since the snapshot was taken.

Snapshots in ZFS

To create a snapshot, issue the following command:
prodigy# zfs snapshot poolName/fileSystem@label
You can see that the snapshot was taken with zfs list:
prodigy# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
poolName                    5.63G   141G  27.5K  /poolName
poolName/fileSystem         2.97G   141G  1.92G  /poolName/fileSystem
poolName/fileSystem@label       0      -  1.92G  -
prodigy# 
To access the files of the snapshot, simply go into the .zfs/snapshot directory at the top level of your fileSystem. Take care: the .zfs directory is invisible by default.
Finally, to delete the snapshot, use the zfs destroy command:
# zfs destroy poolName/fileSystem@label
# 

  • The following syntax creates recursive snapshots of all home directories in the tank/home file system. Then, you can use the zfs send -R command to create a recursive stream of the recursive home directory snapshot, which also includes the individual file system property settings.
# zfs snapshot -r tank/home@monday
# zfs send -R tank/home@monday | ssh remote-system zfs receive -dvu pool

ZFS Clone

Clones can only be created from a snapshot. A clone is actually a new FS whose initial content is that of the original FS, and it is writable. 
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
mypool                    1.49G   527M   528M  /mypool
mypool/home                993M   527M    20K  /mypool/home
mypool/home/user1          993M   527M   993M  /mypool/home/user1
mypool/home/user1@Monday      0      -   993M  -

# zfs clone mypool/home/user1@Monday mypool/home/user2

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
mypool                    1.49G   527M   528M  /mypool
mypool/home                993M   527M    20K  /mypool/home
mypool/home/user1          993M   527M   993M  /mypool/home/user1
mypool/home/user1@Monday      0      -   993M  -
mypool/home/user2             0   527M   993M  /mypool/home/user2
Once a clone has been created from a snapshot, the snapshot cannot be destroyed while the clone exists.
# zfs get origin mypool/home/user2
NAME               PROPERTY  VALUE                      SOURCE
mypool/home/user2  origin    mypool/home/user1@Monday  -

# zfs destroy mypool/home/user1@Monday
cannot destroy 'mypool/home/user1@Monday': snapshot has dependent clones
use '-R' to destroy the following datasets:
mypool/home/user2
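If you need to get rid of the origin snapshot but keep the clone, zfs promote reverses the dependency. A sketch, using the dataset names from the example above:

```shell
# Promote the clone: the Monday snapshot migrates to user2
# (it becomes mypool/home/user2@Monday), and user1 becomes
# a clone of it instead.
zfs promote mypool/home/user2

# Now user1 is the dependent dataset, and it can be destroyed
# if it is no longer needed.
zfs destroy mypool/home/user1
```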

ZFS Snapshot

A ZFS snapshot is a read-only copy of the file system. 

When taken, it consumes no additional disk space. As data changes, the snapshot grows, because it holds references to the old data blocks (now unique to the snapshot), so that space cannot be freed. 

Let's take a snapshot of the FS mypool/home/user1 and name it Monday, so you can see the command's syntax. 
# zfs snapshot mypool/home/user1@Monday

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
mypool                    1.49G   527M   528M  /mypool
mypool/home                993M   527M    20K  /mypool/home
mypool/home/user1          993M   527M   993M  /mypool/home/user1
mypool/home/user1@Monday      0      -   993M  -   (consumes no additional space within the zpool)
You cannot even destroy a FS while it has snapshots.
# zfs destroy mypool/home/user1
cannot destroy 'mypool/home/user1': filesystem has children
use '-r' to destroy the following datasets:
mypool/home/user1@Monday
Let’s change some data and see that the snapshot's used space grows.
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
mypool                    1.49G   527M   528M  /mypool
mypool/home                993M   527M    20K  /mypool/home
mypool/home/user1          993M   527M   993M  /mypool/home/user1
mypool/home/user1@Monday    28K      -   993M  -
Snapshots are stored in a hidden directory named .zfs/snapshot/, located in the root of the FS. You can go there and restore files if needed; remember, it is a read-only copy of the FS.
# ls -la /mypool/home/user1/.zfs/snapshot
total 3
dr-xr-xr-x   2 root     root           2 Nov 16 13:23 .
dr-xr-xr-x   3 root     root           3 Nov 16 13:23 ..
drwxr-xr-x   3 root     root           6 Nov 16 17:09 Monday
Besides restoring individual files from a snapshot, you can roll back the whole FS to a previously taken snapshot.
# zfs list -t snapshot
NAME                       USED  AVAIL  REFER  MOUNTPOINT
mypool/home/user1@Monday   993M      -   993M  -

# zfs rollback mypool/home/user1@Monday
All changes made since the Monday snapshot was taken are discarded, and the FS is the same as it was on that Monday.
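When only one file was damaged, a full rollback is overkill; copying out of the hidden snapshot directory is enough. A sketch, where report.txt is just a hypothetical file name:

```shell
# Restore a single file from the Monday snapshot without rolling
# back the whole file system. The snapshot side is read-only.
cp /mypool/home/user1/.zfs/snapshot/Monday/report.txt \
   /mypool/home/user1/report.txt
```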

Thursday, November 17, 2011

Solaris: Rsync example

Backing up data using rsync command

rsync is a great tool for backing up and restoring files. I'll use some examples to explain how it works.

Example of the remote server and folder that needs to be backed up or copied:
Remote host name: server01.comentum.com 
Remote folder: /home/user01/
Remote user: user01

rsync example for backing up / copying from remote server to local Linux computer:
rsync -arv user01@server01.comentum.com:/home/user01/ /home/bob/user01backup/
(/home/bob/user01backup/ is a local Linux folder path)
rsync example for backing up / copying from remote server to local Mac computer:
rsync -arv user01@server01.comentum.com:/home/user01/ /Users/bob/user01backup/
(/Users/bob/user01backup/ is a local Mac folder path)

rsync example for backing up / copying from remote server to local Mac computer and external USB drive:
rsync -arv user01@server01.comentum.com:/home/user01/ /Volumes/westerndigital-usb/user01backup/
(/Volumes/westerndigital-usb/user01backup/ is an external USB Drive path on a local Mac computer)

Here is what the "-arv" option does:
a = archive - means it preserves permissions (owners, groups), times, symbolic links, and devices.
r = recursive - means it copies directories and subdirectories (already implied by -a)
v = verbose - means that it prints on the screen what is being copied
More Examples:
rsync -rv user01@server01.comentum.com:/home/user01/ /home/bob/user01backup/
(This example will copy folders and sub-folders but will not preserve permissions, times, or symbolic links during the transfer)

rsync -arv --exclude 'logs' user01@server01.comentum.com:/home/user01/ /Users/bob/user01backup/
(This example will copy everything (folders, sub-folders, etc.), will preserve permissions, times, and links, but will exclude the folder /home/user01/logs/ from being copied)
Use of "/" at the end of path:
When using "/" at the end of source, rsync will copy the content of the last folder.
When not using "/" at the end of source, rsync will copy the last folder and the content of the folder.
When using "/" at the end of destination, rsync will paste the data inside the last folder.
When not using "/" at the end of destination, rsync will create a folder with the last destination folder name and paste the data inside that folder.
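These four rules are easy to check locally. The sketch below assumes rsync is installed and uses a throwaway directory:

```shell
#!/bin/sh
# Demonstrate rsync trailing-slash semantics with local paths.
tmp=`mktemp -d`
mkdir -p "$tmp/src/data"
touch "$tmp/src/data/file.txt"

# Trailing slash on the source: only the *contents* of data/ are copied.
rsync -a "$tmp/src/data/" "$tmp/dst1/"

# No trailing slash on the source: the data directory itself is copied.
rsync -a "$tmp/src/data" "$tmp/dst2/"

ls "$tmp/dst1"   # file.txt
ls "$tmp/dst2"   # data
```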
OPTIONS SUMMARY
       Here is a short summary of the options available in rsync. 

        -v, --verbose               increase verbosity
        -q, --quiet                 suppress non-error messages
            --no-motd               suppress daemon-mode MOTD (see caveat)
        -c, --checksum              skip based on checksum, not mod-time & size
        -a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
            --no-OPTION             turn off an implied OPTION (e.g. --no-D)
        -r, --recursive             recurse into directories
        -R, --relative              use relative path names
            --no-implied-dirs       don’t send implied dirs with --relative
        -b, --backup                make backups (see --suffix & --backup-dir)
            --backup-dir=DIR        make backups into hierarchy based in DIR
            --suffix=SUFFIX         backup suffix (default ~ w/o --backup-dir)
        -u, --update                skip files that are newer on the receiver
            --inplace               update destination files in-place
            --append                append data onto shorter files
            --append-verify         --append w/old data in file checksum
        -d, --dirs                  transfer directories without recursing
        -l, --links                 copy symlinks as symlinks
        -L, --copy-links            transform symlink into referent file/dir
            --copy-unsafe-links     only "unsafe" symlinks are transformed
            --safe-links            ignore symlinks that point outside the tree
        -k, --copy-dirlinks         transform symlink to dir into referent dir
        -K, --keep-dirlinks         treat symlinked dir on receiver as dir
        -H, --hard-links            preserve hard links
        -p, --perms                 preserve permissions
        -E, --executability         preserve executability
            --chmod=CHMOD           affect file and/or directory permissions
        -A, --acls                  preserve ACLs (implies -p)
        -X, --xattrs                preserve extended attributes
        -o, --owner                 preserve owner (super-user only)
        -g, --group                 preserve group
            --devices               preserve device files (super-user only)
            --specials              preserve special files
        -D                          same as --devices --specials
        -t, --times                 preserve modification times
        -O, --omit-dir-times        omit directories from --times
            --super                 receiver attempts super-user activities
            --fake-super            store/recover privileged attrs using xattrs
        -S, --sparse                handle sparse files efficiently
        -n, --dry-run               perform a trial run with no changes made
        -W, --whole-file            copy files whole (w/o delta-xfer algorithm)
        -x, --one-file-system       don’t cross filesystem boundaries
        -B, --block-size=SIZE       force a fixed checksum block-size
        -e, --rsh=COMMAND           specify the remote shell to use
            --rsync-path=PROGRAM    specify the rsync to run on remote machine
            --existing              skip creating new files on receiver
            --ignore-existing       skip updating files that exist on receiver
            --remove-source-files   sender removes synchronized files (non-dir)
            --del                   an alias for --delete-during
            --delete                delete extraneous files from dest dirs
            --delete-before         receiver deletes before transfer (default)
            --delete-during         receiver deletes during xfer, not before
            --delete-delay          find deletions during, delete after
            --delete-after          receiver deletes after transfer, not before
            --delete-excluded       also delete excluded files from dest dirs
            --ignore-errors         delete even if there are I/O errors
            --force                 force deletion of dirs even if not empty
            --max-delete=NUM        don’t delete more than NUM files
            --max-size=SIZE         don’t transfer any file larger than SIZE
            --min-size=SIZE         don’t transfer any file smaller than SIZE
            --partial               keep partially transferred files
            --partial-dir=DIR       put a partially transferred file into DIR
            --delay-updates         put all updated files into place at end
        -m, --prune-empty-dirs      prune empty directory chains from file-list
            --numeric-ids           don’t map uid/gid values by user/group name
            --timeout=SECONDS       set I/O timeout in seconds
            --contimeout=SECONDS    set daemon connection timeout in seconds
        -I, --ignore-times          don’t skip files that match size and time
            --size-only             skip files that match in size
            --modify-window=NUM     compare mod-times with reduced accuracy
        -T, --temp-dir=DIR          create temporary files in directory DIR
        -y, --fuzzy                 find similar file for basis if no dest file
            --compare-dest=DIR      also compare received files relative to DIR
            --copy-dest=DIR         ... and include copies of unchanged files
            --link-dest=DIR         hardlink to files in DIR when unchanged
        -z, --compress              compress file data during the transfer
            --compress-level=NUM    explicitly set compression level
            --skip-compress=LIST    skip compressing files with suffix in LIST
        -C, --cvs-exclude           auto-ignore files in the same way CVS does
        -f, --filter=RULE           add a file-filtering RULE
        -F                          same as --filter=’dir-merge /.rsync-filter’
                                    repeated: --filter=’- .rsync-filter’
            --exclude=PATTERN       exclude files matching PATTERN
            --exclude-from=FILE     read exclude patterns from FILE
            --include=PATTERN       don’t exclude files matching PATTERN
            --include-from=FILE     read include patterns from FILE
            --files-from=FILE       read list of source-file names from FILE
        -0, --from0                 all *from/filter files are delimited by 0s
        -s, --protect-args          no space-splitting; wildcard chars only
            --address=ADDRESS       bind address for outgoing socket to daemon
            --port=PORT             specify double-colon alternate port number
            --sockopts=OPTIONS      specify custom TCP options
            --blocking-io           use blocking I/O for the remote shell
            --stats                 give some file-transfer stats
        -8, --8-bit-output          leave high-bit chars unescaped in output
        -h, --human-readable        output numbers in a human-readable format
            --progress              show progress during transfer
        -P                          same as --partial --progress
        -i, --itemize-changes       output a change-summary for all updates
            --out-format=FORMAT     output updates using the specified FORMAT
            --log-file=FILE         log what we’re doing to the specified FILE
            --log-file-format=FMT   log updates using the specified FMT
            --password-file=FILE    read daemon-access password from FILE
            --list-only             list the files instead of copying them
            --bwlimit=KBPS          limit I/O bandwidth; KBytes per second
            --write-batch=FILE      write a batched update to FILE
            --only-write-batch=FILE like --write-batch but w/o updating dest
            --read-batch=FILE       read a batched update from FILE
            --protocol=NUM          force an older protocol version to be used
            --iconv=CONVERT_SPEC    request charset conversion of filenames
            --checksum-seed=NUM     set block/file checksum seed (advanced)
        -4, --ipv4                  prefer IPv4
        -6, --ipv6                  prefer IPv6
            --version               print version number
       (-h) --help                  show this help (see below for -h comment)

       Rsync can also be run as a daemon, in which case the following options are accepted:

            --daemon                run as an rsync daemon
            --address=ADDRESS       bind to the specified address
            --bwlimit=KBPS          limit I/O bandwidth; KBytes per second
            --config=FILE           specify alternate rsyncd.conf file
            --no-detach             do not detach from the parent
            --port=PORT             listen on alternate port number
            --log-file=FILE         override the "log file" setting
            --log-file-format=FMT   override the "log format" setting
            --sockopts=OPTIONS      specify custom TCP options
        -v, --verbose               increase verbosity
        -4, --ipv4                  prefer IPv4
        -6, --ipv6                  prefer IPv6
        -h, --help                  show this help (if used after --daemon)