Monitoring pipe activity with pv
$ dd if=/dev/zero | pv > foo
522MB 0:00:06 [ 109MB/s] [ <=> ]
When pv is added to a pipeline, you get a continuous display of the amount of data being transferred between the two pipe endpoints. I really dig this utility.
UPDATE:
Try using pv when sending stuff over the network using dd. Neato.
[root@machine2 ~]# ssh machine1 "dd if=/dev/VolGroup00/domU2migrate" | pv -s 8G -petr | dd of=/dev/xen02vg/domU2migrate
0:00:30 [11.2MB/s] [====> ] 4% ETA 0:10:13
Want to rate limit the transfer so you don’t flood the pipe?
-L RATE, --rate-limit RATE
Limit the transfer to a maximum of RATE bytes per second. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on.
-B BYTES, --buffer-size BYTES
Use a transfer buffer size of BYTES bytes. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on. The default buffer size is the block size of the input file's filesystem multiplied by 32 (512kb max), or 400kb if the block size cannot be determined.
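That suffix arithmetic is easy to get wrong when scripting around pv, so here is a small, hypothetical shell helper (not part of pv, which does this parsing internally and accepts more forms than shown) that expands the lowercase suffixes above into bytes:

```shell
# rate_to_bytes: expand a pv-style RATE string ("2m", "512k", "100")
# into bytes per second. Only the lowercase k/m/g suffixes from the
# man page excerpt are handled here.
rate_to_bytes() {
  awk -v r="$1" 'BEGIN {
    n = r + 0                       # numeric prefix
    suf = substr(r, length(r), 1)   # trailing suffix character, if any
    mult = 1
    if (suf == "k") mult = 1024
    else if (suf == "m") mult = 1024 * 1024
    else if (suf == "g") mult = 1024 * 1024 * 1024
    print n * mult
  }'
}

rate_to_bytes 2m     # 2097152
rate_to_bytes 512k   # 524288
```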
Already have a transfer in progress and want to rate limit it without restarting?
-R PID, --remote PID
If PID is an instance of pv that is already running, -R PID will cause that instance to act as though it had been given this instance's command line instead. For example, if pv -L 123k is running with process ID 9876, then running pv -R 9876 -L 321k will cause it to start using a rate limit of 321k instead of 123k. Note that some options, such as -c and -l, cannot be changed while running.
Tuesday, December 6, 2011
Solaris 10: Adding a file system to a running zone
Since the global zone uses loopback mounts to present file systems to zones, adding a new file system was as easy as loopback mounting the file system into the zone’s file system:
$ mount -F lofs /filesystems/zone1oracle03 /zones/zone1/root/ora03
Once the file system was mounted, I added it to the zone configuration and then verified it was mounted:
$ mount | grep ora03
/filesystems/zone1oracle03 on filesystems/zone1oracle0 read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=2d9000b on Sun Apr 12 10:43:19 2009
/zones/zone1/root/ora03 on /filesystems/zone1oracle03 read/write/setuid/devices/dev=2d9000b on Sun Apr 12 10:44:07 2009
With a ZFS file system (mountpoint=legacy):
mount -F zfs zpool/fs /path/to/zone/root/fs
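The zone-configuration step mentioned above can be captured with zonecfg so the mount persists across zone reboots. A sketch of what that session might look like, using the paths from this example (the dir and special values are taken from the lofs mount above; adjust as needed):

```
# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/ora03
zonecfg:zone1:fs> set special=/filesystems/zone1oracle03
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> commit
zonecfg:zone1> exit
```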
Linux: remount read only file system
$ mount -o remount,rw /
Once you can write to the file system, you should be able to write out the changes needed to correct the issue that prevented the server from booting. Viva la remount!
Solaris: coreadm core file management
Using the Solaris coreadm utility to control core file generation
Solaris has shipped with the coreadm utility for quite some time, and this nifty little utility allows you to control every facet of core file generation. This includes the ability to control where core files are written, the name of core files, which portions of the process's address space will be written to the core file, and my favorite option, whether or not to generate a syslog entry indicating that a core file was generated.
To begin using coreadm, you will first need to run it with the "-g" option to specify where core files should be stored, and the pattern that should be used when creating the core file:
$ coreadm -g /var/core/core.%f.%p
Once a directory and file pattern are specified, you can optionally adjust which portions of the process's address space (e.g., text segment, heap, ISM, etc.) will be written to the core file. To ease debugging, I like to configure coreadm to dump everything with the "-G all" option:
$ coreadm -G all
Since core files are typically created at odd working hours, I also like to configure coreadm to log messages to syslog indicating that a core file was created. This can be done by using the coreadm "-e log" option:
$ coreadm -e log
After these settings are adjusted, the coreadm "-e global" option can be used to enable global core file generation, and the coreadm utility can be run without any arguments to view the settings (which are stored in /etc/coreadm.conf):
$ coreadm -e global
$ coreadm
global core file pattern: /var/core/core.%f.%p
global core file content: all
init core file pattern: core
init core file content: default
global core dumps: enabled
per-process core dumps: enabled
global setid core dumps: disabled
per-process setid core dumps: disabled
global core dump logging: enabled
Once global core file support is enabled, a core file will be generated each time a process receives a fatal signal (e.g., SIGSEGV, SIGBUS, etc.). To test this, you can send a process a SIGSEGV:
$ kill -SIGSEGV 4652
A core file will be written to /var/core:
$ ls -al /var/core/*4652
-rw------- 1 root root 4163953 Mar 9 11:51 /var/core/core.inetd.4652
And a message similar to the following will appear in the system log:
Mar 9 11:51:48 fubar genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[4652] core dumped: /var/core/core.inetd.4652
This is an amazingly useful feature, and can greatly simplify root causing software problems.
Solaris: killing defunct process
$ ps -ef | grep defunct
    root   646   426  0        - ?        0:00 <defunct>
    root  1489 12335  0 09:32:54 pts/1    0:00 grep defunct
$ preap 646
646: exited with status 0
This will cause the process to exit, and the kernel can then free up the resources that were allocated by that process.
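preap is Solaris-specific, but the mechanics it relies on are generic: an exited child lingers as a defunct process until its parent (or, after the parent dies, init) collects the exit status. A small Linux sketch that manufactures a zombie and observes it via /proc (the inner sh execs away without ever calling wait, so its child has nobody to reap it):

```shell
# Create a parent that can never reap its child: the sh forks a
# short-lived sleep, then execs a long sleep, which never wait()s.
sh -c 'sleep 0.2 & exec sleep 3' &
par=$!

sleep 1   # by now the inner sleep 0.2 has exited and become a zombie

# Field 3 of /proc/<pid>/stat is the process state; "Z" marks a
# zombie/defunct process. We look for a zombie whose parent is $par.
state=$(cat /proc/[0-9]*/stat 2>/dev/null |
        awk -v p="$par" '$4 == p && $3 == "Z" { print $3; exit }')
echo "$state"   # Z

kill "$par"   # once the parent is gone, init inherits and reaps it
```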
Solaris: undelete file
How to undelete any open, deleted file on Linux / Solaris
Chris Dew wrote up a neat trick on how to recover files if deleted on Linux, yet still open by a process.
This works on Solaris as well. =)
$:~:uname -a
SunOS somehost.com 5.10 Generic_127112-11 i86pc i386 i86pc
$:~:echo "sup prefetch.net folks?" > testfile
$:~:tail -f testfile &
[1] 17134
$:~:rm testfile
$:~:ls /proc/17134/fd/
0 1 2
$:~:cat /proc/17134/fd/0
sup prefetch.net folks?
$:~:cp !$ ./testfile
cp /proc/17134/fd/0 ./testfile
$:~:cat testfile
sup prefetch.net folks?
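The same trick can be reproduced without tail(1) by holding the descriptor open in the shell itself, which sidesteps the fact that the fd number tail uses varies between platforms (the fd 0 seen above is a Solaris detail). A minimal sketch, assuming a Linux or Solaris box with /proc:

```shell
echo "sup prefetch.net folks?" > testfile
exec 3< testfile            # keep a descriptor on the inode
rm testfile                 # removes the directory entry, not the data
cp /proc/$$/fd/3 testfile   # copy the still-open inode back to a file
exec 3<&-                   # release the descriptor
cat testfile                # sup prefetch.net folks?
```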
Solaris: using samba to access windows server folder
Accessing Windows shares from the Solaris/Linux command line
If Samba is installed on the system, this is easy to do with the smbclient utility. To access the Windows server named "milton" from the command line, you can run smbclient with the "-U" option, the name of the user to authenticate with, and the name of the server and share to access:
$ smbclient -U "domain\matty" //milton/foo
In this example, I am authenticating as the user matty in the domain "domain," and accessing the share foo on the server milton. If smbclient is unable to resolve the server, you will need to make sure that you have defined a WINS server, or that the server exists in the lmhosts file. To define a WINS server, you can add a line similar to the following (you can get the WINS server by looking at ipconfig /all on a Windows desktop, or by reviewing the LAN traffic with ethereal) to the smb.conf file:
wins server = 1.2.3.4
If you don’t want to use WINS to resolve names, you can add an entry similar to the following to the lmhosts file:
192.168.1.200 milton
Once you are connected to the server, you will be greeted with a "smb: \>" prompt. This prompt allows you to feed commands to the server, such as "pwd," "dir," "mget," and "prompt." To retrieve all of the files in the directory foo1, I can "cd" into the foo1 directory, use "prompt" to disable interactive prompts, and then run "mget" to retrieve all files in that directory:
smb: \> pwd
Current directory is \\server1\foo
smb: \> dir
received 10 entries (eos=1)
  .                    DA        0  Mon May 22 07:19:21 2006
  ..                   DA        0  Mon May 22 07:19:21 2006
  foo1                 DA        0  Sun Dec 11 04:51:12 2005
  foo2                 DA        0  Thu Nov  9 09:48:40 2006
  < ..... >
smb: \> cd foo1
smb: \foo1\> prompt
prompting is now off
smb: \foo1\> mget *
received 38 entries (eos=1)
getting file \foo1\yikes.tar of size 281768 as yikes.tar (411.3 kb/s) (average 411.3 kb/s)
< ..... >
smb: \foo1\> exit
The smbclient manual page documents all of the available commands, and provides a great introduction to this super useful utility. If you bump into any issues connecting to a remote Windows server, you can add "-d" and a debug level (I like debug level 3) to the smbclient command line. This is perfect for debugging connectivity issues.
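For scripted transfers you can skip the interactive prompt entirely: smbclient's "-c" option takes a semicolon-separated command list. Since there is no Windows server to talk to here, this hypothetical wrapper just assembles (but does not run) a command line mirroring the session above:

```shell
# Build a non-interactive smbclient invocation that cds into a
# directory, disables prompting, and mgets everything in it.
fetch_dir() {
  user=$1 share=$2 dir=$3
  printf 'smbclient -U "%s" "%s" -c "cd %s; prompt; mget *"\n' \
      "$user" "$share" "$dir"
}

fetch_dir 'domain\matty' //milton/foo foo1
# smbclient -U "domain\matty" "//milton/foo" -c "cd foo1; prompt; mget *"
```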
Solaris: free space in Veritas diskgroups
Finding free space in Veritas diskgroups
The Veritas volume manager (VxVM) provides logical volume management capabilities across a variety of platforms. As you create new volumes, it is often helpful to know how much free space is available. You can find free space using two methods. The first method utilizes vxdg's "free" option:
$ vxdg -g oradg free
GROUP   DISK      DEVICE      TAG       OFFSET     LENGTH  FLAGS
oradg   c3t20d1   c3t20d1s2   c3t20d1   104848640  1536    -
oradg   c3t20d3   c3t20d3s2   c3t20d3   104848640  1536    -
oradg   c3t20d5   c3t20d5s2   c3t20d5   104848640  1536    -
oradg   c3t20d7   c3t20d7s2   c3t20d7   104848640  1536    -
oradg   c3t20d9   c3t20d9s2   c3t20d9   104848640  1536    -
The "LENGTH" column displays the number of 512-byte blocks available on each disk drive in the disk group "oradg". If you don't feel like using bc(1) to turn blocks into kilobytes, you can use vxassist's "maxsize" option to print the number of blocks and megabytes available:
$ vxassist -g oradg maxsize layout=concat
Maximum volume size: 6144 (3Mb)
Now to find out what to do with 3 MB of disk storage :)
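If you do want to do the block math by hand, the conversion is just blocks/2 to get kilobytes, then /1024 again for megabytes. A quick check of that arithmetic with awk, fed one canned line of the vxdg output above (LENGTH lands in $6 here because the GROUP column is present; other vxdg output formats may shift the column):

```shell
# LENGTH is in 512-byte blocks: blocks / 2 = KB, / 1024 again = MB.
printf 'oradg c3t20d1 c3t20d1s2 c3t20d1 104848640 1536 -\n' |
awk '{ printf "%s %s free: %g MB\n", $1, $2, $6 / 2 / 1024 }'
# oradg c3t20d1 free: 0.75 MB
```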
Solaris: Vxvm calculate free chunk
root $ sh a.sh
... Free chunk: rootmirr 25226.6 Meg
... Free chunk: rtdisk 5122.05 Meg
... Free chunk: rtdisk 0.250977 Meg
... Free chunk: rtdisk 0.255859 Meg
... Free chunk: rtdisk 2049.67 Meg
... Free chunk: rtdisk 550.151 Meg
root $ more a.sh
#!/bin/sh
vxdg -g rootdg free | nawk '
{
    # if ($5/2/1024 > 100)
    if ($5/2/1024 > 0)
        print "... Free chunk: " $1 " " $5/2/1024, "Meg"
}'
Solaris : VXVM quick mirror
VXVM quickly mirroring an empty volume
# vxassist make newvol 10m layout=concat-mirror init=active disk1 disk2
Note: init=active skips the initial mirror synchronization, which is only safe here because the new volume does not contain any data yet.
Solaris: Change NIS user password
How to change an NIS user password
root@t47s# passwd eddccma
Enter login(NIS) password:
passwd(SYSTEM): Sorry, wrong passwd
Permission denied
...just log in to the NIS master server:
root@t47s# ypwhich -m passwd
ededuun001
root@t47s# ssh ededuun001
imhas@ededuun001 # yppasswd eddccma
New Password:
Re-enter new Password:
passwd: password successfully changed for eddccma
imhas@ededuun001 #
Thursday, December 1, 2011
HPUX locating WWPN
Locating the WWPN for an HP-UX host
Complete this task to locate the WWPN for a Hewlett-Packard Server host.
1. Go to the root directory of your HP-UX host.
2. Type ioscan -fnC fc | more for information on the Fibre Channel adapters installed on the host.
The following is example output:
fc 0 0/2/0/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td0
fc 1 0/4/0/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td1
fc 2 0/6/2/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td2
3. Look under the description for the Fibre Channel Mass Storage adapter.
For example, look for the device path name /dev/td1.
4. Type fcmsutil /dev/td1 | grep World, where /dev/td1 is the device path.
The following is example output:
# fcmsutil /dev/td1 | grep World
N_Port Node World Wide Name = 0x50060b000024b139
N_Port Port World Wide Name = 0x50060b000024b138
(root@hpmain)/home/root# fcmsutil /dev/td0 | grep World
N_Port Node World Wide Name = 0x50060b000023a521
N_Port Port World Wide Name = 0x50060b000023a520
(root@hpmain)/home/root# fcmsutil /dev/td2 | grep World
N_Port Node World Wide Name = 0x50060b0000253a8f
N_Port Port World Wide Name = 0x50060b0000253a8e
(root@hpmain)/home/root#
Solaris: checking hba qlogic status
Checking a QLogic HBA on Solaris
qlc(0) is either the internal controller for the dual SCSI disks or the QLogic ISP2200 add-in card.
OS: Solaris 10 (11/06)
Kernel: 125100-07
luxadm -e port
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl NOT CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl CONNECTED
Solaris: metadb repair
$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 10086628 8051562 1934200 81% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 10513824 1072 10512752 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 6146048 4469136 1676912 73% /tmp
swap 10512784 32 10512752 1% /var/run
/dev/md/dsk/d102 10237036 7516386 2618280 75% /opt
/dev/md/dsk/d104 70569513 50785798 19078020 73% /opt/app
/dev/md/dsk/d103 1988623 29696 1899269 2% /var/tmp
$ uname -a
SunOS intdev02 5.10 Generic_125100-10 sun4u sparc SUNW,Sun-Fire-V240
[03:35:01] root@intdev02[1]# grep -i warn /var/adm/messages
Jun 27 21:29:32 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:37 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:37 intdev02 glm: [ID 401478 kern.warning] WARNING: ID[SUNWpd.glm.cmd_timeout.6018]
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd3):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@2,0 (sd1):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@3,0 (sd2):
[03:35:01] root@intdev02[2]# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
1. c1t1d0
/pci@1c,600000/scsi@2/sd@1,0
2. c1t2d0
/pci@1c,600000/scsi@2/sd@2,0
3. c1t3d0
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of SVM volume stripe:d300. Please see metaclear(1M).
/dev/dsk/c1t1d0s1 is part of SVM volume stripe:d301. Please see metaclear(1M).
/dev/dsk/c1t1d0s3 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s4 is part of SVM volume stripe:d303. Please see metaclear(1M).
/dev/dsk/c1t1d0s5 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s7 contains an SVM mdb. Please see metadb(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
[03:35:01] root@intdev02[3]#
[03:35:01] root@intdev02[5]# date
Tue Jul 1 03:39:06 MEST 2008
[03:35:01] root@intdev02[6]# iostat -En | more
c1t1d0 Soft Errors: 0 Hard Errors: 21 Transport Errors: 2
Vendor: SEAGATE Product: ST373207LSUN72G Revision: 045A Serial No: 05433302ZN
Size: 73.40GB <73400057856 bytes>
Media Error: 0 Device Not Ready: 19 No Device: 1 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
[03:35:01] root@intdev02[9]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[10]#
# metadb -a c0t2d0s3 c1t1d0s3
Example 3: Deleting Two Replicas
This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.
# metadb -d c0t2d0s3 c1t1d0s3
[03:35:01] root@intdev02[18]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[19]# metadb -d c1t1d0s7
[03:35:01] root@intdev02[20]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[21]# metadb -a c1t1d0s7
[03:35:01] root@intdev02[22]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a u 16 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[23]#
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 10086628 8051562 1934200 81% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 10513824 1072 10512752 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 6146048 4469136 1676912 73% /tmp
swap 10512784 32 10512752 1% /var/run
/dev/md/dsk/d102 10237036 7516386 2618280 75% /opt
/dev/md/dsk/d104 70569513 50785798 19078020 73% /opt/app
/dev/md/dsk/d103 1988623 29696 1899269 2% /var/tmp
$ uname -a
SunOS intdev02 5.10 Generic_125100-10 sun4u sparc SUNW,Sun-Fire-V240
[03:35:01] root@intdev02[1]# grep -i warn /var/adm/messages
Jun 27 21:29:32 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:37 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:37 intdev02 glm: [ID 401478 kern.warning] WARNING: ID[SUNWpd.glm.cmd_timeout.6018]
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd3):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@2,0 (sd1):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@3,0 (sd2):
[03:35:01] root@intdev02[2]# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
1. c1t1d0
/pci@1c,600000/scsi@2/sd@1,0
2. c1t2d0
/pci@1c,600000/scsi@2/sd@2,0
3. c1t3d0
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of SVM volume stripe:d300. Please see metaclear(1M).
/dev/dsk/c1t1d0s1 is part of SVM volume stripe:d301. Please see metaclear(1M).
/dev/dsk/c1t1d0s3 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s4 is part of SVM volume stripe:d303. Please see metaclear(1M).
/dev/dsk/c1t1d0s5 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s7 contains an SVM mdb. Please see metadb(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
[03:35:01] root@intdev02[3]#
[03:35:01] root@intdev02[5]# date
Tue Jul 1 03:39:06 MEST 2008
[03:35:01] root@intdev02[6]# iostat -En | more
c1t1d0 Soft Errors: 0 Hard Errors: 21 Transport Errors: 2
Vendor: SEAGATE Product: ST373207LSUN72G Revision: 045A Serial No: 05433302ZN
Size: 73.40GB <73400057856 bytes>
Media Error: 0 Device Not Ready: 19 No Device: 1 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
[03:35:01] root@intdev02[9]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[10]#
From the metadb(1M) man page, adding and deleting replicas:
# metadb -a c0t2d0s3 c1t1d0s3
Example 3: Deleting Two Replicas
This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.
# metadb -d c0t2d0s3 c1t1d0s3
[03:35:01] root@intdev02[18]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[19]# metadb -d c1t1d0s7
[03:35:01] root@intdev02[20]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[21]# metadb -a c1t1d0s7
[03:35:01] root@intdev02[22]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a u 16 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[23]#
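Spotting the bad replica in a long metadb -i listing is easy to script: any replica line whose flags include one of the error letters (W, M, D, F, S, R) is suspect. A minimal sketch, run here against a captured copy of the output above (the /tmp path and the sample lines are illustrative):

```shell
#!/bin/sh
# Flag metadb replicas whose status field contains an error letter
# (W = write errors, M/D = master/data block problems, F = format
# problems, S = too small, R = read errors). Input is a saved copy
# of "metadb -i" output; on a live host pipe metadb -i in directly.
cat > /tmp/metadb.out <<'EOF'
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
      W   p  l          16              8192            /dev/dsk/c1t1d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t1d0s7
EOF
awk '/\/dev\/dsk\// {
    # only replica lines carry a /dev/dsk path; uppercase flag = error
    if ($0 ~ /[WMDFSR]/) print "BAD replica:", $NF
}' /tmp/metadb.out
```

On the session above this would have called out /dev/dsk/c1t1d0s7, the same slice metadb -d / metadb -a was used to rebuild.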
Linux: LUN detection
LUN detection procedures
This topic describes LUN detection procedures for the Linux host system.
If you have a Linux driver that does not automatically configure any LUNs other than LUN 0, you can manually configure the other LUNs, depending on the parameters and settings used for the SCSI mid-layer driver. Figure 1 shows an example of the /proc/scsi/scsi file for a Linux host that only configures the first LUN, LUN 0, on each host adapter port.
Figure 1. Example of a /proc/scsi/scsi file from a Linux host that only configures LUN 0
# cat /proc/scsi/scsi
...
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IBM-PSG Model: DPSS-318350M F Rev: S9HA
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 15 Lun: 00
Vendor: IBM Model: TP4.6 V41b3 Rev: 4.1b
Type: Processor ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
There are two ways to work around the issue of only having LUN 0 configured:
1. Create a script to manually add devices into /proc/scsi/scsi
2. Detect LUNs automatically at system boot by modifying the initial ram-disk (initrd)
Create a script to echo the /proc filesystem
Use the scsi add-single-device command to consecutively configure all of the LUNs that are assigned to your host system. Write a script that repeats the scsi add-single-device command for each LUN on each ID for each host adapter. The script must scan all host adapter ports and identify all of the LUNs that are assigned to each port.
After you run the script, you can view all of the assigned LUNs in the /proc/scsi/scsi file.
Figure 2 shows an excerpt of an example /proc/scsi/scsi file for a Linux host after a script has configured every LUN.
Figure 2. Example of a /proc/scsi/scsi file for a Linux host with configured LUNs
# cat /proc/scsi/scsi
...
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 02
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 03
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 04
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
...
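The script is just nested loops echoing `scsi add-single-device <host> <channel> <id> <lun>` into /proc/scsi/scsi. A hedged sketch that only prints the commands it would issue -- the host/channel/ID/LUN ranges are assumptions to adapt, and on a live system each line would be redirected into /proc/scsi/scsi instead of stdout:

```shell
#!/bin/sh
# Generate "scsi add-single-device" lines for host adapter 3,
# channel 0, target 0, LUNs 1-4 (illustrative ranges -- adjust to
# your adapter layout). Live version: redirect each line with
#   echo "scsi add-single-device $HOST $CHANNEL $ID $lun" > /proc/scsi/scsi
HOST=3 CHANNEL=0 ID=0
lun=1
while [ "$lun" -le 4 ]; do
    echo "scsi add-single-device $HOST $CHANNEL $ID $lun"
    lun=$(expr $lun + 1)
done
```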
Detect LUNs automatically at system boot
The second method of configuring LUNs for a Linux system with only LUN 0 configured involves setting the parameter for the SCSI mid-layer driver that controls how many LUNs are scanned during a SCSI bus scan. The following procedure works for both 2.4 and 2.6 kernels, but it assumes the SCSI mid-layer driver is compiled as a scsi_mod module that is loaded automatically at system boot time. For Linux 2.4 kernels, to set the maximum number of disk devices and properly detect all volumes, you need to set the max_scsi_luns option for the SCSI mid-layer driver. For example, if max_scsi_luns is set to 1, SCSI bus scans are limited to LUN 0 only. This value should be set to the maximum number of disks the kernel can support, for example, 128 or 256. In Linux 2.6 kernels, the same procedure applies, except that the parameter has been renamed from max_scsi_luns to max_luns.
1. Edit the /etc/modules.conf file.
2. Add the following line:
* options scsi_mod max_scsi_luns=<n> (where <n> is the total number of LUNs to probe)
3. Save the file.
4. Run the mkinitrd command to rebuild the ram-disk associated with the current kernel. Use the following examples to determine which mkinitrd command to run for your operating system. <kernel-version> refers to the 'uname -r' output, which displays the currently running kernel level, for example 2.4.21-292-smp.
For SUSE distributions, use the following command:
cd /boot
mkinitrd -k vmlinuz-<kernel-version> -i initrd-<kernel-version>
For Red Hat distributions, use the following command:
cd /boot
mkinitrd -v initrd-<kernel-version>.img <kernel-version>
5. Reboot the host.
6. Verify that the boot files are correctly configured for the newly created initrd image in the /boot/grub/menu.lst file.
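Step 2 can be scripted idempotently, so re-running the procedure never stacks duplicate option lines in modules.conf. A sketch against a scratch copy of the file (the /tmp path and the LUN count of 128 are assumptions; use /etc/modules.conf and your own value on a real host):

```shell
#!/bin/sh
# Append "options scsi_mod max_scsi_luns=128" to modules.conf only
# if no max_scsi_luns line is present yet -- safe to run repeatedly.
CONF=/tmp/modules.conf          # /etc/modules.conf on a real host
MAXLUNS=128
touch "$CONF"
grep -q 'max_scsi_luns=' "$CONF" || \
    echo "options scsi_mod max_scsi_luns=$MAXLUNS" >> "$CONF"
```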
Wednesday, November 23, 2011
Solaris: calculate diff filesystem size after transfer
#! /bin/sh
value1=`df -k /app/iwstoreEMEA_snap| grep snap| awk '{print $3}'`
value2=`df -k /server4_EMEA | grep _EMEA | awk '{print $3}'`
exprans=`expr $value1 - $value2`
echo "$exprans kb left"
sleep 60
value1=`df -k /app/iwstoreEMEA_snap| grep snap| awk '{print $3}'`
value2=`df -k /server4_EMEA | grep _EMEA | awk '{print $3}'`
exprans1=`expr $value1 - $value2`
transfer=`expr $exprans - $exprans1`
echo "$transfer KB transferred"
$ sh a.sh
42318487 kb left
14891 KB transferred
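From the two numbers in the sample run you can also derive a rate and a rough ETA; the script above samples every 60 seconds, so KB moved per interval divided by 60 gives KB/s, and KB remaining divided by KB moved gives intervals (minutes) left. A sketch using the figures from the run above:

```shell
#!/bin/sh
# Derive transfer rate and rough ETA from the df-diff samples.
# Values are the ones printed by the sample run above; integer
# shell arithmetic only, so results are truncated.
left=42318487      # KB remaining
moved=14891        # KB transferred during the interval
interval=60        # seconds between the two df samples
rate=`expr $moved / $interval`
eta_min=`expr $left / $moved`     # intervals left ~= minutes left
echo "$rate KB/s, roughly $eta_min minutes to go"
```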
Solaris: ssh tunnel and nc file transfer
You pipe the file to a listening socket on the server machine in the same way as before. It is assumed that an SSH server runs on this machine too.
$ cat backup.iso | nc -l 3333
On the client machine connect to the listening socket through an SSH tunnel:
$ ssh -f -L 23333:127.0.0.1:3333 me@192.168.0.1 sleep 10; \
nc 127.0.0.1 23333 | pv -b > backup.iso
This way of creating and using the SSH tunnel has the advantage that the tunnel is automagically closed after file transfer finishes. For more information and explanation about it please read my article about auto-closing SSH tunnels.
Solaris: ssh and tar
How to transfer directories into remote server using tar and ssh combined.
$ tar cf - mydir/ | ssh gate 'cd /tmp && tar xpvf -'
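The same tar-pipe pattern works locally too -- ssh simply slots in between the two tars. A self-contained sketch using scratch directories under /tmp (the paths and file are illustrative; swap the subshell for ssh gate 'cd /tmp && tar xpf -' to go remote):

```shell
#!/bin/sh
# Copy a directory tree with a tar pipe, preserving permissions (-p).
# The subshell stands in for the remote side of the ssh variant.
mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo "hello" > /tmp/tardemo/src/file.txt
cd /tmp/tardemo
tar cf - src | (cd dst && tar xpf -)
ls dst/src/file.txt
```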
Monday, November 21, 2011
Sendmail: how to masquerade from oracle@host.domain.com to oracle@domain.com
root@wbitdb1:/etc/mail # diff sendmail.cf sendmail.cf.21012008
986,987c986
< #R$* < @ *LOCAL* > $* $: $1 < @ $j . > $2
< R$* < @ *LOCAL* > $* $: $1 < @ $M . > $2
---
> R$* < @ *LOCAL* > $* $: $1 < @ $j . > $2
root@wbitdb1:/etc/mail #
add the line shown below
--------------
###################################################################
### Ruleset 94 -- convert envelope names to masqueraded form ###
###################################################################
SMasqEnv=94
#R$* < @ *LOCAL* > $* $: $1 < @ $j . > $2 <--------------- this line commented
R$* < @ *LOCAL* > $* $: $1 < @ $M . > $2 <--------------- this line added
----------
Testing masquerading
sendmail's address test mode makes it easy to test masquerading.
====================================================
# sendmail -bt
/tryflags HS (to test the header sender address; other tryflags values would be ES, HR, and ER, for envelope sender, header recipient, and envelope recipient, respectively)
/try esmtp email_address_to_test
Example:
sendmail -bt
> /tryflags ES
> /try esmtp user@host.domain.com
Trying envelope sender address user@host.domain.com for mailer esmtp
(many lines omitted)
final returns: user @ domain . com
Rcode = 0, addr = user@domain.com
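If this sendmail.cf is generated from an m4 master file, the cleaner route is to set masquerading there and regenerate, rather than patching ruleset 94 by hand. The equivalent sendmail.mc lines would be roughly (domain.com stands in for your real domain):

```
MASQUERADE_AS(`domain.com')dnl
FEATURE(`masquerade_envelope')dnl
```

FEATURE(masquerade_envelope) is what extends masquerading to envelope sender addresses, which is exactly what the hand edit above accomplishes.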
Saturday, November 19, 2011
Solaris: Other tunnelling tricks
Connect to port 42 of host.example.com via an HTTP proxy at 10.2.3.4, port 8080.
This example could also be used by ssh(1); see the ProxyCommand directive in ssh_config(5) for more information.
$ nc -x10.2.3.4:8080 -Xconnect host.example.com 42
Solaris: simple port scanning
PORT SCANNING
It may be useful to know which ports are open and running services on a target machine. The -z flag can be used to tell nc to report open ports, rather than initiate a connection. For example:
$ nc -z host.example.com 20-30
Connection to host.example.com 22 port [tcp/ssh] succeeded!
Connection to host.example.com 25 port [tcp/smtp] succeeded!
The port range was specified to limit the search to ports 20 - 30.
Alternatively, it might be useful to know which server software is running, and which versions. This information is often contained within the greeting banners. In order to retrieve these, it is necessary to first make a connection, and then break the connection when the banner has been retrieved. This can be accomplished by specifying a small timeout with the -w flag, or perhaps by issuing a "QUIT" command to the server:
$ echo "QUIT" | nc host.example.com 20-30
SSH-1.99-OpenSSH_3.6.1p2
Protocol mismatch.
220 host.example.com IMS SMTP Receiver Version 0.84 Ready
Solaris: using nc to transfer file
DATA TRANSFER
The example in the previous section can be expanded to build a basic data transfer model. Any information input into one end of the connection will be output to the other end, and input and output can be easily captured in order to emulate file transfer.
Start by using nc to listen on a specific port, with output captured into a file:
$ nc -l 1234 > filename.out
Using a second machine, connect to the listening nc process, feeding it the file which is to be transferred:
$ nc host.example.com 1234 < filename.in
After the file has been transferred, the connection will close automatically.
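Because nc gives no end-to-end integrity check, it is worth comparing checksums after any raw socket copy. The comparison itself is local, so it can be sketched with two files standing in for the sender's and receiver's copies (paths are illustrative; on a real transfer run cksum on each host):

```shell
#!/bin/sh
# Verify a transferred file by comparing checksums on both copies.
# /tmp/filename.in and /tmp/filename.out stand in for the two ends;
# cp stands in for the nc transfer so the sketch is self-contained.
echo "payload" > /tmp/filename.in
cp /tmp/filename.in /tmp/filename.out
sum1=$(cksum < /tmp/filename.in | awk '{print $1}')
sum2=$(cksum < /tmp/filename.out | awk '{print $1}')
if [ "$sum1" = "$sum2" ]; then
    echo "transfer OK"
else
    echo "checksum mismatch" >&2
fi
```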
Solaris: using nc and tar to transfer file between hosts
If you can't find a way for the controllers to talk to each other (as others have mentioned), you can try doing this:
On your destination server, run the following command:
destination-server# nc -l 9999 | tar xvzf -
Then, on your source server, run the following command:
source-server# tar cvzf - /path/to/data | nc destination-server-ip 9999
The advantage to this is it avoids any encryption overhead that SSH/rsync gives, so you'll get a bit of a speed boost. This also compresses and decompresses on the source and destination servers in-stream, so it speeds up the transfer process at the expense of some CPU cycles.
CLIENT/SERVER MODEL
It is quite simple to build a very basic client/server model using nc. On one console, start nc listening on a specific port for a connection. For example:
$ nc -l 1234
nc is now listening on port 1234 for a connection. On a second console (or a second machine), connect to the machine and port being listened on:
$ nc 127.0.0.1 1234
There should now be a connection between the ports. Anything typed at the second console will be concatenated to the first, and vice versa. After the connection has been set up, nc does not really care which side is being used as a 'server' and which side is being used as a 'client'. The connection may be terminated using an EOF ('^D').
Solaris: rsync in parallel
This does increase the amount of CPU and I/O that both your sending and receiving side use, but I’ve been able to run ~25 parallel instances without remotely degrading the rest of the system or slowing down the other RSYNC instances.
The key is to use the --include and --exclude command line switches to create selection criteria.
Example
drwxr-xr-x 2 root root 179 Jul 19 16:22 directory_a
drwxr-xr-x 2 root root 179 Aug 12 00:08 directory_b
If directory_a has 2,000,000 files underneath it and directory_b also has 2,000,000 files, use the following idea to split them up. The --exclude option says, in essence, "exclude everything that is not explicitly included".
#!/bin/bash
rsync -av --include="/directory_a*" --exclude="/*" --progress remote::/ /localdir/ > /tmp/myoutputa.log &
rsync -av --include="/directory_b*" --exclude="/*" --progress remote::/ /localdir/ > /tmp/myoutputb.log &
The following will take about twice the amount of time gathering files than the above:
#!/bin/bash
rsync -av --progress remote::/ /localdir/ > /tmp/myoutput.log &
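The split-and-background pattern generalizes to any list of top-level directories: launch one worker per directory, then wait for all of them. A self-contained sketch of the same fan-out with cp standing in for rsync so it runs locally (paths and directory names are illustrative):

```shell
#!/bin/sh
# One background copy per top-level directory, then wait for all --
# the same parallelization used with rsync above, with cp standing
# in so the sketch needs no remote host.
mkdir -p /tmp/fanout/src/directory_a /tmp/fanout/src/directory_b /tmp/fanout/dst
echo a > /tmp/fanout/src/directory_a/f1
echo b > /tmp/fanout/src/directory_b/f2
for d in directory_a directory_b; do
    cp -r "/tmp/fanout/src/$d" /tmp/fanout/dst/ &   # one worker per dir
done
wait                                                # block until all finish
ls /tmp/fanout/dst
```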
Unix: copy to remote via cpio example
cpio code for transferring files from one system to another (remotehost is a placeholder for the destination system):
# /dir.txt contains the list of ACL directories
for dir in $(cat /dir.txt)
do
cd "$dir"
find . -depth -print | cpio -omPv | rsh remotehost "(cd $dir && cpio -idumvP)"
done
Solaris: Using rsync to transfer dir between two hosts
Transfer data from host3 to host4 via rsync and nfs
hostp3% more copyemea.sh
#! /bin/sh
date> copy.log
/opt/csw/bin/rsync -ar --stats /app/iwstoreEMEA_snap/ /hosta4_EMEA/ 2>&1 >>copy.log
date>> copy.log
hostp3% df -h /hosta4_EMEA
Filesystem size used avail capacity Mounted on
hosta4:/opt/app/data/iw-store/EMEA
250G 134G 116G 54% /hosta4_EMEA
hostp3%
in this case:
hosta4# zfs set sharenfs=rw=153.88.177.59,root=153.88.177.59 app/iwstoreEMEA
where 153.88.177.59 is the IP address of host3
on host3:
host3# mount hosta4:/opt/app/data/iw-store/EMEA /hosta4_EMEA
The target directory on hosta4 must be empty before the initial transfer; otherwise the result can be unexpected, such as the final transferred size on hosta4 ending up larger than the source directory on host3.
Solaris: ZFS snapshot between two hosts
Send the initial snapshot and create the filesystem on the remote host
- make sure that the receiving host is running Solaris updated after October 2008
- make sure the filesystem you are replicating does not exist already on the receiving host
- zfs snapshot export/upload@zrep-00001 on the origin host
- zfs send export/upload@zrep-00001 | ssh otherservername "cat > /export/save/upload@zrep-00001" (the initial send is a full stream, so no -i)
- wait several days depending on the size of the dataset
- cat /export/save/upload@zrep-00001 | zfs recv export/upload
Statistics: able to complete a transfer of 6.1T in 10 days. (From ms1 (Sun Fire X4500) to ms5 (Sun Fire X4540), both hosts in the same rack.)
Send incremental covering the days that the first step took
- zfs snapshot export/upload@zrep-00002 on the origin host
- zfs send -i export/upload@zrep-00001 export/upload@zrep-00002 | ssh otherservername "cat > /export/save/upload@zrep-00002"
- cat /export/save/upload@zrep-00002 | zfs recv export/upload
Solaris 10: How to mount lofs from localzone
Sol10: How to mount lofs from localzone
On global zone, mounted file system looks like this
/dev/md/dsk/d113 530063064 65560 524696880 1% /zones/myzone/ftp/data2
run this zonecfg command
zonecfg -z myzone
> add fs
> set dir=/ftp/data2
> set special=/zones/myzone/ftp/data2
> set type=lofs
> end
> verify
> commit
> exit
run below command to confirm
#zonecfg -z myzone info
....
fs:
dir: /ftp/data2
special: /zones/myzone/ftp/data2
raw not specified
type: lofs
options: []
....
root@myzone # mount -F lofs /ftp/data2 /ftp/data2
root@myzone # df -k /ftp/data2
Filesystem kbytes used avail capacity Mounted on
/ftp/data2 530063064 65560 524696880 1% /ftp/data2
HPUX/LINUX: NFS mounting
Exporting and mounting file systems on Linux and HP-UX
If you want to use a shared file system on Linux® and HP-UX systems, you must export it from the system where it is located and mount it on every system on which you want to access it.
You must be logged on as root.
To export and mount file systems on Linux and HP-UX systems, complete these steps:
- Export a file system on Linux and HP-UX systems.
- Add the file system that you want to export to the file /etc/exports.
- Export all entries in the file /etc/exports by entering the command /usr/sbin/exportfs -a.
- Verify that the file system is exported by entering the command /usr/sbin/exportfs.
- Mount a file system on HP-UX and Linux systems.
- If the file system that you want to mount is remote, ensure you have permission to mount it by entering the command /usr/sbin/showmount -e <sourcehost>, where <sourcehost> is the name of the remote system.
- Choose an empty directory as the mount point for the file system that you want to mount. If one does not exist, create it by entering the command mkdir /<destinationdir>, where <destinationdir> is the name of the local mount point.
- Mount the file system on your local system by entering the corresponding command.
- On HP-UX: /usr/sbin/mount -F nfs <sourcehost>:/<sourcedir> /<destinationdir>
- On Linux: /bin/mount -t nfs <sourcehost>:/<sourcedir> /<destinationdir>
where <sourcehost> is the name of the remote system, <sourcedir> is the name of the remote file system, and <destinationdir> is the local mount point.
To mount the remote file system after each reboot, add it to the /etc/fstab file. For a description of the file format of /etc/fstab, enter the command man fstab.
example fstab:
server:/mnt /mnt nfs rw,hard 0 0 #mount from server
Friday, November 18, 2011
Solaris nfs troubleshooting
# showmount -a
All mount points on local host:
edcert20.ucs.indiana.edu:/home
edcert21.ucs.indiana.edu:/usr/local
# showmount -d
Directories on local host:
/home
/usr/local
# showmount -e
Export list on local host
/home        edcert21.ucs.indiana.edu edcert20.ucs.indiana.edu
/usr/local   edcert21.ucs.indiana.edu
# df -F nfs
Filesystem                      Type  blocks  use    avail  %use  Mounted on
edcert21.ucs.indiana.edu:/home  nfs   68510   55804  12706  81%   /usr/share/help
Use the command nfsstat -s to display NFS activity on the server side. For example:
# nfsstat -s
Server RPC:
calls      badcalls   nullrecv   badlen     xdrcall    duphits    dupage
50852      0          0          0          0          0          0.00
Server NFS:
calls      badcalls
50852      0
null       getattr    setattr    root       lookup     readlink
1 0%       233 0%     0 0%       0 0%       1041 2%    0 0%
read       wrcache    write      create     remove     rename
49498 97%  0 0%       0 0%       0 0%       0 0%       0 0%
link       symlink    mkdir      rmdir      readdir    fsstat
0 0%       0 0%       0 0%       0 0%       75 0%      4 0%
The output may be interpreted using the following guidelines.
- badcalls > 0 - RPC requests are being rejected by the server. This could indicate authentication problems caused by having a user in too many groups, attempts to access exported file systems as root, or an improper Secure RPC configuration.
- nullrecv > 0 - NFS requests are not arriving fast enough to keep all of the nfsd daemons busy. Reduce the number of NFS server daemons until nullrecv is not incremented.
- symlink > 10% - Clients are making excessive use of symbolic links that are on file systems exported by the server. Replace the symbolic link with a directory, and mount both the underlying file system and the link's target on the client.
- getattr > 60% - Check for non-default attribute caching (noac mount option) on NFS clients.
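As a rough sketch, the server-side badcalls and nullrecv counters can be pulled out of the "Server RPC" section and checked against the rules above. The field order is assumed from the nfsstat -s output shown; the function name is illustrative, not part of any standard tool.

```shell
# Assumes the "Server RPC" header line starts with "calls" and the values
# follow on the next line, as in the sample output above.
server_rpc_check() {
    awk '/^calls/ && !seen { getline; seen = 1
        # $2 = badcalls, $3 = nullrecv (assumed field positions)
        printf "badcalls=%d nullrecv=%d\n", $2, $3
        if ($2 > 0) print "WARN: server is rejecting RPC requests"
        if ($3 > 0) print "WARN: too many nfsd daemons for the load"
    }'
}

# e.g.: nfsstat -s | server_rpc_check
```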
Use the command nfsstat -c to display NFS activity on the client side. For example:
# nfsstat -c
Client RPC:
calls      badcalls   retrans    badxid     timeout    wait       newcred
369003     62         1998       43         2053       0          0
Client NFS:
calls      badcalls   nclget     nclsleep
368948     0          368948     0
null       getattr    setattr    root       lookup     readlink
0 0%       51732 14%  680 0%     0 0%       95069 25%  542 0%
read       wrcache    write      create     remove     rename
210187 56% 0 0%       2259 0%    1117 0%    805 0%     337 0%
link       symlink    mkdir      rmdir      readdir    fsstat
120 0%     0 0%       7 0%       0 0%       5510 1%    583 0%

This output may be interpreted using the guidelines given below.
- timeout > 5% - The client's RPC requests are timing out before the server can answer them, or the requests are not reaching the server. Check badxid to determine the problem.
- badxid ~ timeout - RPC requests are being handled by the server, but too slowly. Increase timeo parameter value for this mount, or tune the server to reduce the average request service time.
- badxid ~ 0 - With timeouts greater than 3%, this indicates that packets to and from the server are getting lost on the network. Reduce the read and write block sizes (mount parameters rsize and wsize) for this mount.
- badcalls > 0 - RPC calls on soft-mounted file systems are timing out. If the server is running, and badcalls is growing, then soft-mounted file systems should use a larger timeo or retrans value.
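The timeout and badxid ratios from the "Client RPC" counters can be computed the same way. This is a sketch: the field positions are assumed from the nfsstat -c output above, and the 5% threshold comes from the timeout guideline.

```shell
# Assumes the "Client RPC" header line starts with "calls" and the values
# follow on the next line, as in the sample output above.
nfsstat_client_check() {
    awk '/^calls/ && !seen { getline; seen = 1
        # $1 = calls, $4 = badxid, $5 = timeout (assumed field positions)
        calls = $1; badxid = $4; timeout = $5
        printf "timeout=%.1f%% badxid=%.1f%%\n", 100*timeout/calls, 100*badxid/calls
        if (100*timeout/calls > 5) print "WARN: timeout rate above 5% - check badxid"
    }'
}

# e.g.: nfsstat -c | nfsstat_client_check
```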
Solaris vxvm move a disk group to another system
1. Unmount and stop all volumes in the disk group on the first system:
# umount /mntdir
# vxvol -g <diskgroup> stopall
2. Deport (disable all local access to) the disk group to be moved with this command:
# vxdg deport <diskgroup>
3. Import (enable local access to) the disk group and its disks from the second system with:
# vxdg import <diskgroup>
4. After the disk group is imported, start all volumes in the disk group with this command:
# vxrecover -g <diskgroup> -sb
The options here indicate that VERITAS Volume Manager will start all the disabled volumes (-s) in the background (-b).
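Steps 1-4 above can be collected into one reviewable script. This is a sketch only: DG and MNT are placeholder names, and the DRYRUN wrapper (on by default here) just prints each command so the sequence can be checked before running the VxVM commands for real on the source and target hosts.

```shell
#!/bin/sh
# DG and MNT are hypothetical placeholders; override via the environment.
DG=${DG:-mydg}
MNT=${MNT:-/mntdir}
DRYRUN=${DRYRUN:-1}
# With DRYRUN=1, print the command instead of executing it.
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# On the first system: quiesce and release the disk group (steps 1-2)
run umount "$MNT"
run vxvol -g "$DG" stopall
run vxdg deport "$DG"

# On the second system: import and restart all volumes (steps 3-4)
run vxdg import "$DG"
run vxrecover -g "$DG" -sb
```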
Solaris: vxfs large files
How to check for/enable largefile support on vxfs
To check if largefiles are enabled:
# /usr/lib/fs/vxfs/fsadm /mount/point
nolargefiles

Unlike ufs, you can enable vxfs largefile support on the fly with fsadm:
# /usr/lib/fs/vxfs/fsadm -o largefiles /mount/point
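A small helper can make the check-then-enable step scriptable. This is a sketch: the function reads the fsadm status output on stdin and prints the follow-up command rather than running it; the function name is illustrative and the fsadm path is taken from the example above.

```shell
# Reads `fsadm /mount/point` output on stdin; $1 is the mount point.
check_largefiles() {
    if grep -q 'nolargefiles'; then
        # largefiles is off: print the enable command from the example above
        echo "/usr/lib/fs/vxfs/fsadm -o largefiles $1"
    else
        echo "largefiles already enabled on $1"
    fi
}

# e.g.: /usr/lib/fs/vxfs/fsadm /mount/point | check_largefiles /mount/point
```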
Remove a mirror from a volume
Dissociate and remove a mirror (plex) from a volume:
# vxplex [-g diskgroup] -o rm dis plex
This command will remove the mirror (plex) and all associated subdisks.
HPUX LVM mirror
System Administration Guide for HP-UX 10.20
Part 4 - LVM Disk Mirroring
This note describes how to integrate a second disk into the system volume group and configure it as an alternative boot device, thereby providing LVM mirrored backup for the primary boot device.
Introduction
This note describes how to configure LVM mirroring of a system disk. In this particular example, the HP server is STSRV1, the primary boot device is SCSI=6 (/dev/dsk/c2t6d0) and the alternative mirrored boot device is SCSI=5 (/dev/dsk/c2t5d0). The following commands may be found in /sbin and /usr/sbin and must be run as root.
Procedure - Create a System Mirror Disk
This procedure assumes that the HPUX-10.## operating system and the HPUX LVM mirroring product have already been installed.
# ioscan -fnC disk                       (identify mirror disk)
# pvcreate -Bf /dev/rdsk/c2t5d0          (make a bootable physical volume)
# mkboot -l /dev/rdsk/c2t5d0             (create LVM disk layout)
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c2t5d0   (-lq = switch off quorum)
# vgextend /dev/vg00 /dev/dsk/c2t5d0
# for P in 1 2 3 4 5 6 7 8 9 10
> do
> lvextend -m 1 /dev/vg00/lvol$P /dev/dsk/c2t5d0
> sleep 1
> done
Following the mirroring procedure, it is now essential to set up the critical partitions concerning root, swap and boot. It is useful to confirm the partition layout using the commands bdf and lvlnboot.
# bdf -l
# lvlnboot -v
Following changes seen under HPUX-10.20, the / (root) partition will appear first in the listings as /dev/vg00/lvol3 and the /stand (boot) partition will probably be reported as "PV Name" and /dev/vg00/lvol1. The first command below is destructive, in that it removes the "PV Name" boot entry. It should therefore be reinserted using the lvlnboot -b command below. Exercise extreme care with the following commands.
# lvlnboot -r /dev/vg00/lvol3   (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2   (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1   (prepare a boot LVM logical volume)
# vgcfgbackup vg00
# lifls -C /dev/rdsk/c2t5d0     (confirms as a boot device)
Disk Crash - D-Class Procedure (Fast Recovery - Hot-Swap Disk )
The example below assumes that the system disk (/dev/dsk/c0t5d0) has crashed and has been replaced by a hot-swap disk (i.e. it is not necessary to halt or reboot the server). The procedure would be just the same for the mirrored disk, as follows:
# pvcreate -Bf /dev/rdsk/c0t5d0
# vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t5d0
# vgchange -a y /dev/vg00
# pvcreate -Bf /dev/rdsk/c2t5d0          (make a bootable physical volume)
# mkboot -l /dev/rdsk/c2t5d0             (create LVM disk layout)
# mkboot -a "hpux -lq (;0)/stand/vmunix" /dev/rdsk/c2t5d0   (-lq = switch off quorum)
# vgsync /dev/vg00

[NB. It will only be necessary to run the mkboot commands above if it is a system disk replacement.]
When the synchronisation of the logical volumes is complete, reconfirm the following :
# lvlnboot -r /dev/vg00/lvol3   (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2   (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1   (prepare a boot LVM logical volume)
# vgcfgbackup vg00
Disk Crash - Night Procedure (Fast Recovery - No Disk Replacement)
The example below assumes that the system disk (/dev/dsk/c2t6d0) has crashed and has NOT been replaced. The procedure, however, would be just the same for a mirrored disk crash with the exception of the change in device name. Because half of the volume group's disk space is no longer available, the quorum check would fail, so it is necessary to boot up in single-user mode with the quorum argument unset, as follows:
1. Escape from the boot sequence
2. Choose the mirrored disk (i.e. P0 = /dev/dsk/c2t5d0)
3. Boot up in single user mode, without quorum, as follows :
Select from menu: b P0 isl
ISL > hpux -is -lq (;0)/stand/vmunix
# init 4
Disk Crash - Day Procedure (Slow Recovery - Internal Disk Replacement)
The example below assumes that the system disk (/dev/dsk/c2t6d0) has crashed and been replaced. The procedure, however, would be just the same for a mirrored disk crash with the exception of the change in device name. Because half of the volume group's disk space is not available, the quorum check would fail, so it is necessary to boot up in single-user mode with the quorum argument unset, as follows:
1. Escape from the boot sequence
2. Choose the mirrored disk (i.e. P0 = /dev/dsk/c2t5d0)
3. Boot up in single user mode, without quorum, as follows :
Select from menu: b p0 isl
ISL > hpux -is -lq (;0)/stand/vmunix
# PATH=$PATH:/sbin:/usr/sbin
# mount -a
# pvcreate -Bf /dev/rdsk/c2t6d0
# mkboot -l /dev/rdsk/c2t6d0
# mkboot -a "hpux (;0) /stand/vmunix" /dev/rdsk/c2t6d0
# vgcfgrestore -n /dev/vg00 /dev/rdsk/c2t6d0
# vgchange -a y /dev/vg00
# lvlnboot -r /dev/vg00/lvol3   (prepare a root LVM logical volume)
# lvlnboot -s /dev/vg00/lvol2   (prepare a swap LVM logical volume)
# lvlnboot -b /dev/vg00/lvol1   (prepare a boot LVM logical volume)
# init 4
The machine will now boot up correctly and the disks will synchronize automatically. The replaced system disk will now mirror automatically from the original mirrored disk. There will be considerable disk activity at this time and the progress of the mirroring may be confirmed with :
# lvdisplay -v /dev/vg00/lvol1
This will probably show that the first volume is "current" and therefore successfully mirrored.
# lvdisplay -v /dev/vg00/lvol8
This will almost certainly show that the volume is "stale" and therefore not yet mirrored. When the disks are synchronized, reboot the machine to confirm that the original "primary boot path" is valid again.
To remove the mirroring later, reduce each logical volume back to a single copy and then remove the disk from the volume group:

# for P in 1 2 3 4 5 6 7 8 9 10
> do
> lvreduce -m 0 /dev/vg00/lvol$P /dev/dsk/c2t5d0
> sleep 1
> done
# vgreduce /dev/vg00 /dev/dsk/c2t5d0