Monitoring pipe activity with pv
$ dd if=/dev/zero | pv > foo
522MB 0:00:06 [ 109MB/s] [ <=> ]
When pv is added to a pipeline, you get a continuous display of the amount of data being transferred between the two pipe endpoints. I really dig this utility, and I'm stoked that I found it on the catonmat website!
UPDATE:
Try using pv when sending data over the network with dd. Neato.
[root@machine2 ~]# ssh machine1 "dd if=/dev/VolGroup00/domU2migrate" | pv -s 8G -petr | dd of=/dev/xen02vg/domU2migrate
0:00:30 [11.2MB/s] [====>                ]  4% ETA 0:10:13
Want to rate limit the transfer so you don’t flood the pipe?
-L RATE, --rate-limit RATE
Limit the transfer to a maximum of RATE bytes per second. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on.
-B BYTES, --buffer-size BYTES
Use a transfer buffer size of BYTES bytes. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on. The default buffer size is the block size of the input file's filesystem multiplied by 32 (512kb max), or 400kb if the block size cannot be determined.
Already have a transfer in progress and want to rate limit it without restarting?
-R PID, --remote PID
If PID is an instance of pv that is already running, -R PID will cause that instance to act as though it had been given this instance’s command line instead. For example, if pv -L 123k is running with process ID 9876, then running pv -R 9876 -L 321k will cause it to start using a rate limit of 321k instead of 123k. Note that some options cannot be changed while running, such as -c, -l, and -
Tuesday, December 6, 2011
Solaris 10: Adding a file system to a running zone
Since the global zone uses loopback mounts to present file systems to zones, adding a new file system is as easy as loopback-mounting it into the zone's root file system:
$ mount -F lofs /filesystems/zone1oracle03 /zones/zone1/root/ora03
Once the file system was mounted, I added it to the zone configuration and then verified it was mounted:
$ mount | grep ora03
/filesystems/zone1oracle03 on filesystems/zone1oracle0 read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=2d9000b on Sun Apr 12 10:43:19 2009
/zones/zone1/root/ora03 on /filesystems/zone1oracle03 read/write/setuid/devices/dev=2d9000b on Sun Apr 12 10:44:07 2009
With ZFS filesystem (mountpoint=legacy):
mount -F zfs zpool/fs /path/to/zone/root/fs
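To persist the mount in the zone configuration (the "added it to the zone configuration" step), a lofs fs resource can be added with zonecfg. A sketch of the session, assuming the zone name and paths from the lofs example above:

```
zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/ora03
zonecfg:zone1:fs> set special=/filesystems/zone1oracle03
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> commit
zonecfg:zone1> exit
```

With this in place the file system comes back after a zone reboot, instead of living only until the next restart.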
Linux: remount read only file system
$ mount -o remount,rw /
Once you can write to the file system you should be able to write out changes to the file system to correct the issue that prevented the server from booting. Viva la remount!
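Before (or after) the remount, you can confirm the current mount state of / straight from /proc/mounts on Linux; a quick sketch:

```shell
# print the first mount option (rw or ro) for the root filesystem
awk '$2 == "/" { print $4; exit }' /proc/mounts | cut -d, -f1
```

If this prints ro, the remount above is what flips it back to rw.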
Solaris: coreadm core file management
Using the Solaris coreadm utility to control core file generation
Solaris has shipped with the coreadm utility for quite some time, and this nifty little utility allows you to control every facet of core file generation. This includes the ability to control where core files are written, the name of core files, which portions of the process's address space will be written to the core file, and my favorite option, whether or not to generate a syslog entry indicating that a core file was generated.
To begin using coreadm, you will first need to run it with the "-g" option to specify where core files should be stored and the pattern that should be used when creating the core file:
$ coreadm -g /var/core/core.%f.%p
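In this pattern, %f expands to the executable name and %p to the process ID, so a crash of inetd running as PID 4652 produces core.inetd.4652. The naming can be simulated with printf:

```shell
# simulate coreadm's %f (executable name) and %p (PID) expansion
printf 'core.%s.%s\n' inetd 4652    # prints core.inetd.4652
```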
Once a directory and file pattern are specified, you can optionally adjust which portions of the process's address space (e.g., text segment, heap, ISM, etc.) will be written to the core file. To ease debugging, I like to configure coreadm to dump everything with the "-G all" option:
$ coreadm -G all
Since core files are typically created at odd working hours, I also like to configure coreadm to log a message to syslog indicating that a core file was created. This can be done with the coreadm "-e log" option:
$ coreadm -e log
After these settings are adjusted, the coreadm "-e global" option can be used to enable global core file generation, and the coreadm utility can be run without any arguments to view the settings (which are stored in /etc/coreadm.conf):
$ coreadm -e global
$ coreadm
global core file pattern: /var/core/core.%f.%p
global core file content: all
init core file pattern: core
init core file content: default
global core dumps: enabled
per-process core dumps: enabled
global setid core dumps: disabled
per-process setid core dumps: disabled
global core dump logging: enabled
Once global core file support is enabled, each time a process receives a fatal signal (e.g., SIGSEGV, SIGBUS, etc.):
$ kill -SIGSEGV 4652
A core file will be written to /var/core:
$ ls -al /var/core/*4652
-rw------- 1 root root 4163953 Mar 9 11:51 /var/core/core.inetd.4652
And a message similar to the following will appear in the system log:
Mar 9 11:51:48 fubar genunix: [ID 603404 kern.notice] NOTICE: core_log: inetd[4652] core dumped: /var/core/core.inetd.4652
This is an amazingly useful feature, and can greatly simplify root causing software problems.
Solaris: killing defunct process
$ ps -ef | grep defunct
root 646 426 0 - ? 0:00 <defunct>
root 1489 12335 0 09:32:54 pts/1 0:00 grep defunct
$ preap 646
646: exited with status 0
This will cause the process to exit, and the kernel can then free up the resources that were allocated by that process.
Solaris: undelete file
How to undelete any open, deleted file on Linux / Solaris
Chris Dew wrote up a neat trick on how to recover files if deleted on Linux, yet still open by a process.
This works on Solaris as well. =)
$:~:uname -a
SunOS somehost.com 5.10 Generic_127112-11 i86pc i386 i86pc
$:~:echo "sup prefetch.net folks?" > testfile
$:~:tail -f testfile &
[1] 17134
$:~:rm testfile
$:~:ls /proc/17134/fd/
0 1 2
$:~:cat /proc/17134/fd/0
sup prefetch.net folks?
$:~:cp !$ ./testfile
cp /proc/17134/fd/0 ./testfile
$:~:cat testfile
sup prefetch.net folks?
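The same trick works on Linux, though the fd number usually differs (tail typically holds the file on a descriptor other than 0), so it helps to scan /proc/PID/fd for the link that still points at the deleted path. A minimal sketch using a temp file:

```shell
#!/bin/sh
# create a file, hold it open with a reader, then delete it
tmp=$(mktemp)
echo "sup prefetch.net folks?" > "$tmp"
tail -f "$tmp" > /dev/null 2>&1 &
pid=$!
sleep 1
rm "$tmp"

# the data is gone from the namespace but still reachable via the open fd
for fd in /proc/$pid/fd/*; do
    readlink "$fd" | grep -q "$tmp" && cp "$fd" "$tmp.recovered"
done
kill $pid

cat "$tmp.recovered"    # prints the original contents
```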
Solaris: using samba to access windows server folder
Accessing Windows shares from the Solaris/Linux command line
If Samba is installed on the system, this is easy to do with the smbclient utility. To access the Windows server named "milton" from the command line, you can run smbclient with the "-U" option, the name of the user to authenticate with, and the name of the server and share to access:
$ smbclient -U "domain\matty" //milton/foo
In this example, I am authenticating as the user matty in the domain "domain", and accessing the share foo on the server milton. If smbclient is unable to resolve the server, you will need to make sure that you have defined a WINS server, or that the server exists in the lmhosts file. To define a WINS server, you can add a line similar to the following to the smb.conf file (you can get the WINS server address by looking at ipconfig /all on a Windows desktop, or by reviewing the LAN traffic with ethereal):
wins server = 1.2.3.4
If you don’t want to use WINS to resolve names, you can add an entry similar to the following to the lmhosts file:
192.168.1.200 milton
Once you are connected to the server, you will be greeted with a "smb: \>" prompt. This prompt allows you to feed commands to the server, such as "pwd", "dir", "mget", and "prompt". To retrieve all of the files in the directory foo1, I can "cd" into the foo1 directory, use "prompt" to disable interactive prompts, and then run "mget" to retrieve all files in that directory:
smb: \> pwd
Current directory is \\server1\foo
smb: \> dir
received 10 entries (eos=1)
  .                 DA        0  Mon May 22 07:19:21 2006
  ..                DA        0  Mon May 22 07:19:21 2006
  foo1              DA        0  Sun Dec 11 04:51:12 2005
  foo2              DA        0  Thu Nov  9 09:48:40 2006
  < ..... >
smb: \> cd foo1
smb: \foo1\> prompt
prompting is now off
smb: \foo1\> mget *
received 38 entries (eos=1)
getting file \foo1\yikes.tar of size 281768 as yikes.tar (411.3 kb/s) (average 411.3 kb/s)
< ..... >
smb: \foo1\> exit
The smbclient manual page documents all of the available commands, and provides a great introduction to this super useful utility. If you bump into any issues connecting to a remote Windows server, you can add “-d” and a debug level (I like debug level 3) to the smbclient command line. This is perfect for debugging connectivity issues.
Solaris: free space in Veritas diskgroups
Finding free space in Veritas diskgroups
The Veritas volume manager (VxVM) provides logical volume management capabilities across a variety of platforms. As you create new volumes, it is often helpful to know how much free space is available. You can find free space using two methods. The first method utilizes vxdg's "free" option:
$ vxdg -g oradg free
GROUP   DISK     DEVICE     TAG      OFFSET     LENGTH  FLAGS
oradg   c3t20d1  c3t20d1s2  c3t20d1  104848640  1536    -
oradg   c3t20d3  c3t20d3s2  c3t20d3  104848640  1536    -
oradg   c3t20d5  c3t20d5s2  c3t20d5  104848640  1536    -
oradg   c3t20d7  c3t20d7s2  c3t20d7  104848640  1536    -
oradg   c3t20d9  c3t20d9s2  c3t20d9  104848640  1536    -
The "LENGTH" column displays the number of 512-byte blocks available on each disk drive in the disk group "oradg". If you don't feel like using bc(1) to turn blocks into kilobytes, you can use vxassist's "maxsize" option to print the number of blocks and megabytes available:
$ vxassist -g oradg maxsize layout=concat
Maximum volume size: 6144 (3Mb)
Now to find out what to do with 3 MB of disk storage :)
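Since VxVM reports lengths in 512-byte blocks, converting to megabytes is a divide by 2048 (2048 blocks x 512 bytes = 1 MB). Shell arithmetic confirms the vxassist figure above:

```shell
# 6144 blocks / 2048 blocks-per-MB
echo $(( 6144 / 2048 ))    # prints 3
```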
Solaris: VxVM calculate free chunk
root $ sh a.sh
... Free chunk: rootmirr 25226.6 Meg
... Free chunk: rtdisk 5122.05 Meg
... Free chunk: rtdisk 0.250977 Meg
... Free chunk: rtdisk 0.255859 Meg
... Free chunk: rtdisk 2049.67 Meg
... Free chunk: rtdisk 550.151 Meg
root $ more a.sh
#! /bin/sh
vxdg -g rootdg free | nawk '
{
    # if ($5/2/1024 > 100)
    if ($5/2/1024 > 0)
        print "... Free chunk: " $1 " " $5/2/1024, "Meg"
}'
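The awk filter can be sanity-checked on any box by feeding it a canned line of the same shape; this assumes, as the script does, that the disk name lands in field 1 and the free length (in 512-byte blocks) in field 5:

```shell
# canned input: 10485760 blocks of 512 bytes = 5120 MB free
awk '$5/2/1024 > 0 { print "... Free chunk: " $1 " " $5/2/1024, "Meg" }' <<'EOF'
rtdisk c3t20d3s2 c3t20d3 0 10485760
EOF
```

This prints "... Free chunk: rtdisk 5120 Meg".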
Solaris: VxVM quick mirror
VxVM: quickly mirroring an empty volume
# vxassist make newvol 10m layout=concat-mirror init=active disk1 disk2
Solaris: Change NIS user password
How to change an NIS user password
root@t47s# passwd eddccma
Enter login(NIS) password:
passwd(SYSTEM): Sorry, wrong passwd
Permission denied
...just log in to the NIS master server
root@t47s# ypwhich -m passwd
ededuun001
root@t47s# ssh ededuun001
imhas@ededuun001 # yppasswd eddccma
New Password:
Re-enter new Password:
passwd: password successfully changed for eddccma
imhas@ededuun001 #
Thursday, December 1, 2011
HPUX locating WWPN
Locating the WWPN for an HP-UX host
Complete this task to locate the WWPN for a Hewlett-Packard Server host.
1. Go to the root directory of your HP-UX host.
2. Type ioscan -fnC fc | more for information on the Fibre Channel adapters installed on the host.
The following is example output:
fc 0 0/2/0/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td0
fc 1 0/4/0/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td1
fc 2 0/6/2/0 td CLAIMED INTERFACE HP Tachyon XL2 Fibre Channel Mass Storage Adapter /dev/td2
3. Look under the description for the Fibre Channel Mass Storage adapter.
For example, look for the device path name /dev/td1.
4. Type fcmsutil /dev/td1 | grep World, where /dev/td1 is the device path (note that grep is case-sensitive and the output capitalizes "World").
The following is example output:
# fcmsutil /dev/td1 | grep World
N_Port Node World Wide Name = 0x50060b000024b139
N_Port Port World Wide Name = 0x50060b000024b138
(root@hpmain)/home/root# fcmsutil /dev/td0 | grep World
N_Port Node World Wide Name = 0x50060b000023a521
N_Port Port World Wide Name = 0x50060b000023a520
(root@hpmain)/home/root# fcmsutil /dev/td2 | grep World
N_Port Node World Wide Name = 0x50060b0000253a8f
N_Port Port World Wide Name = 0x50060b0000253a8e
(root@hpmain)/home/root#
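If you need just the WWPN for scripting (say, for switch zoning), awk can pull the last field out of the fcmsutil output; shown here against a canned line from the output above:

```shell
# extract the hex WWN from an fcmsutil "Port World Wide Name" line
printf 'N_Port Port World Wide Name = 0x50060b000024b138\n' |
    awk '/Port World Wide Name/ { print $NF }'    # prints 0x50060b000024b138
```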
Solaris: checking Qlogic HBA status
Checking the status of a Qlogic HBA on Solaris
qlc(0) is either the internal controller for the dual SCSI disks or the Qlogic ISP2200 add-in card.
OS: Solaris 10 (11/06)
Kernel: 125100-07
luxadm -e port
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl                    CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl              NOT CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl              CONNECTED
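To list only the connected ports, note that the NOT CONNECTED lines carry three fields while CONNECTED lines carry two, so a field-count test in awk is enough; a sketch against canned luxadm output:

```shell
# keep two-field lines ending in CONNECTED (NOT CONNECTED lines have 3 fields)
awk 'NF == 2 && $2 == "CONNECTED" { print $1 }' <<'EOF'
/devices/pci@9,600000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@4/fp@0,0:devctl NOT CONNECTED
/devices/pci@8,600000/pci@1/SUNW,qlc@5/fp@0,0:devctl CONNECTED
EOF
```

On a live system, pipe `luxadm -e port` into the same filter.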
Solaris: metadb repair
$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 10086628 8051562 1934200 81% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 10513824 1072 10512752 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 6146048 4469136 1676912 73% /tmp
swap 10512784 32 10512752 1% /var/run
/dev/md/dsk/d102 10237036 7516386 2618280 75% /opt
/dev/md/dsk/d104 70569513 50785798 19078020 73% /opt/app
/dev/md/dsk/d103 1988623 29696 1899269 2% /var/tmp
$ uname -a
SunOS intdev02 5.10 Generic_125100-10 sun4u sparc SUNW,Sun-Fire-V240
[03:35:01] root@intdev02[1]# grep -i warn /var/adm/messages
Jun 27 21:29:32 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:37 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:37 intdev02 glm: [ID 401478 kern.warning] WARNING: ID[SUNWpd.glm.cmd_timeout.6018]
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd3):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@2,0 (sd1):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@3,0 (sd2):
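With that much noise in the log, a quick awk tally of the md read/write errors per device narrows the search; a sketch run against a couple of canned lines from the log above:

```shell
# count md_stripe read/write errors per device (device path is the last field)
awk '/md_stripe/ && /error on/ { err[$NF]++ } END { for (d in err) print d, err[d] }' <<'EOF'
Jun 27 21:30:46 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
EOF
```

On the real box, point it at /var/adm/messages instead of the here-document.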
[03:35:01] root@intdev02[2]# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
1. c1t1d0
/pci@1c,600000/scsi@2/sd@1,0
2. c1t2d0
/pci@1c,600000/scsi@2/sd@2,0
3. c1t3d0
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of SVM volume stripe:d300. Please see metaclear(1M).
/dev/dsk/c1t1d0s1 is part of SVM volume stripe:d301. Please see metaclear(1M).
/dev/dsk/c1t1d0s3 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s4 is part of SVM volume stripe:d303. Please see metaclear(1M).
/dev/dsk/c1t1d0s5 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s7 contains an SVM mdb. Please see metadb(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
[03:35:01] root@intdev02[3]#
[03:35:01] root@intdev02[5]# date
Tue Jul 1 03:39:06 MEST 2008
[03:35:01] root@intdev02[6]# iostat -En | more
c1t1d0 Soft Errors: 0 Hard Errors: 21 Transport Errors: 2
Vendor: SEAGATE Product: ST373207LSUN72G Revision: 045A Serial No: 05433302ZN
Size: 73.40GB <73400057856 bytes>
Media Error: 0 Device Not Ready: 19 No Device: 1 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
[03:35:01] root@intdev02[9]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[10]#
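The W in the flags column marks the replicas with device write errors; for scripting, awk can pick those devices out mechanically. A sketch against canned metadb -i lines (on a live system, pipe metadb -i straight in):

```shell
# print the device of any replica whose first flag is W (write errors)
awk '$1 == "W" { print $NF }' <<'EOF'
a m  p  luo        16              8192            /dev/dsk/c1t0d0s7
W    p  l          16              8192            /dev/dsk/c1t1d0s7
a    p  luo        8208            8192            /dev/dsk/c1t1d0s7
EOF
```

This prints /dev/dsk/c1t1d0s7, the replica that needs to be deleted and re-added.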
The metadb(1M) man page shows the basic add and delete operations:
# metadb -a c0t2d0s3 c1t1d0s3
Example 3: Deleting Two Replicas
This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.
# metadb -d c0t2d0s3 c1t1d0s3
[03:35:01] root@intdev02[18]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[19]# metadb -d c1t1d0s7
[03:35:01] root@intdev02[20]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[21]# metadb -a c1t1d0s7
[03:35:01] root@intdev02[22]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a u 16 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[23]#
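One reason to re-add the replica rather than leaving it deleted: SVM applies a majority rule to the state database, so with the seven replicas shown above, at least half plus one must stay healthy for the system to boot cleanly:

```shell
# majority rule for 7 metadb replicas: floor(7/2) + 1
echo $(( 7 / 2 + 1 ))    # prints 4
```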
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d100 10086628 8051562 1934200 81% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 10513824 1072 10512752 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
10086628 8051562 1934200 81% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 6146048 4469136 1676912 73% /tmp
swap 10512784 32 10512752 1% /var/run
/dev/md/dsk/d102 10237036 7516386 2618280 75% /opt
/dev/md/dsk/d104 70569513 50785798 19078020 73% /opt/app
/dev/md/dsk/d103 1988623 29696 1899269 2% /var/tmp
$ uname -a
SunOS intdev02 5.10 Generic_125100-10 sun4u sparc SUNW,Sun-Fire-V240
[03:35:01] root@intdev02[1]# grep -i warn /var/adm/messages
Jun 27 21:29:32 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:37 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:37 intdev02 glm: [ID 401478 kern.warning] WARNING: ID[SUNWpd.glm.cmd_timeout.6018]
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2 (glm0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:38 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:41 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:46 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:51 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: write error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:30:56 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@1,0 (sd0):
Jun 27 21:30:56 intdev02 md_stripe: [ID 641072 kern.warning] WARNING: md: d300: read error on /dev/dsk/c1t1d0s0
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd3):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@2,0 (sd1):
Jun 27 21:31:01 intdev02 scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@3,0 (sd2):
[03:35:01] root@intdev02[2]# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@1c,600000/scsi@2/sd@0,0
1. c1t1d0
/pci@1c,600000/scsi@2/sd@1,0
2. c1t2d0
/pci@1c,600000/scsi@2/sd@2,0
3. c1t3d0
/pci@1c,600000/scsi@2/sd@3,0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of SVM volume stripe:d300. Please see metaclear(1M).
/dev/dsk/c1t1d0s1 is part of SVM volume stripe:d301. Please see metaclear(1M).
/dev/dsk/c1t1d0s3 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s4 is part of SVM volume stripe:d303. Please see metaclear(1M).
/dev/dsk/c1t1d0s5 is part of SVM volume stripe:d302. Please see metaclear(1M).
/dev/dsk/c1t1d0s7 contains an SVM mdb. Please see metadb(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
[03:35:01] root@intdev02[3]#
[03:35:01] root@intdev02[5]# date
Tue Jul 1 03:39:06 MEST 2008
[03:35:01] root@intdev02[6]# iostat -En | more
c1t1d0 Soft Errors: 0 Hard Errors: 21 Transport Errors: 2
Vendor: SEAGATE Product: ST373207LSUN72G Revision: 045A Serial No: 05433302ZN
Size: 73.40GB <73400057856 bytes>
Media Error: 0 Device Not Ready: 19 No Device: 1 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
[03:35:01] root@intdev02[9]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[10]#
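Spotting the bad replica by eye works, but the flag legend printed by metadb -i also lends itself to a quick filter. A minimal sketch (the helper name and the choice of "problem" flags W/R/M/D/F are my own, taken from the legend above; feed it captured metadb -i output on stdin):

```shell
# check_replicas: read `metadb -i` output on stdin and print replicas
# whose flag fields contain W/R/M/D/F (device write/read errors,
# master-block, data-block, or format problems per the metadb legend).
check_replicas() {
  awk '$NF ~ /^\/dev\// {
    # scan the flag columns (everything before first-blk/block-count/device)
    for (i = 1; i <= NF - 3; i++)
      if ($i ~ /^[WRMDF]$/) { print "problem replica:", $NF; next }
  }'
}
```

Against the output above, this would flag only /dev/dsk/c1t1d0s7, the replica carrying the W (write error) flag.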
The metadb(1M) man page shows how replicas are added and deleted:

# metadb -a c0t2d0s3 c1t1d0s3

Example 3: Deleting Two Replicas

This example shows how to delete two replicas from the system. Assume that replicas have been set up on /dev/dsk/c0t2d0s3 and /dev/dsk/c1t1d0s3.

# metadb -d c0t2d0s3 c1t1d0s3
[03:35:01] root@intdev02[18]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
W p l 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[19]# metadb -d c1t1d0s7
[03:35:01] root@intdev02[20]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[21]# metadb -a c1t1d0s7
[03:35:01] root@intdev02[22]# metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t0d0s7
a p luo 8208 8192 /dev/dsk/c1t0d0s7
a u 16 8192 /dev/dsk/c1t1d0s7
a p luo 16 8192 /dev/dsk/c1t2d0s7
a p luo 8208 8192 /dev/dsk/c1t2d0s7
a p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
[03:35:01] root@intdev02[23]#
Linux: LUN detection
LUN detection procedures
This topic describes LUN detection procedures for the Linux host system.
If you have a Linux driver that does not automatically configure any LUNs other than LUN 0, you can manually configure the other LUNs, depending on the parameters and settings used for the SCSI mid-layer driver. Figure 1 shows an example of the /proc/scsi/scsi file for a Linux host that only configures the first LUN, LUN 0, on each host adapter port.
Figure 1. Example of a /proc/scsi/scsi file from a Linux host that only configures LUN 0
# cat /proc/scsi/scsi
...
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IBM-PSG Model: DPSS-318350M F Rev: S9HA
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 15 Lun: 00
Vendor: IBM Model: TP4.6 V41b3 Rev: 4.1b
Type: Processor ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
There are two ways to work around the issue of having only LUN 0 configured:
1. Create a script to manually add devices into /proc/scsi/scsi
2. Detect LUNs automatically at system boot by modifying the initial ram-disk (initrd)
Create a script to echo the /proc filesystem
Use the scsi add-single-device command to consecutively configure all of the LUNs that are assigned to your host system. Write a script that repeats the scsi add-single-device command for each LUN on each ID for each host adapter. The script must scan all host adapter ports and identify all of the LUNs that are assigned to each port.
After you run the script, you can view all of the assigned LUNs in the /proc/scsi/scsi file.
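Such a script can be sketched roughly as follows. The host adapter numbers, target IDs, and LUN count here are illustrative assumptions, not values from any particular system; rather than writing into /proc/scsi/scsi directly, this generates the echo commands so you can review them (and run them as root) first:

```shell
# gen_scsi_adds: emit one "scsi add-single-device <host> <channel> <id> <lun>"
# echo command per LUN on each adapter port. Adjust the ranges below to
# match your topology; all three loops use assumed example values.
gen_scsi_adds() {
  for host in 2 3; do            # host adapter numbers (assumed)
    for id in 0 1; do            # target IDs (assumed)
      for lun in $(seq 0 4); do  # LUNs 0-4 (assumed)
        echo "echo \"scsi add-single-device $host 0 $id $lun\" > /proc/scsi/scsi"
      done
    done
  done
}
```

Review the output, then pipe it into a root shell (or paste selected lines) to configure the LUNs.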
Figure 2 shows an excerpt of an example /proc/scsi/scsi file for a Linux host after a script has configured every LUN.
Figure 2. Example of a /proc/scsi/scsi file for a Linux host with configured LUNs
# cat /proc/scsi/scsi
...
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 00
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 02
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 03
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 04
Vendor: IBM Model: 2105800 Rev: .294
Type: Direct-Access ANSI SCSI revision: 03
...
Detect LUNs automatically at system boot
The second method of configuring LUNs on a Linux system where only LUN 0 is configured is to set the SCSI mid-layer driver parameter that controls how many LUNs are scanned during a SCSI bus scan. The following procedure works for both 2.4 and 2.6 kernels, but it assumes the SCSI mid-layer driver is compiled as a scsi_mod module that is loaded automatically at system boot time. On Linux 2.4 kernels, set the max_scsi_luns option for the SCSI mid-layer driver so that all volumes are properly detected; if max_scsi_luns is set to 1, SCSI bus scans are limited to LUN 0. This value should be set to the maximum number of disks the kernel can support, for example 128 or 256. On Linux 2.6 kernels the same procedure applies, except that the parameter has been renamed from max_scsi_luns to max_luns.
1. Edit the /etc/modules.conf file.
2. Add the following line:
* options scsi_mod max_scsi_luns=<n> (where <n> is the total number of LUNs to probe)
3. Save the file.
4. Run the mkinitrd command to rebuild the ram-disk associated with the current kernel. Use the following examples to determine which mkinitrd command to run for your operating system. <kernel-version> refers to the 'uname -r' output, which displays the currently running kernel level, for example 2.4.21-292-smp.
For SUSE distributions, use the following command:
cd /boot
mkinitrd -k vmlinuz-<kernel-version> -i initrd-<kernel-version>
For Red Hat distributions, use the following command:
cd /boot
mkinitrd -v initrd-<kernel-version>.img <kernel-version>
5. Reboot the host.
6. Verify that the boot files are correctly configured for the newly created initrd image in the /boot/grub/menu.lst file.
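Steps 1-3 above can be made idempotent with a small helper. This is a sketch of my own (the function name is made up; the file path and LUN count are parameters you supply, and the option line follows the modules.conf syntax described above):

```shell
# set_max_luns: ensure "options scsi_mod max_scsi_luns=<n>" is present in a
# modules.conf-style file, replacing any existing setting rather than
# appending a duplicate. Usage: set_max_luns /etc/modules.conf 128
set_max_luns() {
  conf=$1; n=$2
  if grep -q '^options scsi_mod max_scsi_luns=' "$conf" 2>/dev/null; then
    # rewrite the existing line (portable: avoid GNU-only `sed -i`)
    sed "s/^options scsi_mod max_scsi_luns=.*/options scsi_mod max_scsi_luns=$n/" \
      "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"
  else
    echo "options scsi_mod max_scsi_luns=$n" >> "$conf"
  fi
}
```

After running it, continue with the mkinitrd rebuild and reboot as in steps 4-6.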