Wednesday, July 30, 2008

Solaris sar %sys cpu high

How to find out which PID is consuming %sys (kernel) CPU time: use prstat with microstate accounting.
# prstat -m
e.g.
$ prstat -m
   PID USERNAME  USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
  3828 hpfimhas   24  75 1.1 0.0 0.0 0.0 0.0 0.0   0   0 87K   0 prstat/1
  3812 root      0.1 0.1 0.0 0.0 0.0 0.0 100 0.0   9   0 224   0 sshd/1
  3820 hpfimhas  0.0 0.0 0.0 0.0 0.0 0.0 100 0.0  38   0 128   0 sshd/1
  3822 hpfimhas  0.0 0.0 0.0 0.0 0.0 0.0 100 0.0   3   0 110   0 sh/1
  6105 instbas   0.0 0.0 0.0 0.0 0.0  73  27 0.0  10   0  14   0 java/48
  6006 lmcbejo   0.0 0.0 0.0 0.0 0.0  87  13 0.0  23   1  16   0 java/30
 13769 instbas   0.0 0.0 0.0 0.0 0.0  91 9.4 0.0   4   0   9   0 java/128
  5974 instbas   0.0 0.0 0.0 0.0 0.0  84  16 0.0  16   0  10   0 java/57
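
Here prstat itself (PID 3828) accounts for 75% SYS time. Once a suspect PID is identified, you can drill down to its individual threads with prstat's per-LWP microstate flags (the PID below is the one from the sample output above):

# prstat -mL -p 3828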

For Solaris 10 you can use DTrace; see the example below.

On with the show: let's say you're looking at mpstat(1) output on your
multiuser server. You might see something like this:

CPU minf mjf  xcal intr ithr  csw icsw migr smtx srw syscl usr sys  wt idl
 12    1  27  3504  338  206  765   27  114   65   1   337   9  19  41  31
 13    1  19  5725   98   68  723   22  108  120   0   692   3  17  20  61
 14    0  57  3873  224  192  670   22   75   86   1   805   7  10  35  48
 15   36   7  1551   42   28  689    6   68   59   0   132   2   9  35  54
 16   14   7  7209  504  457 1031   37  125  244   0   459   6  30   4  60
 17    5   5  4960  150  108  817   37   98  154   0   375   6  26   6  62
 18    5   6  6085 1687 1661  741   60   76  248   0   434   3  33   0  64
 19    0  15 10037   72   41  876   23  100  291   1   454   2  19   9  71
 20   12   5  5890  746  711  992   32  122  216   2   960  10  33   4  53
 21   60   5  1567  729  713  467   15   80   59   0   376   2  35  10  53
 22    0   6  4378  315  291  751   17   84  142   1   312   3  16   1  80
 23    0   6 12119   33    3  874   20   82  384   1   513   4  24  11  62

And well you may wonder (as perhaps you often have) -- what the hell is
causing all of those cross calls, anyway? (Cross calls appear in the "xcal"
column; see mpstat(1).)

Using DTrace, investigating this is a snap:

# dtrace -n xcalls'{@[execname] = count()}'
dtrace: description 'xcalls' matched 4 probes
[ letting this run for a few seconds ]
^C

  mozilla-bin            1
  lockd                  1
  in.mpathd              2
  nsrmmd                 5
  grep                   6
  chmod                  6
  cat                    6
  nwadmin               13
  ls                    24
  in.tftpd              28
  nsrindexd             34
  fsflush               38
  cut                   42
  find                  42
  mkdir                 66
  rm                    76
  ipop3d                78
  scp                   79
  inetd                 96
  dtrace               111
  nawk                 118
  imapd-simmonmt       126
  rmdir                132
  sshd                 138
  rpc.rstatd           159
  mv                   398
  save                1292
  gzip                1315
  get_all2            1678
  sched               1712
  nfsd                3709
  tar                27054   <----- high usage
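
Once a culprit shows up (tar here), the same xcalls probes can be aggregated by kernel stack trace to see where the cross calls originate. A sketch; substitute whatever execname your own output implicates:

# dtrace -n 'xcalls /execname == "tar"/ {@[stack()] = count()}'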

Wednesday, July 9, 2008

Solaris 10: mount a disk slice in a sub-zone

Non-global zones in Solaris 10 cannot see physical drives or disk slices directly (to verify this, list the contents of /dev/dsk from within a zone). Which filesystems are mounted in which zones is controlled exclusively from the global zone. This recipe describes how to persistently mount a disk slice in a non-global zone.
To mount the device c0t1d0s3 in the existing zone testzone under the mount point /mnt, log in to the global zone, become root or a privileged user, and complete the following steps:

zonecfg -z testzone
zonecfg:testzone> add fs
zonecfg:testzone:fs> set dir=/mnt
zonecfg:testzone:fs> set special=/dev/dsk/c0t1d0s3
zonecfg:testzone:fs> set raw=/dev/rdsk/c0t1d0s3
zonecfg:testzone:fs> set type=ufs
zonecfg:testzone:fs> end
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit

Following the commit command, the filesystem configuration is persistent; note that a zonecfg change to a running zone normally takes effect when the zone is next rebooted, so to use the slice right away you can mount it manually in the meantime. Substitute the desired mount point and device name for your system. If the zone has not yet been created, you can use the same 'add fs' commands when creating the zone.
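
The same configuration can also be applied non-interactively from the global zone, which is convenient in scripts (a one-line sketch using the same zone and device names as above):

# zonecfg -z testzone "add fs; set dir=/mnt; set special=/dev/dsk/c0t1d0s3; set raw=/dev/rdsk/c0t1d0s3; set type=ufs; end; verify; commit"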

Solaris SVM: grow a concat/stripe

How to grow a filesystem under DiskSuite (SVM) when the metadevice is a plain concat/stripe (no soft partitions).
First, confirm the candidate disk is not already in use by any metadevice:
# metastat | grep -i 2f5
Then use the format command to repartition the disk:
[05:58:15] root@server[31]# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@780/pci@0/pci@9/scsi@0/sd@0,0
1. c0t1d0
/pci@780/pci@0/pci@9/scsi@0/sd@1,0
2. c4t60060E800543BE00000043BE000000BBd0
/scsi_vhci/ssd@g60060e800543be00000043be000000bb
3. c4t60060E800543BE00000043BE000000BCd0
/scsi_vhci/ssd@g60060e800543be00000043be000000bc
4. c4t60060E800543BE00000043BE000000BDd0
/scsi_vhci/ssd@g60060e800543be00000043be000000bd
5. c4t60060E800543BE00000043BE000000BEd0
/scsi_vhci/ssd@g60060e800543be00000043be000000be
6. c4t60060E800543BE00000043BE000002F4d0
/scsi_vhci/ssd@g60060e800543be00000043be000002f4
7. c4t60060E800543BE00000043BE000002F5d0
/scsi_vhci/ssd@g60060e800543be00000043be000002f5
Specify disk (enter its number):
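
If the new LUN should carry the same layout as an identically-sized sibling, an alternative to partitioning interactively in format is to copy the VTOC with prtvtoc and fmthard (a sketch; it assumes the ...2F4 LUN already has the desired label and that both LUNs are the same size):

# prtvtoc /dev/rdsk/c4t60060E800543BE00000043BE000002F4d0s2 | fmthard -s - /dev/rdsk/c4t60060E800543BE00000043BE000002F5d0s2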
[05:58:15] root@server[32]# metattach d107 /dev/dsk/c4t60060E800543BE00000043BE000002F5d0s0
d107: component is attached
[05:58:15] root@server[33]# df -k /dbo
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d107 183417068 173603462 8489617 96% /dbo
[05:58:15] root@server[34]# grep dbo /etc/mnttab
/dev/md/dsk/d107 /dbo ufs rw,intr,largefiles,logging,xattr,onerror=panic,dev=154006b 1202293409
[05:58:15] root@server[35]# growfs -M /dbo /dev/md/rdsk/d107
/dev/md/rdsk/d107: 476067840 sectors in 77485 cylinders of 48 tracks, 128 sectors
232455.0MB in 4843 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................................
.................
super-block backups for last 10 cylinder groups at:
475107488, 475205920, 475304352, 475402784, 475501216, 475599648, 475698080,
475796512, 475894944, 475993376
[05:58:15] root@server[36]# df -k /dbo
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d107 234430636 173603462 59503185 75% /dbo
[05:58:15] root@server[37]#
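
To double-check that the new component is now part of the metadevice, query it directly (output omitted here):

# metastat d107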

UNIX: How to print columns nicely using printf

[user@hostfwnms1-oam tmp]# cat b.sh
printf "%-26s %-19s %-8s %-8s %-s %-s\n" HOSTNAME IP PING SNMPWALK 0-ok 1-fail
for i in `cat n...
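
The script above is truncated; as a minimal self-contained sketch of the same printf technique (the hostnames, IPs, and status values below are made-up placeholders):

printf "%-26s %-19s %-8s %-8s\n" HOSTNAME IP PING SNMPWALK
printf "%-26s %-19s %-8s %-8s\n" host01.example.com 192.0.2.10 0 0
printf "%-26s %-19s %-8s %-8s\n" host02.example.com 192.0.2.11 1 0

Each %-Ns left-justifies its argument in an N-character-wide field, which is what keeps the columns aligned regardless of value length.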