Thursday, September 20, 2012
Solaris: modify CPU pool
# poolcfg -dc 'modify pset pset_ACC_INT_A_3 ( uint pset.min = 0 ; uint pset.max = 0)'
# poolcfg -dc 'modify pset pset_ACC_INT_A_4 ( uint pset.min = 0 ; uint pset.max = 0)'
# poolcfg -dc 'modify pset pset_ACC_INT_A_6 ( uint pset.min = 0 ; uint pset.max = 0)'
# poolcfg -dc 'transfer 2 from pset pset_ACC_INT_A_3 to pset_default'
# poolcfg -dc 'transfer 2 from pset pset_ACC_INT_A_4 to pset_default'
# poolcfg -dc 'transfer 2 from pset pset_ACC_INT_A_6 to pset_default'
# pooladm -c
# poolcfg -dc info
Next, change the CPU counts for the following processor sets:
pset_ACC_INT_A_1
pset_ACC_INT_A_2
pset_ACC_INT_A_5
# poolcfg -dc 'modify pset pset_ACC_INT_A_1 ( uint pset.min = 8 ; uint pset.max = 8)'
# poolcfg -dc 'modify pset pset_ACC_INT_A_2 ( uint pset.min = 4 ; uint pset.max = 4)'
# poolcfg -dc 'modify pset pset_ACC_INT_A_5 ( uint pset.min = 8 ; uint pset.max = 8)'
# poolcfg -dc 'transfer 4 from pset pset_default to pset_ACC_INT_A_1'
# poolcfg -dc 'transfer 2 from pset pset_default to pset_ACC_INT_A_2'
# poolcfg -dc 'transfer 6 from pset pset_default to pset_ACC_INT_A_5'
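The -d flag applies these edits directly to the kernel, so they will not survive a reboot by themselves. A minimal follow-up, assuming the default /etc/pooladm.conf is in use: save the running configuration back to the file and review the result.
# pooladm -s
# poolcfg -dc info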
Tuesday, July 10, 2012
Solaris: VCS: How to update SystemList without bringing down the service group
Question
I have a service group say sg1 and it has the following system list defined
SystemList = { a = 1, b = 2, c = 3 }
AutoStartList = { a }
Currently the service group is running on c. I want to change the priorities to
a = 3, b = 2, c = 1, and set AutoStartList = { c }.
Can I do this without bringing down the service group?
Answer
haconf -makerw
hagrp -modify sg1 SystemList -update a 3 c 1
hagrp -modify sg1 AutoStartList c
haconf -dump -makero
source: http://mailman.eng.auburn.edu/pipermail/veritas-ha/2004-June/008414.html
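To confirm the change without touching the running group, a quick check (using sg1 from the example above):
# hagrp -display sg1 -attribute SystemList
# hagrp -display sg1 -attribute AutoStartList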
Wednesday, May 9, 2012
Solaris: check free memory
Commands to check free memory on UNIX servers:
(1) vmstat 1 2 | tail -1 | awk '{printf "%d%s\n", ($5*4)/1024, "MB" }'
(2) top -h -d 1
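Note that (1) multiplies the vmstat free column by 4, which assumes free is reported in 4 KB pages (as on HP-UX); Solaris vmstat reports free in KB already. A page-size-independent sketch for Solaris, assuming a ksh-compatible shell and the standard unix:0:system_pages kstat:
(3) kstat -p unix:0:system_pages:freemem | awk -v p=$(pagesize) '{printf "%dMB\n", $2*p/1048576}'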
Friday, April 6, 2012
Solaris: Zone capped memory
Add a memory cap.
zonecfg:zoneA> add capped-memory
Set the memory cap.
zonecfg:zoneA:capped-memory> set physical=50m
Set the swap memory cap.
zonecfg:zoneA:capped-memory> set swap=100m
Set the locked memory cap.
zonecfg:zoneA:capped-memory> set locked=30m
End the memory cap specification.
zonecfg:zoneA:capped-memory> end
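After end, verify and commit the configuration before leaving zonecfg, then confirm the caps; a minimal wrap-up, assuming the session was started with zonecfg -z zoneA (the physical cap is enforced by rcapd and can be watched with rcapstat):
zonecfg:zoneA> verify
zonecfg:zoneA> commit
zonecfg:zoneA> exit
# zonecfg -z zoneA info capped-memory
# rcapstat -z 5 1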
Thursday, March 15, 2012
Solaris: Find a disk's CU:LDev and its /dev/rdsk/ device
# more xpinfo.out
Device File : /dev/rdsk/c1t50060E80056F1168d0s2 Model : XP24000
Port : CL7J Serial # : 00028433
Host Target : c38c Code Rev : 6006
Array LUN : 00 Subsystem : 0085
CU:LDev : 81:6c CT Group : ---
Type : OPEN-V -SUN CA Volume : SMPL
Size : 51200 MB BC0 (MU#0) : SMPL
ALPA : 6d BC1 (MU#1) : SMPL
Loop Id : 43 BC2 (MU#2) : SMPL
SCSI Id : ---
RAID Level : TPVOL RAID Type : ---
RAID Group : --- ACP Pair : ---
Disk Mechs : --- --- --- ---
FC-LUN : 00006f110000816c Port WWN : 50060e80056f1168
HBA Node WWN: 2000001b329c0eb8 HBA Port WWN: 2100001b329c0eb8
Vol Group : --- Vol Manager : ---
Mount Points: ---
DMP Paths : ---
SLPR : 0 CLPR : 0
Device File : /dev/rdsk/c1t50060E80056F1168d1s2 Model : XP24000
# luxadm display /dev/rdsk/c1t50060E80056F1168d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c1t50060E80056F1168d0s2
Vendor: HP
Product ID: OPEN-V -SUN
Revision: 6006
Serial Num: 50 06F11816C <---- last four hex digits are the CU:LDev (81:6c)
Unformatted capacity: 51200.625 MBytes
Write Cache: Enabled
Read Cache: Enabled
Minimum prefetch: 0x0
Maximum prefetch: 0x0
Device Type: Disk device
Path(s):
/dev/rdsk/c1t50060E80056F1168d0s2
/devices/pci@2,600000/SUNW,qlc@0/fp@0,0/ssd@w50060e80056f1168,0:c,raw
LUN path port WWN: 50060e80056f1168
Host controller port WWN: 2100001b329c0eb8
Path status: O.K.
/dev/rdsk/c2t50060E80056F1178d0s2
/devices/pci@3,700000/SUNW,qlc@0/fp@0,0/ssd@w50060e80056f1178,0:c,raw
LUN path port WWN: 50060e80056f1178
Host controller port WWN: 2100001b329c55ab
Path status: O.K.
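With the report in this layout, the device-file-to-CU:LDev mapping for every LUN can be pulled out in one pass. A small sketch, assuming the report was saved as xpinfo.out with the field positions shown above:
# awk '/Device File/ {dev=$4} /CU:LDev/ {print dev, $3}' xpinfo.out
/dev/rdsk/c1t50060E80056F1168d0s2 81:6c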
Tuesday, February 21, 2012
Solaris: VxVM: determine a disk's multipathing
cluster01# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:SVM - - SVM
disk_1 auto:SVM - - SVM
xp24k0_6bc4 auto:cdsdisk egate_dg01 egate_dg online thin nohotuse
xp24k0_6bc5 auto:cdsdisk eaicore_dg01 eaicore_dg online thin nohotuse
xp24k0_6bc6 auto:cdsdisk - - online thin
xp24k0_6bc7 auto:cdsdisk optora_dg01 optora_dg online thin
xp24k0_6bf0 auto:cdsdisk eaiuser1_dg01 eaiuser1_dg online thin
xp24k0_6bfa auto:cdsdisk eaiuser11_dg01 eaiuser11_dg online thin
xp24k0_6bfb auto:cdsdisk eaiuser12_dg01 eaiuser12_dg online thin
xp24k0_6bfc auto:cdsdisk eaiuser13_dg01 eaiuser13_dg online thin
xp24k0_6bf1 auto:cdsdisk - - online thin
xp24k0_6bf2 auto:cdsdisk eaiuser3_dg01 eaiuser3_dg online thin
xp24k0_6bf3 auto:cdsdisk eaiuser4_dg01 eaiuser4_dg online thin
xp24k0_6bf4 auto:cdsdisk eaiuser5_dg01 eaiuser5_dg online thin
xp24k0_6bf5 auto:cdsdisk - - online thin
xp24k0_6bf6 auto:cdsdisk - - online thin
xp24k0_6bf7 auto:cdsdisk - - online thin
xp24k0_6bf8 auto:cdsdisk - - online thin
xp24k0_6bf9 auto:cdsdisk - - online thin
xp24k0_6b8b auto:cdsdisk oraeai2_dg01 oraeai2_dg online thin
xp24k0_6b8c auto:cdsdisk oraeai1_dg01 oraeai1_dg online thin
xp24k0_6b8d auto:cdsdisk oraeai2_dg02 oraeai2_dg online thin
xp24k0_6b8e auto:cdsdisk oraeai1_dg02 oraeai1_dg online thin
xp24k0_6b8f auto:cdsdisk oraeai2_dg03 oraeai2_dg online thin
xp24k0_6b41 auto:none - - online invalid
xp24k0_6b90 auto:cdsdisk oraeai1_dg03 oraeai1_dg online thin
xp24k0_6b91 auto:cdsdisk oraeai1_dg04 oraeai1_dg online thin
xp24k0_6b92 auto:cdsdisk oraeai1_dg05 oraeai1_dg online thin
xp24k0_6b93 auto:cdsdisk mqha_dg01 mqha_dg online thin nohotuse
xp24k0_6b94 auto:cdsdisk oraeai2_dg04 oraeai2_dg online thin
xp24k0_6b95 auto:cdsdisk oraeai2_dg05 oraeai2_dg online thin
xp24k0_6b96 auto:cdsdisk - - online thin
xp24k0_6b97 auto:cdsdisk - - online thin
xp24k0_6b98 auto:cdsdisk - - online thin
xp24k0_6c4a auto:cdsdisk eaiuser4_dg02 eaiuser4_dg online thin
xp24k0_6c4b auto:cdsdisk - - online thin
xp24k0_6c4c auto:cdsdisk - - online thin
xp24k0_6c4d auto:cdsdisk eaiuser11_dg02 eaiuser11_dg online thin
xp24k0_6c4e auto:cdsdisk oraeai1_dg06 oraeai1_dg online thin
xp24k0_6c4f auto:cdsdisk oraeai2_dg06 oraeai2_dg online thin
xp24k0_62ce auto:cdsdisk oraeai2_dg07 oraeai2_dg online thin nohotuse
xp24k0_66ef auto:cdsdisk oraeai2_dg09 oraeai2_dg online thin nohotuse
xp24k0_66f0 auto:cdsdisk oraeai2_dg10 oraeai2_dg online thin nohotuse
xp24k0_615f auto:cdsdisk oraeai1_dg07 oraeai1_dg online thin
xp24k0_834b auto:cdsdisk seebeyondvcp2_rootdg01 seebeyondvcp2_rootdg online thin nohotuse
xp24k0_6336 auto:cdsdisk oraeai2_dg08 oraeai2_dg online thin
cluster01# vxdisk list xp24k0_6c4d
Device: xp24k0_6c4d
devicetag: xp24k0_6c4d
type: auto
hostid: cluster01
disk: name=eaiuser11_dg02 id=1280996984.101.cluster01
group: name=eaiuser11_dg id=1279714356.151.cluster01
info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags: online ready private autoconfig noautoimport imported thin
pubpaths: block=/dev/vx/dmp/xp24k0_6c4ds2 char=/dev/vx/rdmp/xp24k0_6c4ds2
guid: {99bcadaa-a06b-11df-8fe1-0021286cbf2e}
udid: HP%5F50%5F1320C%5F50%201320C6C4D
site: -
version: 3.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=2 offset=65792 len=83799808 disk_offset=0
private: slice=2 offset=256 len=65536 disk_offset=0
update: time=1325053781 seqno=0.183
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=48144
logs: count=1 len=7296
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 enabled
config priv 000256-048207[047952]: copy=01 offset=000192 enabled
log priv 048208-055503[007296]: copy=01 offset=000000 enabled
lockrgn priv 055504-055647[000144]: part=00 offset=000000
Multipathing information:
numpaths: 2
c1t50060E8015320C4Dd34s2 state=enabled
c2t50060E8015320C5Dd34s2 state=enabled
cluster01#
cluster01# vxdmpadm getdmpnode nodename=c1t50060E8015320C4Dd34s2
NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME
=========================================================================
xp24k0_6c4d ENABLED Disk 2 2 0 Disk
************************************************************************
Other useful commands:
cluster01# vxdmpadm getsubpaths ctlr=c1
cluster01# vxdmpadm getsubpaths dmpnodename=xp24k0_6c4d
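To get the numpaths view for every VxVM device at once, a sketch looping over vxdisk output (assumes a ksh-compatible shell; vxdisk -q suppresses the header line):
cluster01# for d in $(vxdisk -q list | awk '{print $1}'); do
> printf '%s numpaths=' "$d"; vxdisk list "$d" | awk '/numpaths:/ {print $2}'
> done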