Friday, May 30, 2008

Solaris Netapps add new filesystem

The following volume options should be configured:
nosnap - no automatic scheduled Snapshot™ copies
minra - minimal read ahead
nvfail - NVRAM check and failure behavior
sapfiler1> vol options sapdata nosnap on
sapfiler1> vol options sapdata minra on
sapfiler1> vol options sapdata nvfail on
sapfiler1> vol options saplog nosnap on
sapfiler1> vol options saplog minra on
sapfiler1> vol options saplog nvfail on
sapfiler1> qtree create /vol/sapdata/sapdata_lun
sapfiler1> qtree create /vol/saplog/saplog_lun
sapfiler1> qtree create /vol/saplog/sapmnt_lun
sapfiler1> qtree create /vol/saplog/sapusr_lun
sapfiler1> qtree create /vol/saplog/trans_lun
Create the LUNs
sapfiler1> lun create -s 30g -t solaris /vol/sapdata/sapdata_lun/sapdata
sapfiler1> lun create -s 5g -t solaris /vol/saplog/saplog_lun/saplog
sapfiler1> lun create -s 500m -t solaris /vol/saplog/sapmnt_lun/sapmnt
sapfiler1> lun create -s 500m -t solaris /vol/saplog/sapusr_lun/sapusr
sapfiler1> lun create -s 200m -t solaris /vol/saplog/trans_lun/trans
Define the initiator group
To create the initiator group you need to know the WWPN of the host that will use these LUNs. The
WWPN can be obtained with the sanlun command on the host.
bash-2.03# sanlun fcp show adapter
lpfc0 WWPN:10000000c92d55f3
sapfiler1> igroup create -f -t solaris pp400 10:00:00:00:c9:2d:55:f3
Map the LUNs to the initiator group
sapfiler1> lun map /vol/sapdata/sapdata_lun/sapdata pp400 0
sapfiler1> lun map /vol/saplog/saplog_lun/saplog pp400 6
sapfiler1> lun map /vol/saplog/sapusr_lun/sapusr pp400 7
sapfiler1> lun map /vol/saplog/sapmnt_lun/sapmnt pp400 8
sapfiler1> lun map /vol/saplog/trans_lun/trans pp400 9

Configure persistent binding
Persistent binding is configured with the tool /usr/sbin/lpfc/lputil. To configure persistent
binding you need to know the WWNN of the filer. The WWNN can be obtained with the sysconfig
command at the filer console.
sapfiler1> sysconfig -v
…………
…………
slot 3: Fibre Channel Target Host Adapter 3a
(Dual-channel, QLogic 2312 (2342) rev. 2, 64-bit, )
Firmware rev: 3.1.15
Host Port Addr: 011000
Cacheline size: 8
SRAM parity: Yes
FC Nodename: 50:a9:80:00:02:00:88:f7 (50a98000020088f7)
FC Portname: 50:a9:80:03:02:00:88:f7 (50a98003020088f7)
Connection: PTP, Fabric
After using the lputil command, the lpfc.conf file has the following entry.
File /kernel/drv/lpfc.conf
…………
…………
# BEGIN: LPUTIL-managed Persistent Bindings
fcp-bind-WWNN="50a98000020088f7:lpfc0t1";
Edit entries in /kernel/drv/sd.conf
The LUN IDs that were used when mapping the LUNs to the initiator groups and the SCSI ID that was
used with the lputil command have to be used in the entries in /kernel/drv/sd.conf.
…………
…………
name="sd" parent="lpfc" target=1 lun=0;
name="sd" parent="lpfc" target=1 lun=1;
name="sd" parent="lpfc" target=1 lun=2;
name="sd" parent="lpfc" target=1 lun=3;
name="sd" parent="lpfc" target=1 lun=4;
name="sd" parent="lpfc" target=1 lun=5;
name="sd" parent="lpfc" target=1 lun=6;
name="sd" parent="lpfc" target=1 lun=7;
name="sd" parent="lpfc" target=1 lun=8;
name="sd" parent="lpfc" target=1 lun=9;

Reboot with reconfigure
After the new entries in /kernel/drv/sd.conf are edited, the host needs to be rebooted with the
reconfigure option:
reboot -- -r

Configure new disks with format command
The new disks can now be configured with the format command.

VCS Adding node to the existing cluster

Adding the node to the existing cluster
Perform the tasks on one of the existing nodes in the cluster.
To add the new node to the existing cluster
1 Enter the command:
# haconf -makerw
2 Add the new system to the cluster:
# hasys -add east
3 Enter the following command:
# haconf -dump
4 Copy the main.cf file from an existing node to your new node:
# rcp /etc/VRTSvcs/conf/config/main.cf east:/etc/VRTSvcs/conf/config/
5 Start VCS on the new node:
# hastart
6 If necessary, modify any new system attributes.
7 Enter the command:
# haconf -dump -makero
Start VCS after adding the new node to the cluster and verify the cluster.
To start VCS and verify the cluster
1 From the new system, start VCS with the new system added to the cluster:
# hastart
2 Run the GAB configuration command on each node to verify that Port a and
Port h include the new node in the membership:
# /sbin/gabconfig -a
GAB Port Memberships
===================================
Port a gen a3640003 membership 012
Port h gen fd570002 membership 012

Solaris emc powerpath link problem

If you boot a Solaris host with all socal host adapters to storage
system volumes disconnected or dysfunctional, PowerPath will not
configure any socal host adapter paths. After physically restoring the
socal connections, run the following commands to restore the paths
in PowerPath:

On hosts running this OS    Run these commands
Solaris 7 and 8             devfsadm
                            powercf -q
                            powermt config
Solaris 2.6                 drvconfig; disks; devlinks
                            powercf -q
                            powermt config

AIX configuring emc powerpath on bootdisk

This section describes the process for converting a system with AIX
installed on internal disks to boot from storage system logical
devices. The process first transfers a copy of the complete operating
system from an internal disk to logical devices on a storage system. It
then configures PowerPath so the root volume group takes advantage
of multipathing and failover capabilities. This is the recommended
process, as it allows you to revert to the internal disks in the event of
a problem.
Before you start:
- Ensure that the AIX alt_disk_install LPP is installed on the
system. The LPP is on the AIX installation CD.
- Apply the rte and boot_images filesets.
Then follow these steps:
1. Ensure that all device connections to the storage system are
established.
2. Ensure that all hdisks are configured properly.
3. Run powermt config.
4. Use the rmdev command with the -d option to remove all
PowerPath devices, including the powerpath0 device. PowerPath
should remain installed, but all PowerPath devices must be
deleted.
5. Run lsdev -Ct power. No devices should be listed in the output.
6. Determine which hdisks on the storage system will receive the
copy of the operating system.
7. Run alt_disk_install -C hdisk_list to create the copy on the
storage system hdisk(s).
8. Reboot the system.
The system should boot using the hdisks specified in the previous
step.
9. Run powermt config.
10. Run bootlist -m normal -o to determine which hdisk is in the
bootlist.
11. Use powermt to determine which hdiskpower contains the hdisk
in the boot list.
12. Use the bootlist command to include all the path hdisks for the
hdiskpower found in the previous step.
13. Run pprootdev on.
14. Reboot the system.
When the system comes up, rootvg should be using hdiskpower
devices.

Tuesday, May 27, 2008

Solaris vx removing duplicate devices from vxdisk list

vxdisk list
DEVICE TYPE DISK GROUP STATUS
c7t21d0s2 sliced disk01 oradg online
c7t22d0s2 sliced disk02 oradg error
c7t22d0s2 sliced - - error
c7t23d0s2 sliced disk03 oradg online

vxdg -g oradg rmdisk disk02
vxdisk rm c7t22d0s2
vxdisk rm c7t22d0s2
devfsadm -C
vxdctl enable
vxdisk list

DEVICE TYPE DISK GROUP STATUS
c7t21d0s2 sliced disk01 oradg online
c7t22d0s2 sliced disk02 oradg online
c7t23d0s2 sliced disk03 oradg online

Details:
This specific procedure must be used when replacing one of the internal fibre drives within the following servers and/or arrays:

Sun Fire 280R, V480, and V880.
SENA A5X00 Arrays.

Note: Failure to follow this procedure could result in a duplicate device entry for the replaced disk in Volume Manager. This is most notable when running a vxdisk list command.

Example:

# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 sliced rootdisk rootdg online
c1t1d0s2 sliced - - error
c1t1d0s2 sliced - - error


1. Select vxdiskadm option 4 - Select the Volume Manager disk to be replaced

2. luxadm -e offline - detach ssd instance

Use luxadm to get this disk out of the Solaris kernel configuration. The device path should end in ",raw" (for example, pci@1f,0/ide@d/dad@0,0:a,raw). This is the path from the /devices directory, not /dev/rdsk/c?t?d?s?.

* If the disk is multipathed, run the luxadm -e offline on the second path as well

3. devfsadm -C

The -C option cleans up the /dev directory, and removes any lingering logical links to the device link names. It should remove all the device paths for this particular disk. This can be verified with:

# ls -ld /dev/dsk/c1t1d* - This should return no devices entries for c1t1d*.


4. The drive can now be pulled physically

5. luxadm insert_device

This is an interactive command. It will go through the steps to insert the new device and create the necessary entries in the Solaris device tree.

6. vxdctl enable

This is for Volume Manager to rescan the disks. It should pick up the new disk with an "error" status. If not in error, the disk might contain some Volume Manager information, and might need to be formatted.

7. Select vxdiskadm option 5

This will start the recovery process (if needed).

Thursday, May 22, 2008

Solaris 220R init 6 hang

If you run init 6 on a 220R and it hangs, forcing you to hard reset to reboot the system, check the CPU placement: this happens when the system has a single CPU installed in slot/module 2. Move it to slot/module 0 to solve the issue.
Example below with a 220R with 2 CPUs for illustration:
:~$ /usr/platform/sun4u/sbin/prtdiag
System Configuration: Sun Microsystems sun4u Sun Enterprise 220R (2 X
UltraSPARC-II 450MHz)
System clock frequency: 113 MHz
Memory size: 1024 Megabytes

============CPUs ===========

Run Ecache CPU CPU
Brd CPU Module MHz MB Impl. Mask
--- --- ------- ----- ------ ------ ----
0 0 0 450 4.0 US-II 10.0
0 2 2 450 4.0 US-II 10.0

Solaris ipmp example

Solaris IP Multipathing (IPMP) provides Ethernet/IP-layer redundancy without support from the switch side.
It can run in an active/standby config (more compatible; only a single IP is presented to the outside world) or an active/active config (outbound traffic can go over both NICs using 2 IPs; inbound traffic depends on the IP the client uses to send data back, so typically only 1 NIC carries it).


hostname.ce0 (main active interface) ::
oaprod1-ce0 netmask + broadcast + deprecated -failover \
group oaprod_ipmp up \
addif oaprod1 netmask + broadcast + up

hostname.ce2 (active-standby config) ::
oaprod1-ce2 netmask + broadcast + deprecated -failover \
standby group oaprod_ipmp up
^^^^^^^

hostname.ce2 (active-active config) ::
oaprod1-ce2 netmask + broadcast + deprecated -failover \
group oaprod_ipmp up \
addif oaprod-nic2 netmask + broadcast + up

/etc/inet/hosts ::
172.27.3.71 oaprod1
172.27.3.72 oaprod1-ce0
172.27.3.73 oaprod1-ce2
172.27.3.74 oaprod2-nic2

Solaris bad superblock corrupt recovery

Find alternate superblocks

# newfs -N /dev/rdsk/c0t3d0s7
/dev/rdsk/c0t3d0s7: 163944 sectors in 506 cylinders of 9 tracks, 36 sectors
83.9MB in 32 cyl groups (16 c/g, 2.65MB/g, 1216 i/g)
super-block backups (for fsck -b #) at:
32, 5264, 10496, 15728, 20960, 26192, 31424, 36656, 41888,
47120, 52352, 57584, 62816, 68048, 73280, 78512, 82976, 88208,
93440, 98672, 103904, 109136, 114368, 119600, 124832, 130064, 135296,
140528, 145760, 150992, 156224, 161456,
Then, run
# fsck -F ufs -o b=5264 /dev/rdsk/c0t3d0s7
Alternate superblock location: 5264.
** /dev/rdsk/c0t3d0s7
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
36 files, 867 used, 75712 free (16 frags, 9462 blocks, 0.0% fragmentation)
/dev/rdsk/c0t3d0s7 FILE SYSTEM STATE SET TO OKAY

***** FILE SYSTEM WAS MODIFIED *****
#
On Solaris 9
If the superblock in the root (/) file system becomes damaged and you cannot restore it, you have two choices:

1. Reinstall the system
2. Boot from the network or local CD, and attempt the above steps. If these steps fail, recreate the root (/) file system with the newfs command and restore it from a backup copy.

AIX superblock corrupt

AIX has 2 superblocks, one in logical block 1 and a copy in logical block 31.
Run the command below to copy block 31 into block 1:
#dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4
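The skip/seek semantics of that dd command can be rehearsed safely on a scratch file instead of a logical volume; the file name and block contents below are invented for the demonstration.

```shell
# Sketch of dd's skip (input offset) and seek (output offset) semantics
# on a throwaway file rather than /dev/hd4; names here are made up.
f=/tmp/sb_demo.img
rm -f $f
i=0
while [ $i -lt 32 ]; do
    # lay down 32 distinct markers at 4k boundaries, like numbered blocks
    printf 'block%02d' $i | dd of=$f bs=4k seek=$i conv=notrunc 2>/dev/null
    i=$((i + 1))
done
# same shape as the AIX command: read one 4k block at offset 31,
# write it at offset 1 (conv=notrunc keeps the rest of the file intact)
dd count=1 bs=4k skip=31 seek=1 if=$f of=$f conv=notrunc 2>/dev/null
dd count=1 bs=4k skip=1 if=$f 2>/dev/null | head -c 7; echo   # prints "block31"
```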

Wednesday, May 21, 2008

Aix boot fails JFS/JFS2 log corrupt

If you see LED code 551, 552, 554, 555, 556, or 557,
try to access the rootvg file systems before mounting them; run #logform -V jfs /dev/hd8 (or #logform -V jfs2 /dev/hd8 if the log device is JFS2) and run fsck afterwards
#fsck -y -V jfs /dev/hd1
#fsck -y -V jfs /dev/hd2
#fsck -y -V jfs /dev/hd3
#fsck -y -V jfs /dev/hd4
#fsck -y -V jfs /dev/hd9var
#fsck -y -V jfs /dev/hd10opt
and do the same for jfs2
then #exit

** previous transaction will be lost

AIX fix corrupted boot logical volume

Boot from cdrom or NIM(F1 or #1 to set SMS options)
>Maintenance
>> 1 Access a Root Volume Group
>>> select hd5
#bosboot -ad /dev/hdisk0
#shutdown -Fr

if you ever need to create hd5 then,
Boot from cdrom or NIM(F1 or #1 to set SMS options)
>Maintenance
>> 1 Access a Root Volume Group
>>> select hd5
#rmlv hd5
#chpv -c hdisk0
#mklv -y hd5 -t boot -a e rootvg 1
#bosboot -ad /dev/hdisk0
#bootlist -m normal -o
#sync
#sync
#shutdown -Fr

AIX data not managed by ODM

These are files not managed by ODM
/etc/filesystems
/etc/security
/etc/passwd
/etc/qconfig

Tuesday, May 20, 2008

AIX restart sendmail

This starts sendmail on AIX
startsrc -s sendmail -a "-bd -q30m"

/etc/mail
# more sendmail.pid
123468
sendmail -bd -q30m
# ps -ef | grep sendmail
root 123468 1 0 Oct 26 - 3:15 sendmail: accepting connections
smmsp 283660 14700 0 Oct 26 - 0:00 /usr/lib/sendmail
root 304634 273842 1 23:54:43 pts/1 0:00 grep sendmail
# kill -15 `head -1 /etc/mail/sendmail.pid`
# ps -ef | grep sendmail
root 123494 273842 1 23:59:56 pts/1 0:00 grep sendmail
smmsp 283660 14700 0 Oct 26 - 0:00 /usr/lib/sendmail
# sendmail -bd -q30m
# ps -ef | grep sendmail

root 274584 273842 1 00:00:38 pts/1 0:00 grep sendmail
smmsp 283660 14700 0 Oct 26 - 0:00 /usr/lib/sendmail
root 318830 1 0 00:00:32 - 0:00 sendmail: accepting connections

# ps -ef | grep sendmail
root 274586 273842 1 00:00:49 pts/1 0:00 grep sendmail
smmsp 283660 14700 0 Oct 26 - 0:00 /usr/lib/sendmail
root 318830 1 0 00:00:32 - 0:00 sendmail: accepting connections

# ls -l sendmail.pid
-rw------- 1 root 1586 26 Dec 03 00:00 sendmail.pid
# more sendmail.pid
318830
sendmail -bd -q30m
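The kill one-liner above works because sendmail.pid holds two lines: the PID first, then the command line that started the daemon. A quick mock (with an invented path and PID) shows why head -1 is used:

```shell
# Mock of the two-line sendmail.pid format; the path and PID are made up.
mkdir -p /tmp/mailtest
printf '123468\nsendmail -bd -q30m\n' > /tmp/mailtest/sendmail.pid
head -1 /tmp/mailtest/sendmail.pid   # prints "123468", the PID to signal
```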

Sendmail uuencode attachment

Use this example to send email attachment via mailx
#uuencode /content/05070000001.pdf 05070000001.pdf | mailx -v -s 'SubjectLine' you@yahoo.com

AWK by example

# count lines (emulates "wc -l")
awk 'END{print NR}'

# print the sums of the fields of every line
awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'

# add all fields in all lines and print the sum
awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'

# print the total number of fields ("words") in all lines
awk '{ total = total + NF }; END {print total}' file

# print every line where the value of the last field is > 4
awk '$NF > 4'

# substitute "foo" with "bar" ONLY for lines which contain "baz"
awk '/baz/{gsub(/foo/, "bar")};{print}'

# substitute "foo" with "bar" EXCEPT for lines which contain "baz"
awk '!/baz/{gsub(/foo/, "bar")};{print}'

# switch the first 2 fields of every line
awk '{temp = $1; $1 = $2; $2 = temp}' file

# print line number 52
awk 'NR==52'
awk 'NR==52 {print;exit}' # more efficient on large files

# print section of file between two regular expressions (inclusive)
awk '/Iowa/,/Montana/' # case sensitive

# delete ALL blank lines from a file (same as "grep '.' ")
awk NF
awk '/./'
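The one-liners above can be exercised on a throwaway file; the file name and sample numbers below are invented for the demonstration.

```shell
# Scratch input for trying the awk one-liners; contents are made up.
cat > /tmp/nums.txt <<'EOF'
1 2 3
4 5 6
7 8 9
EOF

awk 'END{print NR}' /tmp/nums.txt                                 # prints 3
awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}' /tmp/nums.txt  # prints 6, 15, 24
awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}' /tmp/nums.txt  # prints 45
awk '$NF > 4' /tmp/nums.txt                                       # prints the last two lines
```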

SED by example

# substitute (find and replace) "foo" with "bar" on each line
sed 's/foo/bar/' # replaces only 1st instance in a line
sed 's/foo/bar/4' # replaces only 4th instance in a line
sed 's/foo/bar/g' # replaces ALL instances in a line
sed 's/\(.*\)foo\(.*foo\)/\1bar\2/' # replace the next-to-last case
sed 's/\(.*\)foo/\1bar/' # replace only the last case

# substitute "foo" with "bar" ONLY for lines which contain "baz"
sed '/baz/s/foo/bar/g'

# substitute "foo" with "bar" EXCEPT for lines which contain "baz"
sed '/baz/!s/foo/bar/g'


# join pairs of lines side-by-side (like "paste")
sed '$!N;s/\n/ /'


# change "scarlet" or "ruby" or "puce" to "red"
sed 's/scarlet/red/g;s/ruby/red/g;s/puce/red/g' # most seds
gsed 's/scarlet\|ruby\|puce/red/g' # GNU sed only


# print only lines which match regular expression (emulates "grep")
sed -n '/regexp/p' # method 1
sed '/regexp/!d' # method 2


# print the line immediately before a regexp, but not the line
# containing the regexp
sed -n '/regexp/{g;1!p;};h'


# grep for AAA and BBB and CCC (in any order)
sed '/AAA/!d; /BBB/!d; /CCC/!d'

# grep for AAA and BBB and CCC (in that order)
sed '/AAA.*BBB.*CCC/!d'

# print section of file based on line numbers (lines 8-12, inclusive)
sed -n '8,12p' # method 1
sed '8,12!d' # method 2

# print line number 52
sed -n '52p' # method 1
sed '52!d' # method 2
sed '52q;d' # method 3, efficient on large files

# delete ALL blank lines from a file (same as "grep '.' ")
sed '/^$/d' # method 1
sed '/./!d' # method 2
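As with the awk list, these can be tried on scratch input; the file name and words below are invented for the demonstration.

```shell
# Scratch input for trying the sed one-liners; contents are made up.
cat > /tmp/sed_demo.txt <<'EOF'
foo one
foo baz
last line
EOF

sed 's/foo/bar/' /tmp/sed_demo.txt        # 1st instance per line
sed '/baz/s/foo/bar/g' /tmp/sed_demo.txt  # only on lines containing "baz"
sed -n '2p' /tmp/sed_demo.txt             # prints "foo baz"
printf 'a\nb\nc\nd\n' | sed '$!N;s/\n/ /' # joins pairs: "a b" then "c d"
```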


Thursday, May 15, 2008

Solaris one liner backup

--> move a directory to another server
tar cf - ./games | rsh brucey "cd /tmp; tar xvBpf -"
--> move a directory
tar cf - ./games | (cd /tmp; tar xvBpf - )
--> dump to zip
ufsdump 0f - /filesystem | /opt/local/gzip > /tmp/dump.gz
--> backup one liner
tar cvf - /home/ebs | gzip > ebs.tar.gz
--> encrypt filename 1 and output to 1.crypt file
crypt < 1 > 1.crypt ; rm 1
--> decrypt filename 1.crypt and stdout to screen
crypt < 1.crypt
--> clever way to archive
tar cvf - `find . -print` > /tmp/dumpfile.tar
--> selectively extract from a tar archive
tar xvf /tmp/iona.tar ./iona/.sh_history
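The local directory-move one-liner can be rehearsed end to end with scratch paths (the directory names below are invented):

```shell
# Safe rehearsal of the tar-pipe directory move with throwaway paths.
mkdir -p /tmp/tardemo/src/games /tmp/tardemo/dst
echo "hello" > /tmp/tardemo/src/games/readme
# stream the directory as a tar archive into a subshell that unpacks
# it elsewhere, preserving modes (p), just like the one-liner above
(cd /tmp/tardemo/src && tar cf - ./games) | (cd /tmp/tardemo/dst && tar xpf -)
ls /tmp/tardemo/dst/games   # prints "readme"
```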

Solaris mounting and sharing cdrom

restart volmgt daemon
# pkill vold && /usr/sbin/vold &

check which is the cdrom drive
% iostat -En

c1t0d0 Soft Errors: 149 Hard Errors: 0 Transport Errors: 0
Vendor: MATSHITA Product: CDRW/DVD UJDA740 Revision: 1.00 Serial No:
Size: 0.56GB <555350016>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 149 Predictive Failure Analysis: 0

mount the cdrom manually if vold fails to mount it
mount -F hsfs -o ro /dev/dsk/c1t0d0s2 /cdrom

nfs shares cdrom drive
edit /etc/dfs/dfstab, put in below line or just run it
share -F nfs -o ro /cdrom

run dfshares to see if cdrom is shared
if not, run /etc/init.d/nfs.server stop and then /etc/init.d/nfs.server start

on remote machine run
mount servername:/cdrom /mnt

if want to umount /mnt run this first
fuser -cu /mnt , to see pid using the /mnt
fuser -ck /mnt , will kill all processes holding the cdrom

Wednesday, May 14, 2008

Solaris American English Unicode (en_US.UTF-8) Full Locale

Beginning with Solaris 8, all Unicode locales benefit from the Asian locales' native input systems and methods. To use Asian native input methods in Unicode locales, you must install the Asian locales that you want to use. For example, if you want to use Japanese input systems in Unicode locales, you must install at least one of the available Japanese locales.

This locale should already be on your system if you installed the "End User System Support" meta-cluster or a higher meta-cluster such as "Developer System Support" or "Entire Distribution" during the installation. To find out if you have the locale, run the command

% ls /usr/lib/locale/en_US.UTF-8/en_US.UTF-8.so.1
If the file is in the directory, the locale is in place. If the file is not in the directory, add the packages below.
The packages in the two lists below can be found on the Solaris Software 1 of 2 CD in the directory .../Solaris_8/Product/:

SPARC platform

SUNWarrf SUNWeugrf SUNWi2rf SUNWi4rf SUNWi5rf SUNWi7rf SUNWi8rf SUNWi9rf SUNWi13rf SUNWi15rf SUNWtxfnt SUNW5xmft SUNWcxmft SUNWjxmft SUNWkxmft SUNWeudba SUNWeudbd SUNWeudda SUNWeudhr SUNWeudhs SUNWeudis SUNWeudiv SUNWeudlg SUNWeudmg SUNWeuezt SUNWeuluf SUNWeulux SUNWeuodf SUNWeusru SUNWeuxwe SUNWuiu8 SUNWuiu8x SUNWuium SUNWulcf SUNWulcfx SUNWulocf SUNWuxlcf SUNWuxlcx

Solaris check if patch is installed

/var/sadm/patch contains the list of installed patches. The creation date of the directory for a particular patch is the date the patch was installed.

or

#showrev -p | grep 116268
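The directory check can be scripted; the sketch below uses an invented root so it can be rehearsed anywhere (on a real host you would point PATCHDIR at /var/sadm/patch):

```shell
# Hypothetical patch-presence check against a mock /var/sadm/patch tree.
PATCHDIR=/tmp/patchdemo/var/sadm/patch       # real hosts: /var/sadm/patch
mkdir -p $PATCHDIR/116268-07                 # pretend the patch is installed
if ls -d $PATCHDIR/116268* >/dev/null 2>&1; then
    echo "patch 116268 installed"
else
    echo "patch 116268 not installed"
fi
```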

Solaris vxfs verify large file support

how to verify and enable largefile support on a vxfs filesystem

Description

To verify if largefile support is enabled on a VXFS filesystem:

# fsadm -F vxfs /dir_name

If you need to enable largefile support:

# fsadm -F vxfs -o largefiles /dir_name

Example

fsadm -F vxfs /dir_name; fsadm -F vxfs -o largefiles /dir_name

Tuesday, May 13, 2008

HP-UX disk and filesystem tasks

Search for attached disk
ioscan -fnC disk
Initialize a disk for use with LVM
pvcreate -f /dev/rdsk/c0t1d0
Create the device structure needed for a new volume group.
cd /dev
mkdir vgdata
cd vgdata
mknod group c 64 0x010000
Create volume group vgdata
vgcreate vgdata /dev/dsk/c0t1d0
{ if you're expecting to use more than 16 physical disks use the -p option, range from 1 to 256 disks. }
Display volume group vgdata
vgdisplay -v vgdata
Add another disk to volume group
pvcreate -f /dev/rdsk/c0t4d0
vgextend vg01 /dev/dsk/c0t4d0
Remove disk from volume group
vgreduce vg01 /dev/dsk/c0t4d0
Create a 100 MB logical volume lvdata
lvcreate -L 100 -n lvdata vgdata
newfs -F vxfs /dev/vgdata/rlvdata
Extend logical volume to 200 MB
lvextend -L 200 /dev/vgdata/lvdata
Extend file system to 200 MB
{ if you don't have Online JFS installed volumes must be unmounted before you can extend the file system. }
fuser -ku /dev/vgdata/lvdata { kill all process that has open files on this volume. }
umount /dev/vgdata/lvdata
extendfs /dev/vgdata/rlvdata

{ for Online JFS, 200 MB / 4 MB = 50 LE; 50 x 1024 = 51200 blocks }
fsadm -F vxfs -b 51200 /data
Set largefiles to support files greater than 2GB
fsadm -F vxfs -o largefiles /data

Exporting and Importing disks across system.

1. make the volume group unavailable
vgchange -a n /dev/vgdata
2. Export the the disk while creating a logical volume map file.
vgexport -v -m data_map vgdata
3. Disconnect the drives and move to new system.
4. Move the data_map file to the new system.
5. On the new system recreate the volume group directory
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000
6. Import the disks to the new system
vgimport -v -m data_map /dev/vgdata /dev/dsk/c2t1d0 /dev/dsk/c2t2d0
7. Enable the new volume group
vgchange -a y /dev/vgdata
Renaming a logical volume
/dev/vgdata/lvol1 -> /dev/vgdata/data_lv
umount /dev/vgdata/lvol1
ll /dev/vgdata/lvol1 take note of the minor ( e.g 0x010001 )
brw-r----- 1 root root 64 0x010001 Dec 31 17:59 lvol1
mknod /dev/vgdata/data_lv b 64 0x010001 create new logical volume name
mknod /dev/vgdata/rdata_lv c 64 0x010001
vi /etc/fstab { reflect the new logical volume }
mount -a
rmsf /dev/vgdata/lvol1
rmsf /dev/vgdata/rlvol1

Solaris which pid uses port

#!/bin/ksh
# find from a port the pid that started the port
#
line='-------------------------------------------------------------------------'
pids=`/usr/bin/ps -ef | sed 1d | awk '{print $2}'`

# Prompt users or use 1st cmdline argument
if [ $# -eq 0 ]; then
read ans?"Enter port you like to know pid for: "
else
ans=$1
fi

# Check all pids for this port, then list that process
for f in $pids
do
/usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
if [ $? -eq 0 ] ; then
echo "$line\nPort: $ans is being used by PID: \c"
/usr/bin/ps -o pid -o args -p $f | sed 1d
fi
done
exit 0

AIX HPUX how to remove password

To remove password in AIX,
Edit /etc/security/passwd, change the password to *
myaot2:
password = *

To remove password requirements for HPUX, edit this file /tcb/files/auth/o/oracle;

oracle:u_name=oracle:u_id#600:\
:u_pwd=*:\
:u_auditid#67:\
:u_auditflag#1:\
:u_exp#3024000:u_life#4838400:u_succhg#1150751962:u_unsucchg#1150691460:\
:u_pw_expire_warning#604800:u_pswduser=oracle:u_nullpw:u_pwchanger=root:\
:u_suclog#1150752396:u_suctty=/dev/pts/1:u_unsuclog#1150692078:u_unsuctty=ssh:\
:u_lock@:chkent:

Change value u_pwd=*:\

You also have to disable password aging for the specific account in SAM. Password access will be revoked, and the account will then not be locked.

AIX print queue command

To Check AIX Print Queue

#qchk -q -P HOUOSP0P350

Queue Dev Status Job Files User PP % Blks Cp Rnk

------- ----- --------- --- ------------------ ---------- ---- -- ----- --- ---

HOUOSP0 @houh READY

HOUOSP0P350:

HOUOSP0P350:

HOUOSP0P350:

HOUOSP0P350:

HOUOSP0P350:


PRINTERS / PRINT QUEUES
--------------------------------------------------------------------------------

splp (device) Displays/changes printer driver settings
splp /dev/lp0

export LPDEST="pqname" Set default printer queue for login session

lsvirprt Lists/changes virtual printer attributes.

lsallq Displays all queues

rmvirprt -q queuename -d queuedevice Removes a virtual printer

qpri -#(job No) -a(new priority) Change a queue job priority.

qhld -#(job No) Put a job on hold
qhld -r #(job No) Release a held job

qchk -A Status of jobs in queues
lpstat
lpstat -p(queue) Status of jobs in a named queue

qcan -x (job No) Cancel a job from a queue
cancel (job No)

enq -U -P(queue) Enable a queue
enable (queue)

enq -D -P(queue) Disable a queue
disable (queue)

qmov -m(new queue) -#(job No) Move a job to another queue

startsrc -s qdaemon Start qdaemon sub-system
lssrc -s qdaemon List status of qdaemon sub-system
stopsrc -s qdaemon Stop qdaemon sub-system

Solaris find sun fibre channel device

# more luxadm_probe_-p.out

No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):

Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000100Dd0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000100d:c,raw
Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000102Cd0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000102c:c,raw
Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000102Ed0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000102e:c,raw

# format

Searching for disks...done
AVAILABLE DISK SELECTIONS:

0. c0t0d0
/pci@1f,4000/scsi@3/sd@0,0
1. c0t1d0
/pci@1f,4000/scsi@3/sd@1,0
2. c0t2d0
/pci@1f,4000/scsi@3/sd@2,0
3. c0t3d0
/pci@1f,4000/scsi@3/sd@3,0
4. c2t0d0
/pci@6,4000/scsi@4/sd@0,0
5. c2t1d0
/pci@6,4000/scsi@4/sd@1,0
6. c2t2d0
/pci@6,4000/scsi@4/sd@2,0
7. c2t3d0
/pci@6,4000/scsi@4/sd@3,0
8. c6t6005076300C09F4B000000000000100Dd0
/scsi_vhci/ssd@g6005076300c09f4b000000000000100d

9. c6t6005076300C09F4B000000000000102Cd0
/scsi_vhci/ssd@g6005076300c09f4b000000000000102c

10. c6t6005076300C09F4B000000000000102Ed0
/scsi_vhci/ssd@g6005076300c09f4b000000000000102e


Specify disk (enter its number): ^D

Legato Networker nsrjb command example

nsrjb
Note: The nsrjb command is performed on a NetWorker server
with a jukebox attached.

nsrjb

A plain "nsrjb" command shows the volumes present in the
jukebox slots and in the jukebox drives.

nsrjb -d -P1 -S26

Deposit a tape cartridge from access port slot 1 (-P1)
to jukebox slot 26 (-S26).

nsrjb -w -S236 -P5

Withdraw a tape cartridge from jukebox slot 236 to
access port slot 5.

nsrjb -l -f /dev/rmt/0cbn B00341

Load volume B00341 into jukebox device /dev/rmt/0cbn.

nsrjb -l -f /dev/rmt/6cbn -S 26

Load a tape cartridge from jukebox slot 26 into jukebox
device /dev/rmt/6cbn.

nsrjb -u -f /dev/rmt/6cbn

Unload the tape cartridge in jukebox device /dev/rmt/6cbn
back to the jukebox slot it came from.

nsrjb -L -f /dev/rmt/2cbn -S 177 -b FULL

Load the tape cartridge in jukebox slot 177 into jukebox
device /dev/rmt/2cbn and write a label on it indicating
that it is in the FULL pool. This example assumes that
the jukebox has a barcode reader, that the tape cartridge
has a barcode attached, and NetWorker is configured to
automatically use barcodes for logical labels. This also
updates the media index database.

nsrjb -HE

Unload all jukebox drives and reset them.

nsrjb -IE -S307 -f /dev/rmt/4cbn

Inventory the contents of slot 307 using jukebox device
/dev/rmt/4cbn.

Legato networker check jukebox content

# nsrjb -IS 1-3 -j jb3
/dev/rmt1.1: verified label of Cdeck:042.
/dev/rmt0.1: verified label of Cdeck:048.
/dev/rmt1.1: verified label of Cdeck:049.
# nsrjb -v
setting verbosity level to `1'
No jukebox selected.
1: jb3
2: jb2
Please select a jukebox to use:? [1] 1

Jukebox jb3:
slot volume used pool volume id recyclable
1: Cdeck:042 36% Cdeck 20d8bb27-00000005-a9813304-44813304-00580000-92c20898 no
2: Cdeck:048 0% Cdeck d5b67332-00000005-ff99375b-4499375b-00020000-92c20898 no
3: Cdeck:049 0% Cdeck 59776074-00000005-fe993926-44993926-00030000-92c20898 no
4: Cdeck:045 0% Cdeck 0c3a10f9-00000005-7d926e6a-44926e6a-02840000-92c20898 no
5: Cdeck:046 0% Cdeck faa4a428-00000005-be87fe23-4487fe23-01430000-92c20898 no
6: Cdeck:047 0% Cdeck b03a3bd3-00000005-00993515-44993515-00010000-92c20898 no
7: Cdeck:048 0% Cdeck d5b67332-00000005-ff99375b-4499375b-00020000-92c20898 no
8: Cdeck:049 0% Cdeck 59776074-00000005-fe993926-44993926-00030000-92c20898 no
9: Cdeck:050 0% Cdeck d0a975b1-00000005-fd993ac7-44993ac7-00040000-92c20898 no
10: Cleaning Tape (4 uses left) -
11:
12:
13:
14:
15:
*not registered in the NetWorker media data base

9 registered volume(s), 9 less than 80% full.
315 GB estimated capacity, 302 GB remaining (4% full)
Default slot range(s) are 1-9, 11-15

drive 1 (/dev/rmt0.1) slot :
drive 2 (/dev/rmt1.1) slot :
You have mail in /usr/spool/mail/root
#

Legato networker volumes not registered

When NetWorker marks volumes as not registered in the NetWorker media database, indicated with a * next to the volume name in the output from nsrjb, you can use scanner to rebuild both the media and online file indexes for the affected volume(s).

load the volume in a drive:

$: nsrjb | grep volume_name

$: 39: volume_name
(39 is the slot number in the jukebox)

$: nsrjb -ln -S 39 -f /path/to/device (/dev/rmt/0cbn 1cbn 2cbn ...)
(/path/to/device should be for the device which is known by NetWorker)

rewind the tape

$: mt -f /path/to/device rewind
(if you used /dev/rmt/0cbn above use /dev/rmt/0 here)

scan the volume with the -i (india) option

$: scanner -i /path/to/device (/dev/rmt/0cbn 1cbn 2cbn ...)
(/path/to/device should be for the device which is known by NetWorker)

you can check the media database with mminfo to determine if the volume entries were recovered.

mminfo -r 'client,savetime' -q volume=volume_name

eject the volume and return it to the jb

nsrjb -u -f /path/to/device (/dev/rmt/0cbn 1cbn 2cbn ...)
(/path/to/device should be for the device which is known by NetWorker)

HPUX move print jobs

lpshut
lpmove brl0p006 brl0p004
/usr/sbin/lpsched
/usr/sbin/accept brl0p006
/usr/bin/enable brl0p006

AIX alternative split mirror copy for backup

The AIX alternative to split a mirror copy uses either the splitlvcopy or chfs command. The splitlvcopy is used for raw partitions, and the chfs command is used for filesystems. To illustrate, the following example assumes we want to make a split mirror copy of the "datalv". The copy will be located on hdisk2, with a LV name of "lv_copy"


1. Define a mirror copy of "datalv" on "hdisk2"

mklvcopy datalv 2 hdisk2

2. Synchronize (copy) the data to the mirror

syncvg -l -P6 datalv

3. Verify copy is complete (ie no "stale" partitions)

lslv datalv

5. Stop application/database

6. Split off the hdisk2 mirror copy.

For raw partitions: splitlvcopy -y lv_copy datalv 1 hdisk2
For JFS filesystems: chfs -a splitcopy=/data_copy -a copy=2 /data

Note: to use "chfs," the JFSLOG must be mirrored. If not, you'll see the error:
"jfs_syscall: A system call received a parameter that is not valid "

7. Restart application/database

8. Backup data

The downtime associated with the split mirror copy is in Steps 5-7. Depending on the size of the data, the typical downtime is 5-30 minutes. All other steps can done while running production.

Friday, May 9, 2008

AIX enable jumbo frames

chdev -P -l ent2 -a media_speed=Auto_Negotiation
ifconfig en2 down detach
chdev -l ent2 -a jumbo_frames=yes
chdev -l en2 -a mtu=9000
chdev -l en2 -a state=up

Solaris enable jumbo frames

Solaris 9 with a "ce" NIC
1. # ifconfig -a
lo0: flags=1000849 mtu 8232 index 1

inet 127.0.0.1 netmask ff000000

ce0: flags=1000843 mtu 1500 index 2

inet 128.200.197.161 netmask ffffff80 broadcast 128.200.197.255
ether 0:3:ba:d2:f6:d3

ce1: flags=1000843 mtu 1500 index 3

inet 192.168.2.101 netmask ffffff00 broadcast 192.168.2.255
ether 0:3:ba:d2:f6:d3

2. # ifconfig ce1 down unplumb

3. # ndd -set /dev/ce instance 1

4. # ndd -set /dev/ce accept-jumbo 1

5. # ifconfig ce1 plumb 192.168.2.101 up
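
The steps above can be sketched as one script. The run helper and DRY_RUN switch are additions; note the ndd settings do not survive a reboot, and persisting them (e.g. in the ce driver's ce.conf) is driver-specific:

```shell
#!/bin/sh
# Sketch of enabling jumbo frames on a Solaris 9 "ce" interface.
# Interface, instance and IP are the example values from the text.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

enable_jumbo_ce() {
    ifname=$1 instance=$2 ip=$3
    run ifconfig "$ifname" down unplumb        # take the interface offline
    run ndd -set /dev/ce instance "$instance"  # select the ce instance
    run ndd -set /dev/ce accept-jumbo 1        # enable jumbo frames
    run ifconfig "$ifname" plumb "$ip" up      # bring it back up
}

# Dry-run example: DRY_RUN=1 enable_jumbo_ce ce1 1 192.168.2.101
```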

AIX dsmc find specific date

#dsmc q ba -inact -fromdate=06/14/06 -todate=06/15/06 -subdir=yes -pass=PASSWORD /local/data/ppi/prd/archive/ > /local/data/ppi/prd/9999/ppiin/temp/tsm.txt_20060614_180000

Solaris soft partition SAN disk add space to server

The customer asks for 1 GB of extra space in the filesystem /local/bin on the Solaris host.

1) Filesystem /local/bin is on a soft partition :


d8 -m d80 d82 1
d80 3 1 c0t1d0s0 \
1 c0t2d0s0 \
1 c0t3d0s0
d82 3 1 c2t1d0s0 \
1 c2t2d0s0 \
1 c2t3d0s0
d108 -p d8 -o 34534000 -b 8388608
d106 -p d8 -o 13562478 -b 2097152
d105 -p d8 -o 12538477 -b 1024000
d104 -p d8 -o 8344172 -b 4194304
d103 -p d8 -o 7320171 -b 1024000
d102 -p d8 -o 1028714 -b 6291456 -o 34450618 -b 81920
d101 -p d8 -o 4713 -b 1024000

2) To get the WWN:

# luxadm probe -p
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000100Dd0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000100d:c,raw
Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000102Cd0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000102c:c,raw
Node WWN:5005076300c09f4b Device Type:Disk device
Logical Path:/dev/rdsk/c6t6005076300C09F4B000000000000102Ed0s2
Physical Path:
/devices/scsi_vhci/ssd@g6005076300c09f4b000000000000102e:c,raw

OR

# /tmp/dumpmaps (a script copied from the SF12K servers; it can be used on other Sun servers)

c4 = qlc0 (fp0) -> /devices/pci@6,2000/SUNW,qlc@1/fp@0,0:devctl
================================================================================
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 11500 0 5005076300d09f4b 5005076300c09f4b 0x0 (Disk device)
1 51500 0 5005076300ce9f4b 5005076300c09f4b 0x0 (Disk device)
2 250500 0 210000e08b1632fc 2

c5 = qlc1 (fp1) -> /devices/pci@6,2000/SUNW,qlc@1,1/fp@0,0:devctl
================================================================================
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 21700 0 5005076300c89f4b 5005076300c09f4b 0x0 (Disk device)
1 61500 0 5005076300ca9f4b 5005076300c09f4b 0x0 (Disk device)
2 240500 0 210100e08b3632fc


3) Request LUNs from the storage team. We asked for two of the smallest LUNs (8 GB) since we only needed 2 GB. Note: since the filesystem /local/bin is on a soft partition, BOTH metadevices under the mirror (mirror d8, submirrors d80 and d82) must be expanded by one disk/LUN each, which is why two 8 GB LUNs were requested.


4) After the LUNs have been assigned to the server, the storage engineer will email the ending numbers of the assigned LUNs to look for in the format output; in our case, two 8 GB LUNs (00D and 02C).

5) Before the LUNs show up in format, they have to be detected by the system:

(c4 and c5 are found from the output of /tmp/dumpmaps) :

#cfgadm -c configure c4

#cfgadm -c configure c5

6) Now the LUNs can be seen in format :

8. c6t6005076300C09F4B000000000000100Dd0
/scsi_vhci/ssd@g6005076300c09f4b000000000000100d
9. c6t6005076300C09F4B000000000000102Cd0

Format the 2 new disks, assign all storage to slice 0 (since this is used for all soft partitions).

7) Attach the new disks to the metadevices under the mirror d8 :


# metattach d80 c6t6005076300C09F4B000000000000102Cd0s0
d80: component is attached

# metattach d82 c6t6005076300C09F4B000000000000100Dd0s0
d82: component is attached


8) Grow the soft partition by 1 GB, as requested:

# metattach d102 1g
d102: Soft Partition has been grown

9) Finally, grow the filesystem, since we have just increased the size of the underlying soft partition:

# growfs -M /local/bin /dev/md/rdsk/d102
Warning: 1648 sector(s) in last cylinder unallocated
/dev/md/rdsk/d102: 8470528 sectors in 1798 cylinders of 19 tracks, 248 sectors
4136.0MB in 82 cyl groups (22 c/g, 50.62MB/g, 8256 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 103952, 207872, 311792, 415712, 519632, 623552, 727472, 831392, 935312,
7465888, 7569808, 7673728, 7777648, 7881568, 7985488, 8089408, 8193328,
8297248, 8401168,

10) DONE!

# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d1 6050182 1718189 4271492 29% /
/proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
/dev/md/dsk/d3 962571 254474 650343 29% /var
swap 4552056 32 4552024 1% /var/run
swap 512000 25008 486992 5% /tmp
/dev/md/dsk/d101 480847 28109 404654 7% /local
/dev/md/dsk/d104 2033415 1530688 441725 78% /local/users
/dev/md/dsk/d105 480847 1041 431722 1% /local/apps
/dev/md/dsk/d106 986287 438820 488290 48% /local/data
/dev/md/dsk/d9 7687661 1616563 5994222 22% /local/osdumps
/dev/md/dsk/d102 4149310 2165535 1922140 53% /local/bin
/dev/md/dsk/d108 4130134 2683148 1405685 66% /local/logs
/dev/md/dsk/d103 480847 155271 277492 36% /local/bin/patrol
/dev/md/dsk/d109 36045276 27777545 7907279 78%
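
Steps 7-9 above can be sketched as one script. The run helper and DRY_RUN switch are additions; metadevice, LUN, size and mount-point names are the example values from the text:

```shell
#!/bin/sh
# Sketch: extend both submirrors, grow the soft partition, then grow
# the UFS filesystem on it.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

grow_soft_partition() {
    sub1=$1 disk1=$2 sub2=$3 disk2=$4 softpart=$5 size=$6 mnt=$7
    run metattach "$sub1" "$disk1"     # extend the first submirror
    run metattach "$sub2" "$disk2"     # extend the second submirror
    run metattach "$softpart" "$size"  # grow the soft partition
    run growfs -M "$mnt" /dev/md/rdsk/"$softpart"   # grow the filesystem
}

# Dry-run example:
# DRY_RUN=1 grow_soft_partition d80 c6t6005076300C09F4B000000000000102Cd0s0 \
#     d82 c6t6005076300C09F4B000000000000100Dd0s0 d102 1g /local/bin
```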

HPUX NetApp add and grow file system

Basic process for resizing a file system using the HP-UX LVM and NetApp storage.
1. Create a new LUN on the filer to add to the LVM volume group and map it to the HP-UX host.
filer> lun create -t hpux -s 100g /vol/vol1/oraserv1.lun
filer> lun map /vol/vol1/oraserv1.lun oraserv
lun map: auto-assigned oraserv=1
2. Run commands on the HP-UX host to recognize the LUN and make it available to LVM.
# ioscan -fnC disk
# ioinit -i
# sanlun lun show -p
# pvcreate /dev/dsk/c4t0d1
The "sanlun" command is used to determine the path for the new LUN.
3. Add the new LUN to an existing volume group.
# vgextend vg01 /dev/dsk/c4t0d1
4. Use the Network Appliance attach kit to configure the PVlinks multipathing.
# ntap_config_paths
5. Extend the logical volume.
# lvextend -L 144 /dev/vg01/lvol1
6. If the OnlineJFS license and package are not installed, unmount the file system.
# umount /u01
Note: The command "swlist -l bundle | grep JFS" will determine whether OnlineJFS is installed.
7. Extend the file system.
For systems without OnlineJFS:
# extendfs /dev/vg01/lvol1
For systems with OnlineJFS:
# fsadm -b 147456 /u01
Note: The value 147456 is the size 144 (MB) from step 5 multiplied by 1024, since fsadm takes the new size in 1 KB blocks.
8. If the file system was unmounted in step 6, remount the file system.
# mount /dev/vg01/lvol1 /u01
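
The host-side flow can be sketched as below (OnlineJFS variant; the NetApp attach-kit path step is omitted). The run helper and DRY_RUN switch are additions; device, VG, LV, size and mount point are the example values from the text:

```shell
#!/bin/sh
# Sketch of extending an HP-UX LVM filesystem on a new LUN.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

extend_fs_onlinejfs() {
    disk=$1 vg=$2 lv=$3 size_mb=$4 mnt=$5
    run ioscan -fnC disk               # discover the new LUN
    run pvcreate "$disk"               # make it an LVM physical volume
    run vgextend "$vg" "$disk"         # add it to the volume group
    run lvextend -L "$size_mb" "$lv"   # extend the logical volume (MB)
    run fsadm -b $((size_mb * 1024)) "$mnt"   # grow FS: MB -> 1 KB blocks
}

# Dry-run example:
# DRY_RUN=1 extend_fs_onlinejfs /dev/dsk/c4t0d1 vg01 /dev/vg01/lvol1 144 /u01
```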

Tuesday, May 6, 2008

AIX grow and add new jfs largefile filesystem

Requirement
1. Expand file-system
/db/tst4/dbf2 – 11 GB (Existing size is 2 GB)
/db/tst4/temp – 7 GB (Existing size is 0.5 GB)
/db/tst4/undo – 7 GB (Existing size is 0.5 GB)

2. Create file-system
/db/tst4/dbf3 – 11 GB

The Filesystems should allow large files [ > 2Gb];

Pre-implementation checkout
# lsvg hds128vg
VOLUME GROUP: hds128vg VG IDENTIFIER: 00c54e7d00004c00000001175fb130df
VG STATE: active PP SIZE: 128 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 2310 (295680 megabytes)
MAX LVs: 256 FREE PPs: 507 (64896 megabytes)
LVs: 37 USED PPs: 1803 (230784 megabytes)
OPEN LVs: 37 QUORUM: 8 (Enabled)
TOTAL PVs: 14 VG DESCRIPTORS: 14
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 14 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable

Implementation Steps
1. # chfs -a size=+9G /db/tst4/dbf2
2. # chfs -a size=+6500M /db/tst4/temp
3. # chfs -a size=+6500M /db/tst4/undo

4. Create new logical volume on cbitdb3:
• Please use smit menu
# smitty mklv <-- Jump straight to "Add a Logical Volume"
Specify hds128vg for the VOLUME GROUP name
Set Logical volume TYPE to jfs

New LV NAME VGName SIZE total LP PV Name
----------------------------------------------------
tst4_dbf3 hds128vg 11 GB - -

5. Create a new Journaled Filesystem with largefile option enabled on previously defined logical volumes. Create and mount it accordingly from smit menu.
# smitty crjfslvbf <-- jumps straight to "Add a Large File Enabled Journaled File System" on a previously defined logical volume

Change the following entry accordingly in the smitty screen:
Mount AUTOMATICALLY at system restart? Yes

LOGICAL VOLUME name MOUNT POINT
-----------------------------------------
tst4_dbf3 /db/tst4/dbf3
* For LV name, use F4 to list out & select the LV to be used.
Select tst4_dbf3

6. Mount the new FS
# mkdir /db/tst4/dbf3
# mount /db/tst4/dbf3
Change the ownership to "oracle:dba"
# chown -R oracle:dba /db/tst4/dbf3

Review Plan
After creating the new FS, capture the df output and verify that the new file systems are there:
# df -g /db/tst4/dbf2 , 11GB
# df -g /db/tst4/temp , 7GB
# df -g /db/tst4/undo , 7GB
# df -g /db/tst4/dbf3 , 11GB

# su - oracle
# cd /db/tst4/dbf3
# touch testfile
# ls -l , make sure we can write
# exit

AIX increase LPs

Procedure to increase the number of LPs available
Assume we receive an error that the maximum number of LPs has been exceeded, and that the maximum number of LPs defined was 1100:

#lsvg , to show the total PPs available in the volume group = 1250
#lsvg -l , to show the total PPs used by all logical volumes in that volume group (showed sys1log, the jfs log, using 2 PPs)
#chlv -x 1248 , to change the maximum number of LPs from 1100 to 1248 (1250 PPs in the volume group - 2 PPs used by the jfs log = 1248 available)
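
The arithmetic can be scripted; a sketch, noting that chlv also takes the LV name as its final argument. The run helper and DRY_RUN switch are additions, and the names and PP counts are placeholders:

```shell
#!/bin/sh
# Compute the new maximum LP count: total PPs in the VG minus the PPs
# already held by the jfslog, then apply it with chlv.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

raise_max_lps() {
    lv=$1 total_pps=$2 jfslog_pps=$3
    new_max=$((total_pps - jfslog_pps))
    run chlv -x "$new_max" "$lv"
}

# Dry-run example: DRY_RUN=1 raise_max_lps datalv 1250 2
# prints: + chlv -x 1248 datalv
```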

AIX physical disk administration

Physical Disk Procedures

Procedure to find disks/vpaths that are unallocated

#lsvpcfg
This will show disks/vpaths and the volume group they are allocated to
#lspv | grep None
This will show PVs and whether they are associated with a volume group
Note: For vpaths, the hdisks will show as None, but they may be allocated to a vpath - you must check each hdisk against the lsvpcfg output

Procedure to make a new lun available to AIX

Allocate the new lun on the SAN
Run "cfgmgr"
Verify the new vpath/hdisk by running "lsvpcfg"
There should be a new vpath and it should be available with no volume group - if not, rerun cfgmgr


Procedure to list the PVs in a volume group:

#lsvg -p

AIX determine disk size

Get the size (in MB) of hdisk1
getconf DISK_SIZE /dev/hdisk1 or bootinfo -s hdisk1

Solaris VCS system maintenance

node1# hastatus -sum
node1# hagrp -switch appgrp -to node2
node1# hagrp -switch ClusterService -to node2
node1# hastatus -sum
Make sure appgrp and ClusterService are online on node2,
then proceed to the next command. See # tail -f /var/*vcs/log/engine_A.log
node1# hastatus -sum
node1# haconf -makerw
node1# hagrp -freeze oragrp -persistent
node1# hagrp -freeze appgrp -persistent
node1# hagrp -freeze ClusterService -persistent
node1# haconf -dump -makero


Stop VCS and verify the "had" daemon is gone from node1:
root@node1:/ # hastop -sys node1
root@node1:/ # ps -ef | grep had
root@node1:/ # init S

do system maintenance ...
boot node1 ...

node1# haconf -makerw
node1# hagrp -unfreeze oragrp -persistent
node1# hagrp -unfreeze appgrp -persistent
node1# hagrp -unfreeze ClusterService -persistent
node1# haconf -dump -makero
node1# hastatus -sum
node1# hagrp -switch appgrp -to node1
node1# hagrp -switch ClusterService -to node1
node1# hastatus -sum
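
The pre-maintenance half can be sketched as a small helper. The run helper and DRY_RUN switch are additions; group and node names are the example values from the text:

```shell
#!/bin/sh
# Sketch: switch each service group to the peer node, then
# persistently freeze it before maintenance.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

vcs_pre_maintenance() {
    peer=$1; shift
    for grp in "$@"; do
        run hagrp -switch "$grp" -to "$peer"
    done
    run haconf -makerw                 # open the config for writing
    for grp in "$@"; do
        run hagrp -freeze "$grp" -persistent
    done
    run haconf -dump -makero           # save and close the config
}

# Dry-run example: DRY_RUN=1 vcs_pre_maintenance node2 appgrp ClusterService
```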

Monday, May 5, 2008

AIX network administration

The examples here assume that the default TCP/IP configuration
(rc.net) method is used. If the alternate method of using rc.bsdnet
is used then some of these examples may not apply.
Determine if rc.bsdnet is used over rc.net
lsattr -El inet0 -a bootup_option
TCP/IP related daemon startup script
/etc/rc.tcpip
To view the route table
netstat -r
To view the route table from the ODM DB
lsattr -EHl inet0 -a route
Temporarily add a default route
route add default 192.168.1.1
Temporarily add an address to an interface
ifconfig en0 192.168.1.2 netmask 255.255.255.0
Temporarily add an alias to an interface
ifconfig en0 192.168.1.3 netmask 255.255.255.0 alias
To permanently add an IP address to the en1 interface
chdev -l en1 -a netaddr=192.168.1.1 -a netmask=0xffffff00
Permanently add an alias to an interface
chdev -l en0 -a alias4=192.168.1.3,255.255.255.0
Remove a permanently added alias from an interface
chdev -l en0 -a delalias4=192.168.1.3,255.255.255.0
List ODM (next boot) IP configuration for interface
lsattr -El en0
Permanently set the hostname
chdev -l inet0 -a hostname=www.tablesace.net
Turn on routing by putting this in rc.net
no -o ipforwarding=1
List networking devices
lsdev -Cc tcpip
List Network Interfaces
lsdev -Cc if
List attributes of inet0
lsattr -EHl inet0
List (physical layer) attributes of ent0
lsattr -El ent0
List (networking layer) attributes of en0
lsattr -El en0
Speed is found through the entX device
lsattr -El ent0 -a media_speed
Set the ent0 link to Gig full duplex
(Auto Negotiation is another option)
chdev -l ent0 -a media_speed=1000_Full_Duplex -P
Turn off Interface Specific Network Options
no -p -o use_isno=0
Get (long) statistics for the ent0 device (no -d is shorter)
entstat -d ent0
List all open, and in use TCP and UDP ports
netstat -anf inet
List all LISTENing TCP ports
netstat -na | grep LISTEN
Remove all TCP/IP configuration from a host
rmtcpip
IP packets can be captured using iptrace / ipreport or tcpdump
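
A few of the persistent-address commands above combine naturally into one helper. The run helper and DRY_RUN switch are additions; interface and addresses are example values:

```shell
#!/bin/sh
# Sketch: set the main address, add an alias, then list the ODM
# (next-boot) attributes to verify.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

set_persistent_ip() {
    ifname=$1 addr=$2 mask=$3 alias_addr=$4
    run chdev -l "$ifname" -a netaddr="$addr" -a netmask="$mask"
    run chdev -l "$ifname" -a alias4="$alias_addr,$mask"
    run lsattr -El "$ifname"        # verify the stored configuration
}

# Dry-run example:
# DRY_RUN=1 set_persistent_ip en1 192.168.1.1 255.255.255.0 192.168.1.3
```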

Sunday, May 4, 2008

AIX how to mirror rootdisk

server1:/ # lspv | more
hdisk0 0024b7fafa266f89 old_rootvg
hdisk1 0024b7fafa266fbf rootvg active
hdisk2 0024b7fa1937be52 user128vg active
hdisk3 0024b7fa1937c5de user128vg active
server1:/ # alt_rootvg_op -X old_rootvg
server1:/ # extendvg rootvg hdisk0
0516-1398 extendvg: The physical volume hdisk0, appears to belong to another volume group. Use the force option to add this physical volume to a volume group.
0516-792 extendvg: Unable to extend volume group.
server1:/ # lspv | more
hdisk0 0024b7fafa266f89 None
hdisk1 0024b7fafa266fbf rootvg active
hdisk2 0024b7fa1937be52 user128vg active
hdisk3 0024b7fa1937c5de user128vg active
server1:/ # extendvg -f rootvg hdisk0
server1:/ # lspv | more
hdisk0 0024b7fafa266f89 rootvg active
hdisk1 0024b7fafa266fbf rootvg active
hdisk2 0024b7fa1937be52 user128vg active
hdisk3 0024b7fa1937c5de user128vg active
server1:/ #
server1:/ # mirrorvg rootvg hdisk0
0516-1804 chvg: The quorum change takes effect immediately.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform bosboot of system to initialize boot records. Then, user must modify bootlist to include: hdisk0 hdisk1.
server1:/ #
server1:/ # syncvg -v rootvg
server1:/ # bosboot -ad /dev/hdisk0
bosboot: Boot image is 35429 512 byte blocks.
server1:/ # bootlist -m normal hdisk0 hdisk1
server1:/ # lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 26 52 2 open/syncd N/A
hd8 jfslog 1 2 2 open/syncd N/A
hd4 jfs 2 4 2 open/syncd /
hd2 jfs 16 32 2 open/syncd /usr
hd9var jfs 15 30 2 open/syncd /var
hd3 jfs 2 4 2 open/syncd /tmp
hd1 jfs 2 4 2 open/syncd /home
usr_local jfs 40 80 2 open/syncd /usr/local
candlelv jfs 1 2 2 open/syncd /usr/candle
httpserver jfs 1 2 2 open/syncd /usr/HTTPServer
apps_apl jfs 1 2 2 open/syncd /apps/apl
hd10opt jfs 1 2 2 open/syncd /opt
apps jfs 2 4 2 open/syncd /apps
hd7 sysdump 3 3 1 open/syncd N/A
ca_uni jfs 2 4 2 open/syncd /ca_uni
syswork jfs 1 2 2 open/syncd /syswork
usr_ai_lv jfs 1 2 2 open/syncd /usr/ai
server1:/ # lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 546 428 106..35..69..109..109
hdisk0 active 546 431 109..35..69..109..109
server1:/ #shutdown -Fr
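
The session above reduces to a short sequence. The run helper and DRY_RUN switch are additions; disk names are the example values, and -f forces extendvg past the stale old_rootvg PVID as in the session log:

```shell
#!/bin/sh
# Sketch of mirroring rootvg onto a second disk.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

mirror_rootvg() {
    new_disk=$1 existing_disk=$2
    run extendvg -f rootvg "$new_disk"   # add the disk to rootvg
    run mirrorvg rootvg "$new_disk"      # create the mirror copies
    run syncvg -v rootvg                 # synchronize the mirrors
    run bosboot -ad /dev/"$new_disk"     # write a boot image to the new disk
    run bootlist -m normal "$new_disk" "$existing_disk"  # boot from either
    # finish with: shutdown -Fr
}

# Dry-run example: DRY_RUN=1 mirror_rootvg hdisk0 hdisk1
```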

Solaris vxvm unencapsulate rootdisk

If your system partitions (/, swap, /usr, /var) are located on more than one physical disk, you will have to manually "unencapsulate" your root disk instead of using Veritas' vxunroot command below.

1. Modify /etc/vfstab to reference the cxtxdxsx devices instead of the VxVM devices. There should be a file vfstab.prevm; use it if it is there.

2. Comment out the lines in /etc/system between:

* vxvm_START (do not remove)
* vxvm_END (do not remove)

3. Run the following command to prevent VxVM from starting up after reboot:

touch /etc/vx/reconfig.d/state.d/install-db

4. Reboot the system. After the reboot, you may uninstall VxVM if needed.


System partitions on boot disk
The Veritas vxunroot command is used to unencapsulate a root disk that contains all your system partitions. However, if the root disk is mirrored, you have to remove the mirror plexes.

Example:

# /etc/vx/bin/vxunroot

This operation will convert the following file systems from
volumes to regular partitions: root swap usr var opt home

ERROR: There are 2 plexes associated with volume rootvol
The vxunroot operation cannot proceed.

Listing of all volumes in rootdg:

# vxprint -v -g rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v opt gen ENABLED 4198392 - ACTIVE - -
v rootvol root ENABLED 1050776 - ACTIVE - -
v swapvol swap ENABLED 4198392 - ACTIVE - -
v usr gen ENABLED 4198392 - ACTIVE - -
v var gen ENABLED 4198392 - ACTIVE - -

Here we see that rootdg contains volumes opt, rootvol, swapvol, usr, and var. Let's see if the volumes consist of more than one plex.

# vxprint opt rootvol swapvol usr var
Disk group: rootdg

TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v opt gen ENABLED 4198392 - ACTIVE - -
pl opt-01 opt ENABLED 4198392 - ACTIVE - -
sd rootdisk-04 opt-01 ENABLED 4198392 0 - - -
pl opt-02 opt ENABLED 4198392 - ACTIVE - -
sd rootdisk-mirror-01 opt-02 ENABLED 4198392 0 - - -

v rootvol root ENABLED 1050776 - ACTIVE - -
pl rootvol-01 rootvol ENABLED 1050776 - ACTIVE - -
sd rootdisk-B0 rootvol-01 ENABLED 1 0 - - Block0
pl rootvol-02 rootvol ENABLED 1050776 - ACTIVE - -
sd rootdisk-02 rootvol-01 ENABLED 1050775 1 - - -

v swapvol swap ENABLED 4198392 - ACTIVE - -
pl swapvol-01 swapvol ENABLED 4198392 - ACTIVE - -
sd rootdisk-01 swapvol-01 ENABLED 4198392 0 - - -
pl swapvol-02 swapvol ENABLED 4198392 - ACTIVE - -
sd rootdisk-mirror-03 swapvol-02 ENABLED 4198392 0 - - -

v usr gen ENABLED 4198392 - ACTIVE - -
pl usr-01 usr ENABLED 4198392 - ACTIVE - -
sd rootdisk-03 usr-01 ENABLED 4198392 0 - - -
pl usr-02 usr ENABLED 4198392 - ACTIVE - -
sd rootdisk-mirror-04 usr-02 ENABLED 4198392 0 - - -

v var gen ENABLED 4198392 - ACTIVE - -
pl var-01 var ENABLED 4198392 - ACTIVE - -
sd rootdisk-05 var-01 ENABLED 4198392 0 - - -
pl var-02 var ENABLED 4198392 - ACTIVE - -
sd rootdisk-mirror-05 var-02 ENABLED 4198392 0 - - -

VM disk rootdisk-mirror contains mirror plexes for volumes opt,rootvol, swapvol, usr, and var. We have to remove the plexes before proceeding with vxunroot.

# vxplex -o rm dis opt-02 rootvol-02 swapvol-02 usr-02 var-02

# vxprint opt rootvol swapvol usr var
Disk group: rootdg

TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v opt gen ENABLED 4198392 - ACTIVE - -
pl opt-01 opt ENABLED 4198392 - ACTIVE - -
sd rootdisk-04 opt-01 ENABLED 4198392 0 - - -

v rootvol root ENABLED 1050776 - ACTIVE - -
pl rootvol-01 rootvol ENABLED 1050776 - ACTIVE - -
sd rootdisk-B0 rootvol-01 ENABLED 1 0 - - Block0
sd rootdisk-02 rootvol-01 ENABLED 1050775 1 - - -

v swapvol swap ENABLED 4198392 - ACTIVE - -
pl swapvol-01 swapvol ENABLED 4198392 - ACTIVE - -
sd rootdisk-01 swapvol-01 ENABLED 4198392 0 - - -

v usr gen ENABLED 4198392 - ACTIVE - -
pl usr-01 usr ENABLED 4198392 - ACTIVE - -
sd rootdisk-03 usr-01 ENABLED 4198392 0 - - -

v var gen ENABLED 4198392 - ACTIVE - -
pl var-01 var ENABLED 4198392 - ACTIVE - -
sd rootdisk-05 var-01 ENABLED 4198392 0 - - -

# /etc/vx/bin/vxunroot

This operation will convert the following file systems from
volumes to regular partitions: root swap usr var opt home

Replace volume rootvol with c0t0d0s0.

This operation will require a system reboot. If you choose to
continue with this operation, system configuration will be updated
to discontinue use of the volume manager for your root and swap
devices.

Do you wish to do this now [y,n,q,?] (default: y)

After a reboot, the root disk will be unencapsulated.

Solaris printer administration

Configuring Print Clients
There are a couple of utilities you can use to create print clients:

Admintool -- to be used for Solaris 2.6 and Solaris 7
The printmgr utility in Solaris 8 (in /usr/sadm/admin/bin)
However, they both execute the same commands:

# lpadmin -p banana -s server!remote-queue
# accept banana
# enable banana
Note that:
banana is the queue you are creating to print to on the client.
server is the machine you are sending the print job to.
remote-queue is the name of the print queue on the machine you are sending the print job to.
If the remote-queue is the same as the queue you are creating on the client then you won't need the !remote-queue part of the command.
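
The three commands can be sketched as a helper. The run helper and DRY_RUN switch are additions; "banana" is the example queue from the text, while printsrv and remoteq in the usage line are made-up names:

```shell
#!/bin/sh
# Sketch of creating a Solaris print client queue.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

add_print_client() {
    queue=$1 server=$2 remote=$3
    run lpadmin -p "$queue" -s "$server!$remote"   # point queue at the server
    run accept "$queue"                            # allow jobs to queue
    run enable "$queue"                            # allow jobs to print
}

# Dry-run example: DRY_RUN=1 add_print_client banana printsrv remoteq
```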

Saturday, May 3, 2008

AIX Print queue administration

To Check AIX Print Queue

root@XXXXX143 # date; qchk -q -P XXXXXP0P350

Sat Apr 22 08:21:27 DFT 2006

Queue Dev Status Job Files User PP % Blks Cp Rnk

________________________________________________

HOUOSP0 @houh READY

HOUOSP0P350:


PRINTERS / PRINT QUEUES

splp (device) Displays/changes printer driver settings
splp /dev/lp0

export LPDEST="pqname" Set default printer queue for login session

lsvirprt Lists/changes virtual printer attributes.

lsallq Displays all queues

rmvirprt -q queuename -d queuedevice Removes a virtual printer

qpri -#(job No) -a(new priority) Change a queue job priority.

qhld -#(job No) Put a job on hold
qhld -r #(job No) Release a held job

qchk -A Status of jobs in queues
lpstat
lpstat -p(queue) Status of jobs in a named queue

qcan -x (job No) Cancel a job from a queue
cancel (job No)

enq -U -P(queue) Enable a queue
enable (queue)

enq -D -P(queue) Disable a queue
disable (queue)

qmov -m(new queue) -#(job No) Move a job to another queue

startsrc -s qdaemon Start qdaemon sub-system
lssrc -s qdaemon List status of qdaemon sub-system
stopsrc -s qdaemon Stop qdaemon sub-system

Friday, May 2, 2008

Solaris Vxvm vxdiskadm option 4 replacing failed rootdisk fail

Exact Error Message
"vxvm:vxdg: ERROR: associating disk-media rootdisk with c0t0d0s2: Disk public region is too small"

Details:
Sometimes the above vxdiskadm command will fail. If it does, try to ensure that the disk is replaced with a disk with the same geometry. Use the output from the format command to verify if this is the case:

AVAILABLE DISK SELECTIONS:
0. c0t0d0
/sbus@3,0/SUNW,fas@3,8800000/sd@0,0
1. c2t1d0
/sbus@3,0/SUNW,fas@0,8800000/sd@1,0

If needed, the "public" and "private" lines from the vxdisk list c#t#d# output for each device can be compared to confirm that the sizes are the same, although the disks must be initialized before this will work. Once the disks have been established to be the same, an attempt can be made to manually mirror the volumes rather than use vxdiskadm:


The following is an example output of "vxdisk list" command:

DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 sliced - - online
c2t1d0s2 sliced disk01 rootdg online
- - rootdisk rootdg removed was:c0t0d0s2

Run the following commands to completely remove the existence of the rootdisk in order to tidy up the configuration. This will involve disassociating plexes and removing the plexes with their associated subdisks:
Disassociate the plexes. This must be done with all plexes with a "STATE" of "DISABLED REMOVED". They must be done ONE AT A TIME. A plex can be located by looking for a "pl" in the far left column of information in the "vxprint -ht" output.
eg.
vxplex dis rootdiskvol-01
vxplex dis rootvol-01
vxplex dis swapvol-01


Run vxprint -ht to see that the plexes are already disassociated
Remove the DISABLED REMOVED plexes and the rootdiskPriv subdisk.

eg.
vxedit -rf rm rootdiskvol-01
vxedit -rf rm rootvol-01
vxedit -rf rm swapvol-01

Remove the disk media name "rootdisk".
vxdg rmdisk rootdisk

Run a vxprint and make sure that all of the items from the "rootdisk" are removed (the original failed disk).

vxprint -htg rootdg


Once the vxprint shows that everything associated with the rootdisk has been removed, use vxdiskadm option 1 to initialize the new disk into rootdg (if not already done). DO NOT accept the default name. Instead, name it "rootdisk" (using the same diskname as previously). After the disk has been initialized, use vxdiskadm option 6 to mirror the volumes on a disk. This should automatically start the recovery process.
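
The cleanup loop can be sketched as below. The run helper and DRY_RUN switch are additions; plex names are the example values from the text:

```shell
#!/bin/sh
# Sketch: disassociate and remove each DISABLED REMOVED plex one at a
# time, then drop the disk media record.
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

clean_failed_rootdisk() {
    dmname=$1; shift
    for plex in "$@"; do
        run vxplex dis "$plex"       # disassociate the plex
        run vxedit -rf rm "$plex"    # remove the plex and its subdisks
    done
    run vxdg rmdisk "$dmname"        # remove the disk media name
}

# Dry-run example:
# DRY_RUN=1 clean_failed_rootdisk rootdisk rootdiskvol-01 rootvol-01 swapvol-01
```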

Solaris Replacing encapsulated root disk

[Replace the failed disk and boot the node:]

ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Create an empty install-db file:]
# touch /a/etc/vx/reconfig.d/state.d/install-db
[Edit /etc/system on the temporary file system and
remove or comment out the following entries:]
# rootdev:/pseudo/vxio@0:0
# set vxio:vol_rootdev_is_volume=1
[Edit /etc/vfstab on the temporary file system:]
Example:
Change from—
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol / ufs 1 no -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
[Unmount the temporary file system, then check the file system:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

[Reboot:]
# reboot
[Update the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0
[Encapsulate the disk:]
# vxinstall
Choose to encapsulate the root disk.
[If a conflict in minor number occurs, reminor the rootdg disk group
:]
# umount /global/.devices/node@nodeid
# vxdg reminor rootdg 100
# shutdown -g0 -i6 -y

Solaris HPUX network command differences



Solaris                   HPUX 10.0/11.0
/usr/sbin/automount       /usr/sbin/automount
/etc/inet/netmasks        /etc/rc.config.d/netconf
/etc/defaultdomain        /etc/rc.config.d/namesvrs
/etc/dfs/dfstab           /etc/exports
/usr/sbin/share           /usr/sbin/exportfs
/usr/sbin/shareall        /usr/sbin/exportfs -a
/etc/resolv.conf          /etc/resolv.conf
/usr/sbin/in.routed       /usr/sbin/gated
/etc/defaultrouter        /etc/rc.config.d/netconf
/usr/sbin/unshare         /usr/sbin/exportfs -u
/usr/sbin/unshareall      /usr/sbin/exportfs -au