Thursday, July 8, 2010

Solaris: SSH and tar combined

to push


tar czf - www.example.com/ | ssh joebloggs@otherserver.com tar xzf - -C ~/


to get

ssh username@from_server "tar czf - directory_to_get" | tar xzvf - -C ~/
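The same pipe mechanics can be exercised locally without ssh to see what each side does; a minimal sketch (the /tmp paths are made up for illustration, not from the original commands):

```shell
# Minimal local demo of the tar pipe (no ssh): the left tar writes an
# archive to stdout, the right tar unpacks it under the -C directory.
# Paths here are illustrative only.
set -e
rm -rf /tmp/tarpipe_src /tmp/tarpipe_dest
mkdir -p /tmp/tarpipe_src/sub /tmp/tarpipe_dest
echo "hello" > /tmp/tarpipe_src/sub/file.txt
( cd /tmp && tar czf - tarpipe_src ) | tar xzf - -C /tmp/tarpipe_dest
cat /tmp/tarpipe_dest/tarpipe_src/sub/file.txt
```

Inserting `ssh host` on either side of the pipe turns this into the push and pull forms above.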

Wednesday, July 7, 2010

Solaris 10: Find new disk and label it

#cat format.cmd
label
quit

#cat list
4B19
5B19
6B19
7B19

#! /bin/sh
# Snapshot the disk list, configure any new devices, then snapshot again.
echo "\n" | format > format.before
cfgadm -al
echo "\n" | format > format.after
# Build labeldisk.sh: one "format -s" line per newly configured disk
# whose name matches an entry in the list file. Note the append (>>) so
# each disk in the list gets its own line.
> labeldisk.sh
for i in `cat list`
do
echo $i
echo "format -s -f format.cmd `grep -i $i format.after | grep -i configured | awk -F: '{print $1}'`" >> labeldisk.sh
done

#sh -x labeldisk.sh
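As a quick sanity check (my addition, not part of the original script), the two format snapshots can be compared to show which disks appeared after cfgadm ran. The snapshot contents below are fabricated so the comparison can be seen end to end:

```shell
# List lines present in format.after but not in format.before, i.e. disks
# that showed up after `cfgadm -al`. The snapshot files here are faked
# for illustration; on a real box they come from the script above.
printf 'c1t0d0\nc1t1d0\n' > format.before
printf 'c1t0d0\nc1t1d0\nc2t4B19d0\n' > format.after
sort format.before > before.sorted
sort format.after  > after.sorted
comm -13 before.sorted after.sorted   # -> c2t4B19d0
```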

Tuesday, March 23, 2010

REDHAT LVM: determine free disk space and move LVM volumes around

Determine free space

# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/sda" of VG "dev" [1.95 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sdb" of VG "sales" [1.95 GB / 1.27 GB free]
pvscan -- ACTIVE PV "/dev/sdc" of VG "ops" [1.95 GB / 564 MB free]
pvscan -- ACTIVE PV "/dev/sdd" of VG "dev" [1.95 GB / 0 free]
pvscan -- ACTIVE PV "/dev/sde" of VG "ops" [1.95 GB / 1.9 GB free]
pvscan -- ACTIVE PV "/dev/sdf" of VG "dev" [1.95 GB / 1.33 GB free]
pvscan -- ACTIVE PV "/dev/sdg1" of VG "ops" [996 MB / 432 MB free]
pvscan -- ACTIVE PV "/dev/sdg2" of VG "dev" [996 MB / 632 MB free]
pvscan -- total: 8 [13.67 GB] / in use: 8 [13.67 GB] / in no VG: 0 [0]
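The pvscan output above comes from the older LVM1 tools; on LVM2 (current Red Hat releases) the same per-PV free-space summary is available more compactly via pvs. A sketch (the column names are standard `pvs -o` field names):

```shell
# LVM2 equivalent of the pvscan summary above: one row per physical
# volume showing its volume group, size and free space in gigabytes.
pvs -o pv_name,vg_name,pv_size,pv_free --units g
```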


We decide to reallocate /dev/sdg1 and /dev/sdg2 to a new volume group, design, so first we have to move their physical extents into the free areas of the other volumes (in this case /dev/sdf for volume group dev and /dev/sde for volume group ops).

Move data off the disks to be used

Some extents on the chosen disks are still in use, so that data has to be moved off onto the other physical volumes first.

Move all the used physical extents from /dev/sdg1 to /dev/sde and from /dev/sdg2 to /dev/sdf

# pvmove /dev/sdg1 /dev/sde
pvmove -- moving physical extents in active volume group "ops"
pvmove -- WARNING: moving of active logical volumes may cause data loss!
pvmove -- do you want to continue? [y/n] y
pvmove -- doing automatic backup of volume group "ops"
pvmove -- 141 extents of physical volume "/dev/sdg1" successfully moved

# pvmove /dev/sdg2 /dev/sdf
pvmove -- moving physical extents in active volume group "dev"
pvmove -- WARNING: moving of active logical volumes may cause data loss!
pvmove -- do you want to continue? [y/n] y
pvmove -- doing automatic backup of volume group "dev"
pvmove -- 91 extents of physical volume "/dev/sdg2" successfully moved
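With the extents moved off, the now-empty disks can be released from their old groups and regrouped. A sketch of that next step (the exact vgreduce/vgcreate commands are my assumption; the original output stops after the pvmove):

```shell
# Release the emptied physical volumes from their current volume groups...
vgreduce ops /dev/sdg1
vgreduce dev /dev/sdg2
# ...then build the new "design" volume group from them.
vgcreate design /dev/sdg1 /dev/sdg2
```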

Make an ext3 file system on the volume

# mkfs.ext3 /dev/datavg01/datalv01
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
19660800 inodes, 39305216 blocks
1965260 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1200 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.



Mount the new volume

# mkdir -p /mnt/design/users
# mount /dev/datavg01/datalv01 /mnt/design/users/


It's also a good idea to add an entry for this file system in your /etc/fstab file as follows:

/dev/datavg01/datalv01 /mnt/design/users ext3 defaults 1 2
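A quick way to confirm the fstab entry is well-formed before the next reboot (my addition, not from the original post):

```shell
# Unmount, then remount everything listed in /etc/fstab; an error from
# mount -a points at a bad fstab line before a reboot can bite you.
umount /mnt/design/users
mount -a
df -h /mnt/design/users
```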

Multi-path Device Mapper for Linux Software - Software Installation

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=115&prodSeriesId=3559651&prodTypeId=18964&objectID=c01475612

Another good Red Hat blog for reference

http://thomasvogt.wordpress.com/2007/12/04/linux-network-bonding/

Friday, March 5, 2010

HPUX download lsof site

http://hpux.connect.org.uk/hppd/hpux/Sysadmin/lsof-4.82/

REDHAT: To Test New Kernel Reboot Once

Edit /boot/grub/grub.conf and change default=0 to default=1. This will boot your old kernel (not the upgraded one); GRUB counts entries from 0, so entry 0 is your new kernel. We will switch to the new kernel later.

Save and exit.

Step 3: Configure server to boot once with the new kernel.

Now we will configure the server to boot the new kernel only once in case it fails. This is the most important step: if it's not done and the new kernel fails, you're out of luck.

Type the following from command line (one at a time):

grub
savedefault --default=0 --once
quit

Step 4 – Reboot

Then reboot the server or computer. When it comes back up run uname -a to make sure it has the new kernel. (Note: if it does not come back up, reboot it again and it will load the old kernel.)

Then switch the grub.conf to boot the new kernel permanently.

nano /boot/grub/grub.conf

change default=1 to default=0

Save and exit. That's it: you have successfully updated your kernel.
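The whole test-boot cycle above can be condensed into a few commands (a sketch; entry numbers assume the new kernel is entry 0, as described):

```shell
# 1. In /boot/grub/grub.conf, make the old kernel the default (default=1).
# 2. Ask GRUB to boot entry 0 (the new kernel) once only:
grub --batch <<EOF
savedefault --default=0 --once
quit
EOF
# 3. Reboot and confirm the running kernel version:
reboot
# ...after the box comes back:
uname -r
# 4. If it looks good, set default=0 permanently in grub.conf.
```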

Tuesday, February 9, 2010

Solaris: How can I use the Solaris "Explorer" program to collect information on my zone(s)?

Explorer 5.0 can be run on Solaris 10 in a global zone. It can be used to collect information on containers (non-global zones) with the -w option.

Solaris: How do I modify the network configuration of a running zone?

global# ifconfig bge0 addif 192.168.200.2 zone myzone
global# ifconfig bge0 removeif 192.168.200.2

Or remove and re-add the interface via zonecfg -z myzone
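The zonecfg route looks roughly like this (a sketch; the address and interface follow the ifconfig example above, and unlike the ifconfig commands the change persists but only takes effect when the zone reboots):

```shell
# Replace the zone's network resource via zonecfg; this survives reboots,
# whereas the ifconfig addif/removeif commands above do not.
zonecfg -z myzone <<EOF
remove net address=192.168.200.2
add net
set address=192.168.200.2
set physical=bge0
end
commit
EOF
```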