
Thursday, 10 March 2011

Scan and Configure New LUNs on Red Hat Linux (RHEL)

To scan and configure newly added LUNs on RHEL:



# ls /sys/class/fc_host
 host0  host1  host2  host3

Count the disks before the scan:

# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

Issue a LIP and rescan each HBA:

# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "1" > /sys/class/fc_host/host1/issue_lip
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan

Count the SCSI devices and disks again to confirm the new LUNs appeared:

# cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
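
Typing the same pair of commands for every HBA gets tedious on hosts with many ports. A minimal sketch that loops over whatever FC hosts exist (it simply automates the per-host commands above; skip the issue_lip step if your storage vendor advises against forcing a LIP):

for host in /sys/class/fc_host/host*; do
    h=$(basename "$host")
    echo "1" > "/sys/class/fc_host/$h/issue_lip"     # force a loop initialization (LIP) on the HBA
    echo "- - -" > "/sys/class/scsi_host/$h/scan"    # rescan all channels, targets and LUNs
done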

Alternatively, we can run the rescan-scsi-bus.sh script (shipped in the sg3_utils package).

To scan new LUNs on a Linux system that uses the QLogic driver:

First, find the driver's proc file under /proc/scsi/qlaXXX.

For example, on my system it is /proc/scsi/qla2300/0.

Once the file is identified, run the following commands (logged in as root):
 
# echo "scsi-qlascan" > /proc/scsi/qla2300/0
 # cat /proc/scsi/qla2300/0

Now use the rescan-scsi-bus.sh script to register the new LUN as a device. Run the script as follows:
 
# ./rescan-scsi-bus.sh -l -w

The output of ls -l /sys/block/*/device should give you an idea about how each device is connected to the system. 
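
For example, those device symlinks can be turned into a quick map of disk name to SCSI host:channel:target:LUN address. A small sketch (standard sysfs layout; only sd* disks are listed):

for dev in /sys/block/sd*; do
    # the "device" symlink resolves to a path ending in the SCSI address, e.g. .../2:0:0:17
    addr=$(basename "$(readlink -f "$dev/device")")
    echo "$(basename "$dev")  ->  $addr"
done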

Configuring High Availability Linux Cluster

This document shows how to set up a two-node, high-availability HTTP cluster with Heartbeat on Linux. Both nodes use the Apache web server to serve the same content.
Pre-Configuration Requirements:
1. Assign the hostname host01 to the primary node, with IP address 192.168.0.1 on eth0.
2. Assign the hostname host02 to the secondary node, with IP address 192.168.0.2.
Note: on host01
# uname -n
host01
On host02
# uname -n
host02
192.160.2.1 is the virtual IP address that will be used for our Apache webserver (i.e., Apache will listen on that address).

Configuration:

1. Download and install the heartbeat package. In our case we are on a Red Hat-based distribution, so we will install heartbeat with yum:
yum install heartbeat
or download these packages:
heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08
2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files:
authkeys
ha.cf
haresources
3. Before configuring anything, copy the sample files to the /etc/ha.d directory. In our case we copy them as shown below:

cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
4. Now let's start configuring heartbeat. First we will deal with the authkeys file; we will use authentication method 2 (sha1). For this, edit the authkeys file:
vi /etc/ha.d/authkeys
Then add the following lines:
auth 2
2 sha1 test-ha
Change the permission of the authkeys file:
# chmod 600 /etc/ha.d/authkeys
5. Moving on to our second file (ha.cf), which is the most important. Edit the ha.cf file with vi:
vi /etc/ha.d/ha.cf
Add the following lines in the ha.cf file:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node host01
node host02
Note: host01 and host02 are the hostnames reported by
# uname -n
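For reference, here are the same ha.cf directives again with a short note on what each one does (the values are the ones used above):

logfile /var/log/ha-log   # where heartbeat writes its log
logfacility local0        # syslog facility to use
keepalive 2               # seconds between heartbeat packets
deadtime 30               # seconds of silence before a node is declared dead
initdead 120              # extra grace period at startup, covering slow boots
bcast eth0                # interface used to broadcast heartbeats
udpport 694               # UDP port used for heartbeat traffic
auto_failback on          # move resources back to the primary when it returns
node host01               # cluster members, exactly as reported by uname -n
node host02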
6. The final piece of our configuration is to edit the haresources file. This file lists the resources that we want to make highly available. In our case we want the web server (httpd) highly available:
# vi /etc/ha.d/haresources
Add the following line:
host01 192.160.2.1 httpd
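The haresources line follows the pattern "preferred-node virtual-IP resource(s)", so the entry above means: host01 is the preferred owner, heartbeat brings up 192.160.2.1, and then starts the httpd init script. A sketch of the general form:

# <preferred node>   <virtual IP>    <resource script(s) from /etc/init.d or /etc/ha.d/resource.d>
host01               192.160.2.1     httpd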
7. Copy the /etc/ha.d/ directory from host01 to host02:
# scp -r /etc/ha.d/ root@host02:/etc/
8. Since we want httpd to be highly available, let's configure httpd:
# vi /etc/httpd/conf/httpd.conf
Add this line in httpd.conf:
Listen 192.160.2.1:80
9. Copy the /etc/httpd/conf/httpd.conf file to host02:
# scp /etc/httpd/conf/httpd.conf root@host02:/etc/httpd/conf/
10. Create the file index.html on both nodes (host01 & host02):
On host01:
echo "host01 apache test server" > /var/www/html/index.html
On host02:
echo "host02 apache test server" > /var/www/html/index.html
11. Now start heartbeat on the primary host01 and secondary host02:
/etc/init.d/heartbeat start
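Once heartbeat is running, the active node should own the virtual IP and be serving httpd. A quick check on the active node (a sketch; the address and interface are the ones used in this example):

# ip addr show eth0 | grep 192.160.2.1
# /etc/init.d/httpd status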
12. Open a web browser and go to http://192.160.2.1.
It will show host01 apache test server.
13. Now stop the heartbeat daemon on host01:
# /etc/init.d/heartbeat stop
In your browser type in the URL http://192.160.2.1 and press enter.
It will show host02 apache test server.
14. We don't need to create a virtual network interface and assign an IP address (192.160.2.1) to it. Heartbeat will create this and start the service (httpd) itself.
Don't use the IP addresses 192.168.0.1 and 192.168.0.2 for services. These addresses are used by heartbeat for communication between host01 and host02. If either of them is used for services/resources, it will interfere with heartbeat and the cluster will not work. Be careful!

Configuring Network Bonding in Linux

Bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. It provides high availability and can improve performance.

Here are the steps for creating a bonded interface on Fedora Core and Red Hat Linux.

Step 1:

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding configuration file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Step 2:

Modify the eth0, eth1 and eth2 configuration as shown below. Comment out or remove the IP address, netmask, gateway and hardware address from each of these files, since these settings should come only from the ifcfg-bond0 file above.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Step 3:

Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf:

# bonding commands
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Note: Here we configured the bonding mode as "balance-alb". All of the available modes are listed at the end of this section; choose the mode appropriate to your requirements.

Step 4:

Load the bond driver module from the command prompt.

$ modprobe bonding

Step 5:

Restart the network, or restart the computer.

$ service network restart

When the machine boots up check the proc settings.

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:14:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done!

RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.

* Mode 0 (balance-rr)
This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets arrive destined out of the bonded interface, the first will be transmitted on the first slave and the second on the second slave. The third packet will be sent on the first again, and so on. This provides load balancing and fault tolerance.

* Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and only makes it active if the link is lost by the active interface. Only one slave in the bond is active at any given time; a different slave becomes active only when the active slave fails. This mode provides fault tolerance.

* Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.

* Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. It is the least used mode (only for specific purposes) and provides only fault tolerance.

* Mode 4 (802.3ad)
This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.

* Mode 5 (balance-tlb)
This is called adaptive transmit load balancing. The outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.

* Mode 6 (balance-alb)
This is adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. The receive load balancing is achieved by ARP negotiation: the bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.
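
Whichever mode you pick, the bonding driver exposes its runtime state through sysfs, which is handy for confirming the configuration took effect (a quick check; paths are the standard bonding sysfs entries):

$ cat /sys/class/net/bond0/bonding/mode
$ cat /sys/class/net/bond0/bonding/slaves
$ cat /sys/class/net/bond0/bonding/active_slave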

Unmount filesystem when device is busy

When you unmount a filesystem, you may sometimes get a "device is busy" error. Using the following steps, you can unmount it safely.


# umount  /testsrv1/rman
umount: /testsrv1/rman: device is busy
umount: /testsrv1/rman: device is busy


# fuser -m /testsrv1/rman
/testsrv1/rman:         31477c

The trailing "c" means that process 31477 is using the mount point as its current directory. Find out what it is and whether it can be stopped:


# ps -eaf | grep 31477
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh






# df -h /testsrv1/rman
Filesystem            Size  Used Avail Use% Mounted on
testsrv1:/miszpool/mis
                      2.5T  1.9T  560G  78% /testsrv1/rman


# ps -eaf | grep 31477
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh


# ps -eaf | grep 31448
dbauser1 31448 31447  0 09:51 pts/0    00:00:00 -ksh
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh


# kill -9 31477
# ps -eaf | grep 31448
dbauser1 31448 31447  0 09:51 pts/0    00:00:00 -ksh


# umount -f /testsrv1/rman


# mount /testsrv1/rman


# df -h /testsrv1/rman
Filesystem            Size  Used Avail Use% Mounted on
testsrv1:/miszpool/mis
                      2.5T  1.9T  560G  78% /testsrv1/rman
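
If every process holding the filesystem can be killed safely, fuser can do the lookup and the kill in one step (use with care; this kills all processes using the mount point):

# fuser -kvm /testsrv1/rman
# umount /testsrv1/rman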

Moving volume group to another Server in Linux

Moving a VG to another server:

To do this we use the vgexport and vgimport commands.

vgexport and vgimport are not strictly necessary to move disk drives from one server to another; they are administrative policy tools that prevent access to the volumes during the time it takes to move them.

1. Unmount the file system
First, make sure that no users are accessing files on the active volume, then unmount it:

# umount /appdata

2. Mark the volume group inactive
Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an appvg
vgchange -- volume group "appvg" successfully deactivate



3. Export the volume group

Now export the volume group. This prevents it from being accessed on the old server and prepares it to be removed.

# vgexport appvg
vgexport -- volume group "appvg" successfully exported

Now, when the machine is next shut down, the disk can be unplugged and then connected to its new machine.

4. Import the volume group

When the disk is plugged into the new server, it may show up as /dev/sdc (the exact device name depends on the system).

An initial pvscan shows:

# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdc1" is in EXPORTED VG "appvg" [996 MB / 996 MB free]
pvscan -- inactive PV "/dev/sdc2" is in EXPORTED VG "appvg" [996 MB / 244 MB free]
pvscan -- total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]

We can now import the volume group and then mount the file system.

If you are importing on an LVM 2 system, run:

# vgimport appvg
Volume group "appvg" successfully imported

5. Activate the volume group

You must activate the volume group before you can access it.

# vgchange -ay appvg

6. Mount the file system

# mkdir -p /appdata
# mount /dev/appvg/appdata /appdata

The file system is now available for use.
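
Putting it all together, the whole move is only a handful of commands. A condensed, script-style recap of the procedure above (volume group, mount point and device names follow this example):

# --- on the old server ---
umount /appdata              # stop access to the filesystem
vgchange -an appvg           # deactivate the volume group
vgexport appvg               # mark it exported; the disks can now be detached

# --- move the disks, then on the new server ---
pvscan                       # the PVs show up as exported members of appvg
vgimport appvg               # import the volume group
vgchange -ay appvg           # activate it
mkdir -p /appdata
mount /dev/appvg/appdata /appdata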

Installing EMC PowerPath Keys

This describes how to configure the EMC PowerPath registration keys.

First, check the current configuration of PowerPath:
# powermt config
Warning: all licenses for storage systems support are missing or expired.
Then install the keys:
# emcpreg -install
=========== EMC PowerPath Registration ===========
Do you have a new registration key or keys to enter?[n] y
Enter the registration keys(s) for your product(s),
one per line, pressing Enter after each key.
After typing all keys, press Enter again.
Key (Enter if done): P6BV-4KDB-QET6-RF9A-QV9D-MN3V
1 key(s) successfully added.
Key successfully installed.
Key (Enter if done):
1 key(s) successfully registered.
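
To confirm the key is in place, list the installed licenses and re-run the configuration; the earlier warning about missing licenses should no longer appear:

# emcpreg -list
# powermt config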