
Thursday 10 March 2011

Scan and Configure New LUNs on Red Hat Linux (RHEL)

To configure newly added LUNs on RHEL:



# ls /sys/class/fc_host
host0  host1  host2  host3

Count the disks before the scan:

# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

Issue a LIP (loop initialization procedure) on each FC host and rescan it:

# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "- - -" > /sys/class/scsi_host/host0/scan

# echo "1" > /sys/class/fc_host/host1/issue_lip
# echo "- - -" > /sys/class/scsi_host/host1/scan

# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan

# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan

Count the SCSI devices and disks again to confirm that the new LUNs appeared:

# cat /proc/scsi/scsi | egrep -i 'Host:' | wc -l
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
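If the number of FC hosts varies from server to server, a small loop avoids hard-coding each host; this is a minimal sketch using the same sysfs paths as above:

# Re-scan every FC host found in sysfs (same writes as the per-host commands above)
for fchost in /sys/class/fc_host/host*; do
    echo "1" > "$fchost/issue_lip"
done
for scsihost in /sys/class/scsi_host/host*; do
    echo "- - -" > "$scsihost/scan"
done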

Alternatively, we can run the rescan-scsi-bus.sh script (shown further below).

To scan new LUNs on a Linux operating system that is using the QLogic driver:

You need to identify the driver's proc file, /proc/scsi/qlaXXX.

For example, on my system it is /proc/scsi/qla2300/0.

Once the file is identified, type the following commands (logged in as root):
 
# echo "scsi-qlascan" > /proc/scsi/qla2300/0
 # cat /proc/scsi/qla2300/0

Now use the rescan-scsi-bus.sh script to pick up the new LUN as a device. Run the script as follows:
 
# ./rescan-scsi-bus.sh -l -w
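Once the script completes, rerunning the disk count from the first section is a quick way to confirm the new LUNs are visible:

# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l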

The output of ls -l /sys/block/*/device should give you an idea about how each device is connected to the system. 

Configuring High Availability Linux Cluster

This document shows how you can set up a two-node, high-availability HTTP cluster with Heartbeat on Linux. Both nodes use the Apache web server to serve the same content.
Pre-Configuration Requirements:
1. Assign hostname host01 to the primary node, with IP address 192.168.0.1 on eth0.
2. Assign hostname host02 to the secondary node, with IP address 192.168.0.2.
Note: on host01:
# uname -n
host01
On host02:
# uname -n
host02
192.160.2.1 is the virtual IP address that will be used for our Apache web server (i.e., Apache will listen on that address).

Configuration:

1. Download and install the heartbeat package. In our case we are using a Red Hat-style Linux, so we will install heartbeat with yum:
yum install heartbeat
or download these packages:
heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08
2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files:
authkeys
ha.cf
haresources
3. Before configuring, there is one more thing to do: copy the template files to the /etc/ha.d directory. In our case we copy them as shown below:

cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
4. Now let's start configuring heartbeat. First we will deal with the authkeys file. We will use authentication method 2 (sha1), so we make the following changes in the authkeys file:
vi /etc/ha.d/authkeys
Then add the following lines:
auth 2
2 sha1 test-ha
Change the permissions of the authkeys file:
# chmod 600 /etc/ha.d/authkeys
5. Moving on to our second file, ha.cf, which is the most important. Edit it with vi:
vi /etc/ha.d/ha.cf
Add the following lines in the ha.cf file:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node host01
node host02
Note: host01 and host02 are the hostnames reported by
# uname -n
6. The final piece of our configuration is the haresources file. This file lists the resources we want to make highly available. In our case that is the web server (httpd):
# vi /etc/ha.d/haresources
Add the following line:
host01 192.160.2.1 httpd
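For reference, Heartbeat's haresources line can also carry an explicit netmask and interface for the virtual IP; a sketch of that form (the /24 prefix and eth0 here are assumptions, not taken from this setup):

host01 192.160.2.1/24/eth0 httpd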
7. Copy the /etc/ha.d/ directory from host01 to host02:
# scp -r /etc/ha.d/ root@host02:/etc/
8. As we want httpd to be highly available, let's configure httpd:
# vi /etc/httpd/conf/httpd.conf
Add this line in httpd.conf:
Listen 192.160.2.1:80
9. Copy the /etc/httpd/conf/httpd.conf file to host02:
# scp /etc/httpd/conf/httpd.conf root@host02:/etc/httpd/conf/
10. Create the file index.html on both nodes (host01 & host02):
On host01:
echo "host01 apache test server" > /var/www/html/index.html
On host02:
echo "host02 apache test server" > /var/www/html/index.html
11. Now start heartbeat on the primary host01 and secondary host02:
/etc/init.d/heartbeat start
12. Open a web browser and go to the URL http://192.160.2.1.
It will show host01 apache test server.
13. Now stop the heartbeat daemon on host01:
# /etc/init.d/heartbeat stop
In your browser, enter the URL http://192.160.2.1 and press Enter.
It will show host02 apache test server.
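To watch the failover as it happens, you can poll the virtual IP from a third machine while stopping heartbeat; a minimal sketch, assuming curl is available there:

# while true; do curl -s http://192.160.2.1/; sleep 2; done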
14. We don't need to create a virtual network interface and assign the IP address (192.160.2.1) to it ourselves. Heartbeat creates the interface and starts the service (httpd) itself.
Don't use the IP addresses 192.168.0.1 and 192.168.0.2 for services. These addresses are used by heartbeat for communication between host01 and host02. If either of them is used for services/resources, it will interfere with heartbeat and the cluster will not work. Be careful!

Configuring Network Bonding in Linux

Bonding is the creation of a single bonded interface by combining two or more Ethernet interfaces. It provides high availability and can improve performance.

Here are the steps for creating a network bond in Fedora Core and Red Hat Linux.

Step 1:

Create the file ifcfg-bond0 with the IP address, netmask and gateway. Shown below is my test bonding configuration file.

$ cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Step 2:

Modify the eth0, eth1 and eth2 configurations as shown below. Comment out or remove the IP address, netmask, gateway and hardware address from each of these files, since those settings should come only from the ifcfg-bond0 file above.

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

$ cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Step 3:

Set the parameters for the bond0 bonding kernel module. Add the following lines to /etc/modprobe.conf:

# bonding commands
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

Note: Here we configured the bonding mode as "balance-alb". All the available modes are listed at the end; choose the mode appropriate to your requirements.
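For comparison, a failover-only setup could use active-backup mode in the same file; a sketch, where the optional primary=eth0 (which pins the preferred slave) is an assumption:

alias bond0 bonding
options bond0 mode=active-backup miimon=100 primary=eth0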

Step 4:

Load the bonding driver module from the command prompt.

$ modprobe bonding

Step 5:

Restart the network, or restart the computer.

$ service network restart

(Or simply restart the computer.)

When the machine boots up, check the proc settings:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.2 (March 23, 2006)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:14:72:80:62:f0

Look at ifconfig -a and check that your bond0 interface is active. You are done!

RHEL bonding supports 7 possible "modes" for bonded interfaces. These modes determine the way in which traffic sent out of the bonded interface is actually dispersed over the real interfaces. Modes 0, 1, and 2 are by far the most commonly used among them.

* Mode 0 (balance-rr)
This mode transmits packets in sequential order from the first available slave through the last. If two real interfaces are slaves in the bond and two packets need to be sent out of the bonded interface, the first will be transmitted on the first slave and the second frame on the second slave. The third packet will be sent on the first, and so on. This provides load balancing and fault tolerance.

* Mode 1 (active-backup)
This mode places one of the interfaces into a backup state and only makes it active if the link is lost by the active interface. Only one slave in the bond is active at a time; a different slave becomes active only when the active slave fails. This mode provides fault tolerance.

* Mode 2 (balance-xor)
Transmits based on an XOR formula: (source MAC address XOR'd with destination MAC address) modulo slave count. This selects the same slave for each destination MAC address and provides load balancing and fault tolerance.

* Mode 3 (broadcast)
This mode transmits everything on all slave interfaces. This mode is least used (only for specific purpose) and provides only fault tolerance.

* Mode 4 (802.3ad)
This mode is known as Dynamic Link Aggregation mode. It creates aggregation groups that share the same speed and duplex settings. This mode requires a switch that supports IEEE 802.3ad dynamic link aggregation.

* Mode 5 (balance-tlb)
This is called adaptive transmit load balancing. Outgoing traffic is distributed according to the current load and queue on each slave interface. Incoming traffic is received by the current slave.

* Mode 6 (balance-alb)
This is the adaptive load balancing mode. It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic. Receive load balancing is achieved by ARP negotiation: the bonding driver intercepts the ARP replies sent by the server on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different clients use different hardware addresses for the server.
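Once the bond is up, the configured mode can be read back from sysfs as a quick check that the module options took effect; it prints the mode name and its number:

$ cat /sys/class/net/bond0/bonding/mode
balance-alb 6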

Unmount filesystem when device is busy

When you unmount a filesystem, you may sometimes get a "device is busy" error. Using the following steps, you can unmount it safely.


# umount  /testsrv1/rman
umount: /testsrv1/rman: device is busy
umount: /testsrv1/rman: device is busy


# fuser -m /testsrv1/rman
/testsrv1/rman:         31477c


# ps -eaf | grep 31477
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh






# df -h /testsrv1/rman
Filesystem            Size  Used Avail Use% Mounted on
testsrv1:/miszpool/mis
                      2.5T  1.9T  560G  78% /testsrv1/rman


# ps -eaf | grep 31477
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh


# ps -eaf | grep 31448
dbauser1 31448 31447  0 09:51 pts/0    00:00:00 -ksh
oracle  31477 31448  0 09:52 pts/0    00:00:00 /bin/ksh


# kill -9 31477
# ps -eaf | grep 31448
dbauser1 31448 31447  0 09:51 pts/0    00:00:00 -ksh


# umount -f /testsrv1/rman


# mount /testsrv1/rman


# df -h /testsrv1/rman
Filesystem            Size  Used Avail Use% Mounted on
testsrv1:/miszpool/mis
                      2.5T  1.9T  560G  78% /testsrv1/rman
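If no blocking process can be killed safely, a lazy unmount is another option: it detaches the filesystem from the directory tree immediately and completes the cleanup once the filesystem is no longer busy.

# umount -l /testsrv1/rman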

Moving volume group to another Server in Linux

Moving a VG to another server:

To do this we use the vgexport and vgimport commands.

vgexport and vgimport are not strictly necessary to move disk drives from one server to another. They are an administrative policy tool to prevent access to the volumes during the time it takes to move them.

1. Unmount the filesystem
First, make sure that no users are accessing files on the active volume, then unmount it:

# umount /appdata

2. Mark the volume group inactive
Marking the volume group inactive removes it from the kernel and prevents any further activity on it.

# vgchange -an appvg
vgchange -- volume group "appvg" successfully deactivated



3. Export the volume group

You must now export the volume group. This prevents it from being accessed on the old server and prepares it to be removed.

# vgexport appvg
vgexport -- volume group "appvg" successfully exported

Now, when the machine is next shut down, the disk can be unplugged and then connected to its new machine.

4. Import the volume group

When the disk is plugged into the new server, it may appear as, for example, /dev/sdc (the exact name depends on the system).

An initial pvscan then shows:

# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/sdc1" is in EXPORTED VG "appvg" [996 MB / 996 MB free]
pvscan -- inactive PV "/dev/sdc2" is in EXPORTED VG "appvg" [996 MB / 244 MB free]
pvscan -- total: 2 [1.95 GB] / in use: 2 [1.95 GB] / in no VG: 0 [0]

We can now import the volume group (which also activates it) and mount the file system.

If you are importing on an LVM 2 system, run:

# vgimport appvg
Volume group "appvg" successfully imported

5. Activate the volume group

You must activate the volume group before you can access it.

# vgchange -ay appvg

Mount the file system

# mkdir -p /appdata
# mount /dev/appvg/appdata /appdata

The file system is now available for use.
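If the filesystem should come back automatically at boot, an /etc/fstab entry can be added on the new server; a sketch, where the ext3 filesystem type is an assumption for this example:

/dev/appvg/appdata  /appdata  ext3  defaults  1 2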

Installing EMC PowerPath Keys

This describes how to configure the EMC PowerPath registration keys.

First, check the current configuration of PowerPath:
# powermt config
Warning: all licenses for storage systems support are missing or expired.
Then install the keys:
# emcpreg -install
=========== EMC PowerPath Registration ===========
Do you have a new registration key or keys to enter?[n] y
Enter the registration keys(s) for your product(s),
one per line, pressing Enter after each key.
After typing all keys, press Enter again.
Key (Enter if done): P6BV-4KDB-QET6-RF9A-QV9D-MN3V
1 key(s) successfully added.
Key successfully installed.
Key (Enter if done):
1 key(s) successfully registered.
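To verify the result, PowerPath can report its registration state; if memory serves, the command for this is:

# powermt check_registration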

Configuring proxy settings in Linux

If your Linux box is behind a proxy server, proxy configuration is required to access the internet. The 'http_proxy' and 'ftp_proxy' environment variables hold the information about the proxy server. Just execute the following commands:

$ export http_proxy=http://proxy-server-ip:port
$ export ftp_proxy=http://proxy-server-ip:port

Now wget, yum, apt-get, etc. can use these variables to access http(s) and ftp URLs outside the office LAN. If the proxy server needs authentication credentials, pass them as shown below:

$ export http_proxy=http://user:password@proxy-server-ip:port
$ export ftp_proxy=http://user:password@proxy-server-ip:port

To avoid executing these commands every time you log in or reboot the machine, add them to your ~/.bashrc file.
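A minimal ~/.bashrc addition might look like the following; the no_proxy line, which keeps local addresses off the proxy, is an extra I have added here:

export http_proxy=http://proxy-server-ip:port
export ftp_proxy=http://proxy-server-ip:port
export no_proxy=localhost,127.0.0.1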


Common NFS errors & solutions


1."Server Not Responding" Message
2.  "Access Denied" Message
3."Permission Denied" Message
4.  "Device Busy" Message

Error 1: If You Receive an NFS "Server Not Responding" Message

Ping the NFS server from the client:

1. ping <nfs server name or IP>

2. /usr/bin/rpcinfo -p servername

The rpcinfo command should display the following processes:

    * portmap
    * nfs
    * mountd
    * status
    * nlockmgr
    * llockmgr

If any of these processes is not running, follow the steps below:


a. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines:

NFS_SERVER=1
START_MOUNTD=1

b. Make sure that the /etc/inetd.conf file on the NFS server does not contain a line to start rpc.mountd. If it does, make sure the START_MOUNTD variable in /etc/rc.config.d/nfsconf is set to 0.

c. Issue the following command on the NFS server to start all the necessary NFS processes:

# /sbin/init.d/nfs.server start

Error 2: If You Receive an "Access Denied" Message

a. Check whether the filesystem is exported:

# /usr/sbin/showmount -e server_name

(If it is not exported, edit the /etc/exports file on the NFS server, add the necessary entry, and then run
/usr/sbin/exportfs -a)
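On a Linux NFS server, an /etc/exports entry might look like the sketch below (the path and client name are placeholders; note that HP-UX uses a different exports syntax, e.g. -access= options):

/appdata client01(rw,sync)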

Error 3: If You Receive a "Permission Denied" Message

a. Check the mount options in the /etc/fstab file on the NFS client. A directory you are attempting to write to may have been mounted read-only.

b. Issue the ls -l command to check the HP-UX permissions on the server directory and on the client directory that is the mount point. You may not be allowed access to the directory.

c. Issue the following command on the NFS server:

/usr/sbin/exportfs

Or, issue the following command on the NFS client:

/usr/sbin/showmount -e server_name

d. Check the export permissions on the exported directory. The directory may have been exported read-only to your client. The system administrator of the NFS server can use the remount mount option to mount the directory read/write without unmounting it.

Error 4 : If You Receive a "Device Busy" Message

a. If you received the "device busy" message while attempting to mount a directory, try to access the mounted directory. If you can access it, then it is already mounted.

b. If you received the "device busy" message while attempting to unmount a directory, a user or process is currently using the directory. Wait until the process completes, or follow these steps:

 1. Issue the following command to determine who is using the mounted directory:

       /usr/sbin/fuser -cu local_mount_point

   The fuser(1M) command will return a list of process IDs and user names that are currently using the directory
   mounted under local_mount_point. This will help you decide whether to kill the processes or wait for them to complete.

 2. To kill all processes using the mounted directory, issue the following command:

            /usr/sbin/fuser -ck local_mount_point

 3. Try again to unmount the directory.


Override LVM Quorum Check in Linux

Normally, volume groups are automatically activated during system startup. Unless you intentionally deactivate a volume group using vgchange, you will probably not need to reactivate a volume group.


However, LVM does require that a "quorum" of disks in a volume group be available. During normal system operation, LVM needs a quorum of more than half of the disks in a volume group for activation. 


If, during run time, a disk fails and causes quorum to be lost, LVM alerts you with a message to the console, but keeps the volume group active.


If there is no other way to make a quorum available, the -q option to the vgchange command will override the quorum check.


EXAMPLE:


vgchange -a y -q n /dev/vg01


You should attempt to return the disabled disks to the volume group as soon as possible. When you return a disk to service that was not online when you originally activated the volume group, use the activation command again to attach the now accessible disks to the volume group.
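For example, once the failed disks are visible again, rerunning the normal activation command attaches them, this time without overriding the quorum check:

# vgchange -a y /dev/vg01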


Configuring Samba Server in Linux

Samba (an SMB server) is a file sharing server used to share files between Windows, Linux and Unix systems. SMB (Server Message Block) is a proprietary protocol developed by Microsoft.


This article explains how to install and configure SAMBA in Linux.


Step 1: Create a directory where you want to keep the data shared with other remote systems (Windows/Linux/UNIX).
#mkdir /sales


Step 2 : Installing Samba server
#yum install samba*


Step 3 : Now we have to configure the Samba server. Edit the main configuration file (/etc/samba/smb.conf) and specify the workgroup this server belongs to.


#vi /etc/samba/smb.conf


Search for the workgroup keyword and specify your workgroup name:


workgroup = SALES-DEPT


Why do we require this workgroup?
When Windows users try to access any network resource, they first open My Network Places, then search for the workgroup, and then for the server. So we definitely have to specify this workgroup entry in the smb.conf file.


Now we have to give a name to this Samba server. Search for "server string" (without the quotes), then provide the Samba server name (here the name is sales-share):


server string = sales-share


Now specify the share details: which folder you want to share, and with whom. Go to the last line of the smb.conf file and specify your shared folder details as follows:


[sales-dir]
comment = "This is the Sales data which is shared with my Windows users"
path = /sales
valid users = user1 user2
writable = no
public = no
browsable = yes


After adding these seven entries, just save and exit the file.


Let me explain everything we used here.
a. [sales-dir] -- This is the share name; whenever a user accesses the Samba server over the network, this is the name of the shared folder they will see.
b. comment -- This is just a comment, which helps explain what the share is for.
c. path -- This specifies which folder on the Samba machine to share.
d. valid users -- This specifies which users have access to this folder.
e. writable -- This specifies whether users are able to write; in this example they can only read the folder and copy from it.
f. public -- This indicates whether the folder is open to guests or kept private (here it is private).
g. browsable -- This specifies whether the share is visible when browsing the server.


You can give specific users read-only or write access using the "read list" and "write list" keywords:


example: write list = user1, user3
              read list = user2, user4


Step 4 : Now create passwords for the users who are going to access this Samba share remotely.
#smbpasswd -a user1
#smbpasswd -a user2


Generate the passwords for these two users; the passwords will be stored in /etc/samba/smbpasswd.
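To confirm a user can actually reach the share, smbclient can be pointed at it locally; a quick sketch using the share and user defined above:

#smbclient //localhost/sales-dir -U user1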


Step 5 : Check the syntax of your smb.conf file, in case you made any mistakes:
#testparm


Step 6 : Restart the samba service
#service smb restart


Step 7 : Turn the smb service on permanently, so that our server keeps running even after the system is rebooted:
#chkconfig smb on


Step 8 : Change the SELinux security context for the shared folder:
#chcon -Rt samba_share_t /sales


That's all. You are done configuring the Samba server.