If your CRS (Oracle Clusterware) installation is corrupted and you want to re-install only the clusterware without affecting your database, you can use the following steps.
Environment
-----------
Nodes = 2
OS Version = RHEL 5.7
Clusterware Version = 10.2.0.5
DB Version = 10.2.0.5
Steps
------
1. On both nodes, clean up the RAC init scripts
Linux:
rm -rf /etc/oracle/*
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs
cp /etc/inittab.orig /etc/inittab
2. Kill all the crsd, evmd and cssd processes on both nodes using the kill -9 command. Identify them with:
ps -ef | grep crs
ps -ef | grep evmd
ps -ef | grep cssd
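- For example, the matching PIDs can be collected and killed in one pass (a rough sketch; run as root on each node and review the PID list before killing anything):
ps -ef | egrep 'crsd|evmd|cssd' | grep -v grep | awk '{print $2}' | xargs -r kill -9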
3. Remove the files under /var/tmp/.oracle/
rm -rf /var/tmp/.oracle/
4. Remove the /etc/oracle/ocr.loc file
rm -f /etc/oracle/ocr.loc
5. De-install the CRS home using Oracle Universal Installer (OUI)
- You can skip this step if the CRS installation does not appear under "Installed Products" in OUI
- If you can't de-install it, just remove the CRS home directory:
rm -rf /u01/crs/oracle/product/10.2.0/crs
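- If the CRS home is still listed, OUI can be launched from it to do the removal (a sketch assuming the CRS home path used in this note; run it as the oracle software owner):
/u01/crs/oracle/product/10.2.0/crs/oui/bin/runInstaller
Then open "Installed Products", select the CRS home and remove it from there.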
6. Clean out the OCR and voting disks from one node
- Check /etc/sysconfig/rawdevices to identify your OCR and voting disk partitions:
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb8
/dev/raw/raw4 /dev/sdb9
/dev/raw/raw5 /dev/sdb11
- You can use the fdisk command to delete and re-create the OCR and voting disk partitions (see the examples below)
- The partition numbers may change after deleting and re-adding logical partitions inside the extended partition, so note the new numbers and update /etc/sysconfig/rawdevices accordingly (see the sketch after the fdisk examples)
- If the partition numbers have changed, run "oracleasm scandisks" on both nodes to make the ASM disks visible on both nodes again
- Make sure the voting disk and OCR devices have the correct ownership and permissions
To Delete a partition
--------------------
[root@coltdb01 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 48829.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 51.2 GB, 51200917504 bytes
64 heads, 32 sectors/track, 48829 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 239 244720 83 Linux
/dev/sdb2 240 478 244736 83 Linux
/dev/sdb4 479 48829 49511424 5 Extended
/dev/sdb5 10734 20271 9766896 83 Linux
/dev/sdb6 20272 29809 9766896 83 Linux
/dev/sdb7 29810 39347 9766896 83 Linux
/dev/sdb8 479 717 244720 83 Linux
/dev/sdb9 718 956 244720 83 Linux
/dev/sdb10 1196 10733 9766896 83 Linux
/dev/sdb11 957 1195 244720 83 Linux
Partition table entries are not in disk order
Command (m for help): d
Partition number (1-11): 1
To Add a partition
-------------------------
[root@coltdb01 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 48829.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sdb: 51.2 GB, 51200917504 bytes
64 heads, 32 sectors/track, 48829 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 239 244720 83 Linux
/dev/sdb2 240 478 244736 83 Linux
/dev/sdb4 479 48829 49511424 5 Extended
/dev/sdb5 10734 20271 9766896 83 Linux
/dev/sdb6 20272 29809 9766896 83 Linux
/dev/sdb7 29810 39347 9766896 83 Linux
/dev/sdb8 479 717 244720 83 Linux
/dev/sdb9 718 956 244720 83 Linux
/dev/sdb10 1196 10733 9766896 83 Linux
/dev/sdb11 957 1195 244720 83 Linux
Partition table entries are not in disk order
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
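- After recreating the partitions and updating /etc/sysconfig/rawdevices, re-bind the raw devices and reset the ownership and permissions. A rough sketch, run as root; the OCR/voting mapping and the 640/660 permissions below are only an example, so match them to what your original installation used:
service rawdevices restart
chown root:oinstall /dev/raw/raw1 /dev/raw/raw2                   # assumed OCR devices
chmod 640 /dev/raw/raw1 /dev/raw/raw2
chown oracle:oinstall /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5   # assumed voting disks
chmod 660 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5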
7. Check that the virtual IPs (VIPs) are down on both nodes
- If any VIP is still up, bring it down using: ifconfig <device> down
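- For example (eth0:1 is only an illustrative alias name; check the actual one on your nodes):
ifconfig -a            # look for an interface alias still holding a VIP address
ifconfig eth0:1 down   # bring that alias down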
8. Reinstall the CRS from node 1
- After installing Clusterware 10.2.0.1, the services cannot be brought online because the DB and ASM instances are at version 10.2.0.5. You have to apply the 10.2.0.5 patchset to the clusterware to bring all the services online
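- Once the patchset is applied, the clusterware version can be verified (a sketch assuming the CRS home path used earlier):
/u01/crs/oracle/product/10.2.0/crs/bin/crsctl query crs activeversion
/u01/crs/oracle/product/10.2.0/crs/bin/crsctl query crs softwareversion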
9. Run crs_stat -t to check whether all the services and instances are up
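- For example (the database and node names are placeholders; substitute your own):
crs_stat -t
srvctl status database -d <db_name>
srvctl status nodeapps -n <node_name>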