Friday 29 June 2012

Steps To Migrate a Database From Non-ASM to ASM


You can use these steps to migrate a non-ASM database to ASM in Oracle 11gR2.

Prerequisite - an ASM instance should be created and up and running. Please refer to my previous article on creating an ASM instance.

1) Check the database version and status

   SQL> select INSTANCE_NAME,VERSION,DATABASE_STATUS from v$instance;

    INSTANCE_NAME    VERSION           DATABASE_STATUS
    ---------------- ----------------- -----------------
    learndb          11.2.0.3.0        ACTIVE

2) Verify that the ASM disk we created earlier is visible from this instance

   SQL> select NAME,STATE,TOTAL_MB,PATH from v$asm_disk;
   NAME                           STATE      TOTAL_MB PATH
   ------------------------------ -------- ---------- ----------
   DISK1                          NORMAL         4777 ORCL:DISK1


Migrating the SPFILE from Non-ASM to ASM


1) If you are using an SPFILE, take a backup of it using RMAN so it can be restored to the ASM disk

   RMAN>BACKUP AS BACKUPSET SPFILE;

2) Now shut down the database and start it in MOUNT state
   SQL> Shutdown immediate

   SQL> STARTUP MOUNT;

3) Now restore the spfile from the backup to ASM disk as below

  RMAN>RESTORE SPFILE TO '+DATA/spfilelearndb.ora';

4) Notice that the "spfile" parameter still points to the file system instead of the ASM disk

   SQL> show parameter spfile

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    spfile                               string      /opt/oracle/product/11.2.0/dbh
                                                        ome_1/dbs/spfilelearndb.ora
5) Shut down the instance

    SQL> shutdown immediate

6) Delete spfile<sid>.ora and init<sid>.ora from the $ORACLE_HOME/dbs directory, create a new init<sid>.ora with the following single line of content, and start the instance:

    vi initlearndb.ora
   SPFILE='+DATA/spfilelearndb.ora'

   SQL> STARTUP   ( the instance first searches for spfile<sid>.ora, which we deleted, and then falls back to init<sid>.ora, which we modified with the above content )

   SQL> show parameter spfile

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    spfile                               string      +DATA/spfilelearndb.ora
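The one-line pointer file from step 6 can also be generated with a small script. This is a sketch: the SID and disk group names are the ones used in this post, so adjust them for your environment.

```shell
# Sketch: recreate the init<SID>.ora pointer file that redirects the
# instance to the SPFILE stored in ASM. SID and disk group are the
# values used in this post -- adjust for your environment.
SID=learndb
DG=+DATA
PFILE=./init${SID}.ora   # normally $ORACLE_HOME/dbs/init${SID}.ora

printf "SPFILE='%s/spfile%s.ora'\n" "$DG" "$SID" > "$PFILE"
cat "$PFILE"
```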


Migrating Control files from Non-ASM to ASM


1) These are the current control files on the file system (non-ASM)

SQL> show parameter control_files
         control_files                        string      /u01/data/learndb/control01.ctl,
                                                                 /u01/data/learndb/control02.ctl

2) Start the instance in nomount state

SQL> startup nomount

3) You can move these control files to ASM using RMAN

    [oracle@coltdb04 ~]$ rman target /

    Recovery Manager: Release 11.2.0.3.0 - Production on Fri Jun 29 03:04:39 2012

    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

    connected to target database: LEARNDB (not mounted)


    RMAN>restore controlfile to '+DATA' from '/u01/data/learndb/control01.ctl';


4) You can check whether the control file was created, using asmcmd

   ASMCMD> find -t CONTROLFILE +DATA *
   WARNING:option 't' is deprecated for 'find'
   please use 'type'

   +DATA/LEARNDB/CONTROLFILE/current.256.787200593

5) Update the spfile with new control file location

   SQL> alter system set control_files='+DATA/LEARNDB/CONTROLFILE/current.256.787200593' scope=spfile;

6) Now shut down the instance, start it in MOUNT state, and check the control file location
 
   SQL> shutdown immediate

   SQL> startup mount

   SQL> show parameter control_files

       NAME                                 TYPE        VALUE
     ------------------------------------ ----------- ------------------------------
     control_files                        string      +DATA/learndb/controlfile/curr
                                                      ent.256.787200593


Migrating the Datafiles from Non-ASM to ASM

1) Keep the database in MOUNT state

2) Use RMAN "BACKUP AS COPY" method to migrate the datafiles from filesystem to ASM

   [oracle@coltdb04 ~]$ rman target /

   Recovery Manager: Release 11.2.0.3.0 - Production on Fri Jun 29 03:00:40 2012

   Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

   connected to target database: LEARNDB (DBID=3704858930, not open)
                                           

   RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';

3)  Switch the database to the copy created using the following RMAN command

   RMAN> SWITCH DATABASE TO COPY;

   datafile 1 switched to datafile copy "+DATA/learndb/datafile/system.258.787201633"
   datafile 2 switched to datafile copy "+DATA/learndb/datafile/sysaux.257.787201553"
   datafile 3 switched to datafile copy "+DATA/learndb/datafile/undotbs1.259.787201713"
   datafile 4 switched to datafile copy "+DATA/learndb/datafile/users.261.787201725"

4) Migrate the tempfile to ASM using RMAN

   RMAN> run
   2> {
   3> set newname for tempfile '/u01/data/learndb/temp01.dbf' to '+DATA';
   4> switch tempfile all;
   5> }

   executing command: SET NEWNAME

   using target database control file instead of recovery catalog

   renamed tempfile 1 to +DATA in control file

5) Now open the Database

   RMAN> ALTER DATABASE OPEN;

   database opened

6) You can now check the datafiles created in ASM
 
   SQL> select FILE_NAME from dba_data_files;

   FILE_NAME
   --------------------------------------------------------------------------------
   +DATA/learndb/datafile/users.261.787201725
   +DATA/learndb/datafile/undotbs1.259.787201713
   +DATA/learndb/datafile/sysaux.257.787201553
   +DATA/learndb/datafile/system.258.787201633


Migrating the Redo log files from Non-ASM to ASM

1) Identify the currently available redo log files using the following command

   SQL> SELECT a.group#, b.member, a.status FROM v$log a, v$logfile b WHERE a.group#=b.group#;

    GROUP# MEMBER                       STATUS
    ------ ---------------------------- --------
         3 /u01/data/learndb/redo03.log INACTIVE
         2 /u01/data/learndb/redo02.log CURRENT
         1 /u01/data/learndb/redo01.log INACTIVE

2) Add new log file members in ASM using the following commands

   ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 1; 
   ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 2;
   ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 3;

3) Now drop the old log files. A redo log member can be dropped only after it has been archived and its group is INACTIVE. If needed, switch the logfile multiple times until the member is ready to be dropped.
   ALTER DATABASE DROP LOGFILE MEMBER '/u01/data/learndb/redo01.log';
   ALTER SYSTEM SWITCH LOGFILE;
   ALTER DATABASE DROP LOGFILE MEMBER '/u01/data/learndb/redo02.log';
   ALTER SYSTEM SWITCH LOGFILE;
   ALTER DATABASE DROP LOGFILE MEMBER '/u01/data/learndb/redo03.log';
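The add/switch/drop sequence above can be generated with a small loop rather than typed by hand. A sketch: it only prints the SQL shown in steps 2 and 3 (to be run in SQL*Plus); the paths and group numbers are from this example.

```shell
# Sketch: emit the ALTER statements used in steps 2 and 3. The output
# is plain SQL to run in SQL*Plus; nothing is executed here.
DG=+DATA
for g in 1 2 3; do
  echo "ALTER DATABASE ADD LOGFILE MEMBER '$DG' TO GROUP $g;"
done
for f in /u01/data/learndb/redo01.log /u01/data/learndb/redo02.log; do
  echo "ALTER DATABASE DROP LOGFILE MEMBER '$f';"
  echo "ALTER SYSTEM SWITCH LOGFILE;"
done
echo "ALTER DATABASE DROP LOGFILE MEMBER '/u01/data/learndb/redo03.log';"
```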

4) You can now check the log files created in ASM

     GROUP# MEMBER                                        STATUS
     ------ --------------------------------------------- --------
          1 +DATA/learndb/onlinelog/group_1.264.787205393 CURRENT
          2 +DATA/learndb/onlinelog/group_2.265.787205405 INACTIVE
          3 +DATA/learndb/onlinelog/group_3.266.787205417 INACTIVE

Your database is now migrated to ASM.

Thursday 28 June 2012

Creating Standalone ASM instance in Oracle 11gR2 using "Oracle Grid Infrastructure"



Creating ASM disk using Oracle ASMLib


1) Download Oracle ASMLib from the following site ==> 
   http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

   Identify your kernel version first, so that you download the correct driver for your kernel.

   Use the following command to identify the kernel version

   [root@coltdb04 ~]# uname -r
   2.6.18-274.el5

   In my case it is a 64-bit RHEL OS, so I downloaded the following RPMs to install Oracle ASMLib

            oracleasm-support-2.1.7-1.el5.x86_64.rpm
            oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm
            oracleasmlib-2.0.4-1.el5.x86_64.rpm

2) The RPMs should be installed in this order; otherwise you will get dependency errors

rpm -Uvh oracleasm-support-2.1.7-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm
rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
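Note that the kernel-driver package name must match `uname -r` exactly. Composing the expected file name makes this easy to check; a sketch, with the versions hard-coded to the ones used in this post:

```shell
# Sketch: build the expected kernel-driver RPM name from the kernel
# version. KVER is hard-coded to the `uname -r` value from this post;
# DRIVER_VER and ARCH match the packages downloaded above.
KVER=2.6.18-274.el5
DRIVER_VER=2.0.5-1.el5
ARCH=x86_64
echo "oracleasm-${KVER}-${DRIVER_VER}.${ARCH}.rpm"
```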

3) Configure Oracle ASM Library driver

[root@coltdb04 init.d]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

4) You can check the Oracle ASM status
[root@coltdb04 init.d]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

5) Creating a new Partition for ASM disk

[root@coltdb04 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

  Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
  e   extended
    p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-652, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-652, default 652): +5000M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@coltdb04 ~]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          38      305203+  83  Linux
/dev/sda2              39        3405    27045427+  83  Linux
/dev/sda3            3406        3916     4104607+  82  Linux swap / Solaris

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         609     4891761   83  Linux

6) To create an ASM disk using Oracle ASMLib 
[root@coltdb04 ~]# oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done

7) To list and check the Oracle ASM disks
[root@coltdb04 ~]# oracleasm listdisks
DISK1
[root@coltdb04 ~]# oracleasm querydisk DISK1
Disk "DISK1" is a valid ASM disk



Creating Oracle Standalone ASM instance on Oracle 11gR2 (11.2.0.3)


1) Once the Oracle ASM disk is created, we have to create the ASM instance using Oracle Grid Infrastructure.
   
   In Oracle 11gR2, the ASM instance is installed into the new "Grid Infrastructure home".
   
   Also, unlike in previous releases, we don't need to run "localconfig add" to configure CSS before creating the ASM instance.

2) Download the Grid Infrastructure software from Oracle support site and install it using ./runInstaller

3) Select "Configure Oracle Grid Infrastructure for a Standalone Server"




4) As you can see, the ASM disk DISK1, which we created earlier, is visible here. Provide the disk group name (DATA) and the redundancy level as required. 


5) In Oracle 11gR2 there is a new "SYSASM" privilege to manage the ASM instance. Provide a password for it.


6) Once all the prerequisites are met, you will reach the "install" screen


7) At the end, execute "root.sh" as the root user to complete the installation.


8) Verify that the ASM instance is created and the disk group is mounted
   
        [oracle@coltdb04 grid]$ sqlplus

        SQL*Plus: Release 11.2.0.3.0 Production on Thu Jun 28 02:58:10 2012

        Copyright (c) 1982, 2011, Oracle.  All rights reserved.

        Enter user-name: / as sysasm

       Connected to:
       Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
       With the Automatic Storage Management option


       SQL> select INSTANCE_NAME,VERSION,STATUS from v$instance;

        INSTANCE_NAME    VERSION           STATUS
        ---------------- ----------------- ------------
        +ASM             11.2.0.3.0        STARTED




       SQL> select NAME,STATE,TOTAL_MB,FREE_MB from V$ASM_DISKGROUP;

       NAME                           STATE       TOTAL_MB    FREE_MB
       ------------------------------ ----------- ---------- ----------
       DATA                           MOUNTED           4777       4718








Tuesday 26 June 2012

Oracle 10g RAC Fast Application Notification - Server side callouts


On the database tier, you can implement FAN server-side callouts.


FAN events are generated in response to a state change, which can include the following:
• Starting or stopping the database
• Starting or stopping an instance
• Starting or stopping a service
• A node leaving the cluster


You can also implement server-side callouts, which can be configured to perform actions such as logging fault tickets or notifying the database administrator.

When a state change occurs, the RAC HA framework posts a FAN event to ONS immediately. When a node receives an event through ONS, it will asynchronously execute all executables in the
server-side callouts directory, which is $ORA_CRS_HOME/racg/usrco. Server-side callouts must be stored in this directory; otherwise, they will not be executed.

Create the following callout sample script as Oracle user in $ORA_CRS_HOME/racg/usrco on
Node 1(coltdb01)

[oracle@coltdb01 usrco]$ cat callout.sh
#! /bin/ksh
FAN_LOGFILE=/home/oracle/log/rac_`hostname`.log
echo $* "reported="`date` >> $FAN_LOGFILE &


All the events are recorded and appended to the log file - /home/oracle/log/rac_<hostname>.log

You must set the execute permission for the callout

[oracle@coltdb01 usrco]$ chmod 744 /u01/crs/oracle/product/10.2.0/crs/racg/usrco/callout.sh

In addition, you must remember to copy the script to each node in the cluster. The script will automatically be executed whenever the HA framework generates a RAC event.
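A slightly richer callout can also split out the "down" events for alerting. This is only a sketch, written in bash rather than ksh, and the log file locations are illustrative additions, not part of the original setup:

```shell
#!/bin/bash
# Sketch of an extended FAN callout: append every event to the log, and
# additionally record "status=down" events in a separate alert file so
# they can feed a ticketing or paging hook. File locations are
# illustrative and can be overridden via the environment.
FAN_LOGFILE=${FAN_LOGFILE:-/home/oracle/log/rac_$(hostname).log}
ALERTFILE=${ALERTFILE:-/home/oracle/log/rac_$(hostname).alerts}

echo "$* reported=$(date)" >> "$FAN_LOGFILE"
case "$*" in
  *status=down*) echo "$*" >> "$ALERTFILE" ;;
esac
```

Like the original callout, it must be executable and copied to $ORA_CRS_HOME/racg/usrco on every node.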

Example - 1

Node 1 (coltdb01) - shutdown the node-1(coltdb01) instance

[oracle@coltdb01 ~]$ srvctl stop instance -d sendb -i sendb1

Now check the logfile mentioned in callout script on node 1

[oracle@coltdb01 log]$ pwd
/home/oracle/log
[oracle@coltdb01 log]$ ls -ltr
total 4
-rw-rw-r-- 1 oracle oinstall 182 Jun 26 17:20 rac_coltdb01.log
[oracle@coltdb01 log]$ more  rac_coltdb01.log
INSTANCE VERSION=1.0 service=sendb.cms.colt database=sendb instance=sendb1 host=coltdb01 status=down reason=user timestamp=26-Jun-2012 17:20:59 reported=Tue Jun 26 17:20:59 IST 2012

On node 2 and node 3 no log file is created, as we shut down only the node 1 instance


Example - 2

Node 1 (coltdb01) - shut down the node 2 (coltdb02) instance


Check the log file on node 1 - nothing is appended, as we shut down the node 2 instance

Check the logfile on node 2
[oracle@coltdb02 log]$ more rac_coltdb02.log
INSTANCE VERSION=1.0 service=sendb.cms.colt database=sendb instance=sendb2 host=coltdb02 status=down reason=user timestamp=26-Jun-2012 17:26:48 reported=Tue Jun 26 17:26:48 IST 2012


On node 3, no log file is created, as we shut down only the node 2 instance


Example - 3

Node 1 (coltdb01) - Start the database

Logfiles in all the nodes are updated with the Event

Node -1
[oracle@coltdb01 log]$ more  rac_coltdb01.log
INSTANCE VERSION=1.0 service=sendb.cms.colt database=sendb instance=sendb1 host=coltdb01 status=up reason=user timestamp=26-Jun-2012 17:32:44 reported=Tue Jun 26 17:32:44 IST 2012

Node - 2
[oracle@coltdb02 log]$ more rac_coltdb02.log
INSTANCE VERSION=1.0 service=sendb.cms.colt database=sendb instance=sendb2 host=coltdb02 status=up reason=user timestamp=26-Jun-2012 17:33:11 reported=Tue Jun 26 17:33:11 IST 2012

Node-3 
[oracle@coltdb03 log]$ more rac_coltdb03.log
INSTANCE VERSION=1.0 service=sendb.cms.colt database=sendb instance=sendb3 host=coltdb03 status=up reason=user timestamp=26-Jun-2012 17:33:07 reported=Tue Jun 26 17:33:07 IST 2012

Friday 15 June 2012

Adding a new Node to Oracle RAC cluster


Environment
------------------
DB Version - 10.2.0.5
OS Version - RHEL 5.7 64bit
Existing Nodes - coltdb01, coltdb02
New Node - coltdb03

Pre-requisites
----------------
1. Install the same OS version as the other two nodes
2. Copy /etc/sysctl.conf from an existing node to the new node for the kernel parameters
3. Create the same user and group as on the other nodes (oracle user and dba group)
4. Copy the .bash_profile from the oracle home directory of an existing node to the new node and edit ORACLE_SID
5. Configure SSH equivalence between the existing nodes and the new node
6. If you are not using DNS, add the IP addresses of all other nodes in the cluster to /etc/hosts,
   including the public and private network addresses and the VIP addresses
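For item 6, the /etc/hosts entries on every node would look something like this (the addresses below are illustrative, not the real ones from this cluster):

```
# public
192.168.1.101  coltdb01
192.168.1.102  coltdb02
192.168.1.103  coltdb03
# private interconnect
10.0.0.101     coltdb01-priv
10.0.0.102     coltdb02-priv
10.0.0.103     coltdb03-priv
# virtual IPs
192.168.1.201  coltdb01-vip
192.168.1.202  coltdb02-vip
192.168.1.203  coltdb03-vip
```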

Adding Storage and ASM
----------------------------

1. Identify the Worldwide Name (WWN) for the HBA on the server. On the Storage Area Network (SAN), associate each LUN used by the database with the WWN of the server.
   - To identify the WWPN, use the following command (replace n with your HBA number):
       cat /sys/class/scsi_host/hostn/device/fc_host:hostn/port_name

2. Once the Storage Admin assigns the LUN to the new node, execute fdisk -l to check it.

3. If you are using ASM, configure Oracle ASM using ASMLib
    - Install the following RPMs in the same order as before and configure ASM:
        oracleasm-support-2.1.7-1.el5.x86_64.rpm
        oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm
        oracleasmlib-2.0.4-1.el5.x86_64.rpm
4. For the new node to access existing ASM disk groups, you must issue the following command:
    - /etc/init.d/oracleasm scandisks

5. To list the ASM disks
    - oracleasm listdisks
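The WWPN lookup from step 1 can be wrapped in a loop over all HBAs. This is a sketch that assumes the newer /sys/class/fc_host sysfs layout (the path varies with kernel version); the root directory is parameterised so the loop can be pointed elsewhere:

```shell
#!/bin/bash
# Sketch: print the WWPN (port_name) of every FC HBA found under the
# fc_host sysfs tree. SYSFS_ROOT is parameterised so the loop can be
# pointed at a test directory; on a live host it is /sys/class/fc_host.
SYSFS_ROOT=${SYSFS_ROOT:-/sys/class/fc_host}
for host in "$SYSFS_ROOT"/host*; do
  [ -r "$host/port_name" ] || continue
  echo "$(basename "$host"): $(cat "$host/port_name")"
done
```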


Adding OCR and Voting Disk in New Node
-------------------------------------------------------
1. Copy the /etc/sysconfig/rawdevices from existing node to the new node
 [root@coltdb03 raw]# cat /etc/sysconfig/rawdevices
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
#OCR
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
# Voting
/dev/raw/raw3 /dev/sdb5
/dev/raw/raw4 /dev/sdb6
/dev/raw/raw5 /dev/sdb7

2. Provide the necessary Privileges
OCR==>
chown root:oinstall /dev/raw/raw[12]
chmod 640 /dev/raw/raw[12]

Voting disk==>
chown oracle:oinstall /dev/raw/raw[345]
chmod 640 /dev/raw/raw[345]


* Add these commands to /etc/rc.d/rc.local so they are applied at boot time

3. Restart the raw devices service 
     [root@coltdb03 ~]# service rawdevices restart

Install Oracle Clusterware using x-windows
-------------------------------------------------
1.  Install the Clusterware from the existing node "coltdb01" as Oracle user
    - [oracle@coltdb01 bin]$ cd $CRS_HOME/oui/bin
      [oracle@coltdb01 bin]$ ./addNode.sh


2. Add the Node-Specific Interface Configuration (cd $CRS_HOME/bin)
   - Obtain the Oracle Notification Service (ONS) remote port number, which is specified in the file $CRS_HOME/opmn/conf/ons.config
   - [oracle@coltdb01 ~]$ cd $CRS_HOME/bin
     [oracle@coltdb01 bin]$ racgons add_config coltdb03:6200

3. Before proceeding with the installation of the Oracle Database software, verify the Oracle Clusterware installation using CLUVFY
    - cluvfy comp clumgr -n all
    - cluvfy comp clu
    - cluvfy stage -post crsinst -n all

4. Install Oracle Database Software ( cd $ORACLE_HOME/oui/bin)
    - ./addNode.sh

5. If you have a separate ASM home, install the ASM software ( cd $ASM_HOME/oui/bin )
    - ./addNode.sh

6. Add the instance by executing dbca from one of the existing nodes (coltdb01)
    [oracle@coltdb01 bin]$ ./dbca
     - It will automatically detect that you use ASM and will configure your ASM instance and then your database instance.
     - Your listener will also be configured in your ASM home

7. Verify using crs_stat -t

Thursday 14 June 2012

CRSD, CSSD and EVMD daemons are not starting after system reboot


Environment
--------------

OS - RHEL 5.7
Cluster  version - 10.2.0.5

Steps to check
-------------

1. Run the prerequisite check with 'init.cssd startcheck'

The clusterware scripts run this check script before launching the clusterware processes, to determine whether the clusterware is startable, that is, whether the basic prerequisites are met and the clusterware may be permitted to start:

[root@coltdb03 init.d]# /etc/init.d/init.cssd startcheck
       If the command hangs, check the /tmp/crsctl.XXXX file for the error

[root@coltdb03 tmp]# cat crsctl.4315
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [Permission denied] [13]

In this case the permissions on the OCR files were missing. After granting the required permissions and privileges, execute the init.cssd startcheck command once again. This time it should return to the command prompt immediately.
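A quick readability check over the raw devices can spot this PROC-26 cause before retrying. A sketch; the device list matches the rawdevices layout used elsewhere on this blog and is parameterised for convenience:

```shell
#!/bin/bash
# Sketch: report whether each OCR/voting raw device is readable by the
# current user. An unreadable device is a likely PROC-26 cause.
DEVICES=${DEVICES:-"/dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5"}
for d in $DEVICES; do
  if [ -r "$d" ]; then
    echo "$d: readable"
  else
    echo "$d: NOT readable"
  fi
done
```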

Wednesday 13 June 2012

REINSTALL CRS ON A RAC ENVIRONMENT


If your CRS installation is corrupted and you want to reinstall only the clusterware without affecting your database, you can use the following steps

Environment
-----------
Nodes = 2
OS Version = RHEL 5.7
Clusterware Version = 10.2.0.5
DB Version = 10.2.0.5

Steps
------

1. On both nodes, clean up the RAC init scripts

Linux:
rm -rf /etc/oracle/*
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs
cp /etc/inittab.orig /etc/inittab
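The same cleanup can be condensed into a short script. This is a sketch with the identical file list; the ROOT variable is an added parameter (not in the original commands) so the sequence can be rehearsed against a sandbox directory before running it for real, as root, with ROOT empty:

```shell
#!/bin/bash
# Sketch: remove the CRS init scripts, runlevel links, and state files
# listed above. ROOT="" targets the real filesystem; point ROOT at a
# scratch directory to rehearse safely.
ROOT=${ROOT:-}
rm -f "$ROOT"/etc/init.d/init.cssd "$ROOT"/etc/init.d/init.crs \
      "$ROOT"/etc/init.d/init.crsd "$ROOT"/etc/init.d/init.evmd
for rl in 2 3 5; do
  rm -f "$ROOT/etc/rc$rl.d/K96init.crs" "$ROOT/etc/rc$rl.d/S96init.crs"
done
rm -rf "$ROOT"/etc/oracle/scls_scr "$ROOT"/etc/oracle/*
rm -f "$ROOT"/etc/inittab.crs
[ -f "$ROOT/etc/inittab.orig" ] && cp "$ROOT/etc/inittab.orig" "$ROOT/etc/inittab"
```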

2. Kill all the crsd,evmd and cssd processes on both nodes using kill -9 command

ps -ef | grep crs
ps -ef | grep evmd
ps -ef | grep cssd

3. Remove the files in /var/tmp/.oracle/ location
rm -rf /var/tmp/.oracle/

4. Remove the /etc/oracle/ocr.loc file
rm -rf /etc/oracle/ocr.loc

5. De-install the CRS home using the Oracle Universal Installer
     - You can skip this step if you don't find the CRS installation under "Installed Products" in the Universal Installer
     - If you can't de-install it, just remove the CRS directory
          rm -rf /u01/crs/oracle/product/10.2.0/crs

6. Clean out OCR and Voting disk from one node

- Check on /etc/sysconfig/rawdevices to identify your OCR and Voting disk partition
 /dev/raw/raw1 /dev/sdb1
 /dev/raw/raw2 /dev/sdb2

 /dev/raw/raw3 /dev/sdb8
 /dev/raw/raw4 /dev/sdb9
 /dev/raw/raw5 /dev/sdb11

        - You can use the fdisk command to delete and re-add the OCR and voting disk partitions
        - The partition numbers may change after deleting and re-adding partitions in the logical group, so make a note of them and modify your /etc/sysconfig/rawdevices accordingly
        - If the partition numbers changed, run "oracleasm scandisks" on both nodes to make the ASM disks visible on both nodes
        - Make sure you have the correct permissions on the voting disk and OCR files

To Delete a partition
--------------------
[root@coltdb01 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 48829.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 51.2 GB, 51200917504 bytes
64 heads, 32 sectors/track, 48829 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         239      244720   83  Linux
/dev/sdb2             240         478      244736   83  Linux
/dev/sdb4             479       48829    49511424    5  Extended
/dev/sdb5           10734       20271     9766896   83  Linux
/dev/sdb6           20272       29809     9766896   83  Linux
/dev/sdb7           29810       39347     9766896   83  Linux
/dev/sdb8             479         717      244720   83  Linux
/dev/sdb9             718         956      244720   83  Linux
/dev/sdb10           1196       10733     9766896   83  Linux
/dev/sdb11            957        1195      244720   83  Linux

Partition table entries are not in disk order

Command (m for help): d
Partition number (1-11): 1

To Add a partition
-------------------------
[root@coltdb01 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 48829.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 51.2 GB, 51200917504 bytes
64 heads, 32 sectors/track, 48829 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         239      244720   83  Linux
/dev/sdb2             240         478      244736   83  Linux
/dev/sdb4             479       48829    49511424    5  Extended
/dev/sdb5           10734       20271     9766896   83  Linux
/dev/sdb6           20272       29809     9766896   83  Linux
/dev/sdb7           29810       39347     9766896   83  Linux
/dev/sdb8             479         717      244720   83  Linux
/dev/sdb9             718         956      244720   83  Linux
/dev/sdb10           1196       10733     9766896   83  Linux
/dev/sdb11            957        1195      244720   83  Linux

Partition table entries are not in disk order

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)

7. Check that the virtual IPs are down on both nodes
- If any VIP is still up, it needs to be brought down using: ifconfig <device> down

8. Reinstall the CRS from node 1
     - After installing clusterware 10.2.0.1, the services can't be brought online, since the DB and ASM instances are at version 10.2.0.5. You have to apply the 10.2.0.5 patchset to the clusterware to bring all the services online

9. Run crs_stat -t to check whether all the services and instances are up

Tuesday 12 June 2012

Mapping OS device to ASM disk


The easiest way to map an OS device to its ASM disk is "oracleasm querydisk":

[root@coltdb01 init.d]# oracleasm querydisk /dev/sdb8
Device "/dev/sdb8" is marked an ASM disk with the label "VOL4"

[root@coltdb01 init.d]# oracleasm querydisk /dev/sdb7
Device "/dev/sdb7" is marked an ASM disk with the label "VOL3"

[root@coltdb01 init.d]# oracleasm querydisk /dev/sdb6
Device "/dev/sdb6" is marked an ASM disk with the label "VOL2"

[root@coltdb01 init.d]# oracleasm querydisk /dev/sdb5
Device "/dev/sdb5" is not marked as an ASM disk
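To map a whole set of partitions at once, querydisk can be looped over the candidates. A sketch; the QUERYDISK variable is an added indirection (not part of ASMLib) so the loop itself can be exercised on a machine without ASMLib installed:

```shell
#!/bin/bash
# Sketch: run "oracleasm querydisk" across a list of partitions to
# build the device -> ASM label map in one pass. QUERYDISK defaults to
# the real command; override it to dry-run the loop.
QUERYDISK=${QUERYDISK:-"oracleasm querydisk"}
for dev in /dev/sdb5 /dev/sdb6 /dev/sdb7 /dev/sdb8; do
  $QUERYDISK "$dev"
done
```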