Tuesday 24 July 2012

Connected to an idle instance... but the DB is up and running


[oracle@coltdb01 bin]$ ps -ef | grep pmon
oracle    6098     1  0 Jul18 ?        00:00:08 asm_pmon_+ASM1
oracle    8280     1  0 Jul23 ?        00:00:01 ora_pmon_sendb1
oracle   28280 27300  0 18:12 pts/1    00:00:00 grep pmon

[oracle@coltdb01 ~]$ export ORACLE_HOME=/u03/oracle/product/11.2.0/db_1/
[oracle@coltdb01 ~]$ export ORACLE_SID=sendb1

[oracle@coltdb01 bin]$ pwd
/u03/oracle/product/11.2.0/db_1/bin
[oracle@coltdb01 bin]$ ./sqlplus

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 24 18:13:48 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.


I was wondering why I was not able to connect to the instance, which was up and running. After some time I figured out that the issue was the trailing slash (/) in the ORACLE_HOME environment variable. After I removed the trailing slash, I was able to connect to the instance.
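One way to avoid this in future sessions is to strip any trailing slash with bash parameter expansion (a minimal sketch, not part of the original session; it simply removes one trailing / if present):

    export ORACLE_HOME=${ORACLE_HOME%/}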

[oracle@coltdb01 bin]$ export ORACLE_HOME=/u03/oracle/product/11.2.0/db_1
[oracle@coltdb01 bin]$ export ORACLE_SID=sendb1
[oracle@coltdb01 bin]$ pwd
/u03/oracle/product/11.2.0/db_1/bin
[oracle@coltdb01 bin]$ ./sqlplus

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 24 18:17:09 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>


Monday 23 July 2012

11gR2 RDBMS Software Installation and Database Upgrade from 10.2 to 11.2.0.3 using DBUA

Please find below the high-level steps to upgrade a 10.2 RAC database to 11.2.0.3 using DBUA. I would recommend using DBUA for RAC database upgrades, as it is RAC aware and performs the various RAC-related tasks internally.

1) Before starting the upgrade, use cluvfy to verify that the nodes are ready for the upgrade.

    From the 11gR2 software binaries location, execute:
    cluvfy stage -pre dbinst -n coltdb01,coltdb02 -d $ORACLE_HOME -verbose

2) Once cluvfy succeeds on all the nodes, run ./runInstaller to install the database software.

3) Choose "Install database software only" option.


4) The cluster nodes will be displayed; select all the nodes where the software should be installed.

5) Specify the Oracle Base and the Oracle software location.


6) Oracle will check the prerequisites for the software installation; make sure all the prerequisites are met.


7) Before the installation starts, a summary is displayed. Click Install to start the installation.


8) At the end of the installation, you will be asked to execute root.sh on all the nodes.



Once you have successfully installed the database software, you can upgrade your 10.2 RAC database using DBUA.


 Things to do before you start your upgrade


1. Take a Full Database Backup


2. Record the count of invalid objects after executing the utlrp.sql script (a query sketch is shown after this list).


3. Execute OH_11G/rdbms/admin/utlu112i.sql to identify the pre-upgrade tasks to be performed on the database.
    
    SQL> @/u03/oracle/product/11.2.0/db_1/rdbms/admin/utlu112i.sql
    Oracle Database 11.2 Pre-Upgrade Information Tool 07-17-2012 15:36:09
    Script Version: 11.2.0.3.0 Build: 001
    .
    **********************************************************************
    Database:
    **********************************************************************
    --> name:          SENDB
    --> version:       10.2.0.5.0
    --> compatible:    10.2.0.5.0
    --> blocksize:     8192
    --> platform:      Linux x86 64-bit
    --> timezone file: V4
    .
    **********************************************************************
    Components: [The following database components will be upgraded or installed]
    **********************************************************************
    --> Oracle Catalog Views         [upgrade]  VALID
    --> Oracle Packages and Types    [upgrade]  VALID
    --> JServer JAVA Virtual Machine [upgrade]  VALID
    --> Oracle XDK for Java          [upgrade]  VALID
    --> Real Application Clusters    [upgrade]  VALID
    --> Oracle Workspace Manager     [upgrade]  VALID
    --> OLAP Analytic Workspace      [upgrade]  VALID
    --> OLAP Catalog                 [upgrade]  VALID
    --> EM Repository                [upgrade]  VALID
    --> Oracle Text                  [upgrade]  VALID
    --> Oracle XML Database          [upgrade]  VALID
    --> Oracle Java Packages         [upgrade]  VALID
    --> Oracle interMedia            [upgrade]  VALID
    --> Spatial                      [upgrade]  VALID
    --> Data Mining                  [upgrade]  VALID
    --> Expression Filter            [upgrade]  VALID
    --> Rule Manager                 [upgrade]  VALID
    --> Oracle OLAP API              [upgrade]  VALID


4. Fix any components that are not VALID.
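
For item 2 above, a minimal SQL*Plus sketch (connected as SYSDBA; run the same count again after the upgrade for comparison):

    SQL> SELECT owner, COUNT(*) FROM dba_objects WHERE status = 'INVALID' GROUP BY owner;
    SQL> @?/rdbms/admin/utlrp.sql
    SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';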


Steps to upgrade the Database using DBUA

1) Execute dbua from the 11gR2 Oracle Home.
    [oracle@coltdb01 bin]$ pwd
    /u03/oracle/product/11.2.0/db_1/bin
    [oracle@coltdb01 bin]$ ./dbua




2) DBUA will list the databases mentioned in the oratab file. Select the database you want to upgrade.


3) You will receive a pop-up asking for confirmation. Click Yes if you have completed all the prerequisites.





4) Provide the degree of parallelism for recompiling the invalid objects, and optionally turn off archiving during the upgrade. DBUA will automatically turn archiving back on after the upgrade.


5) At the end, DBUA provides a database upgrade summary report. Click Finish to start the upgrade.


6) DBUA upgrades all the components.


7) At the end it provides the upgrade results summary. Review it for any errors.


Post upgrade Validation

1) Execute utlrp.sql to compile the invalid objects and check the count. It should be the same as or less than the pre-upgrade count.

2) Execute OH_11G/rdbms/admin/utlu112s.sql to check the upgrade status of all the components. You can also check the total upgrade time at the end of the report.
     


3) Gather the statistics for system and fixed objects (see the SQL sketch after this list).


4) Execute the following command to check the status of the database:
  [oracle@coltdb01 bin]$ ./srvctl config database -d sendb
   Database unique name: sendb
   Database name:
   Oracle home: /u03/oracle/product/11.2.0/db_1
   Oracle user: oracle
   Spfile: +DATA/sendb/spfilesendb.ora
   Domain: cms.colt
   Start options: open
   Stop options: immediate
   Database role: PRIMARY
   Management policy: AUTOMATIC
   Server pools: sendb
   Database instances: sendb1,sendb2
   Disk Groups: DATA,FRA
   Mount point paths:
   Services:
   Type: RAC
   Database is administrator managed


5)  Take a full backup of the database


6) If you have tested your application thoroughly with Oracle 11gR2, you can raise the COMPATIBLE parameter. Once COMPATIBLE has been set to 11.2, you cannot downgrade the database by the normal method; the only option is to restore the database from the backup taken earlier.
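
A minimal SQL*Plus sketch for items 3 and 6 above (run as SYSDBA; the COMPATIBLE value shown is an example, and it should only be raised after thorough testing since it cannot be lowered again):

    SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    SQL> EXEC DBMS_STATS.GATHER_SYSTEM_STATS;
    SQL> ALTER SYSTEM SET COMPATIBLE = '11.2.0' SCOPE=SPFILE SID='*';

A full restart of all RAC instances (for example, srvctl stop database -d sendb followed by srvctl start database -d sendb) is required for the new COMPATIBLE value to take effect.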


The time taken to upgrade the database does not depend mainly on the size of the database; it depends on the number of components installed along with the software.





Friday 20 July 2012

Administering a 10.2 RAC database under 11.2 Grid Infrastructure (CRS)


To use a 10.2 database with 11.2 CRS, the node(s) must be pinned. In 11gR2 the default is "unpinned".

Issue the command below to check whether the nodes are pinned:
$GRID_HOME/bin/olsnodes -t -n

If the nodes are "unpinned", pin them by executing the command below as the root user:
$GRID_HOME/bin/crsctl pin css -n <racnode1> <racnode2>
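
A small shell sketch of the same check-and-pin sequence (run as root; it assumes GRID_HOME points to the 11.2 Grid home and that olsnodes -t -n reports "Unpinned" for unpinned nodes):

for node in $($GRID_HOME/bin/olsnodes); do
    if $GRID_HOME/bin/olsnodes -t -n | grep -w "$node" | grep -qi unpinned; then
        $GRID_HOME/bin/crsctl pin css -n "$node"
    fi
done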

Also, to manage the 10g databases you must use the 10g srvctl; you cannot use the srvctl from the 11.2 Grid home:

[oracle@coltdb01 db_1]$ /u02/app/11.2.0/grid/bin/srvctl status database -d sendb
PRCD-1027 : Failed to retrieve database sendb
PRCD-1027 : Failed to retrieve database sendb
PRKP-1088 : Failed to retrieve configuration of cluster database sendb
PRKR-1078 : Database sendb of version 10.2.0.0.0 cannot be administered using current version of srvctl. Instead run srvctl from /opt/oracle/product/10.2.0/db_1


Now try from the 10.2 database home:

[oracle@coltdb01 db_1]$ /opt/oracle/product/10.2.0/db_1/bin/srvctl status database -d sendb
Instance sendb1 is running on node coltdb01
Instance sendb2 is running on node coltdb02

Friday 13 July 2012

Upgrading Oracle Clusterware and ASM from 10.2.0.5 to 11.2.0.3



I have provided the high-level steps to upgrade Oracle Clusterware and ASM from 10.2.0.5 to 11.2.0.3.

Upgrading from Oracle 10gR2 Clusterware and ASM to 11gR2 is always an out-of-place upgrade and cannot be done as a rolling upgrade, because in 11gR2 there is a single "Grid Infrastructure" home for both Clusterware and ASM.

Grid Infrastructure  --> Clusterware & ASM 

Below are the steps to upgrade your Clusterware and ASM.

1) Download the following software from the Oracle support site:
        p10404530_112030_Linux-x86-64_1of7.zip
        p10404530_112030_Linux-x86-64_2of7.zip
        p10404530_112030_Linux-x86-64_3of7.zip

2) After unzipping the grid software, run cluvfy to verify whether the nodes are ready for the upgrade. If the check is unsuccessful, rectify the issues and run it again.

./runcluvfy.sh stage -pre crsinst -n coltdb01,coltdb02 -fixup -fixupdir /opt/oracle -verbose


3) Check the Clusterware active version and software version; they should be the same.

crsctl query crs activeversion
crsctl query crs softwareversion

4) Configure the SCAN IP in DNS (please refer to my previous post on configuring the SCAN IP on a DNS server). You can also configure the SCAN IP in your hosts file, but Oracle does not recommend it.

5) Unset ORACLE_BASE, ORACLE_HOME, ORACLE_CRS_HOME, TNS_ADMIN and ORA_NLS10, as shown below.
     
My current 10.2 homes:
ASM_HOME = /opt/oracle/product/10.2.0/asm
CRS_HOME= /u01/crs/oracle/product/10.2.0/crs
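
As shell commands (a minimal sketch; adjust the list to whatever Oracle-related variables your login profile sets):

unset ORACLE_BASE ORACLE_HOME ORACLE_CRS_HOME TNS_ADMIN ORA_NLS10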

6) Back up the voting disk, OCR, Clusterware home and Oracle home.

OCR Backup 

ocrconfig -export /u01/crs/oracle/product/10.2.0/crs/cdata/crs/ocr_backup.ocr -s online

Voting disk Backup 

dd if=/dev/raw/raw3 of=/u01/crs/oracle/product/10.2.0/crs/cdata/crs/votedisk.dmp

Similarly, back up the Clusterware and Oracle homes.
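
As an illustration, the homes could be archived with tar (a sketch only; /backup is a hypothetical destination with enough free space; adjust the source paths to your own 10.2 homes, which here are taken from the listings above):

tar -czf /backup/crs_home_backup.tar.gz -C /u01/crs/oracle/product/10.2.0 crs
tar -czf /backup/asm_home_backup.tar.gz -C /opt/oracle/product/10.2.0 asm
tar -czf /backup/db_home_backup.tar.gz -C /opt/oracle/product/10.2.0 db_1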

7) Once the backup is taken, you can start the installation by executing runInstaller from the grid software location.

Note: the existing 10g Clusterware and ASM are still up and running on both the nodes.

8) Once the installation starts, you can see that Oracle automatically selects the upgrade option.

9) You will be asked to select the node names and given an option to choose whether you want to upgrade your ASM as well.



10) You will get a pop-up window saying that ASM cannot be upgraded using a rolling upgrade and asking whether you want to continue. Press "Yes" to continue.



11) You will be asked for the SCAN name and the port number. Provide the SCAN name we created earlier on our DNS server.




12) You have to provide your new Grid base home and software home; as you can see, the base home must be different from the software home.



13) After this, Oracle will perform a prerequisite check for your installation. You may see a failure for "OCR integrity"; it appears to be a bug and can be ignored.
Check the following note: Checklist for PRVF-10037 : Failed to retrieve storage type for xx on node xx [ID 1233505.1]




14) You will get a summary window before the installation starts; please review it and click Install.



15) At the end of the installation, you will be asked to execute rootupgrade.sh on all the nodes as the root user. rootupgrade.sh is the important script that performs the actual upgrade of the cluster.



16) You should see a message stating "succeeded" at the end of the script execution. If the script fails for any reason, refer to the Oracle Support note to fix it (How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix [ID 969254.1]).



17) After rootupgrade.sh has been executed successfully, the installation proceeds with the ASM upgrade. Make sure your ASM is upgraded without any issues.


18) Once the upgrade of Clusterware and ASM has completed successfully, you can validate it using the following commands.

Post-upgrade verification

[oracle@coltdb01 bin]$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[oracle@coltdb01 bin]$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [coltdb01] is [11.2.0.3.0]

In Oracle 11g R2 you can verify the cluster status from a single node
[oracle@coltdb01 bin]$ ./crsctl check cluster -all
**************************************************************
coltdb01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
coltdb02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online



19) Check the OCR and voting disks. As you can see, 11gR2 supports up to 5 OCR locations (the unused ones are reported as not configured), and the voting disk files are also listed.

[oracle@coltdb01 bin]$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     240480
         Used space (kbytes)      :       6320
         Available space (kbytes) :     234160
         ID                       : 1362593683
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@coltdb01 bin]$ ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   2a87ecd7fedadfe9ff06de264773036e (/dev/raw/raw3) []
2. ONLINE   2ca3947acf51ffa2bf271bf037bc8779 (/dev/raw/raw4) []
3. ONLINE   1a72a3dca25eefabff869f8f5a661af3 (/dev/raw/raw5) []

20) Once the post-upgrade checks are complete, you can detach the old Oracle homes (the Clusterware and ASM homes) from the inventory.

Example for the Clusterware home:
/u01/crs/oracle/product/10.2.0/crs/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/crs/oracle/product/10.2.0/crs
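
The old ASM home (listed in step 5) can be detached in the same way, for example:

/opt/oracle/product/10.2.0/asm/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/opt/oracle/product/10.2.0/asm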







Monday 9 July 2012

Configure a SCAN IP using DNS for Oracle Grid Infrastructure (Oracle 11gR2)


The SCAN IP addresses must be on the same subnet as your public network in the cluster. The name must be 15 characters or less in length, not including the domain, and must be resolvable without the domain suffix (for example, "coltdb01-scan" must be resolvable, as opposed to "coltdb01-scan.example.com").

Follow the steps below to configure the SCAN IPs using DNS.


1) To identify your DNS server in RHEL, check /etc/resolv.conf:



   [root@coltdb02 ~]# cat /etc/resolv.conf



2) Connect to the DNS server and open the DNS administrator console:
   
   Start -> Run -> dnsmgmt.msc

3) Right-click on the domain name and select 'New Host'. Provide the name and IP as below.



4) Repeat this step 3 times. You have to create a single name that resolves to 3 IP addresses using a round-robin algorithm.

5) You can check the SCAN configuration in DNS using nslookup. If your DNS is set up to provide round-robin access to the IPs resolved by the SCAN entry, run nslookup at least twice; each time it should return the set of 3 IPs in a different order. (A dig-based check is also sketched after the output below.)

[root@coltdb01 ~]# nslookup coltdb01-scan
Server:         10.91.119.134
Address:        10.91.119.134#53

Name:   coltdb01-scan.example.com
Address: 10.91.119.167
Name:   coltdb01-scan.example.com
Address: 10.91.119.166
Name:   coltdb01-scan.example.com
Address: 10.91.119.168

[root@coltdb01 ~]# nslookup coltdb01-scan
Server:         10.91.119.134
Address:        10.91.119.134#53

Name:   coltdb01-scan.example.com
Address: 10.91.119.166
Name:   coltdb01-scan.example.com
Address: 10.91.119.168
Name:   coltdb01-scan.example.com
Address: 10.91.119.167
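
If dig is installed on the host, it gives a compact view of the same round-robin answers (a quick check only; example.com is the domain used in the outputs above):

dig +short coltdb01-scan.example.com

It should return the three SCAN addresses (10.91.119.166, 10.91.119.167 and 10.91.119.168), typically in a different order on each run.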