Thursday, 25 October 2012

Configuring a Server with the Oracle Validated RPM


The fastest way to configure an Oracle Enterprise Linux server for an Oracle database installation is to run the Oracle Validated RPM.

Configuring YUM


A YUM server provides a repository for RPM packages and their associated metadata, which makes installing packages and their dependencies straightforward. Oracle provides a public YUM server at http://public-yum.oracle.com, but this server offers only the packages that are already available on the installation media. Subscribers to the Unbreakable Linux Network can access additional security updates and patches on top of the content available on the public YUM server. If you do not have access to the Unbreakable Linux Network, or you do not wish to use the public YUM server, it is simple enough to configure your own repository from the installation media.
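
As a rough sketch of that last approach (assuming the installation DVD is mounted at /media/cdrom and that it already carries repository metadata under the Server directory, as the OEL 5 media does), you can point a repo file at the mounted media and let yum use it directly:

[root@coltdb01 ~]# mount /dev/cdrom /media/cdrom
[root@coltdb01 ~]# cat > /etc/yum.repos.d/local-media.repo <<EOF
[local-media]
name=Local installation media
baseurl=file:///media/cdrom/Server
enabled=1
gpgcheck=0
EOF
[root@coltdb01 ~]# yum clean all

The repository id, mount point and directory layout above are illustrative; adjust them to match your media.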

To Configure YUM Using a Proxy Server


[root@coltdb01 ~]# cd /etc/yum.repos.d/
[root@coltdb01 yum.repos.d]# http_proxy=http://10.XX.XX.10:80
[root@coltdb01 yum.repos.d]# export http_proxy
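
Note that exporting http_proxy only affects the current shell session. If every yum run should go through the proxy, one option (using the same masked proxy address as above) is to set it in the yum configuration instead:

# /etc/yum.conf -- add under the [main] section
proxy=http://10.XX.XX.10:80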

For RHEL 5


[root@coltdb01 yum.repos.d]# wget http://public-yum.oracle.com/public-yum-el5.repo
--2012-10-25 16:25:53--  http://public-yum.oracle.com/public-yum-el5.repo
Connecting to 10.91.118.10:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 3974 (3.9K) [text/plain]
Saving to: `public-yum-el5.repo'

100%[===========================================================================================================================================================>] 3,974       --.-K/s   in 0.004s

2012-10-25 16:25:53 (1.04 MB/s) - `public-yum-el5.repo' saved [3974/3974]


[root@coltdb01 yum.repos.d]# yum install oracle-validated
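
A quick sanity check that the new repository is visible and that the package actually landed can be done with standard yum/rpm commands:

[root@coltdb01 yum.repos.d]# yum repolist
[root@coltdb01 yum.repos.d]# rpm -q oracle-validated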

For RHEL 6


[root@linux yum.repos.d]# wget http://public-yum.oracle.com/public-yum-ol6.repo
[root@linux yum.repos.d]# yum install oracle-rdbms-server-11gR2-preinstall




Once the Oracle Validated RPM installation completes, all the RPM packages and system configuration steps required for an Oracle Database 11g Release 2 RAC installation have also been completed. For example, the required user and groups have been created, and the necessary kernel parameters have been set. You can find the installed packages listed in /var/log/yum.log.
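
A few quick, generic checks to see what the RPM actually did (not an exhaustive verification):

[root@coltdb01 ~]# id oracle
[root@coltdb01 ~]# tail /etc/sysctl.conf
[root@coltdb01 ~]# grep -i installed /var/log/yum.log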






Thursday, 4 October 2012

Setup of the new Redundant Interconnect feature in Oracle 11gR2 (HAIP)


With the introduction of Oracle 11gR2, IP failover tools such as bonding are no longer required for the private interconnect. Grid Infrastructure 11.2.0.2 supports IP failover natively using a new feature known as 'redundant interconnect'. Oracle uses its ora.cluster_interconnect.haip resource for communication between Oracle RAC, Oracle ASM, and other related services. HAIP (Highly Available IP) can activate a maximum of four private interconnect connections. These private network adapters can be configured during the initial installation of Oracle Grid or after the installation using the oifcfg utility.

Oracle Grid currently creates an alias IP (also known as a virtual private IP) on your private network adapters, using the 169.254.*.* subnet for the HAIP. However, if that subnet range is already in use, Oracle Grid will not attempt to use it. The purpose of HAIP is to load-balance across all active interconnect interfaces, and to fail over to the remaining available interfaces if one of the private adapters becomes unresponsive.
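
You can also look at the HAIP resource itself from the Grid Infrastructure home (the bin directory used in the prompt below is just where the earlier commands in this post are run from):

[root@coltdb01 bin]# ./crsctl stat res ora.cluster_interconnect.haip -init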

It is important to note that when you add HAIP addresses (up to the maximum of four) after the installation of Oracle Grid, a restart of your Oracle Grid Infrastructure is required to make the new HAIP addresses active.


The example below shows, step by step, how to enable the redundant interconnect using HAIP on an existing Oracle 11gR2 Grid Infrastructure installation.


Pre-Installation

I have added a new private physical interface (eth4) on both nodes. Oracle currently does not support using different network interfaces on each node in the cluster; the best practice is to configure all nodes with the same network interface for each public subnet and the same network interface for each private subnet.

Edit /etc/sysconfig/network-scripts/ifcfg-eth4 on both nodes, for example as shown below.
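
A minimal ifcfg-eth4 for this setup might look like the following; the IP address shown is purely illustrative (each node gets its own address on the 192.168.1.0 private subnet), and the interface is brought up afterwards:

# /etc/sysconfig/network-scripts/ifcfg-eth4 (example values)
DEVICE=eth4
BOOTPROTO=static
IPADDR=192.168.1.11
NETMASK=255.255.255.0
ONBOOT=yes

[root@coltdb01 ~]# ifup eth4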

[root@coltdb01 bin]# ./oifcfg getif
eth0  192.168.1.0  global  cluster_interconnect
eth1  10.91.119.0  global  public

[oracle@coltdb01 bin]$ ./oifcfg iflist
eth0  192.168.1.0
eth0  169.254.0.0
eth4  192.168.1.0
eth1  10.91.119.0


You can check the interconnect details from within the database:

SQL> select * from gv$cluster_interconnects;

   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
         1 eth0:1          169.254.226.34   NO
         3 eth0:1          169.254.151.242  NO

SQL>  select * from v$cluster_interconnects;

NAME            IP_ADDRESS       IS_ SOURCE
--------------- ---------------- --- -------------------------------
eth0:1          169.254.226.34   NO



Using "oifcfg setif" set an interface type (cluster interconnect) for an interface.

[root@coltdb01 bin]# ./oifcfg setif -global eth4/192.168.1.0:cluster_interconnect

Once this is done, you must restart Oracle Clusterware on all members of the cluster, because the change was made globally. A typical restart sequence is shown below.
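
Run as root from the Grid Infrastructure bin directory on each node in turn, for example:

[root@coltdb01 bin]# ./crsctl stop crs
[root@coltdb01 bin]# ./crsctl start crs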


Post-Installation

[root@coltdb01 bin]# ./oifcfg getif
eth0  192.168.1.0  global  cluster_interconnect
eth1  10.91.119.0  global  public
eth4  192.168.1.0  global  cluster_interconnect


[root@coltdb01 bin]# ./oifcfg iflist -p -n
eth0  192.168.1.0  PRIVATE  255.255.255.0
eth0  169.254.0.0  UNKNOWN  255.255.128.0
eth4  192.168.1.0  PRIVATE  255.255.255.0
eth4  169.254.128.0  UNKNOWN  255.255.128.0
eth1  10.91.119.0  PRIVATE  255.255.255.0

SQL> select * from gv$cluster_interconnects;

   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
         3 eth0:1          169.254.79.75    NO
         3 eth4:1          169.254.210.156  NO
         1 eth0:1          169.254.29.152   NO
         1 eth4:1          169.254.206.96   NO


SQL> select * from v$cluster_interconnects;

NAME            IP_ADDRESS       IS_ SOURCE
--------------- ---------------- --- -------------------------------
eth0:1          169.254.29.152   NO
eth4:1          169.254.206.96   NO