Oracle 12c RAC Installation on Linux 7 Using DNS


Contents:

1. Introduction
2. Operating System Requirements
3. Installation Prerequisites (OS, Network, DNS and Shared Storage)
4. Oracle 12c Grid Infrastructure Installation

Introduction:


This article explains the step-by-step procedure for installing a two node Oracle 12c (12.1.0.2) Real Application Cluster (RAC) on Oracle Enterprise Linux using DNS. I performed this installation on Oracle VirtualBox by creating two virtual machines with shared storage.

My environment:
Racnode1   - Oracle Enterprise Linux 7 (64-bit)
Racnode2   - Oracle Enterprise Linux 7 (64-bit)
DNS Server - Oracle Enterprise Linux 6 (64-bit)

Operating System requirements:

Prepare all the nodes by installing Oracle Linux 7.0 and selecting the required packages during installation. Set up the private interconnect and mount the shared storage on all the nodes. You will need 2 public IPs, 2 virtual IPs, 2 private IPs, 1 DNS IP and 3 SCAN IPs (on the same public subnet), which we will use later during the installation. The recommended SCAN configuration is 3 IP addresses resolved through DNS.

Hardware Requirements:

A. Verify that the processor architecture matches the Oracle software release you want to install; if you do not see the expected output, you cannot install the software on this system. Verify it by using the command below.
 # uname -m
B. At least 4.0 GB of physical RAM. To check the amount of RAM installed on the system:
$ free -m (or) $ grep MemTotal  /proc/meminfo
C. A minimum of 4 GB of swap space is required for the grid installation. Swap should be sized relative to RAM as follows:
Swap space equal to double the RAM size (if RAM < 4 GB)
Swap space equal to the RAM size (if RAM is between 4 GB and 16 GB)
Swap space of 16 GB (if RAM is more than 16 GB)
Check the Swap size by using,
$ free -m (or) $ grep SwapTotal  /proc/meminfo
D. A minimum of 1.0 GB of free space is required in the /tmp directory. To check the amount of free space in /tmp:
$ df -h /tmp
E. More than 8 GB of free space is required on the mount point for the Oracle Grid Infrastructure installation. For a complete RAC installation, a minimum of 50 GB of free space per node is recommended.

Software Requirements:

Download the Oracle 12c Grid Infrastructure and RDBMS software. The first 2 zip files contain the database binaries and the last 2 contain the Grid Infrastructure binaries. For this article, I have already downloaded and extracted the software under the /u01/stage/ directory.

Perform Oracle 12c RAC Installation Prerequisites:

All software products are developed with specific installation requirements, and before installing any software you need to make sure that all installation prerequisites are in place. Incorrect configuration leads to a failed installation and costs time and resources. Configure all of the prerequisites appropriately for a hassle-free installation. Follow the steps below as pre-installation steps for the RAC installation.


OS Level Configuration:


1. On each node, set SELinux to either permissive or disabled by using the command below.
 # setenforce 0
Edit /etc/selinux/config and set SELINUX to "disabled" (or "permissive") so that the change persists across reboots.
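For example, the file can be edited in place with sed (use "permissive" instead of "disabled" if you prefer):
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config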

2. On each node, configure the shared memory file system. Change the size of /dev/shm by running the following command as the root user.
# mount -o remount,size=4G /dev/shm
Add the following line to the /etc/fstab file for the shared memory file system so the setting persists across reboots. Modify the size value based on the amount of memory you will be using for your SGA.
tmpfs                   /dev/shm                tmpfs   defaults,size=4g        0 0
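You can verify the new size after remounting:
# df -h /dev/shm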
3. On each node, disable the firewall by using the commands below.
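On Oracle Linux 7 the firewall is managed by firewalld, so stopping and disabling it as the root user looks like this:
# systemctl stop firewalld
# systemctl disable firewalld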
4. Create the new groups and users by using the commands below on all nodes. The same UIDs and GIDs must be used on every node for user equivalence in RAC.
# groupadd -g 54321 oinstall
# groupadd -g 54322 dba
# useradd -u 54321 -g oinstall -G dba oracle
Set the password for the oracle user on each node.
# passwd oracle
5. We can perform the prerequisite configuration automatically by using yum. If you want to do the configuration manually, skip this step and go to the next step, i.e. step 6.
On each node, execute the following command to perform all prerequisites automatically. It installs all the packages required for the Grid Infrastructure and RDBMS software installation.
# yum install oracle-rdbms-server-12cR1-preinstall -y
Add the following lines to the /etc/security/limits.conf file.
# Oracle user
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
6. Skip this step if you followed step 5 above; otherwise perform the following tasks on each node.
Install the following RPM packages (or later versions) from either the yum repository or the Linux 7 media:
binutils-2.23.52.0.1-12.el7.x86_64
compat-libcap1-1.10-3.el7.x86_64
gcc-4.8.2-3.el7.x86_64
gcc-c++-4.8.2-3.el7.x86_64
glibc-2.17-36.el7.i686
glibc-2.17-36.el7.x86_64
glibc-devel-2.17-36.el7.i686
glibc-devel-2.17-36.el7.x86_64
libaio-0.3.109-9.el7.i686
libaio-0.3.109-9.el7.x86_64
libaio-devel-0.3.109-9.el7.i686
libaio-devel-0.3.109-9.el7.x86_64
ksh
make-3.82-19.el7.x86_64
libXi-1.7.2-1.el7.i686
libXi-1.7.2-1.el7.x86_64
libXtst-1.2.2-1.el7.i686
libXtst-1.2.2-1.el7.x86_64
libgcc-4.8.2-3.el7.i686
libgcc-4.8.2-3.el7.x86_64
libstdc++-4.8.2-3.el7.i686
libstdc++-4.8.2-3.el7.x86_64
libstdc++-devel-4.8.2-3.el7.i686
libstdc++-devel-4.8.2-3.el7.x86_64
sysstat-10.1.5-1.el7.x86_64
unixODBC-2.3.1
On each node, edit /etc/sysctl.conf and add the following entries to set the kernel parameters:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
After adding the above lines to /etc/sysctl.conf, execute the following command as the root user to apply them:
# /sbin/sysctl -p
On each node, set the shell limits for the oracle user by editing /etc/security/limits.conf (or by adding a file under the /etc/security/limits.d/ directory):
# Oracle user
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
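To verify the limits, log in as the oracle user and check, for example:
$ ulimit -Sn     (soft nofile, should report 1024)
$ ulimit -Hn     (hard nofile, should report 65536)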

7. On each node, install oracleasm-support from the yum repository or from the Linux media, then download oracleasmlib-2.0.12-1.el7.x86_64.rpm and install it manually.
[root@racnode1 ~]# yum install oracleasm-support
[root@racnode1 ~]# rpm -ihv oracleasmlib*

8. On each node, install the cvuqdisk RPM. This is required for cluvfy (Cluster Verification Utility) to properly discover shared disks; otherwise an error is reported when running cluvfy. Log in as root and install cvuqdisk, which is found under the unzipped Grid Infrastructure software.
[root@racnode1 ~]# cd /u01/stage/grid/rpm
[root@racnode1 rpm]# rpm -ihv cvuqdisk-1.0.9-1.rpm
[root@racnode1 rpm]# scp cvuqdisk-1.0.9-1.rpm racnode2:/u01/.

On racnode2:
[root@racnode2 ~]# cd /u01/
[root@racnode2 ~]# rpm -ihv cvuqdisk-1.0.9-1.rpm

9. On each node, either configure NTP (Network Time Protocol) or make sure it is not configured, so that the Oracle Cluster Time Synchronization Service (CTSS) can synchronize the times of the RAC nodes during the installation.
If you want to deconfigure NTP, perform the following steps:
[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.org
[root@racnode1 ~]# rm /var/run/ntpd.pid
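Note that on Oracle Linux 7 the default time service is chronyd rather than ntpd, so you may need to deconfigure it as well; a sketch using systemctl:
[root@racnode1 ~]# systemctl stop chronyd
[root@racnode1 ~]# systemctl disable chronyd
[root@racnode1 ~]# mv /etc/chrony.conf /etc/chrony.conf.org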

10. On each node, create the required directories and change their ownership and permissions as follows.
[root@racnode1 ~]# mkdir -p /u01/app/grid/12.1.0.2
[root@racnode1 ~]# mkdir -p /u01/app/oracle/product/12.1.0.2
[root@racnode1 ~]#
[root@racnode1 ~]# chown -R oracle:dba /u01/app/grid/12.1.0.2
[root@racnode1 ~]# chown -R oracle:dba /u01/app/oracle/product/12.1.0.2
[root@racnode1 ~]# chmod -R 775 /u01/

11. Reboot all the nodes.

Network Configuration:

1. On each node, add the IP addresses and fully qualified host names to the /etc/hosts file, including the public IPs, virtual IPs and private IPs. If you are going to configure a DNS server for your RAC installation, there is no need to add all host information to /etc/hosts.
Public IP:  The public IP address is for the server. This is the same as any server IP address, a unique address that exists in /etc/hosts.
Private IP: Oracle RAC requires private IP addresses to manage CRS, the Clusterware heartbeat process and the Cache Fusion layer.
Virtual IP:  Oracle uses a virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).
SCAN IP:  SCAN provides a single name for clients to access any Oracle database running in the cluster. SCAN IPs allow fast failover and easier maintenance, such as adding or deleting nodes in the cluster. The SCAN IPs must be on the same subnet as the public IP address.
###Public 
192.168.1.101    racnode1.oracle.com      racnode1
192.168.1.102    racnode2.oracle.com      racnode2 

###Virtual 
192.168.1.103    racnode1-vip.oracle.com      racnode1-vip
192.168.1.104    racnode2-vip.oracle.com      racnode2-vip 

###Private
192.168.2.101     racnode1-priv.oracle.com      racnode1-priv
192.168.2.102     racnode2-priv.oracle.com      racnode2-priv 

###Dns
192.168.1.100     dns.oracle.com     dns

2. Make sure that the public and private interfaces have the same names and are listed in the same order on all the nodes. Here I am using enp0s3 as the public interface and enp0s8 as the private interface on both nodes.

3. Test connectivity between the nodes by using the ping command from each node. The SCAN and virtual IPs are not required to be tested at this point.

From node racnode1:
[root@racnode1 ~]#  ping racnode1
[root@racnode1 ~]#  ping racnode1-priv
[root@racnode1 ~]#  ping racnode2
[root@racnode1 ~]#  ping racnode2-priv
[root@racnode1 ~]#  ping dns 

From node racnode2:
[root@racnode2 ~]#  ping racnode2
[root@racnode2 ~]#  ping racnode2-priv
[root@racnode2 ~]#  ping racnode1
[root@racnode2 ~]#  ping racnode1-priv 
[root@racnode2 ~]#  ping dns

Configure DNS Server in Oracle Linux:

Install Oracle Enterprise Linux 6 and perform the steps below to configure the DNS server for the RAC installation.

1. Install the following RPMs on the DNS server by using the yum utility.
# yum install bind
# yum install bind-libs
# yum install bind-utils

2. Edit the /etc/named.conf file as shown below. Here 192.168.1.100 is my DNS server IP; replace this IP address with your DNS server IP, and the domain name (oracle.com) with your domain name.
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

// If you face problems with the port even after disabling iptables, you can
// comment out the "listen-on port 53" lines to remove the specific port restriction.
options {
 listen-on port 53 { 127.0.0.1; 192.168.1.100; };
 listen-on-v6 port 53 { ::1; };
 directory  "/var/named";
 dump-file  "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
// Enter the IP subnets of your machines, or comment out allow-query to allow all IPs.
 allow-query { 192.168.1.0/24; 192.168.2.0/24; };
 recursion yes;
 dnssec-enable yes;
 dnssec-validation yes;


 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";
 managed-keys-directory "/var/named/dynamic";
};


logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};


// Forward lookup zone
// Here "oracle.com.zone" is my forward lookup zone file.
zone "oracle.com" IN {
        type master;
        file "oracle.com.zone";
        allow-update { none; };
};

// Reverse lookup zone
// Here "1.168.192.in-addr.arpa" is my reverse lookup zone file, located under /var/named.
zone "1.168.192.in-addr.arpa." IN {
        type master;
        file "1.168.192.in-addr.arpa";
        allow-update { none; };
};


include "/etc/named.rfc1912.zones";

3. Create a forward lookup zone file under the /var/named directory as shown below. Here oracle.com.zone is my forward zone file and oracle.com is my domain name. Replace them with your domain name and your IPs, and enter your node information as below.

# vi /var/named/oracle.com.zone
$TTL    86400
@               IN SOA  oracle.com. root.oracle.com. (
                                        42              ; serial
                                        3H              ; refresh
                                        15M             ; retry
                                        1W              ; expiry
                                        1D )            ; minimum
                          IN NS           oracle.com.
localhost                IN A            127.0.0.1
oracle.com      IN A            192.168.1.100
racnode1                 IN A    192.168.1.101
racnode2                 IN A    192.168.1.102
racnode1-priv            IN A    192.168.2.101
racnode2-priv            IN A    192.168.2.102
racnode1-vip             IN A    192.168.1.103
racnode2-vip             IN A    192.168.1.104
rac-scan                 IN A    192.168.1.44
rac-scan                 IN A    192.168.1.55
rac-scan                 IN A    192.168.1.66

4. Create the reverse lookup zone file as shown below. Here 1.168.192.in-addr.arpa is my reverse lookup zone file. Enter your node information as below.

# vi /var/named/1.168.192.in-addr.arpa
$ORIGIN 1.168.192.in-addr.arpa.
$TTL 1H
@       IN      SOA     dns.oracle.com.     root.dns.oracle.com. (      2
                                                3H
                                                1H
                                                1W
                                                1H )
1.168.192.in-addr.arpa.         IN NS      dns.oracle.com.
101     IN PTR  racnode1.oracle.com.
102     IN PTR  racnode2.oracle.com.
103     IN PTR  racnode1-vip.oracle.com.
104     IN PTR  racnode2-vip.oracle.com.
44      IN PTR  rac-scan.oracle.com.
55      IN PTR  rac-scan.oracle.com.
66      IN PTR  rac-scan.oracle.com.
Enable and start the named service on the DNS server as follows (restart it after any later configuration change).
[root@dns ~]# chkconfig named on 
[root@dns ~]# service named start 
[root@dns ~]# service named restart 

5. On each RAC node, make sure the /etc/resolv.conf file contains entries similar to the following. Replace oracle.com with your domain name and 192.168.1.100 with your name server IP address. Add the "options timeout" and "options attempts" parameters to avoid the "PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms on following nodes" error.
search oracle.com 
nameserver 192.168.1.100
options timeout:1
options attempts:5

6. Execute the following command as the root user to prevent /etc/resolv.conf from being overwritten on a network service restart or system reboot.
# chattr +i /etc/resolv.conf 

7. The DNS configuration is now complete. Check on each node that the SCAN name resolves to all three IP addresses.
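A quick check with nslookup from one of the nodes should return all three SCAN addresses defined in the zone files above:
[root@racnode1 ~]# nslookup rac-scan
Server:         192.168.1.100
Address:        192.168.1.100#53

Name:   rac-scan.oracle.com
Address: 192.168.1.44
Name:   rac-scan.oracle.com
Address: 192.168.1.55
Name:   rac-scan.oracle.com
Address: 192.168.1.66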


Note: Disable the firewall and iptables if you face issues with nslookup.

Shared Storage Configuration:


1. Shut down all the RAC nodes and follow the steps below to configure the shared storage for the cluster installation.

2. Go to the racnode1 Settings in VirtualBox and select the Storage settings. Then click the Add Hard Disk button to create a disk.


Click the Create New Disk button.

Choose VDI (VirtualBox Disk Image) as the virtual disk type and click Next.

Choose Fixed Size and click Next.

Specify the name, location and size of the virtual disk image, then click the Create button.



3. Make the created disks shareable between all the nodes as follows.
Open the Virtual Media Manager from the File menu on the VirtualBox home screen. Select the disk and click the Modify button at the top of the screen.

Select Shareable and click the OK button.

4. Now add the shared disks to racnode2. Go to the racnode2 settings, select the Storage options, click the Add Hard Disk button, and then click Choose Existing Disk.

Select the disks from the list for shared storage.


5. Start the nodes and execute the command below as the root user on racnode1 to list the created disks.
# fdisk -l 
Then run fdisk against each disk and press the following key sequence to create a new partition on each disk, as shown in the example session below:
n, p, 1, <enter>, <enter>, w
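For example, a session for /dev/sdb looks like the following (prompts abbreviated; repeat for /dev/sdc):
[root@racnode1 ~]# fdisk /dev/sdb
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (...): <enter>
Last sector (...): <enter>
Command (m for help): w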


After the partitioning completes, my partitioned disks are the following:
[root@racnode1 ~]# ls -ltr /dev/sd* 

brw-rw---- 1 root disk 8,  0 Oct 16 17:48 /dev/sda 
brw-rw---- 1 root disk 8,  1 Oct 16 17:48 /dev/sda1
brw-rw---- 1 root disk 8,  3 Oct 16 17:48 /dev/sda3
brw-rw---- 1 root disk 8,  2 Oct 16 17:48 /dev/sda2
brw-rw---- 1 root disk 8, 16 Oct 16 17:59 /dev/sdb
brw-rw---- 1 root disk 8, 32 Oct 16 17:59 /dev/sdc
brw-rw---- 1 root disk 8, 17 Oct 16 18:55 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Oct 16 18:55 /dev/sdc1
[root@racnode1 ~]# 
 

6. Configure oracleasm on all the RAC nodes as shown below.
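A typical interactive configuration, run as the root user on every node, looks like the following (the answers match the oracle user and dba group created earlier):
[root@racnode1 ~]# oracleasm configure -i
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
[root@racnode1 ~]# oracleasm init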


7. Create the disks as ASM disks by using oracleasm from any one of the nodes, as shown below.
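A sketch using the partitions created earlier (the disk labels ASMDATA1 and ASMDATA2 are my choice; use your own):
[root@racnode1 ~]# oracleasm createdisk ASMDATA1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk ASMDATA2 /dev/sdc1
Writing disk header: done
Instantiating disk: done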

Execute the commands below on all the other nodes as well, and you should be able to see all the ASM disks.
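On the remaining nodes, rescan and list the disks; the output should show the disks created above:
[root@racnode2 ~]# oracleasm scandisks
Reloading disk partitions: done
Scanning system for ASM disks...
Instantiating disk "ASMDATA1"
Instantiating disk "ASMDATA2"
[root@racnode2 ~]# oracleasm listdisks
ASMDATA1
ASMDATA2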
Configure UDEV Rules:

8. Set the I/O scheduler to deadline for the ASM disks on all the nodes. Modify the disk names to match your environment.
[root@racnode1 ~]# echo deadline > /sys/block/sdb/queue/scheduler
[root@racnode1 ~]# echo deadline > /sys/block/sdc/queue/scheduler
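Note that a value echoed into /sys does not persist across reboots. One way to make the setting persistent is a small udev rule; the file name below is my choice, and the KERNEL pattern assumes the ASM disks are sdb and sdc:
# vi /etc/udev/rules.d/60-asm-scheduler.rules
ACTION=="add|change", KERNEL=="sd[bc]", ATTR{queue/scheduler}="deadline"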

9. Now configure your udev rules, as shown here on racnode1.
Create the /etc/scsi_id.config file and add the following line to it, to configure SCSI devices as trusted.
options=-g
Get the SCSI IDs of the disks by using the commands below.
[root@racnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VBf72914ad-884ce4f5
[root@racnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VBeafc0594-4e0ec784
Using the values above, edit the "/etc/udev/rules.d/99-oracle-asmdevices.rules" file and add the following two entries. All parameters for a single entry must be on the same line.
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBf72914ad-884ce4f5", SYMLINK+="oracleasm/ASMDATA1", OWNER="oracle", GROUP="dba", MODE="0660"

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBeafc0594-4e0ec784", SYMLINK+="oracleasm/ASMDATA2", OWNER="oracle", GROUP="dba", MODE="0660" 
Load updated block device partition tables with the below commands.
[root@racnode1 ~]# /sbin/partprobe /dev/sdb1
[root@racnode1 ~]# /sbin/partprobe /dev/sdc1


Reload the udev rules and trigger them so that the new device names appear. Note that the start_udev command from older releases does not exist on Linux 7; use udevadm instead.
# /sbin/udevadm control --reload-rules
# /sbin/udevadm trigger --type=devices --action=change
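Verify that the device symlinks have been created (listing abbreviated):
[root@racnode1 ~]# ls -l /dev/oracleasm/
lrwxrwxrwx 1 root root 7 ... ASMDATA1 -> ../sdb1
lrwxrwxrwx 1 root root 7 ... ASMDATA2 -> ../sdc1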
10. Add the following to the .bash_profile file of the oracle user on each node.
 umask 022
11. On the node where you plan to initiate the installation, open a terminal, log in as the oracle user, and optionally execute the Cluster Verification Utility (cluvfy).
[oracle@racnode1 ~]$ cd /u01/stage/grid/
[oracle@racnode1 grid]$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose 

This will fail if user equivalence (passwordless SSH login) has not been configured from each node to the other node. Passwordless SSH is also optional at this point, because the installer allows us to configure user equivalence during the installation.
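If you prefer to configure user equivalence manually beforehand, a minimal sketch as the oracle user (repeat on both nodes so that each node can log in to the other):
[oracle@racnode1 ~]$ ssh-keygen -t rsa
[oracle@racnode1 ~]$ ssh-copy-id oracle@racnode1
[oracle@racnode1 ~]$ ssh-copy-id oracle@racnode2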

Oracle 12c Grid Infrastructure installation step by step:

1. Now start the installation from the extracted software directory (/u01/stage/grid) by executing "./runInstaller".

Installation Option:

2. Select Install and Configure Oracle Grid Infrastructure for a Cluster from the given options, as we are going to install and configure a two node RAC. Then click the Next button.

Cluster Type:


3. Select Configure a Standard Cluster and click Next to configure a standard cluster. The Flex Cluster type was introduced in 12c for high scalability and performance; GNS configuration is required for a Flex Cluster installation.

Installation Type:


4. Select Advanced Installation and click the Next button. A Typical Installation installs a default configuration.

Product Languages:

5. The Product Languages screen allows you to select the language for your product. Select a language from the available options.

Grid Plug and Play:

6. Enter the details for the Cluster Name, SCAN Name and SCAN Port. Then uncheck the Configure GNS check box, as we are not using GNS in our environment, and click the Next button.

Cluster Node Information:

7. In the Cluster Node Information window, click the Add button and add the Public Hostname and Virtual Hostname of the second node.


8. Click SSH Connectivity and enter the oracle user password. Click the Setup button to configure SSH connectivity between the nodes. After a successful SSH setup, click the Next button.

Network Interface Usage:

9. Select the private and public interfaces for the grid infrastructure and click the Next button.

Storage Option:

10. The Storage Option window is displayed as below. Accept the default, Use Standard ASM for Storage, as we are going to configure the grid with ASM storage, and click the Next button.

Create ASM Disk Group:

11. Enter a disk group name to create a disk group on the selected ASM disks. Select the desired redundancy option as shown in the picture below and click the Next button.

ASM Password:

12. Select either the same-password or the different-passwords option and specify the password(s) for the administrative users to access the ASM instance.

Failure Isolation:

13. Accept the default Failure Isolation Support option and click the Next button.

Management Options:

14. If you want to configure Enterprise Manager Cloud Control, select Register with Enterprise Manager (EM) Cloud Control and enter the details. If you don't want to configure it, accept the default and click Next to proceed with the installation.

Operating System Groups:

15. Select the operating system group for each role in this section and click the Next button.

Installation Locations:


16. Enter the paths for the Oracle base and the grid software installation. Click the Next button.

Create Inventory:


17. Select the path for the inventory location and click the Next button.

Root Script Execution:


18. If you want the root scripts to run automatically, enter the password for the root user. If you want to run the root scripts manually, accept the default and click the Next button.

Summary:

19. The installer performs the prerequisite checks and lists any warnings and errors. Make sure that no errors are present on this screen before proceeding with the installation. Next, the screen below is shown; review the listed information and click the Install button.

Install Products:


20. The installation will proceed; wait for it to complete.

Finish:


21. A successful installation displays the screen below. Click the Close button to exit the installation window.


The Oracle 12.1.0.2 Grid Infrastructure installation completed successfully.

Oracle 12c RDBMS Software Installation:


Oracle 12c Database Creation:

Read more at Oracle 12c (12.1.0.2) RAC Database creation.
