Installation Guide for RAC 11.2.0.1 on VirtualBox (4.2.0)
List of Topics (Linux, Database, RAC, EBS)
Click Here for installing RAC in VMware using Openfiler Software
(Prepare the SAN storage area using the Openfiler software, discover your SAN storage in VMware, and the rest of the steps are the same as followed in this blog.)
I have provided the links for the complete installations below:
- Openfiler Installation and configuration,
- Linux Installation
- Grid Installation
- Database Installation
- Oracle Database Documentation >> Installing and Upgrading >> Linux Installation Guides >> Grid Infrastructure Installation Guide
- Certification Information for Oracle Database on Linux x86-64 (Doc ID 1304727.2)
Client tools: 1. PuTTY  2. WinSCP  3. VNC Viewer
================================================================================================================
Download PuTTY: latest release
putty.exe (the SSH and Telnet client itself)
Download link: PuTTY 64-bit | Download link: PuTTY 32-bit
================================================================================================================
Installation of a two-node RAC database using VirtualBox, with 2 GB of RAM on each node
- If you are using VirtualBox for the first time, enable Intel Virtualization Technology (Intel VT / VT-x/VT-d) in your BIOS settings.
- Install VirtualBox software version 4.2.0 (key not required).
- Install VNC Viewer 5.3.0 (Key: N7N4B-LBJ3Q-J4AYM-BB5MD-X8RYA)
Host Machine Details:  OS: Win7   | RAM: 8 GB | Bit version: 64-bit
Guest Machine Details: OS: OEL 5.3 | RAM: 2 GB | Bit version: 32-bit
Grid & Database Version: Grid 11.2.0.1 | Database 11.2.0.1

Machine 1 (RAC1): Hard Disk 120 GB | Disk Type: Dynamically allocated | Network Adapters: 2
Machine 2 (RAC2): Hard Disk 120 GB | Disk Type: Dynamically allocated | Network Adapters: 2
Shareable Disk (SAN Storage): Disk Size 20 GB | Disk Type: Fixed Size
Node           | Public IP    | Private IP | Database | Users        | SID          | Cluster
RAC1           | 192.168.1.11 | 10.0.0.11  | DELL     | oracle, grid | DELL1, +ASM1 | dellc
RAC2           | 192.168.1.12 | 10.0.0.12  | DELL     | oracle, grid | DELL2, +ASM2 | dellc
RAC3           | 192.168.1.13 | 10.0.0.13  | DELL     | oracle, grid | DELL3, +ASM3 | dellc
SAN for VMware | 192.168.1.40 |            |          | root         |              |
UUID ERROR in VirtualBox
Open a command prompt:
cd C:\Program Files\Oracle\VirtualBox\
VBOXMANAGE.EXE internalcommands sethduuid "D:\RAC Installations\RAC1\RAC1.vdi"
Hints Before Installing Linux:
Shareable disk hint: do not attach the shareable disk while installing Linux.
Hint 1: Install the Linux OS in RAC1 only, then copy and unzip the Grid & Oracle software.
Hint 2: After the installation, copying, and unzipping are completed in RAC1, shut down the machine.
Hint 3: Now install Linux on RAC2, and shut down the machine after the installation.
Hint 4: Create a new disk on RAC1 for sharing (san1) with the "Fixed size" option.
Hint 5: Make it shareable from File ---> Virtual Media Manager.
Hint 6: For machine 2 (RAC2), just attach the shareable disk (san1), start both machines (RAC1, RAC2), and verify with fdisk -l.
Hint 7: Both machines should see the attached disk.
Hint 8: Partition the disk (fdisk /dev/sdb) on node 1; the status of the disk will automatically change on node 2. You can verify with fdisk -l.
Hint 9: After completing the above steps, check network connectivity with the ping command for both the Public and Private IPs.
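If you prefer the command line to the GUI for Hints 4-6, the same shared disk can be created and attached with VBoxManage. This is only a minimal sketch: the disk path follows the D:\RAC Installations folder used above, and the controller name "SATA" is an assumption, so check your VM's storage settings for the actual controller name. Run it on the Windows host from the VirtualBox installation directory, with both VMs powered off.
cd C:\Program Files\Oracle\VirtualBox\
VBoxManage createhd --filename "D:\RAC Installations\san1.vdi" --size 20480 --format VDI --variant Fixed
VBoxManage modifyhd "D:\RAC Installations\san1.vdi" --type shareable
VBoxManage storageattach RAC1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "D:\RAC Installations\san1.vdi"
VBoxManage storageattach RAC2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "D:\RAC Installations\san1.vdi"
After attaching the disk, fdisk -l inside each guest should show the new device.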
Enable Virtualization Technology
Configuration of IP address in Windows:
Creation of Machines RAC1 and RAC2
#--- Preparation of Machine RAC1
Note: to complete the installation faster, allocate 4 GB of RAM; after the installation you can reduce it to 2 GB.
#---Choose Your Linux ISO File
#--- Disable USB and Audio --- and OK
#--- Make the same setup for RAC2; change the settings as shown for RAC1
Creation of Machines RAC1 and RAC2 Completed
Installation of Linux In RAC1
#--- Here, provide the IPs for RAC1 in sequence: 192.168.1.11 is your Public IP and 10.0.0.11 is your Private IP
#--- For machine RAC2, provide the IPs in sequence: 192.168.1.12 is your Public IP and 10.0.0.12 is your Private IP
#--- Ctrl+A ---> Right Click ---> Select all optional packages
#--- Do the same for all
- Desktop Environments
- Applications
- Development
- Servers
- Base System
- Cluster Storage
- Clustering
- Virtualization
Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server (Doc ID 728346.1)
Defining a "default RPMs" installation of the Oracle Linux (OL) OS (Doc ID 401167.1)
The Oracle Validated RPM Package for Installation Prerequisites (Doc ID 437743.1)
12c, Release 1 (12.1) Grid Infrastructure Installation Guide
#--- Now check the connectivity to 192.168.1.11 using PuTTY and WinSCP
#--- Using WinSCP, copy the RPMs, Grid, and Oracle software to the location /shrf so that we can later share this folder with the other machines using the NFS service (see the sketch below)
#--- Unzip the Grid and Oracle software as the root user; since the RAM is still set to 4 GB, this will run faster
#--- Then shut down the machine using init 0
#--- Change the RAM from 4 GB to 2 GB
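The /shrf staging area mentioned above can be exported over NFS once both nodes are up. This is only a minimal sketch, assuming /shrf already exists on RAC1 and the portmap and nfs services are installed; adjust the subnet and mount options to your environment.
#--- On RAC1: export /shrf to the 192.168.1.0/24 network
echo "/shrf 192.168.1.0/24(ro,sync)" >> /etc/exports
service nfs start
chkconfig nfs on
exportfs -rv
#--- On RAC2: mount the share
mkdir -p /shrf
mount -t nfs rac1:/shrf /shrf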
#--- Now repeat the above steps on RAC2 to install Linux, and provide the network details as follows
- Public IP (eth0): 192.168.1.12
- Private IP (eth1): 10.0.0.12
- Hostname: rac2.dell.com
- Gateway: 192.168.1.2
RAC2 - Network Parameters
#--- then shut down the machine (RAC2) using init 0
#--- Now create a new disk in RAC1 with "Fixed size", make it "Shareable" from File ---> Virtual Media Manager, and then attach it to RAC2
Creation of SAN Storage
-->Open,
--> OK
#--- Reduce the RAM of both machines (RAC1, RAC2) to 2 GB
Setting up the hosts file:
RAC - 1
|
RAC - 2
|
# login as: root
root@192.168.1.11's password:
Last login: Tue May 10 09:30:35 2016
[root@rac1 ~]# hostname
rac1.dell.com
[root@rac1 ~]# hostname -i
192.168.1.11
[root@rac1 ~]# vi
/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##-- Public-IP
192.168.1.11 rac1.dell.com rac1
192.168.1.12 rac2.dell.com rac2
192.168.1.13 rac3.dell.com rac3
##-- Private-IP
10.0.0.11 rac1-priv.dell.com rac1-priv
10.0.0.12 rac2-priv.dell.com rac2-priv
10.0.0.13 rac3-priv.dell.com rac3-priv
##-- Virtual-IP
192.168.1.21 rac1-vip.dell.com rac1-vip
192.168.1.22 rac2-vip.dell.com rac2-vip
192.168.1.23 rac3-vip.dell.com rac3-vip
##-- SCAN IP
192.168.1.30 dellc-scan.dell.com dellc-scan
192.168.1.31 dellc-scan.dell.com dellc-scan
192.168.1.32 dellc-scan.dell.com dellc-scan
##-- Storage-IP
192.168.1.40 san.dell.com san
[root@rac1 ~]#
[root@rac1 ~]# service network restart
Shutting down interface eth0:        [ OK ]
Shutting down interface eth1:        [ OK ]
Shutting down loopback interface:    [ OK ]
Bringing up loopback interface:      [ OK ]
Bringing up interface eth0:          [ OK ]
Bringing up interface eth1:          [ OK ]
[root@rac1 ~]#
|
login as: root
root@192.168.1.12's password:
Last login: Tue May 10 10:44:19 2016
[root@rac2 ~]# hostname
rac2.dell.com
[root@rac2 ~]# hostname -i
192.168.1.12
[root@rac2 ~]# vi
/etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##-- Public-IP
192.168.1.11 rac1.dell.com rac1
192.168.1.12 rac2.dell.com rac2
192.168.1.13 rac3.dell.com rac3
##-- Private-IP
10.0.0.11 rac1-priv.dell.com rac1-priv
10.0.0.12 rac2-priv.dell.com rac2-priv
10.0.0.13 rac3-priv.dell.com rac3-priv
##-- Virtual-IP
192.168.1.21 rac1-vip.dell.com rac1-vip
192.168.1.22 rac2-vip.dell.com rac2-vip
192.168.1.23 rac3-vip.dell.com rac3-vip
##-- SCAN IP
192.168.1.30 dellc-scan.dell.com dellc-scan
192.168.1.31 dellc-scan.dell.com dellc-scan
192.168.1.32 dellc-scan.dell.com dellc-scan
##-- Storage-IP
192.168.1.40 san.dell.com san
[root@rac2 ~]
[root@rac2 ~]# service network restart
Shutting down interface eth0:        [ OK ]
Shutting down interface eth1:        [ OK ]
Shutting down loopback interface:    [ OK ]
Bringing up loopback interface:      [ OK ]
Bringing up interface eth0:          [ OK ]
Bringing up interface eth1:          [ OK ]
[root@rac2 ~]#
|
RAC - 1
|
RAC - 2
|
[root@rac1 ~]# nslookup dellc-scan
[root@rac1 ~]# ping 192.168.1.11
PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
64 bytes from 192.168.1.11: icmp_seq=1 ttl=64 time=0.041
ms
64 bytes from 192.168.1.11: icmp_seq=2 ttl=64 time=0.171
ms
64 bytes from 192.168.1.11: icmp_seq=3 ttl=64 time=0.055
ms
--- 192.168.1.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2071ms
rtt min/avg/max/mdev = 0.041/0.089/0.171/0.058 ms
[root@rac1 ~]#
[root@rac1 ~]# ping 192.168.1.12
PING 192.168.1.12 (192.168.1.12) 56(84) bytes of data.
64 bytes from 192.168.1.12: icmp_seq=1 ttl=64 time=3.03 ms
64 bytes from 192.168.1.12: icmp_seq=2 ttl=64 time=0.913
ms
64 bytes from 192.168.1.12: icmp_seq=3 ttl=64 time=0.880
ms
--- 192.168.1.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2000ms
rtt min/avg/max/mdev = 0.880/1.608/3.032/1.007 ms
[root@rac1 ~]#
[root@rac1 ~]# ping 10.0.0.11
PING 10.0.0.11 (10.0.0.11) 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.121 ms
64 bytes from 10.0.0.11: icmp_seq=2 ttl=64 time=0.289 ms
64 bytes from 10.0.0.11: icmp_seq=3 ttl=64 time=0.323 ms
--- 10.0.0.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2001ms
rtt min/avg/max/mdev = 0.121/0.244/0.323/0.089 ms
[root@rac1 ~]#
[root@rac1 ~]# ping 10.0.0.12
PING 10.0.0.12 (10.0.0.12) 56(84) bytes of data.
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=3.12 ms
64 bytes from 10.0.0.12: icmp_seq=2 ttl=64 time=0.550 ms
64 bytes from 10.0.0.12: icmp_seq=3 ttl=64 time=1.40 ms
--- 10.0.0.12 ping statistics ---
3 packets transmitted, 4 received, 0% packet loss, time
3000ms
rtt min/avg/max/mdev = 0.550/1.532/3.120/0.965 ms
[root@rac1 ~]#
[root@rac1 ~]# ping 192.168.1.40
PING 192.168.1.40 (192.168.1.40) 56(84) bytes of data.
64 bytes from 192.168.1.40: icmp_seq=1 ttl=64 time=0.340
ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=64 time=0.278
ms
64 bytes from 192.168.1.40: icmp_seq=3 ttl=64 time=0.276
ms
--- 192.168.1.40 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time
2999ms
rtt min/avg/max/mdev = 0.245/0.284/0.340/0.040 ms
[root@rac1 ~]#
|
[root@rac2 ~]# ping 192.168.1.12
PING 192.168.1.12 (192.168.1.12) 56(84) bytes of data.
64 bytes from 192.168.1.12: icmp_seq=1 ttl=64 time=0.043
ms
64 bytes from 192.168.1.12: icmp_seq=2 ttl=64 time=0.144
ms
64 bytes from 192.168.1.12: icmp_seq=3 ttl=64 time=0.176
ms
--- 192.168.1.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2001ms
rtt min/avg/max/mdev = 0.043/0.121/0.176/0.056 ms
[root@rac2 ~]#
[root@rac2 ~]# ping 192.168.1.11
PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data.
64 bytes from 192.168.1.11: icmp_seq=1 ttl=64 time=0.835
ms
64 bytes from 192.168.1.11: icmp_seq=2 ttl=64 time=0.898
ms
64 bytes from 192.168.1.11: icmp_seq=3 ttl=64 time=0.881
ms
--- 192.168.1.11 ping statistics ---
3 packets transmitted, 4 received, 0% packet loss, time
3001ms
rtt min/avg/max/mdev = 0.835/0.875/0.898/0.043 ms
[root@rac2 ~]#
[root@rac2 ~]# ping 10.0.0.11
PING 10.0.0.11 (10.0.0.11) 56(84) bytes of data.
64 bytes from 10.0.0.11: icmp_seq=1 ttl=64 time=0.993 ms
64 bytes from 10.0.0.11: icmp_seq=2 ttl=64 time=0.937 ms
64 bytes from 10.0.0.11: icmp_seq=3 ttl=64 time=1.18 ms
--- 10.0.0.11 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2000ms
rtt min/avg/max/mdev = 0.937/1.038/1.185/0.109 ms
[root@rac2 ~]#
[root@rac2 ~]# ping 10.0.0.12
PING 10.0.0.12 (10.0.0.12) 56(84) bytes of data.
64 bytes from 10.0.0.12: icmp_seq=1 ttl=64 time=0.213 ms
64 bytes from 10.0.0.12: icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from 10.0.0.12: icmp_seq=3 ttl=64 time=0.128 ms
--- 10.0.0.12 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time
2000ms
rtt min/avg/max/mdev = 0.057/0.132/0.213/0.065 ms
[root@rac2 ~]#
[root@rac2 ~]# ping 192.168.1.40
PING 192.168.1.40 (192.168.1.40) 56(84) bytes of data.
64 bytes from 192.168.1.40: icmp_seq=1 ttl=64 time=0.282
ms
64 bytes from 192.168.1.40: icmp_seq=2 ttl=64 time=0.232 ms
64 bytes from 192.168.1.40: icmp_seq=3 ttl=64 time=0.273
ms
--- 192.168.1.40 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time
2999ms
rtt min/avg/max/mdev = 0.232/0.350/0.616/0.155 ms
[root@rac2 ~]#
|
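To repeat these reachability checks quickly on either node, a small loop over the host names defined in /etc/hosts can be used. A minimal sketch (the host list is just the entries configured above):
#--- Quick reachability check for the public, private, and storage names
for h in rac1 rac2 rac1-priv rac2-priv san; do
    ping -c 2 $h > /dev/null 2>&1 && echo "$h OK" || echo "$h FAILED"
done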
Configuration of the Shared Storage on both nodes using the Openfiler software:
Node-1 Machine: RAC1
login as: root
root@192.168.1.11's password:
Last login: Mon Feb 20 14:45:28 2017
[root@rac1 ~]# hostname
rac1.dell.com
[root@rac1 ~]#
[root@rac1 ~]# hostname -i
192.168.1.11
[root@rac1 ~]#
[root@rac1 ~]# ifconfig
eth0 Link encap:Ethernet
HWaddr 00:0C:29:FB:3F:8D
inet addr:192.168.1.11
Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fefb:3f8d/64 Scope:Link
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:281 errors:0 dropped:0 overruns:0 frame:0
TX
packets:163 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX
bytes:47348 (46.2 KiB) TX bytes:25000 (24.4 KiB)
eth1 Link encap:Ethernet
HWaddr 00:0C:29:FB:3F:97
inet addr:10.0.0.11
Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::20c:29ff:fefb:3f97/64 Scope:Link
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:214 errors:0 dropped:0 overruns:0 frame:0
TX
packets:86 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX
bytes:36007 (35.1 KiB) TX bytes:17656 (17.2 KiB)
lo Link
encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP
LOOPBACK RUNNING MTU:16436 Metric:1
RX
packets:3758 errors:0 dropped:0 overruns:0 frame:0
TX
packets:3758 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX
bytes:8380416 (7.9 MiB) TX bytes:8380416 (7.9 MiB)
virbr0 Link encap:Ethernet HWaddr
EA:A2:66:7A:21:4C
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:0 errors:0 dropped:0 overruns:0 frame:0
TX
packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX
bytes:0 (0.0 b) TX bytes:3647 (3.5 KiB)
[root@rac1 ~]# fdisk -l
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1     38245   307202931   83  Linux
/dev/sda2           38246     41432    25599577+  82  Linux swap / Solaris
/dev/sda3           41433     43982    20482875   83  Linux
/dev/sda4           43983    130541   695285167+   5  Extended
/dev/sda5           43983    130541   695285136   83  Linux
[root@rac1 ~]#
[root@rac1 ~]#
The disk is not available yet, so discover your IQN:
[root@rac1 ~]# iscsiadm -m discovery -t
st -p 192.168.1.40
192.168.1.40:3260,1
iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0
[root@rac1 ~]#
[root@rac1 ~]# service iscsi restart
iscsiadm: No matching sessions found
Stopping iSCSI daemon:
iscsid is
stopped
[ OK
]
Starting iSCSI daemon: FATAL: Error inserting cxgb3i
(/lib/modules/2.6.32-300.10.1.el5uek/kernel/drivers/scsi/cxgbi/cxgb3i/cxgb3i.ko):
Unknown symbol in module, or unknown parameter (see dmesg)
FATAL: Error inserting ib_iser
(/lib/modules/2.6.32-300.10.1.el5uek/kernel/drivers/infiniband/ulp/iser/ib_iser.ko):
Unknown symbol in module, or unknown parameter (see dmesg)
[ OK
]
[ OK
]
Setting up iSCSI targets: Logging in to [iface: default,
target: iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0, portal:
192.168.1.40,3260] (multiple)
Login to [iface: default, target:
iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0, portal: 192.168.1.40,3260]
successful.
[ OK
]
[root@rac1 ~]# chkconfig iscsi on
[root@rac1 ~]#
[root@rac1 ~]# chkconfig --list | grep iscsi
iscsi
0:off 1:off
2:on 3:on 4:on 5:on
6:off
iscsid
0:off 1:off 2:off
3:on 4:on 5:on 6:off
[root@rac1 ~]#
[root@rac1 ~]# cat /etc/inittab
[root@rac1 ~]# who -r
run-level 5  2018-01-30 07:57  last=S
[root@rac1 ~]#
[root@rac1 ~]# fdisk -l
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1     38245   307202931   83  Linux
/dev/sda2           38246     41432    25599577+  82  Linux swap / Solaris
/dev/sda3           41433     43982    20482875   83  Linux
/dev/sda4           43983    130541   695285167+   5  Extended
/dev/sda5           43983    130541   695285136   83  Linux
Disk /dev/sdb: 102.3 GB, 102374572032 bytes
255 heads, 63 sectors/track, 12446 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain
a valid partition table
[root@rac1 ~]#
|
Node-2 Machine: RAC2
login as: root
root@192.168.1.12's password:
Last login: Mon Feb 20 14:47:45 2017
[root@rac2 ~]#
[root@rac2 ~]#
[root@rac2 ~]# hostname
rac2.dell.com
[root@rac2 ~]# hostname -i
192.168.1.12
[root@rac2 ~]#
[root@rac2 ~]# ifconfig
eth0 Link encap:Ethernet
HWaddr 00:0C:29:36:9C:16
inet addr:192.168.1.12
Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe36:9c16/64 Scope:Link
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:200 errors:0 dropped:0 overruns:0 frame:0
TX
packets:151 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX
bytes:39700 (38.7 KiB) TX bytes:26672 (26.0 KiB)
eth1 Link encap:Ethernet
HWaddr 00:0C:29:36:9C:20
inet addr:10.0.0.12
Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::20c:29ff:fe36:9c20/64 Scope:Link
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:216 errors:0 dropped:0 overruns:0 frame:0
TX
packets:95 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX
bytes:37580 (36.6 KiB) TX bytes:19182 (18.7 KiB)
lo Link
encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP
LOOPBACK RUNNING MTU:16436 Metric:1
RX
packets:4374 errors:0 dropped:0 overruns:0 frame:0
TX
packets:4374 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX
bytes:9177312 (8.7 MiB) TX bytes:9177312 (8.7 MiB)
virbr0 Link encap:Ethernet HWaddr
0E:D4:79:B6:C8:15
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP
BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX
packets:0 errors:0 dropped:0 overruns:0 frame:0
TX
packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX
bytes:0 (0.0 b) TX bytes:4405 (4.3 KiB)
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1     38245   307202931   83  Linux
/dev/sda2           38246     41432    25599577+  82  Linux swap / Solaris
/dev/sda3           41433     43982    20482875   83  Linux
/dev/sda4           43983    130541   695285167+   5  Extended
/dev/sda5           43983    130541   695285136   83  Linux
[root@rac2 ~]#
[root@rac2 ~]#
The disk is not available yet, so discover your IQN:
[root@rac2 ~]# iscsiadm -m discovery -t
st -p 192.168.1.40
192.168.1.40:3260,1
iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0
[root@rac2 ~]#
[root@rac2 ~]# service iscsi restart
iscsiadm: No matching sessions found
Stopping iSCSI daemon:
iscsid is
stopped
[ OK
]
Starting iSCSI daemon: FATAL: Error inserting cxgb3i
(/lib/modules/2.6.32-300.10.1.el5uek/kernel/drivers/scsi/cxgbi/cxgb3i/cxgb3i.ko):
Unknown symbol in module, or unknown parameter (see dmesg)
FATAL: Error inserting ib_iser
(/lib/modules/2.6.32-300.10.1.el5uek/kernel/drivers/infiniband/ulp/iser/ib_iser.ko):
Unknown symbol in module, or unknown parameter (see dmesg)
[ OK ]
[ OK ]
Setting up iSCSI targets: Logging in to [iface: default,
target: iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0, portal:
192.168.1.40,3260] (multiple)
Login to [iface: default, target:
iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0, portal: 192.168.1.40,3260]
successful.
[ OK
]
[root@rac2 ~]# chkconfig iscsi on
[root@rac2 ~]#
[root@rac2 ~]# chkconfig --list | grep iscsi
iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@rac2 ~]#
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1     38245   307202931   83  Linux
/dev/sda2           38246     41432    25599577+  82  Linux swap / Solaris
/dev/sda3           41433     43982    20482875   83  Linux
/dev/sda4           43983    130541   695285167+   5  Extended
/dev/sda5           43983    130541   695285136   83  Linux
Disk /dev/sdb: 102.3 GB, 102374572032 bytes
255 heads, 63 sectors/track, 12446 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain
a valid partition table
[root@rac2 ~]#
|
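After the iSCSI login succeeds on both nodes, the session can be verified and the login made persistent across reboots. A minimal sketch, using the IQN and portal discovered above; the chkconfig iscsi on step already covers most setups, so the node.startup update is optional insurance.
iscsiadm -m session
iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.5debe1b1c4b0 -p 192.168.1.40:3260 --op update -n node.startup -v automatic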
RAC - 1
|
RAC - 2
|
[root@rac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor
Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in
memory only,
until you decide to write them. After that, of course, the
previous
content won't be recoverable.
The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than
1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of
LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be
corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610,
default 2610):
Using default value 2610
Command (m for help): p
Disk /dev/sdb: 102.3 GB, 102374572032 bytes
255 heads, 63 sectors/track, 12446 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdb1               1     12446    99972463+  83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]#
|
|
Sample output for
both RAC-1 and RAC-2
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1     25496   204796588+  83  Linux
/dev/sda2           25497     29320    30716280   82  Linux swap / Solaris
/dev/sda3           29321     33144    30716280   83  Linux
/dev/sda4           33145    130541   782341402+   5  Extended
/dev/sda5           33145    130541   782341371   83  Linux

Disk /dev/sdb: 102.3 GB, 102374572032 bytes
255 heads, 63 sectors/track, 12446 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdb1               1     12446    99972463+  83  Linux
[root@rac1 ~]#
|
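If the second node does not immediately show /dev/sdb1 after the partition is created on node 1, the kernel can re-read the partition table without a reboot. A minimal sketch (partprobe ships with the parted package on OEL 5; rebooting the node achieves the same result):
#--- On RAC2, after node 1 has written the partition table
partprobe /dev/sdb
fdisk -l /dev/sdb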
Grid RPM Installation on both Nodes:
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# su
cd /u01/sftwr/grid/rpm
[root@rac1 ~]# export
CVUQDISK_GRP=oinstall
[root@rac1 ~]#
[root@rac1 D]# echo $CVUQDISK_GRP
[root@rac1 D]# rpm -ivh
cvuqdisk-1.0.9-1.rpm
scp cvuqdisk* root@rac2:/u01 yes |
[root@rac2 rpms]# su
cd /u01 rpm -qa cvuqdisk*
rpm -ivh
cvuqdisk*
pwd |
Sample output for
RAC-1 and RAC2
[root@rac1 G]# rpm
-ivh cvuqdisk-1.0.7-1.rpm
Preparing...
########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk
########################################### [100%]
[root@rac1 G]#
|
Deleting and Recreating the Users with New Passwords:
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# su
userdel oracle
userdel grid
groupdel oinstall
groupdel dba
groupdel asmdba
groupdel asmadmin
groupdel asmoper
rm -rf /var/mail/oracle
rm -rf /home/oracle/
rm -rf /var/mail/grid
rm -rf /home/grid
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 asmdba
groupadd -g 1003
asmadmin
groupadd -g 1004 asmoper
useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin oracle
useradd -u 1101 -g
oinstall -G dba,asmdba,asmadmin,asmoper grid
passwd grid
Changing password for user grid.
New UNIX password: grid
BAD PASSWORD: it is too short
Retype new UNIX password: grid
passwd: all authentication tokens updated successfully.
[root@rac1 rpms]# passwd
oracle
Changing password for user oracle.
New UNIX password: oracle
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: oracle
passwd: all authentication tokens updated successfully.
[root@rac1 ~]#
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ id
uid=1100(oracle) gid=1000(oinstall)
groups=1000(oinstall),1001(dba),1002(asmdba),1003(asmadmin)
[oracle@rac1 ~]$
[oracle@rac1 ~]$ exit
logout
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ id
uid=1101(grid) gid=1000(oinstall)
groups=1000(oinstall),1001(dba),1002(asmdba),1003(asmadmin),1004(asmoper)
[grid@rac1 ~]$ exit
logout
[root@rac1 ~]#
|
[root@rac2 rpms]# su
userdel oracle
userdel grid
groupdel oinstall
groupdel dba
groupdel asmdba
groupdel asmadmin
groupdel asmoper
rm -rf /var/mail/oracle
rm -rf /home/oracle/
rm -rf /var/mail/grid
rm -rf /home/grid
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 asmdba
groupadd -g 1003
asmadmin
groupadd -g 1004 asmoper
useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin oracle
useradd -u 1101 -g
oinstall -G dba,asmdba,asmadmin,asmoper grid
passwd grid
Changing password for user grid.
New UNIX password:grid
BAD PASSWORD: it is too short
Retype new UNIX password: grid
passwd: all authentication tokens updated successfully.
[root@rac2 rpms]# passwd
oracle
Changing password for user oracle.
New UNIX password: oracle
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:oracle
passwd: all authentication tokens updated successfully.
[root@rac2 rpms]#
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ id
uid=1100(oracle) gid=1000(oinstall)
groups=1000(oinstall),1001(dba),1002(asmdba),1003(asmadmin)
[oracle@rac2 ~]$
[oracle@rac2 ~]$ exit
logout
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ id
uid=1101(grid) gid=1000(oinstall)
groups=1000(oinstall),1001(dba),1002(asmdba),1003(asmadmin),1004(asmoper)
[grid@rac2 ~]$ exit
logout
[root@rac2 ~]#
|
Setting Directories and Permissions:
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# su
mkdir -p /u01/app/grid
mkdir -p
/u01/app/grid_home
mkdir -p /u01/app/oracle
chown -R grid:oinstall
/u01/
chown -R oracle:oinstall
/u01/app/oracle
chmod -R 775 /u01
pwd |
[root@rac2 rpms]# su
mkdir -p /u01/app/grid
mkdir -p
/u01/app/grid_home
mkdir -p /u01/app/oracle
chown -R grid:oinstall
/u01/
chown -R oracle:oinstall
/u01/app/oracle
chmod -R 775 /u01
pwd |
Setting limits.conf:
RAC - 1
|
RAC - 2
|
[root@rac1
rpms]#
cp /etc/security/limits.conf /etc/security/limits.conf_bkp vi /etc/security/limits.conf
G$
o
#---Paste the below values at bottom of file
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
grid hard stack 32768
# Recommended stack hard limit 32MB for grid installations
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock unlimited
oracle hard memlock unlimited
oracle soft stack 10240
oracle hard stack 32768
|
[root@rac2
rpms]#
cp /etc/security/limits.conf /etc/security/limits.conf_bkp vi /etc/security/limits.conf
G$
o
#---Paste the below values at bottom of file
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
grid hard stack 32768
# Recommended stack hard limit 32MB for grid installations
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock unlimited
oracle hard memlock unlimited
oracle soft stack 10240
oracle hard stack 32768
|
Save and
Exit (:wq)
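Once the pam_limits entry is added in the next step and you log in again as grid or oracle, the new limits can be spot-checked from the shell. A minimal sketch (hard limits shown; the values should match the limits.conf entries above):
ulimit -Hn   #--- hard limit: max open file descriptors (nofile)
ulimit -Hu   #--- hard limit: max user processes (nproc)
ulimit -Hs   #--- hard limit: stack size in KB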
Setting pam.d file for both nodes:
RAC - 1
|
RAC - 2
|
[root@rac1
rpms]#
vi /etc/pam.d/login
G$
o
#---add the below in last line
session required /lib/security/pam_limits.so
|
[root@rac2
rpms]#
vi /etc/pam.d/login
G$
o
#---add the below in last line
session required /lib/security/pam_limits.so
|
Save and Exit (:wq)
# For Linux 7, check the semaphore parameters on both nodes (rac1, rac2) and set them as follows:
[root@rac1 ~]# cat /proc/sys/kernel/sem
32000   1024000000      500     32000
[root@rac1 ~]# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 32000
max semaphores per array = 32000
max semaphores system wide = 1024000000
max ops per semop call = 500
semaphore max value = 32767

[root@rac1 ~]#
# To make the change permanent, add or change the following line in the file /etc/sysctl.conf. This file is used during the boot process.
[root@rac1 ~]# echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf
[root@rac1 ~]# echo 250 32000 100 128 > /proc/sys/kernel/sem
[root@rac1 ~]# sysctl -p
[root@rac1 ~]# cat /proc/sys/kernel/sem
250     32000   100     128
[root@rac1 ~]# ipcs -ls

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 100
semaphore max value = 32767

[root@rac1 ~]# grep kernel.sem /etc/sysctl.conf
kernel.sem = 250 32000 100 128
[root@rac1 ~]#
|
RAC - 1
|
RAC - 2
|
# This step can also be performed using the script sshUserSetup.sh, as shown below.
[root@rac1 rpms]#
su - grid
pwd
cd /home/grid
pwd
mkdir .ssh
vi .ssh/config
iHost *
ForwardX11 no
|
[root@rac2 rpms]#
su - grid
pwd
cd /home/grid
pwd
mkdir .ssh
vi .ssh/config
iHost *
ForwardX11 no
|
Save and Exit (:wq)
Creation of .bashrc for GRID User:
RAC - 1
|
RAC - 2
|
[grid@rac1 ~]$
vi /home/grid/.bashrc
20ddi# .bashrc
# User specific
aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global
definitions
if [ -f /etc/bashrc ];
then
. /etc/bashrc
fi
if [ -t 0 ]; then
stty intr ^C
fi
|
[grid@rac2 ~]$
vi /home/grid/.bashrc
20ddi# .bashrc
# User specific
aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global
definitions
if [ -f /etc/bashrc ];
then
. /etc/bashrc
fi
if [ -t 0 ]; then
stty intr ^C
fi
|
Save and Exit (:wq)
Creation of .ssh/config for ORACLE User:
RAC - 1
|
RAC - 2
|
[grid@rac1 ~]$
exit
su - oracle
pwd
cd /home/oracle
pwd
mkdir .ssh
vi .ssh/config
iHost *
ForwardX11 no
|
[grid@rac2 ~]$
exit
su - oracle
pwd
cd /home/oracle
pwd
mkdir .ssh
vi .ssh/config
iHost *
ForwardX11 no
|
Save and Exit (:wq)
Creation of .bashrc for ORACLE User:
RAC - 1
|
RAC - 2
|
[oracle@rac1 ~]$ pwd
/home/oracle
[oracle@rac1 ~]$
vi /home/oracle/.bashrc
20ddi# .bashrc
# User specific
aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global
definitions
if [ -f /etc/bashrc ];
then
. /etc/bashrc
fi
if [ -t 0 ]; then
stty intr ^C
fi
|
[oracle@rac2 ~]$ pwd
/home/oracle
[oracle@rac2 ~]$
vi /home/oracle/.bashrc
20ddi# .bashrc
# User specific
aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global
definitions
if [ -f /etc/bashrc ];
then
. /etc/bashrc
fi
if [ -t 0 ]; then
stty intr ^C
fi
|
Save and Exit (:wq)
OracleASM Configuration:
RAC - 1
|
RAC - 2
|
[oracle@rac1 ~]$ exit
[root@rac1 rpms]# fdisk -l
Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1        13      104391   83  Linux
/dev/sda2              14     15665   125724690   8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdb1               1      2610    20964793+  83  Linux
[root@rac1 ~]#
[root@rac1 ~]# oracleasm
status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@rac1 ~]# oracleasm
configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle
ASM library
driver. The following questions will determine
whether the driver is
loaded on boot and what permissions it will have.
The current values
will be shown in brackets ('[]'). Hitting <ENTER>
without typing an
answer will keep that current value. Ctrl-C will
abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac1 ~]# oracleasm
status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
|
[oracle@rac2 ~]$ exit
[root@rac2 rpms]# fdisk -l
Disk /dev/sda: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1        13      104391   83  Linux
/dev/sda2              14     15665   125724690   8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sdb1               1      2610    20964793+  83  Linux
[root@rac2 ~]#
[root@rac2 ~]# oracleasm
status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root@rac2 ~]# oracleasm
configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle
ASM library
driver. The following questions will determine
whether the driver is
loaded on boot and what permissions it will have.
The current values
will be shown in brackets ('[]'). Hitting
<ENTER> without typing an
answer will keep that current value. Ctrl-C will
abort.
Default user to own the driver interface []:grid
Default group to own the driver interface []:asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac2 ~]# oracleasm
status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
|
Creation of OracleASM Disk:
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]#
fdisk -l
oracleasm init
oracleasm createdisk
DELLASM /dev/sdb1
oracleasm scandisks
oracleasm listdisks
ll /dev/oracleasm/disks/
pwd
oracleasm querydisk -d DATA10    ---> valid/invalid ASM disk
oracleasm deletedisk DELLASM     ---> if reusing an existing DELLASM disk on which GI & DB were already installed before, using Openfiler.
|
[root@rac2 rpms]#
oracleasm init
oracleasm scandisks
oracleasm listdisks
ll /dev/oracleasm/disks/
pwd |
Sample output for
RAC-1
[root@rac1 rpm]# oracleasm init
Creating /dev/oracleasm mount point:
/dev/oracleasm
Loading module "oracleasm":
oracleasm
Mounting ASMlib driver filesystem:
/dev/oracleasm
[root@rac1 rpm]# oracleasm
createdisk dellasm /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@rac1 rpm]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac1 rpm]# oracleasm listdisks
DELLASM
[root@rac1 rpm]#
total 0
brw-rw---- 1 grid asmadmin 8, 17
Feb 4 07:43 DELLASM
[root@rac1 rpm]# pwd
|
|
Sample output for
RAC-2
[root@rac2 u01]# oracleasm init
Creating /dev/oracleasm mount point:
/dev/oracleasm
Loading module "oracleasm":
oracleasm
Mounting ASMlib driver filesystem:
/dev/oracleasm
[root@rac2 u01]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DELLASM"
[root@rac2 u01]# oracleasm listdisks
DELLASM
[root@rac1 u01]#
total 0
brw-rw---- 1 grid asmadmin 8, 17
Feb 4 07:43 DELLASM
[root@rac2 u01]# pwd
|
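To confirm which block device backs the DELLASM label on each node, ASMLib can report the matching device path. A minimal sketch; the -p option is available in recent oracleasm-support releases, so fall back to the directory listing if your version does not have it.
oracleasm querydisk -p DELLASM
ls -l /dev/oracleasm/disks/DELLASM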
/etc/resolv.conf setting:
To overcome the prerequisite check failure "PRVF-5636":
Task resolv.conf Integrity - DNS Response Time for an Unreachable Node
PRVF-5636: The DNS response time for an unreachable node exceeded "15000" ms on the following nodes: rac1, rac2
RAC - 1
|
RAC - 2
|
[root@rac1 ~]#
[root@rac1 ~]#
vi /etc/resolv.conf 20ddi# Generated by NetworkManager search dell.com nameserver 192.168.1.11 options attempts:2 options timeout:1 # No nameservers found; try putting DNS servers into your # ifcfg files in /etc/sysconfig/network-scripts like so: # # DNS1=xxx.xxx.xxx.xxx # DNS2=xxx.xxx.xxx.xxx # DOMAIN=lab.foo.com bar.foo.com |
[root@rac2 ~]#
[root@rac2 ~]#
vi /etc/resolv.conf
20ddi# Generated by NetworkManager search dell.com nameserver 192.168.1.11 options attempts:2 options timeout:1 # No nameservers found; try putting DNS servers into your # ifcfg files in /etc/sysconfig/network-scripts like so: # # DNS1=xxx.xxx.xxx.xxx # DNS2=xxx.xxx.xxx.xxx # DOMAIN=lab.foo.com bar.foo.com |
ntpd Service Setup (/etc/sysconfig/ntpd): Observer/Active Mode
RAC - 1
|
RAC - 2
|
Active Mode Configuration. Deconfigure NTP so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes.
[root@rac1 ~]# crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
[root@rac1 ~]# service ntpd stop
Shutting down ntpd: [ OK ]
[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@rac1 ~]# ./runInstaller
[root@rac1 ~]# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[root@rac1 ~]#
Even after the grid installation, the CTSS daemon can be switched to Active mode: stop the ntpd service and rename the ntp.conf file as shown above, then verify with cluvfy.
[grid@rac1 ~]$ cluvfy comp clocksync -n all

Verifying Clock Synchronization across the cluster nodes
Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.

[grid@rac1 ~]$ cluvfy comp clocksync -n all -verbose
|
Active Mode Configuration. Deconfigure NTP so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes.
[root@rac2 ~]# crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
[root@rac2 ~]# service ntpd stop
Shutting down ntpd: [ OK ]
[root@rac2 ~]# chkconfig ntpd off
[root@rac2 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@rac2 ~]# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[root@rac2 ~]#
|
Observer Mode Configuration.
[root@rac1 rpms]# service ntpd stop
cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkp
vi /etc/sysconfig/ntpd
|
Observer Mode Configuration.
[root@rac2 rpms]# service ntpd stop
cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkp
vi /etc/sysconfig/ntpd
|
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""
|
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""
|
Save and Exit (:wq)
[root@rac1 rpms]# service ntpd start
chkconfig ntpd on
pwd
|
Save and Exit (:wq)
[root@rac2 rpms]# service ntpd start
chkconfig ntpd on
pwd
|
Sample output for
both Nodes for Observer Mode Configuration:
[root@rac1 rpm]# service
ntpd stop
Shutting down
ntpd:
[FAILED]
[root@rac1 rpm]# cp
/etc/sysconfig/ntpd /etc/sysconfig/ntpd_bkp
[root@rac1 rpm]# vi
/etc/sysconfig/ntpd
[root@rac1 rpm]# service
ntpd start
ntpd: Synchronizing with time
server:
[FAILED]
Starting
ntpd:
[ OK ]
[root@rac1 rpm]# chkconfig
ntpd on
[root@rac1 rpm]# pwd
|
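For the Observer mode setup, you can confirm that ntpd restarted with the -x (slewing) flag and is reachable. A minimal check, assuming the OPTIONS line above was saved:
grep -- '-x' /etc/sysconfig/ntpd      #--- the slewing option must be present
ps -ef | grep [n]tpd                  #--- the running ntpd process should show -x
ntpq -p                               #--- list the time servers ntpd is polling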
Setting Date and Time:
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# watch -n 1 date
Tue May 10 12:16:09 AST 2016
[root@rac1 grid]# date -s "10 May 2016 12:16:00"
Sample query to check the DB time:
SQL> select to_char(sysdate,'DD-MM-YYYY HH24:MI:SS') from dual;

TO_CHAR(SYSDATE,'DD
-------------------
10-05-2016 12:16:01

SQL>
When the database time was running 3 minutes behind the current time, the same 3-minute delay existed on the server as well. So once the server time was set with the date -s command, the DB time was automatically updated.
|
[root@rac2 rpms]# date
Tue May 10 12:16:06 AST 2016
|
Configuration of sshUserSetup.sh (ORACLE METHOD)
In 11gR2, SSH user equivalence can be set up as shown below.
Node-2 Machine: RAC2
[root@rac1 ~]# hostname -i
192.168.1.11
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ cd .ssh/
[grid@rac2 .ssh]$ ll
total 4
-rw-r--r-- 1 grid oinstall 21 Jul  2 16:08 config

Node-1 Machine: RAC1
login as: root
root@192.168.1.11's password:
Last login: Mon Feb 20 14:45:28 2017
[root@rac1 ~]# hostname
rac1.dell.com
[root@rac1 ~]#
[root@rac1 ~]# hostname -i
192.168.1.11
[root@rac1 ~]# su - grid [grid@rac1 ~]$ cd .ssh/ [grid@rac1 .ssh]$ ll total 4 -rw-r--r-- 1 grid oinstall 21 Jul 2 16:08 config [grid@rac1 .ssh]$ [grid@rac1 grid]$ ls install response runcluvfy.sh sshsetup welcome.html readme.html rpm runInstaller stage [grid@rac1 grid]$ cd sshsetup/ [grid@rac1 sshsetup]$ ll total 32 -rwxrwxr-x 1 grid oinstall 32343 Aug 26 2013 sshUserSetup.sh [grid@rac1 sshsetup]$ [grid@rac1 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "rac1 rac2" -noPromptPassphrase The output of this script is also logged into /tmp/sshUserSetup_2018-07-02-16-27-50.log Hosts are rac1 rac2 user is grid Platform:- Linux Checking if the remote hosts are reachable PING rac1.dell.com (192.168.1.11) 56(84) bytes of data. 64 bytes from rac1.dell.com (192.168.1.11): icmp_seq=1 ttl=64 time=0.008 ms 64 bytes from rac1.dell.com (192.168.1.11): icmp_seq=2 ttl=64 time=0.034 ms 64 bytes from rac1.dell.com (192.168.1.11): icmp_seq=3 ttl=64 time=0.032 ms 64 bytes from rac1.dell.com (192.168.1.11): icmp_seq=4 ttl=64 time=0.032 ms 64 bytes from rac1.dell.com (192.168.1.11): icmp_seq=5 ttl=64 time=0.032 ms --- rac1.dell.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4001ms rtt min/avg/max/mdev = 0.008/0.027/0.034/0.011 ms PING rac2.dell.com (192.168.1.12) 56(84) bytes of data. 64 bytes from rac2.dell.com (192.168.1.12): icmp_seq=1 ttl=64 time=0.270 ms 64 bytes from rac2.dell.com (192.168.1.12): icmp_seq=2 ttl=64 time=0.250 ms 64 bytes from rac2.dell.com (192.168.1.12): icmp_seq=3 ttl=64 time=0.236 ms 64 bytes from rac2.dell.com (192.168.1.12): icmp_seq=4 ttl=64 time=0.252 ms 64 bytes from rac2.dell.com (192.168.1.12): icmp_seq=5 ttl=64 time=0.266 ms --- rac2.dell.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4002ms rtt min/avg/max/mdev = 0.236/0.254/0.270/0.023 ms Remote host reachability check succeeded. The following hosts are reachable: rac1 rac2. The following hosts are not reachable: . All hosts are reachable. Proceeding further... firsthost rac1 numhosts 2 The script will setup SSH connectivity from the host rac1.dell.com to all the remote hosts. After the script is executed, the user can use SSH to run commands on the remote hosts or copy files between this host rac1.dell.com and the remote hosts without being prompted for passwords or confirmations. NOTE 1: As part of the setup procedure, this script will use ssh and scp to copy files between the local host and the remote hosts. Since the script does not store passwords, you may be prompted for the passwords during the execution of the script whenever ssh or scp is invoked. NOTE 2: AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE directories. Do you want to continue and let the script make the above mentioned changes (yes/no)? yes The user chose yes User chose to skip passphrase related questions. Creating .ssh directory on local host, if not present already Creating authorized_keys file on local host Changing permissions on authorized_keys to 644 on local host Creating known_hosts file on local host Changing permissions on known_hosts to 644 on local host Creating config file on local host If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup. Removing old private/public keys on local host Running SSH keygen on local host with empty passphrase Generating public/private rsa key pair. 
Your identification has been saved in /home/grid/.ssh/id_rsa. Your public key has been saved in /home/grid/.ssh/id_rsa.pub. The key fingerprint is: e4:96:95:8e:f5:e1:59:13:e3:c0:3b:0b:65:9c:c5:ad grid@rac1.dell.com Creating .ssh directory and setting permissions on remote host rac1 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT. The script would create ~grid/.ssh/config file on remote host rac1. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host rac1. Warning: Permanently added 'rac1,192.168.1.11' (RSA) to the list of known hosts. grid@rac1's password:grid Done with creating .ssh directory and setting permissions on remote host rac1. Creating .ssh directory and setting permissions on remote host rac2 THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT. The script would create ~grid/.ssh/config file on remote host rac2. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup. The user may be prompted for a password here since the script would be running SSH on host rac2. Warning: Permanently added 'rac2,192.168.1.12' (RSA) to the list of known hosts. grid@rac2's password:grid Done with creating .ssh directory and setting permissions on remote host rac2. Copying local host public key to the remote host rac1 The user may be prompted for a password or passphrase here since the script would be using SCP for host rac1. grid@rac1's password:grid Done copying local host public key to the remote host rac1 Copying local host public key to the remote host rac2 The user may be prompted for a password or passphrase here since the script would be using SCP for host rac2. grid@rac2's password:grid Done copying local host public key to the remote host rac2 cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory SSH setup is complete. ------------------------------------------------------------------------ Verifying SSH setup =================== The script will now run the date command on the remote nodes using ssh to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP, THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR PASSWORDS. If you see any output other than date or are prompted for the password, ssh is not setup correctly and you will need to resolve the issue and set up ssh again. The possible causes for failure could be: 1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user grid. 2. The server may have disabled public key based authentication. 3. The client public key on the server may be outdated. 4. ~grid or ~grid/.ssh on the remote host may not be owned by grid. 5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users. 6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file. ------------------------------------------------------------------------ --rac1:-- Running /usr/bin/ssh -x -l grid rac1 date to verify SSH connectivity has been setup from local host to rac1. 
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Mon Jul 2 16:28:12 AST 2018 ------------------------------------------------------------------------ --rac2:-- Running /usr/bin/ssh -x -l grid rac2 date to verify SSH connectivity has been setup from local host to rac2. IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR. Mon Jul 2 16:28:12 AST 2018 ------------------------------------------------------------------------ SSH verification complete. [grid@rac1 sshsetup]$ [grid@rac1 ~]$ ls -ltrh ~/.ssh/* total 24 -rw-r--r-- 1 grid oinstall 456 Jul 2 16:28 authorized_keys -rw-r--r-- 1 grid oinstall 23 Jul 2 16:28 config -rw-r--r-- 1 grid oinstall 21 Jul 2 16:28 config.backup -rw------- 1 grid oinstall 883 Jul 2 16:28 id_rsa -rw-r--r-- 1 grid oinstall 228 Jul 2 16:28 id_rsa.pub -rw-r--r-- 1 grid oinstall 1197 Jul 2 16:28 known_hosts [grid@rac1 .ssh]$ Check similar below files created at node 2: [grid@rac2 .ssh]$ ls -ltrh ~/.ssh/* total 12 -rw-r--r-- 1 grid oinstall 228 Jul 2 16:28 authorized_keys -rw-r--r-- 1 grid oinstall 23 Jul 2 16:28 config -rw-r--r-- 1 grid oinstall 21 Jul 2 16:28 config.backup -rw-r--r-- 1 grid oinstall 0 Jul 2 16:28 known_hosts [grid@rac2 .ssh]$ You should now be able to SSH and SCP between servers without entering passwords. Follow the same for ORACLE User also: [grid@rac1 sshsetup]$ ./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -noPromptPassphrase [grid@rac1 sshsetup]$ ssh oracle@rac2 |
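As a manual alternative to sshUserSetup.sh, passwordless SSH can also be set up by hand with ssh-keygen and ssh-copy-id. A minimal sketch for the grid user; repeat it in each direction between the nodes and again for the oracle user. The key path and the empty passphrase are assumptions.
#--- On rac1, as the grid user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     #--- generate a key pair with an empty passphrase
ssh-copy-id grid@rac2                        #--- append the public key to rac2's authorized_keys
ssh grid@rac2 date                           #--- should print the date without a password prompt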
Runcluvfy Test on RAC1:
Node-1 Machine: RAC1
Recheck the hostname setting on the rac1 and rac2 servers in Linux 7 (Doc ID 2389622.1):
[root@rac1 ~]# cat /etc/hostname
localhost.localdomain
[root@rac1 ~]# hostnamectl --static
localhost
[root@rac1 ~]# hostnamectl set-hostname rac1
[root@rac1 ~]# hostnamectl --static
rac1
[root@rac1 ~]# cat /etc/hostname
rac1

[grid@rac1 ~]$ cd /u01/sftwr/grid
[grid@rac1 grid]$
It is advisable to run yum -y install again for the recommended OS RPMs as per the GI docs, to avoid any issue in the installation process (tested while installing 12.1.0.2 GI on Linux 7.5; use yum install to resolve all dependencies):

[grid@rac1 grid]$ rpm -qa cvuqdisk
rpm -qa binutils
rpm -qa compat*
rpm -qa gcc*
rpm -qa glibc*
rpm -qa libaio*
rpm -qa ksh
rpm -qa make
rpm -qa libXi
rpm -qa libXtst
rpm -qa libgcc
rpm -qa libstdc*
rpm -qa sysstat
rpm -qa nfs-utils
rpm -qa ntp
rpm -qa oracleasm-support
rpm -qa oracleasmlib        ## manually download for Linux 7 (click here)
rpm -qa kmod-oracleasm
rpm -qa tmux
nslookup dellc-scan         ## manually configure the DNS server (see the Linux installation)
rpm -qa nscd
rpm -qa named
rpm -qa bind-chroot
[root@rac1 ~]# yum -y install xterm* xorg* xauth xclock tmux
[grid@rac1 grid]$
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet
"192.168.1.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet
"192.168.1.0"
Node connectivity passed for subnet "10.0.0.0"
with node(s) rac2,rac1
TCP connectivity check passed for subnet
"10.0.0.0"
Node connectivity passed for subnet
"192.168.122.0" with node(s) rac1
TCP connectivity check passed for subnet
"192.168.122.0"
Interfaces found on subnet "192.168.1.0" that
are likely candidates for VIP are:
rac2 eth0:192.168.1.12
rac1 eth0:192.168.1.11
Interfaces found on subnet "10.0.0.0" that are
likely candidates for a private interconnect are:
rac2 eth1:10.0.0.12
rac1 eth1:10.0.0.11
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet
"192.168.1.0".
Subnet mask consistency check passed for subnet
"10.0.0.0".
Subnet mask consistency check passed for subnet
"192.168.122.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast
communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast
communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.122.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "192.168.122.0" for multicast
communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rac2:/tmp"
Free disk space check passed for "rac1:/tmp"
Check for multiple users with UID value 1101 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "ksh"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes
Time zone consistency check passed
Pre-check for cluster services setup was successful.
[grid@rac1 grid]$
[grid@rac1 grid]$
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
  rac1                                  yes
Result: Node reachability check passed from node "rac1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: User equivalence check passed for user
"grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------
------------------------
rac2 passed
rac1 passed
Verification of the hosts config file successful
Interface information for node "rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ------------------ ------
 eth0   192.168.1.12    192.168.1.0     0.0.0.0         192.168.1.2     00:0C:29:69:20:80  1500
 eth1   10.0.0.12       10.0.0.0        0.0.0.0         192.168.1.2     00:0C:29:69:20:76  1500

Interface information for node "rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address         MTU
 ------ --------------- --------------- --------------- --------------- ------------------ ------
 eth0   192.168.1.11    192.168.1.0     0.0.0.0         192.168.1.2     00:0C:29:F3:1A:CD  1500
 eth1   10.0.0.11       10.0.0.0        0.0.0.0         192.168.1.2     00:0C:29:F3:1A:D7  1500
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.2     72:26:D6:9B:0D:B3  1500
Check: Node connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------
------------------------------
----------------
rac2[192.168.1.12]
rac1[192.168.1.11]
yes
Result: Node connectivity passed for subnet
"192.168.1.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "192.168.1.0"
Source Destination Connected?
------------------------------
------------------------------
----------------
rac1:192.168.1.11
rac2:192.168.1.12
passed
Result: TCP connectivity check passed for subnet
"192.168.1.0"
Check: Node connectivity of subnet "10.0.0.0"
Source Destination Connected?
------------------------------
------------------------------
----------------
rac2[10.0.0.12]
rac1[10.0.0.11]
yes
Result: Node connectivity passed for subnet
"10.0.0.0" with node(s) rac2,rac1
Check: TCP connectivity of subnet "10.0.0.0"
Source Destination Connected?
------------------------------
------------------------------
----------------
rac1:10.0.0.11
rac2:10.0.0.12
passed
Result: TCP connectivity check passed for subnet
"10.0.0.0"
Check: Node connectivity of subnet
"192.168.122.0"
Result: Node connectivity passed for subnet
"192.168.122.0" with node(s) rac1
Check: TCP connectivity of subnet
"192.168.122.0"
Result: TCP connectivity check passed for subnet
"192.168.122.0"
Interfaces found on subnet "192.168.1.0" that
are likely candidates for VIP are:
rac2 eth0:192.168.1.12
rac1 eth0:192.168.1.11
Interfaces found on subnet "10.0.0.0" that are
likely candidates for a private interconnect are:
rac2 eth1:10.0.0.12
rac1 eth1:10.0.0.11
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet
"192.168.1.0".
Subnet mask consistency check passed for subnet
"10.0.0.0".
Subnet mask consistency check passed for subnet
"192.168.122.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.1.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast
communication with multicast group "230.0.1.0" passed.
Checking subnet "10.0.0.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast
communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.122.0" for multicast
communication with multicast group "230.0.1.0"...
Check of subnet "192.168.122.0" for multicast
communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Node Name Status
------------------------------------ ------------------------
rac2 passed
rac1 passed
Result: Check for ASMLib configuration passed.
Check: Total memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 1.9486GB (2043224.0KB) 1.5GB (1572864.0KB) passed
rac1 1.9486GB (2043224.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 1.4756GB (1547316.0KB) 50MB (51200.0KB) passed
rac1 1.3349GB (1399792.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 9.767GB (1.0241428E7KB) 2.9229GB (3064836.0KB) passed
rac1 29.2933GB (3.0716272E7KB) 2.9229GB (3064836.0KB) passed
Result: Swap space check passed
Check: Free disk space for "rac2:/tmp"
Path Node Name Mount point Available Required Status
----------------
------------ ------------ ------------ ------------ ------------
/tmp rac2 /tmp 9.25GB 1GB passed
Result: Free disk space check passed for
"rac2:/tmp"
Check: Free disk space for "rac1:/tmp"
Path Node Name Mount point Available Required Status
----------------
------------ ------------ ------------ ------------ ------------
/tmp rac1 /tmp 27.7652GB 1GB passed
Result: Free disk space check passed for
"rac1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists(1101)
rac1 passed exists(1101)
Checking for multiple users with UID value 1101
Result: Check for multiple users with UID value 1101
passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for
"oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed exists
rac1 passed exists
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group
"oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Status
----------------
------------ ------------ ------------ ------------ ------------
rac2 yes yes yes yes passed
rac1 yes yes yes yes passed
Result: Membership check for user "grid" in
group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group
"dba"
Node Name User Exists Group Exists User in Group Status
----------------
------------ ------------ ------------ ----------------
rac2 yes yes yes passed
rac1 yes yes yes passed
Result: Membership check for user "grid" in
group "dba" passed
Check: Run level
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
rac2 5 3,5 passed
rac1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file
descriptors"
Node Name Type Available Required Status
----------------
------------ ------------ ------------ ----------------
rac2 hard 131072 65536 passed
rac1 hard 131072 65536 passed
Result: Hard limits check passed for "maximum open
file descriptors"
Check: Soft limits for "maximum open file
descriptors"
Node Name Type Available Required Status
----------------
------------ ------------ ------------ ----------------
rac2 soft 131072 1024 passed
rac1 soft 131072 1024 passed
Result: Soft limits check passed for "maximum open
file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Status
----------------
------------ ------------ ------------ ----------------
rac2 hard 131072 16384 passed
rac1 hard 131072 16384 passed
Result: Hard limits check passed for "maximum user
processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Status
----------------
------------ ------------ ------------ ----------------
rac2 soft 131072 2047 passed
rac1 soft 131072 2047 passed
Result: Soft limits check passed for "maximum user
processes"
Check: System architecture
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 x86_64 x86_64 passed
rac1 x86_64 x86_64 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 2.6.32-200.13.1.el5uek 2.6.18 passed
rac1 2.6.32-200.13.1.el5uek 2.6.18 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 250 250 250 passed
rac1 250 250 250 passed
Result: Kernel parameter check passed for
"semmsl"
Check: Kernel parameter for "semmns"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 32000 32000 32000 passed
rac1 32000 32000 32000 passed
Result: Kernel parameter check passed for
"semmns"
Check: Kernel parameter for "semopm"
Node Name Current Configured Required Status Comment
---------------- ------------ ------------ ------------ ------------ ------------
rac2 100 100 100 passed
rac1 100 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 142 142 128 passed
rac1 142 142 128 passed
Result: Kernel parameter check passed for
"semmni"
Check: Kernel parameter for "shmmax"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 4398046511104 4398046511104 1046130688 passed
rac1 4398046511104 4398046511104 1046130688 passed
Result: Kernel parameter check passed for
"shmmax"
Check: Kernel parameter for "shmmni"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 4096 4096 4096 passed
rac1 4096 4096 4096 passed
Result: Kernel parameter check passed for
"shmmni"
Check: Kernel parameter for "shmall"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 1073741824 1073741824 2097152 passed
rac1 1073741824 1073741824 2097152 passed
Result: Kernel parameter check passed for
"shmall"
Check: Kernel parameter for "file-max"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 6815744 6815744 6815744 passed
rac1 6815744 6815744 6815744 passed
Result: Kernel parameter check passed for
"file-max"
Check: Kernel parameter for
"ip_local_port_range"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed
rac1 between 9000.0 &
65500.0 between 9000.0 &
65500.0 between 9000.0 & 65500.0 passed
Result: Kernel parameter check passed for
"ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for
"rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 4194304 4194304 4194304 passed
rac1 4194304 4194304 4194304 passed
Result: Kernel parameter check passed for
"rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 262144 262144 262144 passed
rac1 262144 262144 262144 passed
Result: Kernel parameter check passed for
"wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 1048576 1048576 1048576 passed
rac1 1048576 1048576 1048576 passed
Result: Kernel parameter check passed for
"wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Current Configured Required Status Comment
----------------
------------ ------------ ------------ ------------ ------------
rac2 3145728 3145728 1048576 passed
rac1 3145728 3145728 1048576 passed
Result: Kernel parameter check passed for
"aio-max-nr"
Check: Package existence for "make"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 make-3.81-3.el5 make-3.81 passed
rac1 make-3.81-3.el5 make-3.81 passed
Result: Package existence check passed for
"make"
Check: Package existence for "binutils"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed
rac1 binutils-2.17.50.0.6-14.el5 binutils-2.17.50.0.6 passed
Result: Package existence check passed for
"binutils"
Check: Package existence for "gcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc(x86_64)-4.1.2-51.el5 gcc(x86_64)-4.1.2 passed
rac1 gcc(x86_64)-4.1.2-51.el5 gcc(x86_64)-4.1.2 passed
Result: Package existence check passed for
"gcc(x86_64)"
Check: Package existence for "libaio(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
rac1 libaio(x86_64)-0.3.106-5 libaio(x86_64)-0.3.106 passed
Result: Package existence check passed for
"libaio(x86_64)"
Check: Package existence for "glibc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc(x86_64)-2.5-65 glibc(x86_64)-2.5-24 passed
rac1 glibc(x86_64)-2.5-65 glibc(x86_64)-2.5-24 passed
Result: Package existence check passed for
"glibc(x86_64)"
Check: Package existence for
"compat-libstdc++-33(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2
compat-libstdc++-33(x86_64)-3.2.3-61
compat-libstdc++-33(x86_64)-3.2.3
passed
rac1
compat-libstdc++-33(x86_64)-3.2.3-61
compat-libstdc++-33(x86_64)-3.2.3
passed
Result: Package existence check passed for
"compat-libstdc++-33(x86_64)"
Check: Package existence for
"elfutils-libelf(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2
elfutils-libelf(x86_64)-0.137-3.el5
elfutils-libelf(x86_64)-0.125
passed
rac1
elfutils-libelf(x86_64)-0.137-3.el5
elfutils-libelf(x86_64)-0.125
passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"
Check: Package existence for
"elfutils-libelf-devel"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 elfutils-libelf-devel-0.137-3.el5 elfutils-libelf-devel-0.125 passed
rac1
elfutils-libelf-devel-0.137-3.el5
elfutils-libelf-devel-0.125
passed
Result: Package existence check passed for
"elfutils-libelf-devel"
Check: Package existence for "glibc-common"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-common-2.5-65 glibc-common-2.5 passed
rac1 glibc-common-2.5-65 glibc-common-2.5 passed
Result: Package existence check passed for
"glibc-common"
Check: Package existence for
"glibc-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-devel(x86_64)-2.5-65 glibc-devel(x86_64)-2.5 passed
rac1 glibc-devel(x86_64)-2.5-65 glibc-devel(x86_64)-2.5 passed
Result: Package existence check passed for
"glibc-devel(x86_64)"
Check: Package existence for "glibc-headers"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 glibc-headers-2.5-65 glibc-headers-2.5 passed
rac1 glibc-headers-2.5-65 glibc-headers-2.5 passed
Result: Package existence check passed for
"glibc-headers"
Check: Package existence for "gcc-c++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 gcc-c++(x86_64)-4.1.2-51.el5 gcc-c++(x86_64)-4.1.2 passed
rac1 gcc-c++(x86_64)-4.1.2-51.el5 gcc-c++(x86_64)-4.1.2 passed
Result: Package existence check passed for
"gcc-c++(x86_64)"
Check: Package existence for
"libaio-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
rac1 libaio-devel(x86_64)-0.3.106-5 libaio-devel(x86_64)-0.3.106 passed
Result: Package existence check passed for
"libaio-devel(x86_64)"
Check: Package existence for "libgcc(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libgcc(x86_64)-4.1.2-51.el5 libgcc(x86_64)-4.1.2 passed
rac1 libgcc(x86_64)-4.1.2-51.el5 libgcc(x86_64)-4.1.2 passed
Result: Package existence check passed for
"libgcc(x86_64)"
Check: Package existence for "libstdc++(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 libstdc++(x86_64)-4.1.2-51.el5 libstdc++(x86_64)-4.1.2 passed
rac1 libstdc++(x86_64)-4.1.2-51.el5 libstdc++(x86_64)-4.1.2 passed
Result: Package existence check passed for
"libstdc++(x86_64)"
Check: Package existence for
"libstdc++-devel(x86_64)"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2
libstdc++-devel(x86_64)-4.1.2-51.el5
libstdc++-devel(x86_64)-4.1.2
passed
rac1 libstdc++-devel(x86_64)-4.1.2-51.el5 libstdc++-devel(x86_64)-4.1.2 passed
Result: Package existence check passed for
"libstdc++-devel(x86_64)"
Check: Package existence for "sysstat"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 sysstat-7.0.2-11.el5 sysstat-7.0.2 passed
rac1 sysstat-7.0.2-11.el5 sysstat-7.0.2 passed
Result: Package existence check passed for "sysstat"
Check: Package existence for "ksh"
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
rac2 ksh-20100202-1.el5_6.6 ksh-20060214 passed
rac1 ksh-20100202-1.el5_6.6 ksh-20060214 passed
Result: Package existence check passed for "ksh"
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Starting check for consistency of primary group of root
user
Node Name Status
------------------------------------
------------------------
rac2 passed
rac1 passed
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time
Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is
available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
------------------------------------
------------------------
rac2 yes
rac1 yes
Result: Liveness check passed for "ntpd"
Check for NTP daemon or service alive passed on all nodes
Checking NTP daemon command line for slewing option
"-x"
Check: NTP daemon command line
Node Name Slewing Option
Set?
------------------------------------
------------------------
rac2 yes
rac1 yes
Result:
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file
"/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
Node Name Slewing Option Set?
------------------------------------
------------------------
rac2 yes
rac1 yes
Result:
NTP daemon's boot time configuration check for slewing
option passed
Checking whether NTP daemon or service is using UDP port
123 on all nodes
Check for NTP daemon or service using UDP port 123
Node Name Port Open?
------------------------------------
------------------------
rac2 yes
rac1 yes
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes
on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[rac2, rac1]"...
Check: Clock time offset from NTP Time Server
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
------------ ------------------------ ------------------------
rac2 0.0 passed
rac1 0.0 passed
Time Server ".LOCL." has time offsets that are
within permissible limits for nodes "[rac2, rac1]".
Clock time offset check passed
Result: Clock synchronization check using Network Time
Protocol(NTP) passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in
"root" group
Node Name Status Comment
------------ ------------------------ ------------------------
rac2 passed does not exist
rac1 passed does not exist
Result: User "grid" is not part of
"root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
rac2 0022 0022 passed
rac1 0022 0022 passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf"
across nodes
Checking the file "/etc/resolv.conf" to make
sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both
domain and search entries defined
Checking if domain entry in file
"/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is
consistent across nodes
Checking if search entry in file
"/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is
consistent across nodes
Checking file "/etc/resolv.conf" to make sure
that only one search entry is defined
All nodes have one search entry defined in file
"/etc/resolv.conf"
Checking all nodes to make sure that search entry is
"dell.com" as found on node "rac2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
Node Name Status
------------------------------------
------------------------
rac2 passed
rac1 passed
The DNS response time for an unreachable node is within
acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across
nodes
Check: Time zone consistency
Result: Time zone consistency check passed
Pre-check for cluster services setup was successful.
[grid@rac1 grid]$
|
If required, run an RDA report to check that the prerequisites for the installation are fulfilled:
OS Server Prechecks RDA Report before installing any product (DB, RAC, EBS)
Remote Diagnostic Agent (RDA) - Getting Started (Doc ID 314422.1)
[root@rac1 ~]$ unzip p21769913_204201020_Linux-x86-64.zip
[root@rac1 ~]$ cd rda
[root@rac1 ~]$ ./rda.sh -T hcve
Enter the HCVE rule set number or 0 to cancel the test
Press Return to accept the default (0)
> 8
Enter value for < Planned ORACLE_HOME location >
> /u01/app/grid_home
[root@rac1 ~]$
#--
|
Install Grid Software (Cluster Name: dellc):
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# chown -R grid:asmadmin /u01/sftwr/grid
[root@rac1 rpms]# xhost +
[root@rac1 rpms]# su - grid
[grid@rac1 ~]$ cd /u01/sftwr/grid
[grid@rac1 grid]$ ./runInstaller
|
Note: Provide the SCAN name that you defined in the hosts file (in this guide the cluster is dellc and the SCAN name is dellc-scan); the cluster name can be confirmed with olsnodes -c:
[root@rac1 ~]# olsnodes -c
dellc
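If name resolution is done through /etc/hosts rather than DNS (as is common in a lab setup like this), you can quickly confirm on both nodes that the SCAN and VIP entries are present. This is only an optional sanity check; the output line shown is illustrative, based on the SCAN address used later in this guide, and your VIP entries would appear alongside it.
[root@rac1 ~]# grep -iE 'scan|vip' /etc/hosts
192.168.1.30   dellc-scan.dell.com   dellc-scan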
Note: For installing the Grid software, create only the OCR disk group; DATA and RECO disk groups can be created later, sized to your database requirements (see the sketch below).
If you are planning a PROD instance with Normal redundancy, then add 3 disks of the same size, as shown for the OCR disk below.
You can use smaller disks (for example 500 GB per disk) even if your database size is 1.5 TB, since more disks can be added to the disk group later as the database grows.
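As a sketch only (the RECO name and the disk path are placeholders, not disks created anywhere in this guide), an additional disk group can be added after the Grid install, either through asmca or directly from an ASM instance:
[grid@rac1 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP RECO EXTERNAL REDUNDANCY
  2  DISK '/dev/oracleasm/disks/RECO01';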
Note: Change the Software location to /u01/app/grid_home
Note: Click on "Fix & Check Again" before proceeding
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# su
[root@rac1 rpms]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
Response file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
[root@rac1 rpms]#
|
[root@rac2 rpms]# su
[root@rac2 rpms]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
Response file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
[root@rac2 rpms]#
|
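To confirm that the fixup script really applied the kernel settings (an optional sanity check, not part of the original steps), the values can be read back with sysctl on each node; the values shown below are what you would expect from the fixup log above.
[root@rac1 rpms]# /sbin/sysctl -n fs.file-max net.ipv4.ip_local_port_range net.core.wmem_max
6815744
9000    65500
1048576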
Now click OK (the dialog might be hidden behind the main window).
Run the root scripts on both nodes.
#--- Note: Complete both scripts successfully on RAC1 first; after that you can run them in parallel on the remaining nodes (RAC2, RAC3, RAC4).
RAC - 1
|
RAC - 2
|
#---Script 1
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#---Script 2
[root@rac1 ~]# /u01/app/grid_home/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER=
grid
ORACLE_HOME= /u01/app/grid_home
Enter the full pathname of the local bin directory:
[/usr/local/bin]:
Copying dbhome to
/usr/local/bin ...
Copying oraenv to
/usr/local/bin ...
Copying coraenv
to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is
created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2018-01-18 11:19:23: Parsing the host name
2018-01-18 11:19:23: Checking for super user privileges
2018-01-18 11:19:23: User has super user privileges
Using configuration parameter file:
/u01/app/grid_home/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader
wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader
root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa
cert TP
profile reader
peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been
started.
ohasd is starting
acfsroot: ACFS-9301: ADVM/ACFS installation can not
proceed:
acfsroot: ACFS-9302: No installation files found at
/u01/app/grid_home/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk
0e912572706f4ffcbf6473b26f9de0c0.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0e912572706f4ffcbf6473b26f9de0c0
(/dev/oracleasm/disks/DELLASM) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
rac1 2018/01/18
11:23:29
/u01/app/grid_home/cdata/rac1/backup_20180118_112329.olr
Configure Oracle Grid Infrastructure for a Cluster ...
succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 29995 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac1 ~]#
|
#---Script 1
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
#---Script 2
[root@rac2 ~]# /u01/app/grid_home/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER=
grid
ORACLE_HOME= /u01/app/grid_home
Enter the full pathname of the local bin directory:
[/usr/local/bin]:
Copying dbhome to
/usr/local/bin ...
Copying oraenv to
/usr/local/bin ...
Copying coraenv
to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is
created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2018-01-18 11:25:22: Parsing the host name
2018-01-18 11:25:22: Checking for super user privileges
2018-01-18 11:25:22: User has super user privileges
Using configuration parameter file:
/u01/app/grid_home/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been
started.
ohasd is starting
ADVM/ACFS is not supported on oraclelinux-release-5-7.0.2
CRS-4402: The CSS daemon was started in exclusive mode but
found an active CSS daemon on node rac1, number 1, and is terminating
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
An active cluster was found during exclusive startup,
restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
rac2 2018/01/18
11:27:35
/u01/app/grid_home/cdata/rac2/backup_20180118_112735.olr
Configure Oracle Grid Infrastructure for a Cluster ...
succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 10001 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@rac2 ~]#
|
Note: At the end of the installation you may see "Oracle Cluster Verification Utility failed".
Just ignore it and proceed (you can re-run the check manually, as shown below).
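If you prefer to verify the cluster yourself rather than rely on the installer's check, the standard post-install cluvfy stage can be run manually (optional):
[grid@rac1 grid]$ /u01/app/grid_home/bin/cluvfy stage -post crsinst -n rac1,rac2 -verbose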
Installation of Oracle Software:
RAC - 1
|
RAC - 2
|
[grid@rac1 ~]$ exit
Password:
[root@rac1 ~]# chown -R oracle:oinstall /u01/sftwr/database/
[root@rac1 ~]# xhost +
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /u01/sftwr/database
[oracle@rac1 database]$ ./runInstaller
|
RAC - 1
|
RAC - 2
|
[root@rac1 rpms]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=
/u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory:
[/usr/local/bin]: Press
ENTER Key here
The file "dbhome" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is
created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
|
[root@rac2 rpms]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=
/u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory:
[/usr/local/bin]: Press
ENTER Key here
The file "dbhome" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in
/usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is
created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
|
Environment setup for users on both nodes:
RAC - 1
|
RAC - 2
|
[root@rac1 shrf]# su
Password:
[root@rac1 ~]# vi /root/grid.env        (press "i" for insert mode, then add the lines below)
export ORACLE_HOME=/u01/app/grid_home
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_UNQNAME=DELL
export ORACLE_SID=+ASM1

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi /home/grid/grid.env
export ORACLE_HOME=/u01/app/grid_home
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_UNQNAME=DELL
export ORACLE_SID=+ASM1

[grid@rac1 ~]$ exit
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi /home/oracle/dell.env
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
#export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1    (use instead for a 12c home)
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
#export OMS_HOME=/u01/app/oracle/middleware/
#export AGENT_HOME=/u01/app/oracle/agent/agent_inst/
export ORACLE_UNQNAME=DELL
export ORACLE_SID=DELL1
|
[root@rac2 shrf]# su
Password:
[root@rac2 ~]# vi /root/grid.env        (press "i" for insert mode, then add the lines below)
export ORACLE_HOME=/u01/app/grid_home
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_UNQNAME=DELL
export ORACLE_SID=+ASM2

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ vi /home/grid/grid.env
export ORACLE_HOME=/u01/app/grid_home
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_UNQNAME=DELL
export ORACLE_SID=+ASM2

[grid@rac2 ~]$ exit
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ vi /home/oracle/dell.env
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
#export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1    (use instead for a 12c home)
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
#export OMS_HOME=/u01/app/oracle/middleware/
#export AGENT_HOME=/u01/app/oracle/agent/agent_inst/
export ORACLE_UNQNAME=DELL
export ORACLE_SID=DELL2
|
Save and Exit (:wq)
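Optionally (not part of the original steps), append a source line to each user's login profile so the environment file is loaded automatically, for example for the oracle user:
[oracle@rac1 ~]$ echo ". /home/oracle/dell.env" >> /home/oracle/.bash_profile
[oracle@rac1 ~]$ . /home/oracle/dell.env
[oracle@rac1 ~]$ echo $ORACLE_SID
DELL1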
Creation of Database on one Node (RAC1):
RAC - 1
|
RAC - 2
|
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ . dell.env
[oracle@rac1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@rac1 ~]$ echo $ORACLE_SID
DELL1
[oracle@rac1 ~]$ dbca |
RAC Details:
RAC - 1
|
RAC - 2
|
ORATAB Entries:
[root@rac1 ~]# cat /etc/oratab              (path on Linux)
DELL:/u01/app/oracle/product/11.2.0/dbhome_1:N   # line added by Agent
[root@rac1 ~]# cat /var/opt/oracle/oratab   (path on Solaris)
[root@rac1 ~]# . dell.env
RAC Cluster Name:
[root@rac1 ~]# olsnodes -c
dellc
[root@rac1 ~]#
RAC Cluster Node Names:
[root@rac1 ~]# olsnodes
rac1
rac2
[root@rac1 ~]#
RAC Database Name:
[root@rac1 ~]# srvctl config database
DELL
[root@rac1 ~]# srvctl add database -d $ORACLE_UNQNAME -o $ORACLE_HOME -p +DATA/UAT/parameterfile/spfileUAT.ora
[root@rac1 ~]# srvctl remove database -d $ORACLE_UNQNAME
[root@rac1 ~]# srvctl remove database -d $ORACLE_UNQNAME -f -y
RAC ASM Status:
[root@rac1 ~]# srvctl config asm
[root@rac1 ~]# srvctl status asm
ASM is running on rac1,rac2
[root@rac1 ~]#
RAC Scan Name and Port:
[root@rac1 ~]# srvctl config scan
SCAN name: dellc-scan, Network: 1
[root@rac1 ~]# srvctl config scan_listener
[root@rac1 ~]# crsctl
status server
NAME=rac1
STATE=ONLINE
NAME=rac2
STATE=ONLINE
[root@rac1 ~]#
To check the list of background processes running in the cluster:
[root@rac1 ~]# ps -ef | grep d.bin
(Note: some of the version output below was captured on a 19c system, not on the 11.2.0.1 cluster built in this guide.)
[grid@rac1 ~]$ sqlplus / as sysasm
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0
SQL>
[grid@rac1 ~]$ asmcmd -V
asmcmd version 19.8.0.0.0
ASMCMD [+] > showversion --releasepatch
ASM version : 19.8.0.0.0
ASMCMD [+] > showversion --softwarepatch
ASM version : 19.8.0.0.0
Software patchlevel : 3487688990
[root@rac1 ~]# crsctl query has releasepatch
Oracle Clusterware release patch level is [3487688990] and the complete list of patches [31281355 31304218 31305087 31335188 ] have been applied on the local node. The release patch string is [19.8.0.0.0].
[root@rac1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
[root@rac1 ~]#
[root@rac1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 ~]#
RAC Cluster Services Check on all Nodes:
[root@rac1 ~]# crsctl
check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@rac1 ~]#
RAC Resources (Local & Cluster) in Tabular Format:
[root@rac1 ~]# crsctl stat res -t
[root@rac1 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.eons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.dell.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
[root@rac1 ~]#
RAC Resources (Local & Cluster) in Single Line Format:
[root@rac1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.dell.db    ora....se.type ONLINE    ONLINE    rac1
ora.eons       ora.eons.type  ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
[root@rac1 ~]#
[root@rac1 ~]# srvctl config database
DELL
[root@rac1 ~]#
[root@rac1 ~]# srvctl config database -d dell
Database unique name: DELL
Database name: DELL
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/DELL/spfileDELL.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: DELL
Database instances: DELL1,DELL2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[root@rac1 ~]#
[root@rac1 ~]# srvctl status database -d DELL
Instance DELL1 is running on node rac1
Instance DELL2 is running on node rac2
[root@rac1 ~]#
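For completeness (standard srvctl syntax, shown here as an optional sketch), the whole RAC database or a single instance can be stopped and started from any node:
[oracle@rac1 ~]$ srvctl stop database -d DELL -o immediate
[oracle@rac1 ~]$ srvctl start database -d DELL
[oracle@rac1 ~]$ srvctl stop instance -d DELL -i DELL2
[oracle@rac1 ~]$ srvctl start instance -d DELL -i DELL2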
RAC Scan Details:
[root@rac1 ~]# srvctl config scan
SCAN name: dellc-scan, Network:
1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /dellc-scan.dell.com/192.168.1.30
[root@rac1 ~]#
RAC Scan Status:
[root@rac1 ~]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[root@rac1 ~]#
RAC Scan-Listener Port:
[root@rac1 ~]# srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
[root@rac1 ~]#
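The runtime status of the SCAN listener can be checked the same way (optional check; the output shown is what you would expect in this setup):
[root@rac1 ~]# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1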
How to check the Master node in RAC Clusterware:
[root@rac1 ~]#
[root@rac1 ~]#
rac1
rac2
[root@rac1 ~]# oclumon version
Cluster Health Monitor (OS), Release 19.0.0.0.0
Version : 19.18.0.0.0
[root@rac1 ~]# oclumon manage -get master
Master = rac1
[root@rac1 ~]# cat crsd.trc | grep 'MASTER NAME'
[root@rac1 ~]# cat ocssd.trc | grep 'master node'
11g Release checks:
[grid@rac1 ~]$ cat $ORACLE_HOME/log/$HOSTNAME/crsd/crsd.log | grep 'OCR MASTER' | tail -1
[root@rac1 ~]# cat
/u01/app/grid_home/log/rac1/crsd/crsd.log | grep 'OCR MASTER' | tail -1
2018-01-18 11:22:54.058: [
OCRMAS][1223579968]th_master:12: I AM THE NEW OCR MASTER at
incar 1. Node Number 1
[root@rac1 ~]#
[root@rac2 ~]# cat /u01/app/grid_home/log/rac2/crsd/crsd.log | grep 'OCR MASTER' | tail -1
2018-01-18 11:22:56.819: [
OCRMAS][1223526720]th_master: NEW OCR MASTER IS 1
The OCR automatic backup is taken by the OCR master node every 4 hours, so checking which node holds the latest backup also tells you the master node.
[root@rac2 ~]# ocrconfig -showbackup
Node Name and Node Status:
[root@rac1 ~]# olsnodes -n -i -s -t
rac1 1 rac1-vip Active
Unpinned
rac2 2 rac2-vip Active
Unpinned
[root@rac1 ~]#
[root@rac1 ~]# olsnodes -l -p
rac1    10.0.0.11
[root@rac1 ~]# |
Database Details:
RAC - 1
|
RAC - 2
|
Download the latest OPatch, then:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ . dell.env
[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep description
[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch version
Oracle Interim Patch Installer version 1.0.0.0.64
Copyright (c) 2011 Oracle Corporation. All Rights Reserved.
Oracle recommends you to use the latest OPatch version and read the OPatch documentation
available in the OPatch/docs directory for usage. For information about the latest OPatch
and other support-related issues, refer to document ID 293369.1 available on My Oracle
Support (https://myoraclesupport.oracle.com)
OPatch Version: 1.0.0.0.64
[oracle@rac1 ~]$ $ORACLE_HOME/OPatch/opatch version | grep "Installer version"
Oracle Interim Patch Installer version 1.0.0.0.64
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu
Jan 25 14:56:25 2018
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release
11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application
Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing
options
SQL>
SQL> select * from gv$sessions_count;   -- to check 19C RAC DB Sessions count
SQL> select inst_id, count(*) from gv$session group by inst_id;
SQL> select inst_id, machine, count(*) from gv$session
     --where machine='apps_server_host' --and status='ACTIVE'
     group by inst_id, machine order by machine;
SQL>
SQL> col HOST_NAME for a9;
SQL> col OPEN_MODE for a10;
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,HOST_NAME,STATUS,DATABASE_STATUS,INSTANCE_ROLE,ACTIVE_STATE,INSTANCE_MODE from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME STATUS       DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST INSTANCE_MO
--------------- ---------------- --------- ------------ ----------------- ------------------ --------- -----------
              2 DELL2            rac2      OPEN         ACTIVE            PRIMARY_INSTANCE   NORMAL    REGULAR
              1 DELL1            rac1      OPEN         ACTIVE            PRIMARY_INSTANCE   NORMAL    REGULAR
SQL>
SQL> select name,OPEN_MODE,CREATED,LOG_MODE,CONTROLFILE_TYPE,FLASHBACK_ON,DATABASE_ROLE,GUARD_STATUS,PROTECTION_MODE from gv$database;
NAME      OPEN_MODE            CREATED   LOG_MODE     CONTROL FLASHBACK_ON       DATABASE_ROLE    GUARD_S PROTECTION_MODE
--------- -------------------- --------- ------------ ------- ------------------ ---------------- ------- --------------------
DELL      READ WRITE           22-SEP-21 ARCHIVELOG   CURRENT NO                 PRIMARY          NONE    MAXIMUM PERFORMANCE
DELL      READ WRITE           22-SEP-21 ARCHIVELOG   CURRENT NO                 PRIMARY          NONE    MAXIMUM PERFORMANCE
SQL>
SQL> select INSTANCE_NUMBER,INSTANCE_NAME,NAME,HOST_NAME,STATUS,DATABASE_STATUS,OPEN_MODE,LOG_MODE,FLASHBACK_ON,DATABASE_ROLE from gv$instance,gv$database;

INSTANCE_NUMBER INSTANCE_NAME    NAME      HOST_NAME STATUS       DATABASE_STATUS   OPEN_MODE  LOG_MODE     FLASHBACK_ON DATABASE_ROLE
--------------- ---------------- --------- --------- ------------ ----------------- ---------- ------------ ------------ ----------------
              2 CDELL2           CDELL     rac2      OPEN         ACTIVE            READ WRITE ARCHIVELOG   NO           PRIMARY
              2 CDELL2           CDELL     rac2      OPEN         ACTIVE            READ WRITE ARCHIVELOG   NO           PRIMARY
              1 CDELL1           CDELL     rac1      OPEN         ACTIVE            READ WRITE ARCHIVELOG   NO           PRIMARY
              1 CDELL1           CDELL     rac1      OPEN         ACTIVE            READ WRITE ARCHIVELOG   NO           PRIMARY

SQL>
SQL> show parameter cluster
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2
cluster_interconnects                string
SQL>
SQL>
SQL> col INST_NAME for a20;
SQL> select * from SYS.V_$ACTIVE_INSTANCES;

INST_NUMBER INST_NAME            CON_ID
----------- -------------------- ----------
          1 rac1:CDELL1                   0
          2 rac2:CDELL2                   0

SQL> select instance from SYS.V_$THREAD;
C-R-D (Controlfile / Redo / Datafile) Locations & Archive Log Status:
SQL>
SQL> select name from v$controlfile;
NAME
--------------------------------------------------------------------------------
+DATA/dell/control01.ctl
+DATA/dell/control02.ctl
SQL>
SQL> col member for a30;
select * from gv$logfile;
   INST_ID     GROUP# STATUS  TYPE    MEMBER                         IS_
---------- ---------- ------- ------- ------------------------------ ---
         1          2         ONLINE  +DATA/dell/redo02.log          NO
         1          1         ONLINE  +DATA/dell/redo01.log          NO
         1          3         ONLINE  +DATA/dell/redo03.log          NO
         1          4         ONLINE  +DATA/dell/redo04.log          NO
         2          2         ONLINE  +DATA/dell/redo02.log          NO
         2          1         ONLINE  +DATA/dell/redo01.log          NO
         2          3         ONLINE  +DATA/dell/redo03.log          NO
         2          4         ONLINE  +DATA/dell/redo04.log          NO

8 rows selected.
SQL> select * from gv$log;

   INST_ID     GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          1          1   52428800        512          1 NO  INACTIVE                945184 30-JAN-18       949294 30-JAN-18
         1          2          1          2   52428800        512          1 NO  CURRENT                 949294 30-JAN-18   2.8147E+14
         1          3          2          1   52428800        512          1 NO  CURRENT                 954820 30-JAN-18   2.8147E+14 30-JAN-18
         1          4          2          0   52428800        512          1 YES UNUSED                       0                      0
         2          1          1          1   52428800        512          1 NO  INACTIVE                945184 30-JAN-18       949294 30-JAN-18
         2          2          1          2   52428800        512          1 NO  CURRENT                 949294 30-JAN-18   2.8147E+14
         2          3          2          1   52428800        512          1 NO  CURRENT                 954820 30-JAN-18   2.8147E+14 30-JAN-18
         2          4          2          0   52428800        512          1 YES UNUSED                       0                      0

8 rows selected.

SQL> col TABLESPACE_NAME for a20;
SQL> col FILE_NAME for a40;
SQL> select TABLESPACE_NAME, FILE_NAME, STATUS from dba_data_files;

TABLESPACE_NAME      FILE_NAME                                STATUS
-------------------- ---------------------------------------- ---------
USERS                +DATA/dell/users01.dbf                   AVAILABLE
UNDOTBS1             +DATA/dell/undotbs01.dbf                 AVAILABLE
SYSAUX               +DATA/dell/sysaux01.dbf                  AVAILABLE
SYSTEM               +DATA/dell/system01.dbf                  AVAILABLE
UNDOTBS2             +DATA/dell/undotbs02.dbf                 AVAILABLE

SQL>
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Current log sequence 2
SQL>
SQL> SELECT value AS db_charset FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';   -- Database character set

DB_CHARSET
----------------------------------------------------------------
AR8MSWIN1256

SQL> SELECT value AS db_ncharset FROM nls_database_parameters WHERE parameter = 'NLS_NCHAR_CHARACTERSET';   -- National character set

DB_NCHARSET
----------------------------------------------------------------
AL16UTF16

SQL> select comp_id,comp_name,version,status from dba_registry;
SQL> select description from dba_registry_sqlpatch;   -- last applied BUNDLE PATCH (BP) in EBS DB
SQL>
Query to Verify ASM Disks, Disk Groups, and Disk Space:
SELECT SUBSTR(dg.name,1,16) AS diskgroup, SUBSTR(d.name,1,16) AS asmdisk, dg.TYPE, dg.STATE,
       d.mount_status, d.state, d.TOTAL_MB AS per_disk, dg.TOTAL_MB,
       SUBSTR(d.failgroup,1,16) AS failgroup
  FROM V$ASM_DISKGROUP dg, V$ASM_DISK d
 WHERE dg.group_number = d.group_number;
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@rac1 ~]$ |
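One more query that is often useful on RAC (not part of the original list; the output shown is illustrative for this setup) reports the interconnect each instance is actually using:
SQL> select * from gv$cluster_interconnects;

   INST_ID NAME       IP_ADDRESS       IS_ SOURCE
---------- ---------- ---------------- --- -------------------------------
         1 eth1       10.0.0.11        NO  Oracle Cluster Repository
         2 eth1       10.0.0.12        NO  Oracle Cluster Repository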
ASM Instance Details:
RAC - 1 | RAC - 2 |
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ . dell.env
[grid@rac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 25 14:56:25 2018
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> set linesize 200;
SQL> select INSTANCE_NAME, STATUS, DATABASE_STATUS, ACTIVE_STATE, INSTANCE_MODE, EDITION from gv$instance;

INSTANCE_NAME    STATUS       DATABASE_STATUS   ACTIVE_ST INSTANCE_MO EDITION
---------------- ------------ ----------------- --------- ----------- -------
+ASM1            STARTED      ACTIVE            NORMAL    REGULAR     EE
+ASM2            STARTED      ACTIVE            NORMAL    REGULAR     EE

SQL> COLUMN name FORMAT A25
SQL> COLUMN value FORMAT A65
SQL> set linesize 200
SQL> SELECT * FROM gv$diag_info;

$ORACLE_BASE = /u01/app/grid
19c Cluster Log: tailf /u01/app/grid/diag/crs/$HOSTNAME/crs/trace/alert.log
19c ASM Log:     tailf /u01/app/grid/diag/asm/+asm/$ORACLE_SID/trace/alert_$ORACLE_SID.log
|
OCR & Voting Disks Details:
RAC - 1
|
RAC - 2
|
Oracle RAC 11gR2 Voting Disks & OCR Backups:
Voting Disks:
In 11g Release 2 you no longer have to back up the voting disks. In fact, according to the
Oracle documentation, restoring voting disks that were copied with the "dd" or "cp" command
may prevent your clusterware from starting up.
In 11g Release 2 the voting disk data is automatically backed up into the OCR whenever there
is a configuration change, and that data is automatically restored to any voting disk that is added.
OCR BACKUP:
Automatic backups:
a) Oracle Clusterware (CRSD) automatically creates OCR backups every 4 hours.
b) A backup is created for each full day.
c) A backup is created at the end of each week.
d) Oracle Database retains the last three copies of OCR.
Manual backups:
- can be taken using the "ocrconfig -manualbackup" command
Example:
[root@rac1 ~]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[root@rac1 ~]#

Since this is a fresh system, it does not contain any backups yet.
Now let's perform a manual backup and check the results.

[root@rac1 ~]# ocrconfig -manualbackup
rac1    2018/06/24 08:44:32    /u01/app/grid_home/cdata/dellc/backup_20180624_084432.ocr
[root@rac1 ~]#
[root@rac1 ~]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
rac1    2018/06/24 08:44:32    /u01/app/grid_home/cdata/dellc/backup_20180624_084432.ocr
[root@rac1 ~]#
Example 2: Automatic backup of the OCR every 4 hours

[root@rac1 ~]#
[root@rac1 ~]# ocrconfig -showbackup
rac1    2018/06/25 06:48:04    /u01/app/grid_home/cdata/dellc/backup00.ocr
rac1    2018/06/25 02:48:03    /u01/app/grid_home/cdata/dellc/backup01.ocr
rac1    2018/06/24 22:48:03    /u01/app/grid_home/cdata/dellc/backup02.ocr
rac1    2018/06/24 14:48:02    /u01/app/grid_home/cdata/dellc/day.ocr
rac1    2018/06/24 14:48:02    /u01/app/grid_home/cdata/dellc/week.ocr
rac1    2018/06/25 07:29:47    /u01/app/grid_home/cdata/dellc/backup_20180625_072947.ocr
[root@rac1 ~]#
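If the OCR ever has to be restored from one of these backups, a minimal sketch (assumes the backup paths shown above; the clusterware stack must be stopped on all nodes first, and the restore is run as root on one node only):

[root@rac1 ~]# crsctl stop crs -f                    (repeat on rac2)
[root@rac1 ~]# ocrconfig -restore /u01/app/grid_home/cdata/dellc/backup00.ocr
[root@rac1 ~]# crsctl start crs                      (repeat on rac2)
[root@rac1 ~]# ocrcheck

The default backup location can also be moved with "ocrconfig -backuploc <directory>" if you prefer a shared location.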
OCRDUMP:
[root@rac1 ~]# cd /u01/app/grid_home/log/rac1/client
[root@rac1 client]#
[root@rac1 client]# ls -ltr
-rw-r--r-- 1 root root 254 Jun 24 11:47 ocrdump_4406.log
[root@rac1 client]#
[root@rac1 client]# cat ocrdump_4406.log
Oracle Database 11g Clusterware Release 11.2.0.4.0 - Production Copyright 1996, 2011 Oracle.
All rights reserved.
2018-06-24 11:47:18.871: [ OCRDUMP][1001924320]ocrdump starts...
2018-06-24 11:47:20.557: [ OCRDUMP][1001924320]Exiting [status=success]...
[root@rac1 client]#
Voting Disk:
[root@rac1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                 File Name                       Disk group
--  -----    -----------------                 ---------                       ----------
 1. ONLINE   0b953b34f6d04ffabfac9e19afc98c07  (/dev/oracleasm/disks/DELLASM)  [DATA]
Located 1 voting disk(s).
[root@rac1 ~]#
|
RAC - 1 | RAC - 2 |
[root@rac1 ~]# oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.0.0.0     global  cluster_interconnect,asm
[root@rac1 ~]#

Flex Mode Setup for HUB/Leaf Nodes (mandatory from GI 12.2):
FAQ: Oracle Flex ASM 12c / 12.1 (Doc ID 1573137.1)

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
[grid@rac1 ~]$ asmcmd lsdg
[grid@rac1 ~]$ asmcmd -p
ASMCMD [+] >
ASMCMD [+] > showclustermode
ASM cluster : Flex mode disabled - Direct Storage Access
ASMCMD [+] > showclusterstate
Normal
ASMCMD [+] >

You cannot convert back to a standard ASM cluster from an Oracle Flex Cluster.
From 12.2, Flex ASM is enabled by default and cannot be disabled. |
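If the private interconnect ever has to be re-registered (for example after moving it to a different subnet), a hedged sketch using the interface and subnet from this guide, run as the grid owner or root and followed by a clusterware restart:

[root@rac1 ~]# oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
[root@rac1 ~]# oifcfg getif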
RAC Basic Commands:
OHAS   - Oracle High Availability Service
CRS    - Cluster Ready Services
CSS    - Cluster Synchronization Service
EVM    - Event Manager
CTSS   - Cluster Time Synchronization Service
CRSCTL - Cluster Ready Service Control
SRVCTL - Server Control
OCR    - Oracle Cluster Registry
ASM    - Automatic Storage Management
Voting Disk
adrci ("show log") for the grid listener logs (/u01/app/grid/diag/tnslsnr/rac1/listener/alert)
blkid | grep asm (displays the ASM label to OS disk partition mapping, example below)
/dev/sdc1: LABEL="DATA001" TYPE="oracleasm"
lsblk
kfod
$ kfod op=DISKS label=TRUE disks=ALL name=TRUE   (identify ASM disk labels)
$ kfod op=DISKS disks=ALL name=TRUE              (identify ASM disks)
$ kfod op=groups                                 (check disk group redundancy)
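As an alternative to kfod, much of the same information can be pulled with asmcmd as the grid user; a small hedged sketch:

[grid@rac1 ~]$ asmcmd lsdg        (disk group state, redundancy, total and free MB)
[grid@rac1 ~]$ asmcmd lsdsk -k    (per-disk name, failgroup, size and path)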
*****************************************
========== RAC Practice Document ==========
*****************************************
=========================================
========== RAC Basic Commands ===========
=========================================
Grid alert log file location (11g): /u01/app/grid_home/log/rac1/alertrac1.log
Clusterware/ASM alert log location (12c and later): /u01/app/grid/diag/crs/rac1/crs/trace/alert.log
olsnodes -c                  (name of the cluster)
cemutlo -n                   (name of the cluster)
olsnodes -n -i -s -t         (node name with number, VIP, status and pin state)
olsnodes -l -p               (private IP details)
ocrcheck                     (integrity and location of the current OCR file)
ocrcheck -config             (location of the OCR file)
ocrcheck -config -local      (location of the OLR file)
ocrcheck -local              (integrity of the Oracle Local Registry (OLR) on the local node)
ocrcheck -h
ocrconfig -showbackup        (last three 4-hourly, daily, weekly and manual backups)
ocrconfig -manualbackup      (perform a manual backup)
#---CRSCTL Commands
crsctl status server
crsctl get hostname                       (hostname of the local node)
crsctl query crs releasepatch             (exact cluster version including patches)
crsctl query crs releaseversion           (cluster version)
crsctl query crs activeversion            (cluster version)
crsctl query crs softwarepatch
crsctl query crs softwareversion rac2     (cluster version on RAC2)
crsctl check/start/stop crs               (health of Oracle Clusterware on the local server)
crsctl stop crs -f                        (force stop of Oracle Clusterware on the local server)
crsctl check cluster                      (check for the local node; OHAS not included)
crsctl check cluster -all                 (check for all nodes; OHAS not included)
crsctl stat res -t                        (resources running on the cluster)
crsctl status resource -t -init           (background processes for resources)
crs_stat -t                               (status of all resources in tabular form; deprecated in 11gR2)
crs_stat -t -v                            (verbose status of all resources)
crsctl get node role status -all          (to identify Hub/Leaf nodes)
crsctl query css votedisk
crsctl config/enable/disable crs          (OHAS autostart if the server reboots {only CRS comes up, not the DB})
[root@rac1 ~]# srvctl config database
[root@rac1 ~]# crsctl status resource ora.dell.db -p | grep AUTO_START
AUTO_START=restore
[root@rac1 ~]#
Reference: AUTO_START (Doc ID 2427635.1)
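AUTO_START=restore means the database instance comes back only to the state it was in before the node went down. A hedged sketch for changing this via the database management policy (DELL is the database name used in this guide):

[oracle@rac1 ~]$ srvctl modify database -d DELL -y AUTOMATIC    (always start with the clusterware)
[oracle@rac1 ~]$ srvctl modify database -d DELL -y MANUAL       (never auto-start)
[oracle@rac1 ~]$ srvctl config database -d DELL | grep -i policy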
#---SRVCTL Commands
srvctl -V                                                    (version of SRVCTL)
srvctl config database                                       (list the databases configured in the cluster)
srvctl config database -d $ORACLE_UNQNAME                    (details of a configured database)
srvctl remove database -d $ORACLE_UNQNAME -f -y
srvctl start/stop/status database -d $ORACLE_UNQNAME         (apply the change to the database on all nodes)
srvctl start database -d $ORACLE_UNQNAME -o mount/nomount    (start the database in nomount or mount stage on all nodes)
srvctl start/stop/status instance -d $ORACLE_UNQNAME -n rac2 (apply the change to a single instance)
srvctl start/stop/status instance -d $ORACLE_UNQNAME -i $ORACLE_SID
srvctl start/stop/status listener                            (listener status on all nodes)
srvctl start/stop/status listener -n rac2                    (listener status on a particular node)
srvctl start/status nodeapps                                 (additional RAC components: VIP, network, ONS, GSD)
srvctl start/status nodeapps -n rac2                         (additional RAC components on one node)
srvctl stop nodeapps -f                                      (stop additional RAC components on all nodes)
srvctl stop nodeapps -f -n rac2                              (stop additional RAC components on a particular node)
srvctl config asm                                            (config info of ASM)
srvctl config asm -detail                                    (detailed config info of ASM)
srvctl config scan                                           (config info of the SCAN IPs)
srvctl config nodeapps -viponly                              (config info of the VIPs)
srvctl config scan_listener                                  (SCAN listener name and port)
srvctl status scan                                           (status of the SCAN VIPs)
srvctl status scan_listener
srvctl status asm                                            (ASM status per node, running / not running)
srvctl status asm -detail                                    (detailed ASM status on all nodes)
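Putting a few of these together, a hedged sketch of a rolling instance restart for the DELL database from this guide (one instance at a time, so the service stays available throughout):

[oracle@rac1 ~]$ srvctl stop instance -d DELL -i DELL1
[oracle@rac1 ~]$ srvctl start instance -d DELL -i DELL1
[oracle@rac1 ~]$ srvctl status database -d DELL
[oracle@rac1 ~]$ srvctl stop instance -d DELL -i DELL2
[oracle@rac1 ~]$ srvctl start instance -d DELL -i DELL2
[oracle@rac1 ~]$ srvctl status database -d DELL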
Expected errors and corrections during RAC installation:
RAC - 1 | RAC - 2 |
1. Node connectivity status: Failed
Sol: Check the IP & subnet mask of both machines with the setup & neat commands
|
|
2. Error PRVE-0426: the size of the in-memory file system mounted at /dev/shm is '998' MB,
which is less than the required size of 2014 MB on node ""
Sol: mount -t tmpfs shmfs -o size=3G /dev/shm (see the fstab note just below to make it persistent)
|
|
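The mount command above only fixes /dev/shm until the next reboot. A hedged sketch for making the larger size persistent is to adjust the tmpfs line in /etc/fstab (the 3g value simply mirrors the mount command used here):

tmpfs    /dev/shm    tmpfs    defaults,size=3g    0 0

After editing, "mount -o remount /dev/shm" re-reads the entry without a reboot.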
3. zeroconf check and OS kernel parameter panic_on_oops checks failed
Sol: Fixed with /tmp/CVU_12.1.0.2.0_grid/runfixup.sh
|
|
4. Task resolv.conf Integrity - DNS response time for an unreachable node
Sol: set the "options" parameters in the /etc/resolv.conf file (see item 6 below)
|
|
5. 2017/09/19 16:11:04 CLSRSC-1003: Failed to start resource OC4J
Sol: uncomment (remove the #) the 127.0.0.1 entry in the hosts file:
127.0.0.1   localhost.localdomain localhost
|
|
6. PRVF-5636: Task resolv.conf Integrity - This task checks the consistency of the
/etc/resolv.conf file across nodes
Sol: check /etc/resolv.conf
search dell.com
nameserver 192.168.1.11
options attempts:2
options timeout:1
7. ASM entry in oratab for the ASM instance setting on all nodes:
Linux: /etc/oratab    Solaris: /etc/opt/oracle/oratab
Sol: +ASM1:/u01/app/grid_home:N

8. Unable to start/stop ora.storage
CRS-2883: Resource 'ora.storage' failed during Clusterware stack start.
Sol: start the ASM instance manually with the "STARTUP" command
SQL> startup
ASM instance started
Total System Global Area 3213326304 bytes
Fixed Size                  8878048 bytes
Variable Size            3170893824 bytes
ASM Cache                  33554432 bytes
ASM diskgroups mounted
SQL> select INSTANCE_NAME, STATUS, DATABASE_STATUS, ACTIVE_STATE, INSTANCE_MODE, EDITION from gv$instance;
Now the status of CRS is OK on node 1...

9. Unable to find the port number for the listener dependencies
Sol: ASMNET1LSNR_ASM |
This article is intended to help those who would like to install and configure Grid Infrastructure.
Thanks for Reading
Regards,
Mohammed Areefuddin.
Suggested Topics:
Linux | DATABASE | RMAN | RAC | EBS
R1229 M7 Clone
RAC DataGuard | Pluggable DB Clone
appsutil for DB
JDK JRE upgrade
Add EBS Node