In a RAC cluster, 2 disks (to be precise, files) are important
- 1. Voting Disk - Heartbeat of the RAC components
- 2. OCR - Oracle Cluster Registry - It holds the list of all RAC components (much like the Windows registry)
It has all the components of RAC, such as DB instances, ASM instances, listeners, SCAN listeners, and RAC nodes.
The voting disk checks the heartbeat of all the RAC nodes; if it cannot receive the heartbeat from a particular node, it evicts that node from the RAC cluster.
The voting disk gets the details of all the nodes whose heartbeat it is supposed to check from the OCR.
Utilities to determine the location of the OCR
- OLR (Oracle Local Registry) - registry specific to a single node
OCR - registry for the overall RAC cluster
OLR - registry for one specific node
- OCRCHECK
- ASMCMD
- To work with the OCR, the environment needs to be set to an ASM instance (+ASM1/+ASM2) or the Grid Infrastructure (GI) home
--------------------------------------------------------------------------------------------
- To know which ORACLE_SID value to enter when running . oraenv on a RAC node, check the entries in /etc/oratab
[oracle@rac1 ~]$ cat /etc/oratab
#Backup file is /u01/app/oracle/product/12.1.0/dbhome_1/srvm/admin/oratab.bak.rac1 line added by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/u01/app/12.1.0/grid:N: # line added by Agent
-MGMTDB:/u01/app/12.1.0/grid:N: # line added by Agent
MyDB:/u01/app/oracle/product/12.1.0/dbhome_1:N: # line added by Agent
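The colon-delimited oratab format described above is easy to split with standard tools. A minimal sketch, using a sample file that mirrors the entries shown (the /tmp path is just for illustration):

```shell
# Write a sample oratab-style file; comment lines start with '#'.
cat <<'EOF' > /tmp/oratab.sample
# comment line
+ASM1:/u01/app/12.1.0/grid:N:
MyDB:/u01/app/oracle/product/12.1.0/dbhome_1:N:
EOF

# Split each non-comment entry into SID, ORACLE_HOME, and autostart flag.
awk -F: '!/^#/ && NF >= 3 { printf "SID=%s HOME=%s AUTOSTART=%s\n", $1, $2, $3 }' /tmp/oratab.sample
```

This prints one line per registered instance, e.g. SID=+ASM1 HOME=/u01/app/12.1.0/grid AUTOSTART=N, which is exactly what oraenv consults when you enter an ORACLE_SID.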
- After entering the ORACLE_SID value while setting the environment, oraenv should not prompt for any other parameter; that confirms the environment was set correctly.
[oracle@rac2 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1 <Entered the wrong ASM instance value, +ASM1 instead of +ASM2, so oraenv prompted for the ORACLE_HOME value as below>
ORACLE_HOME = [/home/oracle] ?
[oracle@rac2 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM2 <On entering the correct ASM instance +ASM2 on node 2, oraenv did not prompt
for any further value and set the environment successfully, as below>
The Oracle base has been changed from exit to /u01/app/oracle
[oracle@rac2 ~]$
----------------------------------------------------------------------------------------------------------------------------------
Find location of the OLR (Oracle Local Registry) file:
To find the location of the OLR disk (or file), use ocrcheck with the -local flag as below.
Per the details below, the OLR binary file is located at /u01/app/12.1.0/grid/cdata/rac1.olr
- On node 1 (rac1)
[root@rac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rac1 ~]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 968
Available space (kbytes) : 408600
ID : 882531601
Device/File Name : /u01/app/12.1.0/grid/cdata/rac1.olr
Device/File integrity check succeeded
Local registry integrity check succeeded
Logical corruption check succeeded
- On node 2 (rac2)
[root@rac2 ~]# . oraenv
ORACLE_SID = [+asm2] ? +ASM2
The Oracle base has been changed from /home/oracle to /u01/app/oracle
[root@rac2 ~]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 976
Available space (kbytes) : 408592
ID : 873311553
Device/File Name : /u01/app/12.1.0/grid/cdata/rac2.olr
Device/File integrity check succeeded
Local registry integrity check succeeded
Logical corruption check succeeded
Alternatively, one can find the location of the OLR file from the olr.loc file present in /etc/oracle
- On node 1 (rac1)
[root@rac1 oracle]# pwd
/etc/oracle
[root@rac1 oracle]# ls -lart
total 2876
drwxr-xr-x 3 root oinstall 4096 Nov 8 06:12 scls_scr
drwxrwxr-x 5 root oinstall 4096 Nov 8 06:12 oprocd
-rws--x--- 1 root oinstall 2903931 Nov 8 06:12 setasmgid
-rw-r--r-- 1 root root 0 Nov 8 06:12 olr.loc.orig
-rw-r--r-- 1 root oinstall 80 Nov 8 06:12 olr.loc
-rw-r--r-- 1 root root 0 Nov 8 06:12 ocr.loc.orig
-rw-r--r-- 1 root oinstall 37 Nov 8 06:12 ocr.loc
drwxr-xr-x 6 root oinstall 4096 Nov 8 06:12 .
drwxrwx--- 2 root oinstall 4096 Nov 8 06:29 lastgasp
drwxrwxrwt 2 root oinstall 4096 Nov 16 23:30 maps
drwxr-xr-x. 123 root root 12288 Nov 17 18:56 ..
[root@rac1 oracle]# more olr.loc
olrconfig_loc=/u01/app/12.1.0/grid/cdata/rac1.olr
crs_home=/u01/app/12.1.0/grid
- On node 2 (rac2)
[root@rac2 ~]# cd /etc/oracle
[root@rac2 oracle]# ls -lart
total 2876
drwxr-xr-x 3 root oinstall 4096 Nov 8 06:36 scls_scr
drwxrwxr-x 5 root oinstall 4096 Nov 8 06:36 oprocd
-rws--x--- 1 root oinstall 2903931 Nov 8 06:36 setasmgid
-rw-r--r-- 1 root root 0 Nov 8 06:36 ocr.loc.orig
-rw-r--r-- 1 root oinstall 37 Nov 8 06:36 ocr.loc
-rw-r--r-- 1 root root 0 Nov 8 06:36 olr.loc.orig
-rw-r--r-- 1 root oinstall 80 Nov 8 06:36 olr.loc
drwxr-xr-x 6 root oinstall 4096 Nov 8 06:36 .
drwxrwx--- 2 root oinstall 4096 Nov 8 06:42 lastgasp
drwxrwxrwt 2 root oinstall 4096 Nov 16 23:30 maps
drwxr-xr-x. 123 root root 12288 Nov 17 18:56 ..
[root@rac2 oracle]# more olr.loc
olrconfig_loc=/u01/app/12.1.0/grid/cdata/rac2.olr
crs_home=/u01/app/12.1.0/grid
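Since olr.loc is a plain key=value file, the OLR path can be pulled out with grep and cut. A small sketch against sample content mirroring the files above (the /tmp path is only for illustration):

```shell
# Sample olr.loc content, matching the node 1 output shown above.
cat <<'EOF' > /tmp/olr.loc.sample
olrconfig_loc=/u01/app/12.1.0/grid/cdata/rac1.olr
crs_home=/u01/app/12.1.0/grid
EOF

# Keep only the olrconfig_loc line and print everything after the '='.
grep '^olrconfig_loc=' /tmp/olr.loc.sample | cut -d= -f2
```

This prints /u01/app/12.1.0/grid/cdata/rac1.olr, the same path ocrcheck -local reported.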
Find location of the OCR disk file:
- The OCR and voting disk must reside in an ASM disk group
- The OCR and voting disk work hand-in-hand to determine which nodes are currently active and responding to the heartbeat
- In a RAC cluster, all nodes should have the same server timestamp; if they drift apart by even a few seconds, those nodes may be considered inactive and evicted from the cluster. To maintain the same timestamp across the cluster nodes, NTP (Network Time Protocol) is used.
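The tolerance check behind that eviction rule can be sketched in plain shell. The epoch values and the 5-second threshold here are made up for illustration; in a real cluster, clock synchronization is verified with tooling such as cluvfy comp clocksync rather than by hand:

```shell
# Hypothetical timestamps (seconds since epoch) from two cluster nodes.
ref_epoch=1700000000     # assumed reference node's clock
local_epoch=1700000002   # assumed local node's clock

# Absolute drift between the two clocks.
drift=$(( local_epoch - ref_epoch ))
drift=${drift#-}

# Evict-style decision: tolerate only a small skew (5s is an assumed limit).
if [ "$drift" -le 5 ]; then
    echo "clock within tolerance"
else
    echo "clock drift too large - node would risk eviction"
fi
```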
Using ocrcheck, one can find the ASM disk group in which the OCR file is located
- [root@rac1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1528
Available space (kbytes) : 408040
ID : 121423581
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
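When scripting, the disk group can be scraped out of captured ocrcheck output. A sketch using a heredoc that mirrors the listing above (the /tmp path is only for illustration):

```shell
# Sample lines captured from 'ocrcheck' output, as shown above.
cat <<'EOF' > /tmp/ocrcheck.sample
         Version                  :          4
         Total space (kbytes)     :     409568
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
EOF

# Split on ':' and strip blanks from the value of the Device/File Name row.
awk -F: '/Device\/File Name/ { gsub(/ /, "", $2); print $2 }' /tmp/ocrcheck.sample
```

This prints +DATA, the ASM disk group holding the OCR.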
- Location of the OCR disk file in ASM disk group
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$ asmcmd
ASMCMD> ls
DATA/
ASMCMD> cd data
ASMCMD> ls
ASM/
MYDB/
_MGMTDB/
orapwasm
rac-cluster/ <-- This is the cluster name given during the GI installation
ASMCMD> cd rac-cluster
ASMCMD> ls
ASMPARAMETERFILE/
OCRFILE/
ASMCMD> cd ocrfile
ASMCMD> ls
REGISTRY.255.830932005 <-- The OCR file is located in the ASM disk group at
+data/rac-cluster/ocrfile/registry.255.830932005 (binary file)
Find location of the Voting Disk:
- Typically, CRSCTL utility commands should be issued from the Grid Infrastructure environment (+ASM1, +ASM2, ...), while
SRVCTL utility commands should be issued from the database environment (<DBInstance>1, <DBInstance>2, ...)
- To find the location of the voting disk file, execute the command below in the GI environment
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 7427f69521004f49bf9221af584d4e6e (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
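The device path and disk group can likewise be parsed out of captured votedisk output. A sketch, with a heredoc mirroring the line above (the /tmp path is only for illustration):

```shell
# Sample line captured from 'crsctl query css votedisk', as shown above.
cat <<'EOF' > /tmp/votedisk.sample
 1. ONLINE   7427f69521004f49bf9221af584d4e6e (/dev/oracleasm/disks/DISK1) [DATA]
EOF

# Capture the device path from the (...) and the disk group from the [...].
sed -n 's/.*(\(.*\)) \[\(.*\)\].*/device=\1 group=\2/p' /tmp/votedisk.sample
```

This prints device=/dev/oracleasm/disks/DISK1 group=DATA, confirming the voting disk lives on the same DATA disk group as the OCR.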