Latest Blogs

Wednesday, January 26, 2022

OCR and VOTE disks moving to newly added DISK - 19c


Rem: Move the existing OCR, voting disk, SPFILE, and password file to a new ASM diskgroup


Rem: This was performed on a test setup. Do not perform these steps on production without proper testing.

Rem: The author takes no responsibility that these steps will work as-is in your environment

Rem: It is only for knowledge sharing

Rem: Author: Hayat Mohammad Khan


(1)

Create a new physical disk on the storage and make it visible at the OS level

-New physical /dev/sdf1 disk created 

oracleasm createdisk ASMDISK_OCR2 /dev/sdf1


(2)

--New ASM Diskgroup creation

SQL> 

CREATE DISKGROUP OCR2 EXTERNAL REDUNDANCY  

DISK '/dev/oracleasm/disks/ASMDISK_OCR2' SIZE 1023M

ATTRIBUTE 'compatible.asm'='19.0.0.0','au_size'='4M';

 

(3)

--Verify existing diskgroup and asm disks 

select name, state, type from v$asm_diskgroup;

select name,path,group_number,header_status,total_mb,free_mb from v$asm_disk;


set lines 999;

col diskgroup for a15

col diskname for a15

col path for a35

select a.name DiskGroup,b.name DiskName, b.total_mb, (b.total_mb-b.free_mb) Used_MB, b.free_mb,b.path,b.header_status

from v$asm_disk b, v$asm_diskgroup a

where a.group_number (+) =b.group_number

order by b.group_number,b.name;

 

(4)

------------------OCR new disk addition

[root@dbwr1 ~]# /u01/app/19c/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          4

Total space (kbytes)     :     491684

Used space (kbytes)      :      84268

Available space (kbytes) :     407416

ID                       : 1171341430

Device/File Name         :        +GI

                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded
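As a sanity check before and after each ocrconfig change, the configured OCR locations can be pulled out of the ocrcheck output programmatically. A minimal sketch (the sample output embedded below is taken from this post; on a real cluster you would pipe `$GRID_HOME/bin/ocrcheck` into the function instead — the grid home path is an assumption):

```shell
#!/bin/sh
# Extract configured OCR device/file names from `ocrcheck` output.
ocr_locations() {
  # Split on ':', keep only "Device/File Name" rows, strip spaces.
  awk -F: '/Device\/File Name/ {gsub(/ /,"",$2); print $2}'
}

# Sample ocrcheck output (abridged from this post).
sample='Device/File Name         :        +GI
                                    Device/File integrity check succeeded
Cluster registry integrity check succeeded'

printf '%s\n' "$sample" | ocr_locations
# With the sample above this prints: +GI
```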


---Configure an additional OCR location in the new diskgroup

[root@dbwr1 ~]# /u01/app/19c/grid/bin/ocrconfig -add +OCR2

[root@dbwr1 ~]# /u01/app/19c/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          4

Total space (kbytes)     :     491684

Used space (kbytes)      :      84268

Available space (kbytes) :     407416

ID                       : 1171341430

Device/File Name         :        +GI

                                    Device/File integrity check succeeded

Device/File Name         :      +OCR2

                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded


---delete old ocr disk

[root@dbwr1 ~]# /u01/app/19c/grid/bin/ocrconfig -delete +GI

[root@dbwr1 ~]# 


---------------verification

[root@dbwr1 ~]# /u01/app/19c/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows :

Version                  :          4

Total space (kbytes)     :     491684

Used space (kbytes)      :      84268

Available space (kbytes) :     407416

ID                       : 1171341430

Device/File Name         :      +OCR2

                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded


(5)

----------Vote Disk

[root@dbwr1 ~]# /u01/app/19c/grid/bin/crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   03efa5aa41234f82bf19e8c7a04faf04 (/dev/oracleasm/disks/OCR) [GI]

Located 1 voting disk(s).

[root@dbwr1 ~]# 

---Move the voting disk to the new +OCR2 diskgroup

[root@dbwr1 ~]# /u01/app/19c/grid/bin/crsctl replace votedisk +OCR2

Successful addition of voting disk 9354b5963a284fc4bf0ebcd1dd23968c.

Successful deletion of voting disk 03efa5aa41234f82bf19e8c7a04faf04.

Successfully replaced voting disk group with +OCR2.

CRS-4266: Voting file(s) successfully replaced


[root@dbwr1 ~]# /u01/app/19c/grid/bin/crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   9354b5963a284fc4bf0ebcd1dd23968c (/dev/oracleasm/disks/ASMDISK_OCR2) [OCR2]

Located 1 voting disk(s).

[root@dbwr1 ~]#
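With external redundancy you expect exactly one ONLINE voting file after the replace, and that check can be scripted against the `crsctl query css votedisk` output. A small sketch using the sample line from this post (on a real cluster you would pipe the live `crsctl` output in instead):

```shell
#!/bin/sh
# Count ONLINE voting disks in `crsctl query css votedisk` output.
votedisks_online() {
  # Lines like " 1. ONLINE   <guid> (<path>) [<diskgroup>]"
  grep -c '^ *[0-9]*\. ONLINE'
}

sample=' 1. ONLINE   9354b5963a284fc4bf0ebcd1dd23968c (/dev/oracleasm/disks/ASMDISK_OCR2) [OCR2]'
count=$(printf '%s\n' "$sample" | votedisks_online)
echo "$count"
# With the sample above this prints: 1
```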


(6)

---Drop old diskgroup GI

------- Super important 

-------Before dropping, verify that no critical files exist on it. I had forgotten about the password file, and it stopped the ASM instance on the second node from starting.


DROP DISKGROUP GI INCLUDING CONTENTS;

*

ERROR at line 1:

ORA-15039: diskgroup not dropped

ORA-15027: active use of diskgroup "GI" precludes its dismount


Use the steps in the URL below to fix it:

https://aprakash.wordpress.com/2012/04/24/ora-15027-active-use-of-diskgroup-data-precludes-its-dismount/


>>> in my case SPFILE was in GI diskgroup

create pfile='/home/oracle/initASM.ora' from spfile;

create spfile='+OCR2' from pfile='/home/oracle/initASM.ora';


/u01/app/19c/grid/bin/gpnptool get


Node1:

/u01/app/19c/grid/bin/crsctl stop crs

Node2:

/u01/app/19c/grid/bin/crsctl stop crs


Node1:

/u01/app/19c/grid/bin/crsctl start crs

Node2:

/u01/app/19c/grid/bin/crsctl start crs


/u01/app/19c/grid/bin/crsctl check cluster -all


$ asmcmd ls -l +DATA/asm/asmparameterfile

Type Redund Striped Time Sys Name

ASMPARAMETERFILE UNPROT COARSE JUN 25 10:00:00 Y REGISTRY.253.722601213

$ asmcmd rm +DATA/asm/asmparameterfile/registry.253.722601213



---Verify Password file

asmcmd pwget --asm


--if the password file path is corrupted or lost, recreate it

asmcmd> pwcreate --asm +OCR2/orapwASM 'Welcome_1' -f

[root@dbwr1 disks]# /u01/app/19c/grid/bin/ocrdump /tmp/ocr.dmp


Open /tmp/ocr.dmp (e.g. with vi) and get the hash code of CRSUSER__ASM_001

Example:

[SYSTEM.ASM.CREDENTIALS.USERS.CRSUSER__ASM_001]

ORATEXT : 4946c0bced4eefeeff49dcb4749944f0:oracle

SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_NONE, USER_NAME : oracle, GROUP_NAME : oinstall}


/u01/app/19c/grid/bin/crsctl query credmaint -path ASM/Self -credtype userpass

[oracle@dbwr2 trace]$ crsctl get credmaint -path /ASM/Self/4946c0bced4eefeeff49dcb4749944f0 -credtype userpass -id 0 -attr passwd -local

pZxlhryTWaQcB25d2wCc0A00yCslI


[oracle@dbwr2 trace]$ 

asmcmd>

orapwusr --add sys

orapwusr --add ASMSNMP

orapwusr --grant sysdba ASMSNMP

orapwusr --grant sysasm ASMSNMP

orapwusr --add CRSUSER__ASM_001

---supply this password pZxlhryTWaQcB25d2wCc0A00yCslI


orapwusr --grant sysdba CRSUSER__ASM_001

orapwusr --grant sysasm CRSUSER__ASM_001

lspwusr


crsctl start crs -wait


Credit to below websites:

https://levipereira.wordpress.com/2012/01/11/explaining-how-to-store-ocr-voting-disks-and-asm-spfile-on-asm-diskgroup-rac-or-rac-extended/

https://eclipsys.ca/oracle-rac-12c-a-recipe-to-recover-from-losing-ocr-voting-disk-or-asm-password/

https://blogs.dbcloudsvc.com/oracle/how-to-recreate-the-lost-asm-password-in-oracle-clusterware/

https://dbamarco.wordpress.com/2016/09/15/losing-the-asm-password-file/

https://www.thegeekdiary.com/how-to-move-asm-spfile-to-a-different-disk-group/





Step by Step "Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup"

REM: Step by Step "Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup"
REM: Written By: Hayat Mohammad Khan, SM App&DB
REM: Dated: 07-JAN-2014
REM: Environment Used: Laptops, 11g R2 using Windows 7

Run at Primary:
select current_scn from v$database;

assume:1447102
---Maji PR>4393776



Run at Secondary:
select current_scn from v$database;

assume:1301571
---Hay DR>4390455


Run at Primary:
select scn_to_timestamp(1447102) from dual;

Run at Secondary:
select scn_to_timestamp(1301571) from dual;
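The incremental backup must start at (or below) the standby's current SCN, so the difference between the two SCNs above tells you roughly how much redo the backup has to cover. A trivial arithmetic sketch with the sample SCNs from this post:

```shell
#!/bin/sh
# Sample SCNs from this post: the primary is ahead of the standby.
primary_scn=1447102
standby_scn=1301571

# The gap the incremental backup has to cover.
gap=$((primary_scn - standby_scn))
echo "standby is $gap SCNs behind; take 'backup incremental from scn $standby_scn'"
```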


Run at Secondary:
alter database recover managed standby database cancel;

shutdown immediate;


Run at Primary:

--DR site SCN (assumed): 1301571
--DR SCN in my actual run: 4390455

RMAN>
run {
allocate channel c1 type disk format 'E:\app\DBarchivedest\%U.rmb';
backup incremental from scn 4390455 database;
}


sql>alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';
Actual Command:sql>alter database create standby controlfile as 'E:\app\DBarchivedest\CONTROLFILE01.CTL';



Copy both Backup and Controlfile to Secondary site using OS Commands


Run at Secondary:
sql>startup nomount;


>>Replace the controlfile with the one you just created on the primary.


sql>alter database mount standby database;

rman target /

-- backup taken at PR Site path
rman>catalog start with '/u01/oraback';

ActualCommand: rman>catalog start with 'E:\app\DBarchivedest';

Do you really want to catalog the above files (enter YES or NO)? yes


rman> recover database;

alter database recover managed standby database disconnect from session;


Run at both sides to verify:

select current_scn from v$database;



For complete Reading:
http://arup.blogspot.com/2009/12/resolving-gaps-in-data-guard-apply.html                              


Oracle Standby Site Plan Switch Over...10g/11g

REM: Prepared By Hayat Mohammad Khan
REM: Senior DBA PTCL, Pakistan
REM: hayathk@hotmail.com
REM: 06-Feb-2012
REM: Use at your own risk; neither the author nor Oracle is responsible for any command failure or malfunction

Tip: Reading the alert log file is the key to a successful DR site switchover activity
Tip: Keep calm and wait for each command to complete successfully; avoid rushing.
Tip: Verify the listener from both sites: tnsping pr-site, tnsping dr-site
Tip: Always take a backup before starting the switchover activity.
Tip: Keep your documents in a DR-DRILL folder
Tip: Keep an Oracle error details document handy, especially for Data Guard
Tip: Keep a list of Oracle V$ views related to Data Guard


===================================
SPFILE Verification:::
===================================

PR-SITE::
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=????

SPFILE Verification::
DR-SITE
FAL_SERVER=PRIM
FAL_CLIENT=STAN
STANDBY_FILE_MANAGEMENT=????

Read Below Website for Parameters For Fal_Client & Fal_Server:
http://myoracledba.blogspot.com/2008/08/falserver-falclientdataguard-falclient.html


===================================
GAP Verification:::
===================================

ON-PR-SITE Apply Log Switch::
alter system switch logfile;

ON-DR-SITE Verify::
SQL>select sequence#, applied from v$archived_log order by sequence#;



############################################################################################################################
{{{{{{{ Please only do this exercise if all of the above checks are confirmed }}}}}}}
############################################################################################################################

===================================
SWITCH OVER:::
===================================

You Must Verify:

1. PR site instance in OPEN, DR site instance in MOUNT stage
2. No active users connected to the databases
3. Make sure the last redo data transmitted from the primary database was applied
on the standby database
select sequence#, applied from v$archived_log;
Perform a log switch from the primary to verify at the DR site
4. Execute before and after, and document the results of the query below >>
 select database_role, open_mode, protection_mode, db_unique_name from v$database;
5. Always cancel recovery at the DR site




Please verify that other DB instances are down (if Oracle RAC is in use); stop them if not

Example
[DB-Name: R3P, Instance Name R3P002..... pl replace for your environment]

$ srvctl stop instance -d R3P -i R3P002


Switchover Steps

********************
SPRSite-1.
********************

Connect to PR Site

SQL>SELECT SWITCHOVER_STATUS FROM V$DATABASE;
VERIFY THE STATUS - IF FAILED DESTINATIONS IS RETURNED, DON'T PROCEED [very important]; VERIFY CONNECTIVITY TO DR ........ and the GAP....
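The go/no-go decision on the SWITCHOVER_STATUS value can be captured as a tiny gate in a wrapper script. A sketch (the decision rule mirrors this post: proceed only on TO STANDBY, or on SESSIONS ACTIVE using WITH SESSION SHUTDOWN; anything else stops the activity):

```shell
#!/bin/sh
# Decide whether to proceed, given the value returned by
# SELECT SWITCHOVER_STATUS FROM V$DATABASE; on the primary.
switchover_ok() {
  case "$1" in
    "TO STANDBY")      echo "proceed" ;;
    "SESSIONS ACTIVE") echo "proceed with WITH SESSION SHUTDOWN" ;;
    *)                 echo "do not proceed: $1" ;;
  esac
}

switchover_ok "TO STANDBY"
switchover_ok "FAILED DESTINATION"
```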

SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

Note: Please watch the alert log entries in a terminal session: tail -f alert*

********************
SDRSITE-2.
********************

If step# SPRSite-1 Completed successfully


----------------------------------------------------------------------------------
Connect to DR Site
----------------------------------------------------------------------------------
cancel any recovery if active

11G
SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
10G
SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

Note: Please watch the alert log entries in a terminal session: tail -f alert*

----------------------------------------------------------------------------------
Once you have entered the SQL command on the DR site for it to become primary,
immediately enter the command below on the PR site
----------------------------------------------------------------------------------
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;


********************
SPRSITE-3.   [new primary site]
********************

Recommended to perform this action on the new primary (Karachi site)

SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

oR

SQL>ALTER DATABASE OPEN;

********************
SPRDRSITE-3.
********************

Verify the DB mode

-----------
PR SITE
-----------

SQL> select open_mode from v$database;
select database_role, open_mode, protection_mode, db_unique_name from v$database;

OUTPUT>>>READ WRITE

-----------
DR SITE
-----------

SQL> select open_mode from v$database;
select database_role, open_mode, protection_mode, db_unique_name from v$database;

OUTPUT>>>MOUNT


********************
SPRSITE-4.
********************

SQL>ALTER SYSTEM SWITCH LOGFILE;

VERIFY GAP, and Applying Status.


********************
SPRSITE-5.
********************

Auto Redo Apply at New DR Site

SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;








===============================
Appendix A
==============================

SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS;
SELECT DESTINATION, ERROR FROM V$ARCHIVE_DEST;
SELECT * from v$dataguard_status order by timestamp desc;
SELECT status, pid, sequence# from v$managed_standby where process like 'MRP%';
SELECT * FROM V$ARCHIVE_GAP;
SELECT * FROM V$DATABASE;
SELECT * FROM V$LOG_HISTORY;

--Determining Which Log Files Were Not Received by the Standby Site
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL
WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
 THREAD# = LOCAL.THREAD#);
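Outside SQL, the same "which sequences did the standby never receive" comparison can be done on two sorted sequence lists, e.g. spooled from v$archived_log on each site. A sketch using `comm` (the sequence numbers below are made-up samples, not from this post):

```shell
#!/bin/sh
# Sequences archived on the primary vs. received on the standby
# (sample numbers; in practice spool these lists from v$archived_log).
printf '%s\n' 100 101 102 103 > /tmp/primary_seqs.$$
printf '%s\n' 100 101         > /tmp/standby_seqs.$$

# comm -23: lines only in the first (sorted) file, i.e. sequences
# the standby never received.
missing=$(comm -23 /tmp/primary_seqs.$$ /tmp/standby_seqs.$$)
echo "$missing"
rm -f /tmp/primary_seqs.$$ /tmp/standby_seqs.$$
# With the samples above this prints 102 and 103
```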

alter system set log_archive_dest_state_2='defer' scope=both sid='*';

--- Determine the most recent archived redo log file at each destination
SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
 FROM V$ARCHIVE_DEST_STATUS
 WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

alter database register or replace physical logfile '/ccsarch/1_339657_720905336.dbf';


===============================
Appendix B
==============================

TNS ENTRIES AT PR SITE

TNS ENTRIES AT DR SITE


V$SERVICES OUTPUT ATTACHED

RECOVER STANDBY DATABASE;

ALTER SYSTEM ARCHIVE LOG CURRENT;

alter database recover managed standby database disconnect from session;

recover managed standby database disconnect;

alter database recover managed standby database cancel;

SELECT SWITCHOVER_STATUS FROM V$DATABASE;

Thread 2:: select sequence#, applied from v$archived_log where thread#=2 and sequence#='xxxxxx';

select sequence#, applied from v$archived_log where applied='NO' order by sequence#

select sequence#, applied from v$archived_log where applied='YES' order by sequence#

select * from v$dataguard_status;

select process,status,THREAD#,SEQUENCE# from v$managed_standby;

select THREAD#,SEQUENCE#,APPLIED from v$archived_log where THREAD#=2 and SEQUENCE# BETWEEN 342340 AND 342450 ORDER BY 3;

select 'APPLIED', max(sequence#), thread# from v$archived_log where APPLIED='YES'group by thread#
union all
select 'ARCHIVED', max(sequence#),thread# from v$archived_log where archived='YES'group by thread# order by 3

select timestamp , facility, dest_id, message_num, error_code, message from v$dataguard_status order by timestamp;

Select process, status, thread#, sequence#, blocks, delay_mins from
v$managed_standby
where process = 'MRP0';

select status, DEST_NAME, DESTINATION from v$archive_dest where status = 'VALID';


-- At PR
select thread#, max(sequence#) "Last Primary Seq Generated"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;


-- At DR
select thread#, max(sequence#) "Last Standby Seq Received"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;

--- At DR

select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
and val.applied='YES'
group by thread# order by 1;

ASM DB instance Restoration to Non-ASM DB instance


REM: Prepared by Hayat Mohammad Khan
REM: hayathk@hotmail.com
REM: Dated: 02-July-2013
REM: For Non-RAC Database
REM: Assume ORCL is DB name
REM: Run at your own risk


Step#1

Restore SPFILE
Create the password file, or just copy it from the source system if it exists

In SPFILE modify the path of Control File
control_files='/u01/app/oracle/oradata/orcl/control01.ctl','/u01/app/oracle/oradata/orcl/control02.ctl'
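If the SPFILE is first restored to a text pfile, the control_files edit can be scripted rather than done by hand. A sketch (GNU sed assumed; the sample pfile contents and the target paths are the ones assumed in this post):

```shell
#!/bin/sh
# Rewrite control_files in a restored init.ora to point at
# filesystem paths instead of ASM (sample pfile built inline).
pfile=$(mktemp)
cat > "$pfile" <<'EOF'
*.control_files='+DATA/orcl/controlfile/current.260.807700000'
*.db_name='orcl'
EOF

# Replace the control_files line (GNU sed -i; on other platforms
# redirect to a temp file instead).
sed -i "s|^\*\.control_files=.*|*.control_files='/u01/app/oracle/oradata/orcl/control01.ctl','/u01/app/oracle/oradata/orcl/control02.ctl'|" "$pfile"
grep control_files "$pfile"
```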

Step#2

SQL> startup nomount;

Step#3

In RMAN Session:

rman>set dbid=numericnumberofyourdb
rman>restore controlfile from '/u01/app/oracle/backup/mycontrolfilebackupname_xxxxxxx';

Step#4
alter database mount;


Step#5
Catalog the new backup path where the backup files were restored from tape/disk

RMAN> catalog start with '/u01/app/oracle/backup/';


Step#6
Restore Backup

RMAN> run {
SET NEWNAME FOR DATABASE   TO  '/u01/app/oracle/oradata/orcl/%b';
SET NEWNAME FOR tempfile  1 TO  '/u01/app/oracle/oradata/orcl/%b';
restore database;
switch datafile all;
switch tempfile all;
}


Step#7
Complete the recovery

run {
--set until sequence 145 thread 1;   // depends on your archive files
recover database;
}


Step#8

RMAN> sql 'alter database open resetlogs';


Step#9

Verify your data and logfiles

SQL> select member from v$logfile;



CREDIT TO: http://gavinsoorma.com/2013/02/restoring-a-asm-backup-to-non-asm-and-restoring-from-rac-to-single-instance/
Dated: 31st March 2015

Copy ASM Backup to FILE system directory

ASMCMD> cp nnndf0_tag20130218t093350_0.345.807701631 /u02/app/backup

Check whether file name conversion is required
*.db_file_name_convert='+DATA/orcl/onlinelog/','/u01/app/oracle/oradata/orcl/'

STEP BY STEP ORACLE 11G R2 NODE REMOVAL


Prepared by: Hayat Mohammad Khan (DBA) hayathk@hotmail.com  - +92-333-5193460
Maroof Ud Din (DBA) maroofuddinkhan@yahoo.com 
  Saturday, March 01, 2014
     
Disclaimer The steps performed in the document below are our own and do not necessarily reflect the views of Oracle Corporation. Steps may vary with the actual activity, environment, platform, database and grid version, and patches.
     
Assumptions DB Version Oracle 11.2.0.4 two nodes cluster
OS Version IBM AIX 6.1
Cluster File System IBM GPFS 3.4
DB Nodes Names SETTDB1, SETTDB2
DB OS Names SETT1,SETT2
DB-Home /oracle/app/grid/product/11.2.0.3/dbhome_3
Grid-Home /oracle/app/11.2.0.3/grid_2/
  Homes Sharing Status Not Shared
Recommendations Others Node1 refers to the remaining node, Node2 refers to the node being deleted
Others2 Please always read the log files mentioned in each step in a separate terminal window
Others3 Always paste each command into notepad / a text editor and verify it; otherwise there is a chance it may execute on all nodes
Highly Recommended Please take a proper backup of the Oracle homes and database software before starting this activity
High Level Steps    
  Remove Oracle instance of deleting node
  Remove Oracle database Binaries  of deleting node
  Remove deleting node from Oracle Grid 
  Remove Oracle Grid software from deleting node
     
Step#1 NODE ON NODE 2
  Purpose Stopping the instance/DB services on the node you want to remove
  Run-From User Must be executed as the Grid or Oracle user (OS user)
  Path From anywhere
  Substep#  
  Purpose-Substep  
  Command-Syntax srvctl stop instance -d settdb -i settdb2
  Expected OutPut  



Step#2 NODE ON NODE 1
  Purpose Disabling the thread you want to remove
  Run-From User Must be executed as the Grid or Oracle user (OS user)
  Path Connect via sqlplus / as sysdba    -----> From Node 1  --- settdb1
  Substep#  
  Purpose-Substep  
  Command-Syntax alter database disable thread 2;         --we want to remove thread 2
  Expected OutPut  



Step#3 NODE ON NODE 2
  Purpose The instance will be removed
  Run-From User Must be executed as the Grid or Oracle user (OS user)
  Path .profile must be set to the Oracle binaries
  Substep#  
  Purpose-Substep  
  Command-Syntax srvctl remove instance -d settdb -i settdb2         --we want to remove node2 i.e settdb2
  Expected OutPut  



Step#4 NODE ON NODE 2
  Purpose The command below will update the inventory on this node
  Run-From User Must be executed as the Grid or Oracle user (OS user)
  Path /oracle/app/grid/product/11.2.0.3/dbhome_3/deinstall
  Substep#  
  Purpose-Substep  
  Command-Syntax ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/grid/product/11.2.0.3/dbhome_3 "CLUSTER_NODES={sett2}" -local
  Expected OutPut
  Starting Oracle Universal Installer...
  Checking swap space: must be greater than 500 MB.   Actual 16384 MB    Passed
  The inventory pointer is located at /etc/oraInst.loc
  The inventory is located at /oracle/app/oraInventory
  'UpdateNodeList' was successful.



Step#5 NODE  FROM NODE 2 
  Purpose Deinstalling the Oracle Home software
  Run-From User Must be executed as the Grid or Oracle user (OS user)
  Path /oracle/app/grid/product/11.2.0.3/dbhome_3/deinstall
  Substep#  
  Purpose-Substep  
  Command-Syntax ./deinstall -local
  Expected OutPut Checking for required files and bootstrapping ...
  Please wait ...
  Location of logs /oracle/app/oraInventory/logs/
   
  ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
   
   
  ######################### CHECK OPERATION START #########################
  ## [START] Install check configuration ##
   
   
  Checking for existence of the Oracle home location /oracle/app/grid/product/11.2.0.3/dbhome_3
  Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
  Oracle Base selected for deinstall is: /oracle/app/grid
  Checking for existence of central inventory location /oracle/app/oraInventory
  Checking for existence of the Oracle Grid Infrastructure home /oracle/app/11.2.0.3/grid_2
  The following nodes are part of this cluster: sett2
  Checking for sufficient temp space availability on node(s) : 'sett2'
   
  ## [END] Install check configuration ##
   
   
  Network Configuration check config START
   
  Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_check2014-03-25_05-34-41-PM.log
   
  Network Configuration check config END
   
  Database Check Configuration START
   
  Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_check2014-03-25_05-34-47-PM.log
   
  Database Check Configuration END
   
  Enterprise Manager Configuration Assistant START
   
  EMCA de-configuration trace file location: /oracle/app/oraInventory/logs/emcadc_check2014-03-25_05-34-52-PM.log 
   
  Enterprise Manager Configuration Assistant END
  Oracle Configuration Manager check START
  OCM check log file location : /oracle/app/oraInventory/logs//ocm_check723.log
  Oracle Configuration Manager check END
   
  ######################### CHECK OPERATION END #########################
   
   
  ####################### CHECK OPERATION SUMMARY #######################
  Oracle Grid Infrastructure Home is: /oracle/app/11.2.0.3/grid_2
  The cluster node(s) on which the Oracle home deinstallation will be performed are:sett2
  Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'sett2', and the global configuration will be removed.
  Oracle Home selected for deinstall is: /oracle/app/grid/product/11.2.0.3/dbhome_3
  Inventory Location where the Oracle home registered is: /oracle/app/oraInventory
  The option -local will not modify any database configuration for this Oracle home.
   
  No Enterprise Manager configuration to be updated for any database(s)
  No Enterprise Manager ASM targets to update
  No Enterprise Manager listener targets to migrate
  Checking the config status for CCR
  Oracle Home exists with CCR directory, but CCR is not configured
  CCR check is finished
  Do you want to continue (y - yes, n - no)? [n]: y
  A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2014-03-25_05-34-31-PM.out'
  Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2014-03-25_05-34-31-PM.err'
   
  ######################## CLEAN OPERATION START ########################
   
  Enterprise Manager Configuration Assistant START
   
  EMCA de-configuration trace file location: /oracle/app/oraInventory/logs/emcadc_clean2014-03-25_05-34-52-PM.log 
   
  Updating Enterprise Manager ASM targets (if any)
  Updating Enterprise Manager listener targets (if any)
  Enterprise Manager Configuration Assistant END
  Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_clean2014-03-25_05-36-32-PM.log
   
  Network Configuration clean config START
   
  Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_clean2014-03-25_05-36-32-PM.log
   
  De-configuring Local Net Service Names configuration file...
  Local Net Service Names configuration file de-configured successfully.
   
  De-configuring backup files...
  Backup files de-configured successfully.
   
  The network configuration has been cleaned up successfully.
   
  Network Configuration clean config END
   
  Oracle Configuration Manager clean START
  OCM clean log file location : /oracle/app/oraInventory/logs//ocm_clean723.log
  Oracle Configuration Manager clean END
  Setting the force flag to false
  Setting the force flag to cleanup the Oracle Base
  Oracle Universal Installer clean START
   
  Detach Oracle home '/oracle/app/grid/product/11.2.0.3/dbhome_3' from the central inventory on the local node : Done
   
  Delete directory '/oracle/app/grid/product/11.2.0.3/dbhome_3' on the local node : Done
   
  The Oracle Base directory '/oracle/app/grid' will not be removed on local node. The directory is in use by Oracle Home '/oracle/app/11.2.0.3/grid'.
   
  Oracle Universal Installer cleanup was successful.
   
  Oracle Universal Installer clean END
   
   
  ## [START] Oracle install clean ##
   
  Clean install operation removing temporary directory '/tmp/deinstall2014-03-25_05-34-09PM' on node 'sett2'
   
  ## [END] Oracle install clean ##
   
   
  ######################### CLEAN OPERATION END #########################
   
   
  ####################### CLEAN OPERATION SUMMARY #######################
  Cleaning the config for CCR
  As CCR is not configured, so skipping the cleaning of CCR configuration
  CCR clean is finished
  Successfully detached Oracle home '/oracle/app/grid/product/11.2.0.3/dbhome_3' from the central inventory on the local node.
  Successfully deleted directory '/oracle/app/grid/product/11.2.0.3/dbhome_3' on the local node.
  Oracle Universal Installer cleanup was successful.
   
  Oracle deinstall tool successfully cleaned up temporary directories.
  #######################################################################
   
   
  ############# ORACLE DEINSTALL & DECONFIG TOOL END #############
   
   
   



Step#6 NODE  From NODE 1
  Purpose Updating the inventory
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/grid/product/11.2.0.3/dbhome_3
  Substep#  
  Purpose-Substep  
  Command-Syntax ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/grid/product/11.2.0.3/dbhome_3 "CLUSTER_NODES={sett1}" 
  Expected OutPut Starting Oracle Universal Installer...
    Checking swap space: must be greater than 500 MB.   Actual 16384 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /oracle/app/oraInventory
    'UpdateNodeList' was successful.



Step#7 NODE  From any remaining node; in our case, NODE1
  Purpose Unpin NODE2
  Run-From User From the root user
  Path  
  Substep#  
  Purpose-Substep  
  Command-Syntax crsctl unpin css -n sett2
  Expected OutPut Node sett2 successfully unpinned.
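Before (and after) the unpin, the documented `olsnodes -t -n` command lists each node with its pin state. The sketch below only parses sample output; the node list and states here are assumptions, and in a live cluster you would replace the here-doc with the actual `olsnodes` call from the Grid home:

```shell
# Hypothetical 'olsnodes -t -n' output; on a real cluster replace the
# here-doc with: $GRID_HOME/bin/olsnodes -t -n
olsnodes_output=$(cat <<'EOF'
sett1 1 Pinned
sett2 2 Pinned
EOF
)
# Print only the nodes that are still pinned; a pinned node must be
# unpinned with 'crsctl unpin css -n <node>' before it can be deleted.
echo "$olsnodes_output" | awk '$3 == "Pinned" {print $1}'
```

A node should report Unpinned before you attempt `crsctl delete node` against it.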



Step#8 NODE  From the node being deleted; in this example, NODE2
  Purpose Removing the node from the cluster
  Run-From User From the root user
  Path /oracle/app/11.2.0.3/grid/crs/install
  Substep#  
  Purpose-Substep  
  Command-Syntax ./rootcrs.pl -deconfig -deinstall -force 
  Expected OutPut Using configuration parameter file: ./crsconfig_params
    Network exists: 1/10.254.157.64/255.255.255.192/en0, type static
    VIP exists: /sett1-vip/10.254.157.83/10.254.157.64/255.255.255.192/en0, hosting node sett1
    VIP exists: /sett2-vip/10.254.157.84/10.254.157.64/255.255.255.192/en0, hosting node sett2
    GSD exists
    ONS exists: Local port 6100, remote port 6200, EM port 2016
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'sett2'
    CRS-2673: Attempting to stop 'ora.crsd' on 'sett2'
    CRS-2677: Stop of 'ora.crsd' on 'sett2' succeeded
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'sett2'
    CRS-2673: Attempting to stop 'ora.crf' on 'sett2'
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'sett2'
    CRS-2673: Attempting to stop 'ora.ctssd' on 'sett2'
    CRS-2673: Attempting to stop 'ora.evmd' on 'sett2'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'sett2' succeeded
    CRS-2677: Stop of 'ora.mdnsd' on 'sett2' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'sett2' succeeded
    CRS-2677: Stop of 'ora.crf' on 'sett2' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'sett2' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'sett2'
    CRS-2677: Stop of 'ora.cssd' on 'sett2' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'sett2'
    CRS-2677: Stop of 'ora.gipcd' on 'sett2' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'sett2'
    CRS-2677: Stop of 'ora.gpnpd' on 'sett2' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'sett2' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    This may take several minutes. Please wait ...
    0518-307 odmdelete: 1 objects deleted.
    0518-307 odmdelete: 1 objects deleted.
    0518-307 odmdelete: 1 objects deleted.
    Successfully deconfigured Oracle clusterware stack on this node
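Once `rootcrs.pl -deconfig` finishes, the Clusterware stack on this node should no longer respond; `crsctl check crs` would then return CRS-4639 ("Could not contact Oracle High Availability Services"). A small sketch, assuming that sample message, of how you might script the check:

```shell
# Hypothetical 'crsctl check crs' output on the deconfigured node;
# on a real cluster run: $GRID_HOME/bin/crsctl check crs
crs_output="CRS-4639: Could not contact Oracle High Availability Services"
# After a successful deconfig, CRS-4639 confirms the stack is no longer up.
if echo "$crs_output" | grep -q 'CRS-4639'; then
  result="clusterware stack is down on this node"
else
  result="clusterware stack still responding"
fi
echo "$result"
```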



Step#9 NODE  NODE 1
  Purpose Updating GRID HOME inventory
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/11.2.0.3/grid_2/oui/bin
  Substep#  
  Purpose-Substep  
  Command-Syntax ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0.3/grid_2 "CLUSTER_NODES={sett1}" CRS=TRUE
  Expected OutPut Starting Oracle Universal Installer...
    Checking swap space: must be greater than 500 MB.   Actual 16384 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /oracle/app/oraInventory
    'UpdateNodeList' was successful.



Step#10 NODE  From NODE 1
  Purpose Deleting NODE2 from the cluster
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/11.2.0.3/grid_2/bin
  Substep#  
  Purpose-Substep  
  Command-Syntax ./crsctl delete node -n sett2
  Expected OutPut CRS-4661: Node sett2 successfully deleted.
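Running `olsnodes -s` from a surviving node is a quick way to confirm the delete took effect. The sketch parses hypothetical post-delete output; on a real cluster the variable would come from running `olsnodes -s` in the Grid home:

```shell
# Hypothetical 'olsnodes -s' output after the delete; on a real cluster
# replace the here-doc with: $GRID_HOME/bin/olsnodes -s
olsnodes_output=$(cat <<'EOF'
sett1 Active
EOF
)
# The deleted node (sett2) must no longer appear in the node list.
if echo "$olsnodes_output" | grep -q '^sett2'; then
  result="sett2 still listed - delete did not take effect"
else
  result="sett2 removed from cluster"
fi
echo "$result"
```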



Step#11 NODE  From Node2
  Purpose Updating Grid Inventory
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/11.2.0.3/grid_2/oui/bin
  Command-Syntax ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0.3/grid_2 "CLUSTER_NODES={sett2}" CRS=TRUE -local
  Expected OutPut Starting Oracle Universal Installer...
    Checking swap space: must be greater than 500 MB.   Actual 16384 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /oracle/app/oraInventory
    'UpdateNodeList' was successful.
     





Note: at the end, the deinstall tool will ask you to execute scripts as root from the respective nodes. Do not rush; execute them in sequential order and exactly as instructed.
Step#12 NODE  From NODE2
  Purpose Deinstalling Grid HOME
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/11.2.0.3/grid_2/deinstall
  Command-Syntax ./deinstall -local
  Expected OutPut Checking for required files and bootstrapping ...
    Please wait ...
    Location of logs /oracle/app/oraInventory/logs/
     
    ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
     
     
    ######################### CHECK OPERATION START #########################
    ## [START] Install check configuration ##
     
     
    Checking for existence of the Oracle home location /oracle/app/11.2.0.3/grid_2
    Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
    Oracle Base selected for deinstall is: /oracle/app/grid
    Checking for existence of central inventory location /oracle/app/oraInventory
    Checking for existence of the Oracle Grid Infrastructure home 
    The following nodes are part of this cluster: sett2
    Checking for sufficient temp space availability on node(s) : 'sett2'
     
    ## [END] Install check configuration ##
     
    Traces log file: /oracle/app/oraInventory/logs//crsdc.log
    Enter an address or the name of the virtual IP used on node "sett2"[sett2-vip]
     > 
     
    The following information can be collected by running "/sbin/ifconfig -a" on node "sett2"
    Enter the IP netmask of Virtual IP "10.254.157.84" on node "sett2"[255.255.255.0]
     > 
     
    Enter the network interface name on which the virtual IP address "10.254.157.84" is active
     > 
     
    Enter an address or the name of the virtual IP[]
     > 
     
     
    Network Configuration check config START
     
    Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_check2014-03-25_06-18-27-PM.log
     
    Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]:
     
    Network Configuration check config END
     
    Asm Check Configuration START
     
    ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_check2014-03-25_06-18-57-PM.log
     
     
    ######################### CHECK OPERATION END #########################
     
     
    ####################### CHECK OPERATION SUMMARY #######################
    Oracle Grid Infrastructure Home is: 
    The cluster node(s) on which the Oracle home deinstallation will be performed are:sett2
    Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'sett2', and the global configuration will be removed.
    Oracle Home selected for deinstall is: /oracle/app/11.2.0.3/grid_2
    Inventory Location where the Oracle home registered is: /oracle/app/oraInventory
    Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1
    Option -local will not modify any ASM configuration.
    Do you want to continue (y - yes, n - no)? [n]: y
    A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2014-03-25_06-17-52-PM.out'
    Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2014-03-25_06-17-52-PM.err'
     
    ######################## CLEAN OPERATION START ########################
    ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_clean2014-03-25_06-19-17-PM.log
    ASM Clean Configuration END
     
    Network Configuration clean config START
     
    Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_clean2014-03-25_06-19-17-PM.log
     
    De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1
     
    De-configuring listener: LISTENER
        Stopping listener on node "sett2": LISTENER
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
     
    De-configuring listener: LISTENER_SCAN1
        Stopping listener on node "sett2": LISTENER_SCAN1
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
     
    De-configuring Naming Methods configuration file...
    Naming Methods configuration file de-configured successfully.
     
    De-configuring backup files...
    Backup files de-configured successfully.
     
    The network configuration has been cleaned up successfully.
     
    Network Configuration clean config END
     
     
    ---------------------------------------->
     
    The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.



Please execute it carefully, and only on the node being deleted.
Run the following command as the root user or the administrator on node "sett2".


 
Substep#12.1 NODE  Execute from the node being deleted [NODE2]
  Run-From User From the root user
  Substep#  
  Purpose-Substep  
  Command /tmp/deinstall2014-03-25_06-17-30PM/perl/bin/perl -I/tmp/deinstall2014-03-25_06-17-30PM/perl/lib
     -I/tmp/deinstall2014-03-25_06-17-30PM/crs/install /tmp/deinstall2014-03-25_06-17-30PM/crs/install/rootcrs.pl
     -force  -deconfig -paramfile "/tmp/deinstall2014-03-25_06-17-30PM/response/deinstall_Ora11g_gridinfrahome3.rsp"
  Expected Output  
    sett2:/ >/tmp/deinstall2014-03-25_06-17-30PM/perl/bin/perl -I/tmp/deinstall2014-03-25_06-17-30PM/perl/lib -I/tmp/deinstall2014-03-25_06-17-30PM/crs/install /tmp/deinstall2014-03-25_06-17-30PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-03-25_06-17-30PM/response/deinstall_Ora11g_gridinfrahome3.rsp"
    Using configuration parameter file: /tmp/deinstall2014-03-25_06-17-30PM/response/deinstall_Ora11g_gridinfrahome3.rsp
    Usage: srvctl <command> <object> [<options>]
        commands: enable|disable|start|stop|status|add|remove|modify|getenv|setenv|unsetenv|config|upgrade
        objects: database|service|asm|diskgroup|listener|home|ons
    For detailed help on each command and object and its options use:
      srvctl <command> -h or
      srvctl <command> <object> -h
    PRKO-2012 : nodeapps object is not supported in Oracle Restart
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Modify failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Delete failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    ################################################################
    # You must kill processes or reboot the system to properly #
    # cleanup the processes started by Oracle clusterware          #
    ################################################################
    This may take several minutes. Please wait ...
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Removing Trace File Analyzer
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 256, No such file or directory) for command /etc/ohasd deinstall
    Successfully deconfigured Oracle clusterware stack on this node
    sett2:/ >
     
    ===== PRESS ENTER AFTER EXECUTING THE ABOVE SCRIPT AS ROOT =====
     
    Press Enter after you finish running the above commands
     
    <----------------------------------------
     
    Remove the directory: /tmp/deinstall2014-03-25_06-17-30PM on node: 
    Setting the force flag to false
    Setting the force flag to cleanup the Oracle Base
    Oracle Universal Installer clean START
     
    Detach Oracle home '/oracle/app/11.2.0.3/grid_2' from the central inventory on the local node : Done
     
    Delete directory '/oracle/app/11.2.0.3/grid_2' on the local node : Done
     
    The Oracle Base directory '/oracle/app/grid' will not be removed on local node. The directory is in use by Oracle Home '/oracle/app/11.2.0.3/grid'.
     
    Oracle Universal Installer cleanup was successful.
     
    Oracle Universal Installer clean END
     
     
    ## [START] Oracle install clean ##
     
    Clean install operation removing temporary directory '/tmp/deinstall2014-03-25_06-17-30PM' on node 'sett2'
     
    ## [END] Oracle install clean ##
     
     
    ######################### CLEAN OPERATION END #########################
     
     
    ####################### CLEAN OPERATION SUMMARY #######################
    Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
    Oracle Clusterware is stopped and successfully de-configured on node "sett2"
    Oracle Clusterware is stopped and de-configured successfully.
    Successfully detached Oracle home '/oracle/app/11.2.0.3/grid_2' from the central inventory on the local node.
    Successfully deleted directory '/oracle/app/11.2.0.3/grid_2' on the local node.
    Oracle Universal Installer cleanup was successful.
     
    For complete clean up of Oracle Clusterware software from the system, deinstall the following old clusterware home(s). Refer to Clusterware Install guide of respective old release for details.
        /oracle/app/11.2.0.3/grid on nodes : sett2
    Oracle deinstall tool successfully cleaned up temporary directories.
    #######################################################################
     
     
    ############# ORACLE DEINSTALL & DECONFIG TOOL END #############



Step#13 NODE  From Node1 
  Purpose Updating Grid Inventory on Node1
  Run-From User Must be executed as the Grid or Oracle OS user
  Path /oracle/app/11.2.0.3/grid_2/oui/bin
  Substep#  
  Purpose-Substep  
  Command-Syntax ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/11.2.0.3/grid_2 "CLUSTER_NODES={sett1}"  CRS=TRUE
  Expected OutPut Starting Oracle Universal Installer...
    Checking swap space: must be greater than 500 MB.   Actual 16384 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /oracle/app/oraInventory
    'UpdateNodeList' was successful.
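Oracle's clusterware documentation also recommends a final verification with `cluvfy stage -post nodedel -n <node>` from a remaining node (cluvfy ships in the Grid home's bin directory). A minimal sketch of scripting around its success message; the exact output string below is an assumption:

```shell
# Hypothetical tail of 'cluvfy stage -post nodedel -n sett2 -verbose';
# on a real cluster run it from a surviving node as the grid owner:
#   $GRID_HOME/bin/cluvfy stage -post nodedel -n sett2 -verbose
cluvfy_output="Post-check for node removal was successful."
# A simple pass/fail gate around the verification step.
case "$cluvfy_output" in
  *successful*) result="node removal verified" ;;
  *)            result="verification failed"   ;;
esac
echo "$result"
```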





