Tuesday, November 15, 2016

Converting physical standby database to snapshot standby

All of the following steps are to be executed on the standby database only.


Set the size of the recovery area.

Alter system set db_recovery_file_dest_size=100G;

Set the flash recovery area location.

Alter system set db_recovery_file_dest='+RECO_DG';

Stop managed recovery if it is active.
 
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
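
To confirm that managed recovery (MRP) is no longer running, a quick check is:

select process, status from v$managed_standby;

If no MRP0 process is listed, managed recovery has been cancelled.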

Bring the physical standby database to the mount state.

Startup mount;

Convert physical standby database to snapshot standby database.

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

Shutdown Immediate;

Startup;

select open_mode,database_role from v$database;

OPEN_MODE  DATABASE_ROLE
---------- ----------------
READ WRITE SNAPSHOT STANDBY


Converting Snapshot standby database to physical standby


Shutdown Immediate;

Startup mount;

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Shutdown immediate;

Startup mount;

select open_mode,database_role from v$database;

OPEN_MODE  DATABASE_ROLE
---------- ----------------
MOUNTED    PHYSICAL STANDBY

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;


Note: While the database is running as a snapshot standby, archive logs continue to be shipped from the primary to the standby; they are received but not applied until the database is converted back.
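
A simple sanity check that redo is still arriving (but not being applied) while in snapshot standby mode, for example:

select thread#,max(sequence#) received from v$archived_log group by thread#;

select thread#,max(sequence#) applied from v$archived_log where applied='YES' group by thread#;

The received sequence should keep increasing while the applied sequence stays put until the database is converted back.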


Convert Physical standby database to active standby for testing

To activate the physical standby database (open it read/write) for testing scenarios:


Step 1 - In Standby database

Set up a flash recovery area.

If the Flash Recovery Area (FRA) is not configured on the standby, enable it and make sure to give the FRA enough space.

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=500G;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+RECO_DG';

Cancel Redo Apply and create a guaranteed restore point.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT testing GUARANTEE FLASHBACK DATABASE;

To confirm the details of the restore point, along with its SCN and timestamp, run:

 select NAME,SCN,TIME from v$restore_point;

NAME        SCN           TIME
--------    -----------   ------------------------------
TESTING     2254478210    11-JUN-16 01.10.21.000000000 PM


Step 2 - In Primary Database

ALTER SYSTEM ARCHIVE LOG CURRENT;

Optional - Defer log archive destinations pointing to the standby that will be activated.

--ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
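
To confirm the destination state after deferring it (dest_id 2 assumed here, matching log_archive_dest_2):

select dest_id, status from v$archive_dest_status where dest_id=2;

The STATUS column should show DEFERRED.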

Step 3 - In Standby database

Activate the physical standby database:

ALTER DATABASE ACTIVATE STANDBY DATABASE;

Once this is done, you can check that the controlfile type has changed from STANDBY to CURRENT:

select CONTROLFILE_TYPE from v$database;

CONTROL
-------
CURRENT


Then open the database.

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
ALTER DATABASE OPEN;

Reverting the active standby database back to a physical standby


Mount the database.
Flash the database back to the restore point.

STARTUP MOUNT;

ORACLE instance started.
Total System Global Area  289406976 bytes
Fixed Size                  1290208 bytes
Variable Size             159383584 bytes
Database Buffers          125829120 bytes
Redo Buffers                2904064 bytes
Database mounted.

FLASHBACK DATABASE TO RESTORE POINT testing ;


You can confirm this by checking the controlfile type; it will now show BACKUP:

select controlfile_type from v$database;

CONTROL
--------------
BACKUP

Convert back to a physical standby database:

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
select controlfile_type from v$database;

CONTROL
--------------
STANDBY


In Primary database

Optional (if deferred before) - Re-enable archiving to the physical standby database:

ALTER SYSTEM ARCHIVE LOG CURRENT;
--ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

In Standby database

Drop the restore point

STARTUP FORCE MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
DROP RESTORE POINT testing ;
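
To confirm the restore point has been dropped:

select name from v$restore_point;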


Note: Archive logs are not shipped from the primary while the standby database is open read/write. So make sure the archive logs on the primary are not deleted until they have been shipped to the standby after it is converted back. If you have space constraints, or archive logs are purged by a job, then after flashing back to the restore point take an SCN-based incremental backup on the primary and use it to recover the standby (see the sketch below).

Also note that during this time the flashback_on parameter of the database is set to NO.
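
If the archive logs needed to roll the standby forward were already purged on the primary, a rough sketch of the SCN-based incremental roll-forward mentioned above (the backup path and the SCN shown are placeholders only, not values from this environment):

-- On the standby (mounted, after flashing back and converting to physical standby), find the SCN to start from
select to_char(current_scn) from v$database;

-- On the primary, in RMAN, take an incremental backup from that SCN and make it available to the standby host
backup incremental from scn 2254478210 database format '/tmp/standby_incr_%U';

-- On the standby, in RMAN, catalog the backup pieces and recover
catalog start with '/tmp/standby_incr';
recover database noredo;

Depending on how far the standby has fallen behind, you may also need a fresh standby controlfile from the primary before restarting managed recovery.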

Wednesday, November 9, 2016

Migrating Single node Instance to two node RAC using Data Guard

The goal is to migrate a single-instance database to a 2-node RAC on a new server.

Data Guard is used here to perform the switchover with minimal downtime.

PRIMARY (old server)
db_unique_name=test
db_name=test
instance_name=test1


Standby (new server)
db_unique_name=testdr
db_name=test
instance_name=test1,test2

SCAN names and VIP names are different, as this is not a server refresh.

All the files are managed by Oracle.

Datafiles are stored in DATA_DG
Redo logs are mirrored in DATA_DG and RECO_DG
Archive logs and backups are stored in RECO_DG


Folder structure, for example (automatically generated by Oracle; no particular location specified):

primary
+DATA_DG/test/xxx.dbf      -- Just for reference

STANDBY
+DATA_DG/testdr/xxx.dbf   -- Just for reference



Run the below on the primary database:

alter database force logging;
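
You can confirm force logging is enabled with:

select force_logging from v$database;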

-- List the db_unique_name of the primary followed by the db_unique_name of the standby
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(test,testDR)' scope=both sid='*';

-- log_archive_dest_1 is the local archive destination for the current database
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=test' scope=both sid='*';

-- log_archive_dest_2 points to the standby database; this is where redo shipping will be done
alter system set LOG_ARCHIVE_DEST_2='SERVICE=testDR ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testDR' scope=both sid='*';


alter system set log_archive_max_processes=8 scope=both sid='*';

-- Server will always be the other database
alter system set fal_server=testDR scope=both sid='*';

-- Set to AUTO because we want all file management to be exactly the same as on the primary and handled by Oracle
alter system set standby_file_management=auto scope=both sid='*';

-- The lines below are commented out because I am not creating standby redo logs; I am using Data Guard just to do a cutover in order to minimize downtime. We will not be using this as a DR.

-- Standby redo logs are required in the Maximum Protection and Maximum Availability modes of Data Guard. We are using Maximum Performance with ARCH transport, so I skipped them.

-- If you have a requirement to create standby redo logs, change standby_file_management to MANUAL, then create the logs and change it back to AUTO.

-- I am not using db_file_name_convert and log_file_name_convert because I am not creating folders manually; these are created by Oracle when performing the duplicate.



--alter system set standby_file_management=manual scope=both sid='*';
--alter system set db_file_name_convert='TESTDR','TEST' scope=spfile sid='*';
--alter system set log_file_name_convert='TESTDR','TEST' scope=spfile sid='*';

--mkdir +RECO_DG/TEST/STANDBYLOG

--alter database add standby logfile thread 1 group 4 '+RECO_DG/TEST/STANDBYLOG/standby_group_04.log' size 52M;
--alter database add standby logfile thread 1 group 5 '+RECO_DG/TEST/STANDBYLOG/standby_group_05.log' size 52M;
--alter database add standby logfile thread 1 group 6 '+RECO_DG/TEST/STANDBYLOG/standby_group_06.log' size 52M;

alter system set standby_file_management=auto scope=both sid='*';

create pfile='pfile_for_standby.txt' from spfile;

Copy the pfile to the new server:

scp $ORACLE_HOME/dbs/pfile_for_standby.txt exadb01:$ORACLE_HOME/dbs/pfile_for_standby.txt

Copy the password file from the primary server to both standby nodes.

scp $ORACLE_HOME/dbs/orapwtest exadb01:$ORACLE_HOME/dbs/orapwtest1
scp $ORACLE_HOME/dbs/orapwtest exadb02:$ORACLE_HOME/dbs/orapwtest2

*********************************************************************************

Perform the below at the STANDBY

Node 1

Create a listener using the DB home. This listener is used only for connecting to the database as the auxiliary during the RMAN duplicate, because sometimes the service shows as blocked in the listener status when the standard listener is used.

Create it on one node only. It can be stopped or removed after the duplication is done.

LISTENER_TEMP=
      (DESCRIPTION=
       (ADDRESS_LIST=
        (ADDRESS= (PROTOCOL=TCP)(HOST=exadb01)(PORT=1525))
      )
     )

SID_LIST_LISTENER_TEMP =
    (SID_LIST=
     (SID_DESC=
     (SID_NAME=test1)
     (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
     )
    )

Once the listener is created, start it as shown below, then perform the following directory setup on each standby node.
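
For example, to start and check the temporary listener from the DB home (listener name as defined above):

lsnrctl start LISTENER_TEMP
lsnrctl status LISTENER_TEMP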


Node 1
mkdir -p /u01/app/oracle/admin/test/adump
mkdir -p /u01/app/oracle/diag/rdbms/testdr/test1
cd /u01/app/oracle/diag/rdbms/testdr/test1
mkdir trace cdump


Node 2
mkdir -p /u01/app/oracle/admin/test/adump
mkdir -p /u01/app/oracle/diag/rdbms/testdr/test2
cd /u01/app/oracle/diag/rdbms/testdr/test2
mkdir trace cdump


Edit the pfile on standby node 1 and change the following:

*.audit_file_dest='/u01/app/oracle/admin/test/adump'

-- Just give the diskgroup names here, because the RMAN duplicate will create the controlfiles automatically in the diskgroups. After everything is done you need to modify this to reflect the new files.
*.control_files='+DATA_DG','+RECO_DG'


*.db_unique_name='testdr'

-- Server will always be the other database
*.fal_server='test'

-- log_archive_dest_1 is the local archive destination for the current database
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=testdr'

-- log_archive_dest_2 points to the other database (the current primary) and is used for redo shipping after a role change
*.log_archive_dest_2='SERVICE=test ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=test'

*.remote_listener='exadb-scan:1521'

-- I am not using db_file_name_convert and log_file_name_convert because I am not creating folders manually; these are created by Oracle when performing the duplicate
--*.log_file_name_convert='test','testdr'
--*.db_file_name_convert='test','testdr'

Modify tnsnames.ora and add the testdr and test entries. These entries should be present in all tnsnames.ora files, on both the primary and standby nodes.



test =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddb-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = test)
    )
  )

testDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01.hke.local)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdr)
      (SID = test1)
    )
  )

testDR1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01.hke.local)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdr)
      (SID = test1)
    )
  )

testDR2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb02.hke.local)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdr)
      (SID = test2)
    )
  )

TEMP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01.hke.local)(PORT = 1525))
    (CONNECT_DATA =
      (SID = test1)
    )
  )


On standby run the following

Note: Set the proper environment before you run the steps below. Instance names are case sensitive; export ORACLE_SID with the instance number before starting the database.

Also remove from the pfile any parameters related to the cluster interconnect.

sql > startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/pfile_for_standby.txt';


On primary run the following


rman target sys/welcome1@test auxiliary sys/welcome1@TEMP

duplicate target database for standby from active database dorecover;

I am using active duplicate here. This lets us skip the tedious process of taking a backup of the primary, copying it to the standby, and duplicating from it.

Active duplicate copies everything directly from the currently running database to the new one.

*****
This might have an impact on your network, so please check before performing it.

*****
I am migrating these databases from Exadata X4 to Exadata X6, and the network link is fast, so I was able to complete duplication of a 7 TB database very quickly.
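
While the duplicate is running, progress can be watched from another session with a generic v$session_longops check (not specific to this setup):

select sid, opname, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
from v$session_longops
where opname like 'RMAN%' and totalwork > 0 and sofar <> totalwork;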

Once this is done and the database is up, the spfile will be messy, because the active clone creates an spfile from memory.

So, to keep the parameter file clean going forward, use the old pfile text file that we copied over at the beginning.

On standby perform the below


Shutdown immediate;

Now edit the control_files parameter to point to the newly created controlfiles, because the parameter file we currently have does not contain the exact controlfile locations.

*.control_files='+DATA_DG/testdr/controlfile/current.411.926159161','+RECO_DG/testdr/controlfile/current.5242.926159161'
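
The actual OMF controlfile names can be captured from v$controlfile while the database is still up (that is, before the shutdown above); the names shown above are just this environment's values:

select name from v$controlfile;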

Add the following parameters as well, copying the instance 1 parameters and renaming them for instance 2:

cluster_database=TRUE
test1.undo_tablespace='UNDOTBS1'
test2.undo_tablespace='UNDOTBS2'
test1.instance_number=1
test2.instance_number=2
test1.thread=1
test2.thread=2


On standby node 1



startup nomount pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/pfile_for_standby.txt';


create spfile='+data_dg/testdr/parameterfile/spfiletest.ora' from pfile='/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/pfile_for_standby.txt';

On standby node 1


vi inittest1.ora
spfile='+data_dg/testdr/parameterfile/spfiletest.ora'


On standby node 2


vi inittest2.ora
spfile='+data_dg/testdr/parameterfile/spfiletest.ora'


*********************************************************************************
Since we don't have a 2nd node for this database on the primary, I will need to create redo logs for the 2nd thread.

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;


alter database add logfile thread 2 group 16 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 17 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 18 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 19 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 20 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 21 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 22 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 23 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 24 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 25 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 26 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 27 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 28 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 29 ('+DATA_DG','+RECO_DG') size 4G reuse;
alter database add logfile thread 2 group 30 ('+DATA_DG','+RECO_DG') size 4G reuse;


ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
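
To verify that the thread 2 groups were created:

select thread#, group#, bytes/1024/1024 mb, status from v$log order by thread#, group#;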


*********************************************************************************

Adding the 2 instances to the cluster



srvctl add database -d test -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -p +data_dg/testdr/parameterfile/spfiletest.ora -r physical_standby -a DATA_DG,RECO_DG
srvctl add instance -d test -i test1 -n exadb01
srvctl add instance -d test -i test2 -n exadb02
srvctl start database -d test

srvctl status database -d test

srvctl config database -d test


To change the service name:

alter system set service_names=test scope=both sid='*';

alter system register;

Now shut down everything

srvctl stop database -d test

*********************************************************************************

On standby Node 1


Now just start the 1st instance manually (without srvctl) to the mount state.

sql > startup mount;


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

To stop MRP later, if needed:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

*********************************************************************************

Test dataguard functionality

Standby

--select process,status,sequence#,thread# from v$managed_standby;

select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

PRIMARY

select thread#,max(sequence#) from v$archived_log group by thread#;

or

select max(sequence#) from v$archived_log where resetlogs_change#=(SELECT resetlogs_change# FROM v$database);

or

SELECT sequence# || '|' ||
thread# || '|' ||
TO_CHAR(completion_time, 'DD-MON-RRRR HH24:MI:SS')
FROM v$archived_log
WHERE sequence#= (SELECT MAX(sequence#) FROM v$archived_log where resetlogs_change#=(SELECT resetlogs_change# FROM v$database))
AND thread#=1
AND resetlogs_change#=(SELECT resetlogs_change# FROM v$database)
AND dest_id=1
UNION ALL
SELECT sequence# || '|' ||
thread# || '|' ||
TO_CHAR(completion_time, 'DD-MON-RRRR HH24:MI:SS')
FROM v$archived_log
WHERE sequence#= (SELECT MAX(sequence#) FROM v$archived_log where resetlogs_change#=(SELECT resetlogs_change# FROM v$database))
AND resetlogs_change#=(SELECT resetlogs_change# FROM v$database)
AND thread#=2
AND dest_id=1;


*********************************************************************************


Note: If you no longer want to use the standby database, or you used Data Guard only as part of the migration, make sure you disable force logging on the current primary database:

ALTER DATABASE NO FORCE LOGGING;
*********************************************************************************
Best practice

alter system set log_archive_dest_2 = 'service=STDBY ARCH OPTIONAL MAX_CONNECTIONS=5 VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBY' scope = both ;

But I think you should use LGWR instead:

alter system set log_archive_dest_2 = 'service=STDBY LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBY' scope = both ;

For LGWR, use standby redo logs.

Create 5 groups of standby redo logs (SRLs):
alter database add standby logfile thread 1 group 10 (' full path file name ') size 104857600 ;
alter database add standby logfile thread 1 group 11 (' full path file name ') size 104857600 ;
alter database add standby logfile thread 1 group 12 (' full path file name ') size 104857600 ;
alter database add standby logfile thread 1 group 13 (' full path file name ') size 104857600 ;
alter database add standby logfile thread 1 group 14 (' full path file name ') size 104857600 ;

Then change the redo transport
alter system set log_archive_dest_2 = 'service=STDBY ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBY' scope = both ;

Make sure you have a large enough db_cache_size on the standby.
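
Once redo transport is switched to ASYNC with standby redo logs in place, one way to keep an eye on transport and apply lag on the standby is v$dataguard_stats, for example:

select name, value, unit from v$dataguard_stats where name in ('transport lag','apply lag');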
