High Availability and Disaster Recovery on Oracle Exadata using Oracle Active Data Guard



Successful high availability (HA) architectures prevent downtime and data loss by using redundant systems and software to eliminate single points of failure. The same principle applies to mission critical databases. Administrator error, data corruption caused by system or software faults, or complete site failure can impact the availability of a database. Even a clustered database running on multiple servers is exposed to single points of failure if not adequately protected. While a clustered database can provide excellent server HA, it is ultimately a tightly coupled system running a single database on shared storage.


Oracle Active Data Guard is the most comprehensive solution available to eliminate single points of failure for mission critical Oracle Databases. It prevents data loss and downtime in the simplest and most economical manner by maintaining a synchronized physical replica of a production database at a remote location. If the production database is unavailable for any reason, client connections can quickly, and in some configurations transparently, failover to the synchronized replica to restore service. Active Data Guard eliminates the high cost of idle redundancy by allowing reporting applications, ad-hoc queries, and data extracts to be offloaded to read-only copies of the production database. Active Data Guard’s deep integration with Oracle Database and complete focus on real-time data protection and availability avoids compromises found in storage remote mirroring or other host based replication solutions.



High Availability


The Oracle Exadata Database Machine provides an optimal solution for all database workloads, ranging from scan-intensive data warehouse applications to highly concurrent OLTP applications. Exadata delivers extreme performance in a highly available and highly secure environment. Oracle Data Guard is Oracle’s disaster recovery solution prescribed by the Maximum Availability Architecture (MAA) to protect mission critical databases residing on Exadata. Data Guard is also used to maintain availability should any outage unexpectedly impact the production database and to minimize downtime during planned maintenance.

Oracle Data Guard is the MAA best-practice recommendation for the Exadata Database Machine for:

o Disaster Recovery

o High Availability

o Migrations to Exadata storage



Disaster recovery (DR)


Data Guard protects against data loss and downtime should the primary site become unavailable. Continuous Oracle validation enables Data Guard to provide the best data protection for the Oracle database. Data Guard physical standby databases transparently support all Oracle data types, database features, and applications, and can meet the demanding performance requirements of Exadata.



High Availability (HA)


Data Guard supports up to 30 standby databases in a single configuration. An increasing number of customers use this flexibility to deploy both a local Data Guard standby for HA and a remote Data Guard standby for DR. A local Data Guard standby database complements the internal HA features of Exadata by maintaining availability when unexpected failures or human error make the production database unavailable even though the remainder of the site is still operational. Low network latency between the production database and the local standby makes it easy to use synchronous redo transport and achieve zero data loss if a failover is required. Likewise, the close proximity of the local standby to the application tier also enables fast redirection of application clients to the new primary database. Following a failover to a local standby database, the remote standby database in the Data Guard configuration will recognize that the failover has occurred and automatically begin receiving redo from the new primary database - maintaining disaster protection at all times.




Migrations to Exadata storage


Data Guard is one of several approaches available from Oracle to facilitate initial migration to Exadata with minimal downtime. A Data Guard standby database can also be used to reduce downtime for other types of planned maintenance. Depending upon how the environment is architected, redo transport within an Exadata environment can use one of the following methods:

➢ Standard client network, as used by the SCAN/VIP network, typically 1GbE

➢ InfiniBand with the SDP protocol; this is only applicable if the primary and standby are in the same data center, physically connected to each other via spine switches

➢ High-speed transport using the 10GbE interface (SFP module) over LAN or WAN
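Whichever network is chosen, the redo transport path is ultimately determined by the connect descriptor that LOG_ARCHIVE_DEST_2 resolves to. A minimal sketch (the TNS alias MISDEVDG_REDO is hypothetical; it would resolve to an address on the chosen interface):

```sql
-- Sketch: route redo over a specific network by pointing the transport
-- destination at a TNS alias bound to that interface.
-- MISDEVDG_REDO is a hypothetical alias, not defined in this article.
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=MISDEVDG_REDO ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=MISDEVDG'
  SCOPE=BOTH SID='*';
```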






Disaster Recovery on Oracle Exadata using Oracle Active Data Guard


The prerequisite is that Grid Infrastructure and the Oracle RDBMS are the same version on both the primary and standby sites. The software is already installed in both the primary and standby environments, with the same Exadata Bundle Patches applied to the Grid and RDBMS Oracle homes, along with any required overlay patches.


Primary is 2 Nodes

  Host Names # bakuexa1dbadm01.kfsaz.local bakuexa1dbadm02.kfsaz.local


 DB Home  # /u01/app/oracle/product/


 oracle@bakuexa1dbadm01$ srvctl status database -d MISDEV

 Instance MISDEV1 is running on node bakuexa1dbadm01

 Instance MISDEV2 is running on node bakuexa1dbadm02


Standby is 2 Nodes


  Host Names # bakuexa1dbadm03.kfsaz.local bakuexa1dbadm04.kfsaz.local


 DB Home  # /u01/app/oracle/product/


On the primary Database


All Data Guard environments should enable force logging at the database level in order to guard against nologging operations, which would otherwise create unrecoverable changes on the standby.


SQL> alter database force logging;

 Database altered.
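To confirm the setting took effect, a quick check (a sketch):

```sql
-- Should return YES once force logging is enabled
SELECT force_logging FROM v$database;
```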


Create standby redo logs


You will need to create sufficient standby redo logs to cater for redo apply. A standby redo log is required for the maximum protection and maximum availability modes, and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone. You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.


SQL> alter database add standby logfile thread 1 group 9 ('+DATAC1','+DBFS_DG') size 500M, group 10 ('+DATAC1','+DBFS_DG') size 500M;


Database altered.


SQL> alter database add standby logfile thread 2 group 11 ('+DATAC1','+DBFS_DG') size 500M, group 12 ('+DATAC1','+DBFS_DG') size 500M;

Database altered.
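Once the groups are created, a quick sanity check on the standby redo logs (a sketch; sizes should match the online redo logs, with one extra standby group per thread as a common rule of thumb):

```sql
-- List standby redo logs per thread with their sizes
SELECT thread#, group#, bytes/1024/1024 AS size_mb, status
FROM   v$standby_log
ORDER  BY thread#, group#;
```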


On the primary & standby Database


Set up entries in tnsnames.ora. A prerequisite here is that a listener exists on all nodes of the primary and standby hosts.


Before defining the SQL*Net entries below, use netca to create the required listeners.


$ cat tnsnames.ora



    (ADDRESS = (PROTOCOL = TCP)(HOST = bakuexa1-scan)(PORT = 1521))









    (ADDRESS = (PROTOCOL = TCP)(HOST = bakuexa1-scan)(PORT = 1521))
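Only the ADDRESS lines of the entries survive above; a representative pair of entries would look roughly like the following (a sketch: the SERVICE_NAME values are assumed to match the db_unique_name of each database):

```
MISDEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = bakuexa1-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MISDEV)
    )
  )

MISDEVDG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = bakuexa1-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MISDEVDG)
    )
  )
```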







On the Primary Database Node1


Copy the password file from the primary to the standby. Every database in a Data Guard configuration must use a password file, and the password for the SYS user must be identical on every system for redo data transmission to succeed; the password file is also what allows the databases to connect to each other.


$ cd $ORACLE_HOME/dbs


$ scp orapwMISDEV1 oracle@bakuexa1dbadm03:/u01/app/oracle/product/


$ scp orapwMISDEV2 oracle@bakuexa1dbadm04:/u01/app/oracle/product/


On the standby Database


Set up a dummy pfile in $ORACLE_HOME/dbs on node 1 and node 2. Note that the example assumes the use of the ARCn processes (the default) to transmit redo data. If you specify the LGWR process to transmit redo data to both the local and remote destinations, also include the NET_TIMEOUT attribute on the LOG_ARCHIVE_DEST_2 initialization parameter.





MISDEV1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
MISDEV2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*.pga_aggregate_target=1157627904
*.cluster_database=true
*.compatible=''
*.control_files='+DATAC1/misdevdg/controlfile/control01.ctl','+DBFS_DG/misdevdg/controlfile/control01.ctl'
*.dg_broker_config_file1='+DATAC1/misdev/BROKER/dr1MISDEVDG.dat'
*.dg_broker_config_file2='+DATAC1/misdev/BROKER/dr2MISDEVDG.dat'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_min_succeed_dest=1
MISDEV1.log_archive_trace=0
MISDEV2.log_archive_trace=0
*.log_file_name_convert='MISDEV','MISDEVDG'
*.remote_listener='bakuexa1-scan:1521'











On the standby DB


Using the dummy parameter file, start the instances.


Standby Node1:


SQL> startup nomount pfile=/u01/app/oracle/product/

ORACLE instance started.

Total System Global Area 3457368064 bytes

Fixed Size 2233272 bytes

Variable Size 754977864 bytes

Database Buffers 2684354560 bytes

Redo Buffers 15802368 bytes



Repeat on node 2 and start that instance in nomount as well.


On the primary & standby Database


At this point we can test connectivity. Test the SQL*Net connections from both the primary and standby sites.


$ sqlplus sys/password@MISDEVDG as sysdba


SQL*Plus: Release Production on Thu Jan 29 15:10:47 2015

Copyright (c) 1982, 2011, Oracle. All rights reserved.


Connected to:


Oracle Database 11g Enterprise Edition Release - 64bit Production


With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options




On the Primary Database


Perform an RMAN duplicate. For the RMAN duplicate-from-active-database method, the init.ora file (initdb1.ora in the example) requires only one parameter: db_name (it doesn't even have to be the real name of the database; any name will do). RMAN will copy the spfile from the primary, so this init.ora file is only needed during the first phase of the duplication.
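As described above, the temporary auxiliary parameter file can be as small as a single line (a sketch; the file name and location are whatever was passed to STARTUP NOMOUNT on the standby):

```
# Minimal init.ora for the auxiliary instance during RMAN duplicate;
# db_name is the only required parameter at this stage.
db_name=MISDEV
```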


From Primary :


[oracle@bakuexa1dbadm01]$ rman target sys@MISDEV auxiliary sys@MISDEVDG


Recovery Manager: Release - Production on Thu Jan 1 09:46:27 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

target database Password:


connected to target database: MISDEV (DBID=247039367)

auxiliary database Password:

connected to auxiliary database: MISDEV (not mounted)


RMAN> duplicate target database for standby from active database spfile

parameter_value_convert 'MISDEV','MISDEVDG'

set db_unique_name='MISDEVDG'

set db_file_name_convert='MISDEV','MISDEVDG'

set log_file_name_convert='MISDEV','MISDEVDG'

set control_files='+DATAC1/misdevdg/controlfile/control01.ctl','+DBFS_DG/misdevdg/controlfile/control01.ctl'

set standby_file_management='AUTO'

set db_recovery_file_dest='+DBFS_DG'

set audit_file_dest='/u01/app/oracle/admin/misdevADG/adump'


On the Standby Database


Create an spfile from the pfile created earlier. The broker changes database parameters during startup and role transitions via ALTER SYSTEM commands; an spfile is required to persist these changes.


SQL> create spfile='+DATAC1' from pfile='/u01/app/oracle/product/';

Next, connect to ASMCMD and copy the spfile from its generated location under +DATAC1/misdevADG/PARAMETERFILE to +DATAC1/misdevdg/spfileMISDEVDG.ora.


$ cat $ORACLE_HOME/dbs/initMISDEV1.ora
SPFILE='+DATAC1/misdevdg/spfileMISDEVDG.ora'

$ cat $ORACLE_HOME/dbs/initMISDEV2.ora
SPFILE='+DATAC1/misdevdg/spfileMISDEVDG.ora'


On the Standby Database


Verify that both standby instances can start in mount mode. It is at this point that we verify group-services registration with the underlying CSSD daemon. Once you have started them both, make sure you can see both instances.


oracle@bakuexa1dbadm03:/u01/app/oracle/product/$ sqlplus "/ as sysdba";


SQL*Plus: Release Production on Wed Jan 4 12:35:43 2015


Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:

Oracle Database 11g Enterprise Edition Release - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options


SQL> select instance_number, instance_name from gv$instance;



INSTANCE_NUMBER INSTANCE_NAME

--------------- ----------------

              1 MISDEV1

              2 MISDEV2


On the Primary & Standby Database


Define Log Transport Method to be used.

In step 3 we created the Data Guard configuration on the SCAN client network. You now have the option to change this if the architecture is intended to use the 10GbE or IB network.


$ srvctl config scan


   SCAN name: bakuexa1-scan, Network: 1/

   SCAN VIP name: scan1, IP: /bakuexa1-scan/172.30.91.x

   SCAN VIP name: scan2, IP: /bakuexa1-scan/172.30.91.x

   SCAN VIP name: scan3, IP: /bakuexa1-scan/172.30.91.x


% cat /proc/net/bonding/bondeth0 | grep eth

Currently Active Slave: eth1

Slave Interface: eth1

Slave Interface: eth2


On the Primary & Standby Database


We can now enable the Data Guard broker on both the primary and the standby. The broker stores its configuration information in a mirrored pair of files outside the database. By default, both files are stored in $ORACLE_HOME/dbs. To protect the files, it is good practice to store them in separate locations.


SQL> alter system set dg_broker_start=true scope=both sid='*';

System altered.

On Primary Node1:


SQL> alter system set dg_broker_config_file1='+DATAC1/misdev/BROKER/dr1MISDEV.dat' scope=spfile;


System altered.


SQL> alter system set dg_broker_config_file2='+DATAC1/misdev/BROKER/dr2MISDEV.dat' scope=spfile;

System altered.


On Standby Node1:


SQL> alter system set dg_broker_config_file1='+DATAC1/misdev/BROKER/dr1MISDEVDG.dat' scope=spfile;


System altered.


SQL> alter system set dg_broker_config_file2='+DATAC1/misdev/BROKER/dr2MISDEVDG.dat' scope=spfile;

System altered.


Then copy the broker files into ASM, connecting as the Grid Infrastructure software owner and using ASMCMD from the Grid home. When the broker is first enabled it creates its configuration files locally on the file system; we can then place them on shared storage, in this case an ASM disk group.


Primary DB on Node 1:


$ asmcmd cp /u01/app/oracle/product/ +DATAC1/misdev/BROKER/dr1
copying /u01/app/oracle/product/ -> +DATAC1/misdev/BROKER/dr1A


$ asmcmd cp /u01/app/oracle/product/ +DATAC1/misdev/BROKER/dr2
copying /u01/app/oracle/product/ -> +DATAC1/misdev/BROKER/dr2A

Standby DB on Node 1:



ASMCMD> cp /u01/app/oracle/product/
copying /u01/app/oracle/product/ -> +DATAC1/misdevADG/BROKER/dr1MISDEVDG.dat


ASMCMD> cp /u01/app/oracle/product/
copying /u01/app/oracle/product/ -> +DATAC1/misdevADG/BROKER/dr2MISDEVDG.dat

Restart the broker for changes to take effect.



On Primary, and Standby


SQL> alter system set dg_broker_start=false scope=both sid='*';


System altered.


SQL> alter system set dg_broker_start=true scope=both sid='*';


System altered.


Primary DB on Node 1:


DGMGRL> create configuration MISDEV_Active_DG as primary database is MISDEV connect identifier is MISDEV;
Configuration "misdev_active_dg" created with primary database "misdev"


DGMGRL> add database MISDEVDG as connect identifier is MISDEVDG maintained as physical;
Database "MISDEVDG" added


DGMGRL> enable configuration;




DGMGRL> show configuration;


Configuration - misdev_active_dg


Protection Mode: MaxPerformance




Databases:
  MISDEV   - Primary database
  MISDEVDG - Physical standby database


Fast-Start Failover: DISABLED


Configuration Status:




On the Standby Database


Register Standby Database Resources with Clusterware


$ srvctl add database -d MISDEVDG -o /u01/app/oracle/product/ -p +DATAC1/misdevdg/spfileMISDEVDG.ora -r physical_standby


$ srvctl add instance -d MISDEVDG -i MISDEV1 -n bakuexa1dbadm03

$ srvctl add instance -d MISDEVDG -i MISDEV2 -n bakuexa1dbadm04


This completes the Active Data Guard setup on the Exadata X4-2 RAC system. Restarting the standby database resource with srvctl will automatically start MRP on one of the instances.


oracle@bakuexa1dbadm03$ sqlplus "/ as sysdba";


SQL*Plus: Release Production on Wed Feb 4 18:26:27 2015


Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:

Oracle Database 11g Enterprise Edition Release - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options


SQL> set linesize 1000

SQL> set pagesize 100

SQL> select name, value from gv$dataguard_Stats;




NAME                             VALUE

-------------------------------- ----------------------------------------------------------------

transport lag                    +00 00:00:00

apply lag                        +00 00:00:00

apply finish time                +00 00:00:00.000

estimated startup time           19




Active Data Guard is the simplest, highest-performance solution for maintaining a synchronized, independent copy of an Oracle database. It is the only replication technology able to support the very high volumes driven by the Oracle Database Machine and Oracle Exadata. Oracle 11g Release 2 tests using an Oracle Database Machine showed Active Data Guard applying changes at a sustained rate greater than 500 MB/second.



End Result


Active Data Guard can do what no other data protection technology can do for the Oracle Database. It provides unique levels of simplicity, performance, reliability, and the security of having an independent, exact replica of the production database. It eliminates the unknown by allowing you to validate the production readiness of the standby database by querying it at any time, without impacting data protection or your ability to immediately transition it to the primary role.





For more information, please read the MOS note: How to Setup Active DataGuard on Exadata (Doc ID 1580796.1)


OTECH MAGAZINE #7  spring 2015        copyright otech magazine 2015