How to Configure Data Guard Using a Physical Standby Database
I recently decided to test a rollback strategy we were considering while upgrading our databases to Oracle 11g. We were running Oracle 10g at the time, and we had the following Data Guard setup:
Primary DB: 10.2.0.4
Physical standby DB 1: 10.2.0.4
Physical standby DB 2: 10.2.0.4
NOTE: Our operating system is Windows Server 2003, SP2. Though these instructions are for a Windows environment, they will also work in any other environment; the only differences are some of the Windows-specific steps, such as Windows service creation.
The Plan for Upgrading Data Guard
The plan was to upgrade both Data Guard standby DBs to 11g at the same time as the primary. As part of the upgrade, we needed a fallback plan so that if the upgrade failed we could restore the primary and standby databases to a point in time before the upgrade took place. We didn't want to have to rebuild the standby databases if the upgrade failed, because that would be time-consuming and require much more work. The primary database rollback plan was relatively simple:
Shut down the database
Take a "snap" of the LUNs at the SAN level
Bring the database up and run the upgrade
Drop the snap if the upgrade succeeded
Roll back to the image if the upgrade failed
The problem we faced with the standby databases was that we couldn't snap the disks on the SANs they were running on, because the SAN technology was older and we did not have the licenses for it. We also had some questions about the Data Guard configuration and how it would behave:
When we roll back the primary DB, how do we roll back the standby DBs if they have also been applying logs during the upgrade?
Can we stop the standby DBs applying logs, roll the primary DB back to the snap version, and start shipping logs again?
Will the physical standby DB pick up from exactly where it left off if we delete the logs generated while the upgrade was running?
If the primary DB was at archive log sequence 100 at the time of the snap, generated 50 logs during the upgrade process, and we snap it back, will it continue from sequence 100 again?
We made educated guesses about the questions above, which should give you some idea of the kinds of things we were unsure of before we had done any testing. The best way to prove anything is to test it, so I set out to test our theory. What was the theory? Well, we thought (I mean hoped…) that if we canceled the log transport while the upgrade was running (DEFER the log archive destinations), rolled the primary DB back to the snap version of the disks, deleted all of the logs produced while the upgrade was running, and enabled the log archive destinations again, then the Data Guard physical standby databases would pick up from where they left off, none the wiser to the failed upgrade on the primary database.
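Put in command form, the theory looks roughly like this. This is only a sketch; it assumes log_archive_dest_2 is the destination shipping redo to the standby, which matches the configuration used later in this post:

```sql
-- On the primary, before starting the upgrade: stop shipping redo
ALTER SYSTEM SET log_archive_dest_state_2=DEFER SCOPE=both;

-- On the standby: stop applying redo
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- ... shut the primary down, snap the LUNs, run the upgrade ...
-- If the upgrade fails: roll the primary back to the snap, delete the
-- archived logs generated during the upgrade, then resume shipping:
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=both;

-- And on the standby, resume recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

The rest of this post tests whether the standby really does carry on from where it left off after this sequence.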
What We Are Going to Test
Here is a list of what I will be walking through in the example below:
Configure Data Guard on version 10.2.0.4 (although this also applies to an 11g Data Guard)
Create a backup of your primary DB in preparation for restoring it to create your physical standby database
The restore process to set up your physical standby database
How to get log shipping working from the primary to the standby databases
Configuring the rollback strategy above
Simulating a failed upgrade attempt
Rolling back to the snap version of the database (at the SAN level)
A test to confirm everything works as it did before the upgrade
SAN vs. Flashback Database: A Personal View…
I should mention that for this test I worked with the storage team to get a snapshot of the disk(s) before any upgrade work was done. I find this a much better option for this kind of work than using Oracle's flashback technology, for several reasons:
Flashback requires a large amount of space to store any changes
I have encountered problems using Oracle flashback technology during an upgrade that corrupted my guaranteed restore point. Very undesirable!
The SAN snapshot is quick, clean, and requires no configuration changes from an Oracle point of view – just a clean shutdown of the DB while the snapshot is taken.
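For comparison, the flashback-based alternative I decided against would look roughly like this (a sketch only, assuming a flash recovery area is already configured; the restore point name is just an example):

```sql
-- Before the upgrade: create a guaranteed restore point on the primary
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- If the upgrade fails: flash the database back (from MOUNT state)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
ALTER DATABASE OPEN RESETLOGS;

-- If the upgrade succeeds: drop the restore point to release the space
DROP RESTORE POINT before_upgrade;
```

As noted above, this approach costs space for the flashback logs and, in my experience, can go wrong mid-upgrade, which is why I prefer the SAN snap.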
Therefore, in this test I show how to create a standby DB from your primary DB, along with the commands I needed to use. Hopefully, it is all self-explanatory for an experienced Oracle DBA. If not, please ask any questions in the comments section at the bottom of the page or email me, and I will do my best to help you.
Step-by-Step Guide to Creating and Configuring a Data Guard Physical Standby
On your primary DB, shrink the size of the DB wherever possible so that the backup, copy, and restore to the standby database is quicker. I typically use this little script to do it dynamically for a specific tablespace. It attempts to shrink each data file by roughly 500MB; you can configure that to whatever you want and do it for every tablespace by removing the "where tablespace_name =" part of the statement.
spool shrink.out
set serveroutput on
begin
  for i in (select 'alter database datafile '||file_id||' resize '||trunc(bytes - 512000000)||';' as cmd
            from dba_data_files where tablespace_name = 'DW3_L1_X128K') loop
    dbms_output.put_line(i.cmd);
  end loop;
end;
/
spool off
RMAN Level 0 Backup and ARCHIVELOG Mode
The first stage is to take an RMAN level 0 backup, which you will use afterwards to restore and create your physical standby database. Let's assume you don't want to take the database down to complete the backup. Hopefully, your database is already running in ARCHIVELOG mode. You can check with this query:
SELECT log_mode FROM v$database;
If your primary database is not in ARCHIVELOG mode, you can either take a cold backup, i.e., while the database is down, or put the database into ARCHIVELOG mode. To do this, configure your log_archive_dest_1 parameter and switch it on as follows:
ALTER SYSTEM SET log_archive_dest_1='LOCATION=D:\TEMP\TESTDB\ARCHLOGS' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ALTER SYSTEM SWITCH LOGFILE;
Now ensure the archived redo logs appear in the location specified by the log_archive_dest_1 parameter.
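A quick way to confirm this from SQL*Plus (one option among several) is to force a log switch and then list the most recent archived logs:

```sql
ALTER SYSTEM SWITCH LOGFILE;

SELECT name, sequence#, completion_time
FROM v$archived_log
ORDER BY sequence#;
```

The file names returned should sit under the directory you set in log_archive_dest_1.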
Force Database Logging
If you are creating a physical standby database, there must be no unlogged transactions in your database, such as direct path loads or other operations that do not generate redo, because your standby is an identical physical copy of the primary, and all transactions must be applied in the correct order. To enforce this, run the following command:
ALTER DATABASE FORCE LOGGING;
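You can verify that force logging has taken effect with a quick query against v$database:

```sql
SELECT force_logging FROM v$database;
-- Should return YES
```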
Okay, so now we're up and running in ARCHIVELOG mode and logging all transactions to the REDO stream. We can take a level 0 backup using the following commands:
rman target sys/<pwd>@<SID>
backup as compressed backupset incremental level 0 database plus archivelog;
The length of time the backup takes depends on how large your database is. If you use the default compression level for RMAN, your backup will be around 20% of the size of your database.
Once the level 0 backup is complete, copy the files to the standby host, ready for restoring to create the physical standby database.
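It is also worth confirming in RMAN that the backup completed, and noting its TAG, which you will need for the restore later (the TAG shown later in this post is TAG20120529T110507):

```sql
-- In an RMAN session against the primary
LIST BACKUP SUMMARY;
```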
Initialisation Parameter Configuration
Before restoring the physical standby database, we need to configure some more parameters on the primary database so that it can act as the primary database within the Data Guard configuration. I have listed below the parameters I needed to change in order to configure the primary DB for Data Guard:
NOTE: I have also configured some of the parameters that are only required if the primary database becomes the standby database, in other words, if a failover or switchover scenario were to occur.
NOTE: In the examples below, TEST is the primary DB, and TESTDG is the physical standby database.
NOTE: The file locations on the Data Guard database differ only by the name of the DB. For example, the path 'D:\ORADATA\TEST\' becomes 'D:\ORADATA\TESTDG\'.
alter system set log_archive_config='DG_CONFIG=(TEST,TESTDG)' scope=both;
-- This specifies that there are two databases in the Data Guard configuration
alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both;
-- Enables the first archive log destination
alter system set log_archive_dest_2='SERVICE=TESTDG ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TESTDG' scope=both;
-- Configures archived REDO log shipping to the Data Guard DB (TESTDG) while the database is running in the primary role
alter system set LOG_ARCHIVE_DEST_STATE_2=DEFER scope=both;
-- The second archive log destination (the standby) is DEFERred for now; we will enable it once the standby has been restored
alter system set remote_login_passwordfile=EXCLUSIVE scope=spfile;
-- This value must be the same on both databases for the logs to ship
alter system set LOG_ARCHIVE_FORMAT=%t_%s_%r.arc scope=both;
-- Configures the format of the archived redo log names
alter system set LOG_ARCHIVE_MAX_PROCESSES=4 scope=both;
-- This is the maximum number of processes that can ship logs.
-- Adding more can increase CPU usage if there are a large number of logs to ship
alter system set FAL_SERVER=TESTDG scope=both;
-- Sets the Fetch Archive Log server to TESTDG, the standby database
-- Only required for when the primary switches to standby
alter system set DB_FILE_NAME_CONVERT='TESTDG','TEST' scope=spfile;
-- This converts all data file paths where there is TESTDG in the name and replaces it with TEST
-- Only required for when the primary switches to standby
alter system set LOG_FILE_NAME_CONVERT='TESTDG','TEST' scope=spfile;
-- This converts all log file paths where there is TESTDG in the name and replaces it with TEST
-- Only required for when the primary switches to standby
alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=both;
-- Ensures that when data files are added or removed on the primary, they are also added or removed on the Data Guard database
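Once the parameters are set, you can sanity-check them from SQL*Plus before moving on, for example:

```sql
-- Spot-check the Data Guard related parameters on the primary
show parameter log_archive_config
show parameter log_archive_dest_2
show parameter fal_server
show parameter standby_file_management
```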
Control File and PFILE Creation
Now that the initialization parameters have been configured on the primary database, it's time to create the standby control file and the Data Guard database parameter file.
ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'C:\temp\TEST.ctl';
CREATE PFILE='C:\temp\initTEST.ora' FROM SPFILE='G:\Oracle\oradata\TEST\admin\pfile\spfileTEST.ora';
-- Create the parameter file for the standby DB
-- I always explicitly define the PFILE and SPFILE locations to avoid any issues whereby the wrong file is used
-- Copy this file, along with the standby control file, to the Data Guard DB server ready for the restore
You should now take this initTEST.ora parameter file and modify the settings so that it can be used for the physical standby database.
DB_NAME=TEST
-- The DB_NAME remains the same as the primary DB; I want to highlight that to you here
DB_UNIQUE_NAME=TESTDG
-- Change this to reflect the name of your Data Guard database; I'm using TESTDG here
CONTROL_FILES='D:\ORADATA\TESTDG\CONTROL01.CTL','D:\ORADATA\TESTDG\CONTROL02.CTL','D:\ORADATA\TESTDG\CONTROL03.CTL'
-- There should be at least two of these; I use three, and the only difference in the path is that I have changed TEST to TESTDG
DB_FILE_NAME_CONVERT='TEST','TESTDG'
-- Same setup as before, but this one converts the primary data file names to the Data Guard names
LOG_FILE_NAME_CONVERT='TEST','TESTDG'
-- As before, the log file names in the path are changed to reflect their location on disk
LOG_ARCHIVE_DEST_1='LOCATION=O:\flash_recover_TEST\ArchLogs VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=TESTDG'
-- This is where the archived REDO logs will be placed when in the standby and primary roles
LOG_ARCHIVE_DEST_2='SERVICE=TEST ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=TEST'
-- This specifies the parameters for when the database is running in the primary role
-- Only required when the standby switches to primary
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
-- Ensure that both archive destinations are ENABLED
FAL_SERVER=TEST
-- This points to the primary database
OK, so we've now configured all the initialization parameters for the primary and physical standby databases. We've also added the relevant settings should we need to fail over or switch the primary and standby roles in the Data Guard setup.
Windows Service, Listener, TNSNames, etc.
The next step is to create the Windows service on the server, the listeners, the TNSNames entries, and the password file, as well as an SPFILE for the standby database. Let's take a look at how to do that.
1. Add TNSNames entries to the Oracle clients on the primary and standby database hosts
Following on with the TEST and TESTDG database names, as used in the earlier parts of this post, here are the TNSNames entries for those DBs:
TEST.DEV.INT.COM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = TESTListener)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = TEST.DEV.INT.COM)
      (SERVER = DEDICATED)
    )
  )

TESTDG.DEV.INT.COM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = TESTDGListener)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = TESTDG.DEV.INT.COM)
      (SERVER = DEDICATED)
    )
  )
NOTE: I am using a DNS entry for the HOST parameter. If you are using DNS, ensure that the DNS name resolves correctly using the nslookup command from both hosts.
NOTE: You need to ensure that SQL*Net traffic is permitted in both directions so that log transport can occur.
2. Add listener entries on the Data Guard DB server and the primary DB server
You can create new listeners if you don't have them installed already, or add the configuration to one you already have; it's up to you. These are the entries that I have for the TEST and TESTDG DBs:
-- Data Guard Listener Configuration
(DESCRIPTION_LIST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.63.41.57)(PORT = 1521))
  )
)
(SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = TESTDG.DEV.INT.COM)
    (ORACLE_HOME = D:\oracle\product\11.2.0\dbhome_11203)
    (SID_NAME = TEST)
  )
)
-- Primary DB Listener Configuration
(DESCRIPTION_LIST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.60.41.57)(PORT = 1521))
  )
)
(SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = TEST.DEV.INT.COM)
    (ORACLE_HOME = D:\oracle\product\11.2.0\dbhome_11203)
    (SID_NAME = TEST)
  )
)
NOTE: Hopefully, you will have noticed that in the configuration for the Data Guard listener, the SID_NAME = TEST, not TESTDG. If you remember, the DB_NAME is the same as the primary (TEST); however, the DB_UNIQUE_NAME is TESTDG.
3. Create the Windows Service on the Data Guard DB Server
We will use the ORADIM tool to create the Windows service on the Data Guard DB server.
D:\oracle\product\11.2.0\dbhome_11203\oradim -NEW -SID TESTDG -STARTMODE manual
NOTE: I'm specifying the full path to the oradim utility because the server is multi-homed. You want to avoid any potential confusion over which executable it uses. It's always safest to explicitly define which executable/file/etc. you want to use; then you know precisely what you are going to get.
NOTE: You can alter the settings for this Windows service at any time by opening the registry settings (Start -> Run -> regedit, HKEY_LOCAL_MACHINE -> SOFTWARE -> Oracle -> your_oracle_home_name)
4. Create a Password File for the Data Guard DB
D:\oracle\product\11.2.0\dbhome_11203\orapwd file='D:\ORADATA\TESTDG\pwdTEST.ora' password=change_on_install
5. Create an SPFILE for the Data Guard Database
SQLPLUS "sys/change_on_install as sysdba"
STARTUP NOMOUNT PFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\initTEST.ora'
CREATE SPFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\SPFILETEST.ora' FROM PFILE='D:\ORADATA\TESTDG\ADMIN\PFILE\initTEST.ora';
Nearly there… We are making good progress so far. We've done a lot of configuration; now it's time to find out if it's correct! The next phase is restoring the RMAN level 0 backup taken from the primary database and copied over.
SET NLS_DATE_FORMAT=YYYY-MM-DD:HH24:MI:SS
RMAN TARGET sys/change_on_install@TESTDG
ALTER DATABASE MOUNT; -- Ensure that you've mounted the database
CATALOG START WITH 'O:\Oracle\flash_recover_TESTDG\flashback\TEST\BACKUPSET12_05_29';
-- Catalog the RMAN level 0 files so that the database is aware of the presence of the files
SELECT 'set newname for datafile '''||file_name||''' to '''||replace(file_name,'TEST','TESTDG')||''';'
  as datafile from dba_data_files;
-- Command used to generate the filenames for the switch/set newname operation
-- To be run against the primary database and used in the RUN block below
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT 'O:\Oracle\flash_recover_TEST\flashback\TEST\BACKUPSET12_05_29\';
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT 'O:\Oracle\flash_recover_TEST\flashback\TEST\BACKUPSET12_05_29\';
  set newname for datafile 'D:\ORADATA\TEST\DATA\SYSTEM01.DBF' to 'D:\ORADATA\TESTDG\DATA\SYSTEM01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\UNDOTBS01.DBF' to 'D:\ORADATA\TESTDG\DATA\UNDOTBS01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\SYSAUX01.DBF' to 'D:\ORADATA\TESTDG\DATA\SYSAUX01.DBF';
  set newname for datafile 'D:\ORADATA\TEST\DATA\USERS01.DBF' to 'D:\ORADATA\TESTDG\DATA\USERS01.DBF';
  RESTORE DATABASE FROM TAG 'TAG20120529T110507';
  SWITCH DATAFILE ALL;
}
-- This RUN block allocates two channels to use for the restore and recover operation.
-- You can use more channels if you have more horsepower!
-- The "set newname" commands are generated by the previous dynamic SQL script. Copy and paste them in
-- The RESTORE command specifies the TAG of the level 0 RMAN backup
-- The commands then RESTORE the database and SWITCH all the data file names as per the set newname commands
That's it! It will take some time to restore the data files. The larger the DB, the longer the restore time.
Now you must ensure the primary DB is enabled to ship the REDO logs:
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=both;
-- On the primary DB
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- On the standby DB
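At this point, you can confirm on the standby that managed recovery has started by querying v$managed_standby; the MRP0 process should be present:

```sql
SELECT process, status, thread#, sequence#
FROM v$managed_standby;
-- Look for MRP0 with a status such as APPLYING_LOG or WAIT_FOR_LOG
```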
Hopefully, you have configured everything correctly, and it will work the first time… If it has not, here are some suggestions to help you out:
Possible Data Guard Configuration Problems
Error 1017 received logging on to the standby
Ensure that the primary and standby are using a password file, that remote_login_passwordfile is set to SHARED or EXCLUSIVE, and that the SYS password is the same in both password files.
Returning error ORA-16191
Ensure that you have done the following:
1. Added TNS entries on both the primary and standby DBs. If you have multiple Oracle homes, ensure you've altered the correct one
2. Check that the entries have been added to the listeners on both DB servers
3. Check you can log on with SQL*Plus from the primary DB to the standby and vice versa
4. Copy the password file from the primary and use it for the standby DB
5. Check network connectivity to the standby host on the correct port (1521 by default)
Now we should have the Data Guard database configured and be happy that the logs are shipping from the primary to the standby database. Verify this first by switching logfiles on the primary a few times. You can also check that the logs are being applied on the physical standby using this query:
set lines 120
set pages 1000
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME > SYSDATE - 3
ORDER BY SEQUENCE#;
The next stage of the experiment is to take the snaps, simulate a failed upgrade, and then test the rollback. That's what I will walk you through now.
Testing Rollback Using Snaps
1. Switch the logfile on the primary
ALTER SYSTEM SWITCH LOGFILE;
2. Ensure the logs have been applied to the standby
set lines 120
set pages 1000
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME > SYSDATE - 3
ORDER BY SEQUENCE#;
Then, after taking the snaps, deferring log shipping, simulating the failed upgrade, rolling the primary back to the snap, and deleting the logs generated during the upgrade, restart managed recovery on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
3. Check archive logs are shipping and being applied on the standby DB
set lines 120
set pages 1000
alter session set nls_date_format = 'DD-MM-YYYY HH24:MI:SS';
SELECT SEQUENCE#, APPLIED, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME > SYSDATE - 3
ORDER BY SEQUENCE#;
Now that you have completed all of these steps, you should hopefully see that the archived redo logs are shipping once more to the Data Guard physical standby database. And most importantly, this test confirmed that my theory was correct: you can use snaps to roll back a failed upgrade on the primary database and still allow the physical standby databases to continue recovering from where they left off before the upgrade.
I should point out here that it's also possible to run the upgrade on your primary database without stopping the log shipping. You could instead take snaps of the disks on the physical standby database as well. Doing it this way would mean you would need to roll the Data Guard database back if the upgrade failed, but if it succeeded, it would already be up to date with the primary, so no catching up via the REDO logs would be required.
Which option you choose is up to you and may depend on your specific requirements and what you deem appropriate and necessary.