Tuesday 17 December 2013

Install Oracle 11.2.0.3 with ASM on CentOS 6.3

Linux Basic commands

Thursday 12 December 2013

Define Multiple Locations for Archive Logs

alter system set log_archive_dest_2='location=/u01/app/dest2/arch' scope=both;

Wednesday 11 December 2013

Basic SQL statement performance diagnosis - HOW TO, step by step instructions

http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html

Monday 9 December 2013

PL/SQL Tutorial

Declarations

SET SERVEROUTPUT ON

DECLARE
  message VARCHAR2(20) := 'Hello World';
BEGIN
  DBMS_OUTPUT.put_line(message);
END;
/

Sunday 8 December 2013

Materialized Views in Oracle

A materialized view, or snapshot as they were previously known, is a table segment whose contents are periodically refreshed based on a query, either against a local or remote table. Using materialized views against remote tables is the simplest way to achieve replication of data between sites. The example code in this article assumes DB1 is the master instance and DB2 is the materialized view site.

Basic syntax

CREATE MATERIALIZED VIEW view-name
BUILD [IMMEDIATE | DEFERRED]
REFRESH [FAST | COMPLETE | FORCE ]
ON [COMMIT | DEMAND ]
[[ENABLE | DISABLE] QUERY REWRITE]
[ON PREBUILT TABLE]
AS
SELECT ...;
 
The following build options are available.
  • IMMEDIATE : The materialized view is populated immediately.
  • DEFERRED : The materialized view is populated on the first requested refresh.
The following refresh types are available.
  • FAST : A fast refresh is attempted. If materialized view logs are not present against the source tables in advance, the creation fails.
  • COMPLETE : The table segment supporting the materialized view is truncated and repopulated completely using the associated query.
  • FORCE : A fast refresh is attempted. If one is not possible a complete refresh is performed.
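The FORCE behaviour amounts to a simple decision: try a fast refresh, fall back to a complete one. A toy sketch in Python (purely illustrative, not Oracle internals):

```python
# Toy model of the three refresh types (illustration only, not Oracle internals).
def refresh_action(refresh_type, mv_logs_present):
    if refresh_type == 'FAST':
        # A fast refresh needs materialized view logs on the source tables.
        if not mv_logs_present:
            raise ValueError('fast refresh requires materialized view logs')
        return 'apply changes from mv logs'
    if refresh_type == 'COMPLETE':
        return 'truncate and repopulate from query'
    if refresh_type == 'FORCE':
        # FORCE: fast if possible, otherwise complete.
        return refresh_action('FAST' if mv_logs_present else 'COMPLETE',
                              mv_logs_present)

print(refresh_action('FORCE', True))   # apply changes from mv logs
print(refresh_action('FORCE', False))  # truncate and repopulate from query
```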
A refresh can be triggered in one of two ways.

  • ON COMMIT : The refresh is triggered by a committed data change in one of the dependent tables.
  • ON DEMAND : The refresh is initiated by a manual request or a scheduled task.
The QUERY REWRITE clause tells the optimizer whether the materialized view should be considered for query rewrite operations.

The ON PREBUILT TABLE clause tells the database to use an existing table segment, which must have the same name as the materialized view and support the same column structure as the query.

Grant Permissions
Check that the user who will own the materialized views has the correct privileges. At a minimum they will require the CREATE MATERIALIZED VIEW privilege. If they are creating materialized views using database links, you may want to grant them the CREATE DATABASE LINK privilege also.


CONNECT sys@db2

GRANT CREATE MATERIALIZED VIEW TO scott;
GRANT CREATE DATABASE LINK TO scott;
 

Create Materialized View 

Connect to the materialized view owner and create the database link and the materialized view itself.
CONNECT scott/tiger@db2

CREATE DATABASE LINK DB1.WORLD CONNECT TO scott IDENTIFIED BY tiger USING 'DB1.WORLD';

CREATE MATERIALIZED VIEW emp_mv
BUILD IMMEDIATE 
REFRESH FORCE
ON DEMAND
AS
SELECT * FROM emp@db1.world;
 
 
Remember to gather stats after building the materialized view.
 
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname => 'SCOTT',
    tabname => 'EMP_MV');
END;
/ 
Create Materialized View Logs

Since a complete refresh involves truncating the materialized view segment and re-populating it using the related query, it can be quite time consuming and involve a considerable amount of network traffic when performed against a remote table. To reduce the replication costs, materialized view logs can be created to capture all changes to the base table since the last refresh. This information allows a fast refresh, which only needs to apply the changes rather than a complete refresh of the materialized view.
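The difference can be sketched as follows: a complete refresh copies the whole table again, while a fast refresh replays only the logged changes. A toy Python illustration of the idea (not Oracle's implementation):

```python
# Toy illustration of complete vs fast refresh (not Oracle internals).
base = {1: 'SMITH', 2: 'ALLEN', 3: 'WARD'}
mview = dict(base)  # state of the materialized view after the initial build

# Changes captured in the materialized view log since the last refresh.
mv_log = [('UPDATE', 2, 'ALLEN-X'), ('INSERT', 4, 'JONES'), ('DELETE', 3, None)]
base[2] = 'ALLEN-X'; base[4] = 'JONES'; del base[3]

# Fast refresh: apply only the logged changes, not the whole table.
for op, key, val in mv_log:
    if op == 'DELETE':
        mview.pop(key, None)
    else:
        mview[key] = val

print(mview == base)  # True
```

A complete refresh would instead do `mview = dict(base)` again, shipping every row over the network each time.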
To take advantage of the fast refresh, connect to the master instance and create the materialized view log.

CONNECT scott/tiger@db1

CREATE MATERIALIZED VIEW LOG ON scott.emp
TABLESPACE users
WITH PRIMARY KEY
INCLUDING NEW VALUES;
 

Refresh Materialized Views

 
If a materialized view is configured to refresh on commit, you should never need to manually refresh it, unless a rebuild is necessary. Remember, refreshing on commit is a very intensive operation for volatile base tables. It makes sense to use fast refreshes where possible.
For on demand refreshes, you can choose to manually refresh the materialized view or refresh it as part of a refresh group.
The following code creates a refresh group defined to refresh every minute and assigns a materialized view to it.

BEGIN
   DBMS_REFRESH.make(
     name                 => 'SCOTT.MINUTE_REFRESH',
     list                 => '',
     next_date            => SYSDATE,
     interval             => '/*1:Mins*/ SYSDATE + 1/(60*24)',
     implicit_destroy     => FALSE,
     lax                  => FALSE,
     job                  => 0,
     rollback_seg         => NULL,
     push_deferred_rpc    => TRUE,
     refresh_after_errors => TRUE,
     purge_option         => NULL,
     parallelism          => NULL,
     heap_size            => NULL);
END;
/

BEGIN
   DBMS_REFRESH.add(
     name => 'SCOTT.MINUTE_REFRESH',
     list => 'SCOTT.EMP_MV',
     lax  => TRUE);
END;
/
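The interval expression above uses Oracle date arithmetic, which works in days, so SYSDATE + 1/(60*24) is one minute in the future. A quick sanity check of the arithmetic (Python, for illustration only):

```python
# Oracle date arithmetic is in days: SYSDATE + 1/(60*24) is one minute ahead.
fraction_of_day = 1 / (60 * 24)
seconds = fraction_of_day * 24 * 60 * 60
print(seconds)  # one minute, i.e. 60 seconds
```

An hourly refresh group would use SYSDATE + 1/24 in the same way.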
 
A materialized view can be manually refreshed using the DBMS_MVIEW package.
 
EXEC DBMS_MVIEW.refresh('EMP_MV');
 
Cleaning Up

CONNECT scott/tiger@db2
DROP MATERIALIZED VIEW emp_mv;
DROP DATABASE LINK DB1.WORLD;

BEGIN
  DBMS_REFRESH.destroy(name => 'SCOTT.MINUTE_REFRESH');
END;
/

CONNECT scott/tiger@db1
DROP MATERIALIZED VIEW LOG ON scott.emp;
 
 

Aggregations and Transformations

Materialized views can be used to improve the performance of a variety of queries, including those performing aggregations and transformations of the data. This allows the work to be done once and used repeatedly by multiple sessions, reducing the total load on the server.
The following query does an aggregation of the data in the EMP table.

CONN scott/tiger
SET AUTOTRACE TRACE EXPLAIN

SELECT deptno, SUM(sal)
FROM   emp
GROUP BY deptno;
 
Create a materialized view to perform the aggregation in advance, making sure you specify the ENABLE QUERY REWRITE clause.

CREATE MATERIALIZED VIEW emp_aggr_mv
BUILD IMMEDIATE 
REFRESH FORCE
ON DEMAND
ENABLE QUERY REWRITE 
AS
SELECT deptno, SUM(sal) AS sal_by_dept
FROM   emp
GROUP BY deptno;

EXEC DBMS_STATS.gather_table_stats(USER, 'EMP_AGGR_MV');
 


The same query is now rewritten to take advantage of the pre-aggregated data in the materialized view, instead of the session doing the work for itself.
 
 
--ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED; 
--ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
SET AUTOTRACE TRACE EXPLAIN

SELECT deptno, SUM(sal)
FROM   emp
GROUP BY deptno; 

Before using materialized views and materialized view logs, consider the following:

  • Populating a materialized view adds load to both servers involved. The source server is queried to capture the data, which is inserted into the destination server. Be sure the additional load does not adversely affect your primary system.
  • Although materialized view logs improve the performance of materialized view refreshes, they do increase the work needed to perform DML on the base table. Check that the additional work does not adversely affect performance on the primary system.
  • If regular refreshes are not performed, materialized view logs can grow very large, potentially reducing the performance of their maintenance and exceeding tablespace limits.
  • Depending on the Oracle version and the complexity of the associated query, fast refreshes may not be possible.
  • When using materialized views to improve the performance of transformations and aggregations, the QUERY_REWRITE_INTEGRITY and QUERY_REWRITE_ENABLED parameters must be set, or the server will not be able to take advantage of query rewrites automatically. These parameters may be set in the pfile or spfile if they are needed permanently. Later releases have them enabled by default.

Oracle installation on Linux 6

http://www.oracle-base.com/articles/11g/oracle-db-11gr2-installation-on-oracle-linux-6.php

http://www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/

ftp://mirror.aarnet.edu.au/pub/oraclelinux/OL6/U1/x86/

https://wikis.oracle.com/display/oraclelinux/Downloading+Oracle+Linux

Basic Oracle Commands for Linux


How to start Enterprise Manager Database Console

emctl status dbconsole

This also shows the HTTP address to call.

If it is not running:

emctl start dbconsole

If necessary, set the SID first:

export ORACLE_SID=<database_sid>

emctl start dbconsole



Set your PATH environment variable in your .bash_profile or .bashrc file

export PATH=$ORACLE_HOME/bin:$PATH

Then just issue the command at your prompt:

[linux]$ dbca
[linux]$ netmgr

NB: if you are on a remote machine, you also need to set the DISPLAY variable.

Alternatively, go directly to the $ORACLE_HOME/bin directory, where you can see all the utilities, and run them like this:

[linux]$ ./dbca

Oracle Auto Start on Linux

From Oracle 11gR2 onward the use of the dbstart and dbshut scripts is deprecated. In Oracle 11gR2 the preferred replacement is Oracle Restart. Oracle Restart is itself deprecated in Oracle 12c, with no nominated replacement at the time of writing.
Both dbstart and dbshut are still present in Oracle 11gR2 and Oracle 12cR1, so you can continue to use them for now (I still use them). In order to use Oracle Restart you must install Grid Infrastructure (GI), which you will already have if you are using RAC or ASM for a standalone instance. In these cases, Oracle Restart will already be present and running. For single instance databases that don't use ASM, I think it is unreasonable to expect people to install GI, so the following describes a method for those cases, while avoiding dbstart and dbshut.

Create a file called "/etc/init.d/dbora" as the root user, containing the following.

#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database software.

ORA_OWNER=oracle

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        # Remove "&" if you don't want startup as a background process.
        su $ORA_OWNER -c "/home/oracle/scripts/startup.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1" &

        touch /var/lock/subsys/dbora
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login
        # will not prompt the user for any values
        su $ORA_OWNER -c "/home/oracle/scripts/shutdown.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
        rm -f /var/lock/subsys/dbora
        ;;
esac


Use the chmod command to set the privileges to 750.


chmod 750 /etc/init.d/dbora
 
Associate the dbora service with the appropriate run levels and set it to auto-start using the following command.

# chkconfig --add dbora

Next, we must create the "startup.sh" and "shutdown.sh" scripts in the "/home/oracle/scripts" directory. First create the directory.

    # mkdir -p /home/oracle/scripts
    # chown oracle.oinstall /home/oracle/scripts

Create the "/home/oracle/scripts/startup.sh" script containing the following.
#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export PATH=/usr/sbin:/usr/local/bin:$PATH
export ORACLE_HOSTNAME=ol6-121.localdomain
export ORACLE_UNQNAME=db12c

export ORACLE_SID=db12c
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Start Listener
lsnrctl start

# Start Database
sqlplus / as sysdba << EOF
STARTUP;
EXIT;
EOF
 
 
The "/home/oracle/scripts/shutdown.sh" script is similar.
 
 
#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export PATH=/usr/sbin:/usr/local/bin:$PATH
export ORACLE_HOSTNAME=ol6-121.localdomain
export ORACLE_UNQNAME=db12c

export ORACLE_SID=db12c 
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Stop Database
sqlplus / as sysdba << EOF
SHUTDOWN IMMEDIATE;
EXIT;
EOF

# Stop Listener
lsnrctl stop 
Make sure the permissions and ownership of the files are correct.
 
# chmod u+x /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh
# chown oracle.oinstall /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh
 
 
The listener and database will now start and stop automatically with the machine. You can test them using the following commands as the "root" user.
 
# service dbora start
# service dbora stop 
For more help visit 
http://www.oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux.php#known_issues






Tuesday 5 November 2013

Database role


select database_role from v$database;

Restore a Standby Database Using RMAN Incremental Backups if the Primary Server Fails

Suppose you have already created a standby server, but it has not been synchronized with the primary server for some time.
If you have kept the RMAN incremental or full backups and the archived logs from the backup time to the failure time, plus auto backups of the control file, then you can restore the standby server up to the failure point.


To log in to the standby, use:

 sqlplus sys@sid as sysdba
 password: enter the password of the standby SYS user (in my case it is slate)

To log in with RMAN, use:

connect target sys@sid
(then enter the password)

Restore the control file to the standby from the latest auto backup. First start the instance:
sqlplus> startup nomount

C:\Users\Administrator>rman

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Nov 5 14:37:40 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys@slate2

target database Password:
connected to target database: SLATE (DBID=3523425568, not open)

RMAN> restore standby controlfile from 'c:/backup/C-3523425568-20131105-08';

Before starting mount, first change the backup location; otherwise the control file will automatically restore the backup.
sqlplus>startup mount;

Now catalog your backup

If you have already cataloged other backups, first crosscheck them and delete the expired ones.

rman target /
crosscheck backup;
delete expired backup;

now catalog your backup

catalog start with 'c:/backup/';

RMAN> catalog start with 'c:/backup1/';

searching for all files that match the pattern c:/backup1/

List of Files Unknown to the Database
=====================================
File Name: C:\BACKUP1\STD_7BOO6T9E_1_1
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-EXAMPLE_FNO-5_75OO6RJP
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SLATE_FNO-6_77OO6RJU
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSAUX_FNO-2_72OO6RI9
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSTEM_FNO-1_73OO6RJ3
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-UNDOTBS1_FNO-3_74OO6RJI
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-USERS_FNO-4_76OO6RJS

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: C:\BACKUP1\STD_7BOO6T9E_1_1
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-EXAMPLE_FNO-5_75OO6RJP
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SLATE_FNO-6_77OO6RJU
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSAUX_FNO-2_72OO6RI9
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSTEM_FNO-1_73OO6RJ3
File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-UNDOTBS1_FNO-3_74OO6RJI

File Name: C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-USERS_FNO-4_76OO6RJS


RMAN> catalog start with 'c:/backup/';

searching for all files that match the pattern c:/backup/
no files found to be unknown to the database

If you see this message, make sure the backup is actually available there.
If it is available but still not found, change the backup location and recatalog it.


Point-in-Time Recovery
If you want to recover up to a specific time, first catalog the backups and archived logs up to that time, then enter the following commands.

run {
  set until time "to_date('2014 Sep 25 10:43','yyyy mon dd hh24:mi')";
  restore database;
  recover database;
}
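The 'yyyy mon dd hh24:mi' mask parses timestamps like '2014 Sep 25 10:43'. For illustration, a roughly equivalent parse in Python (an assumed mapping; Oracle's month names follow NLS settings):

```python
from datetime import datetime

# Python's '%Y %b %d %H:%M' roughly corresponds to Oracle's 'yyyy mon dd hh24:mi'
# (assumed mapping for illustration, using English month abbreviations).
until = datetime.strptime('2014 Sep 25 10:43', '%Y %b %d %H:%M')
print(until.isoformat())  # 2014-09-25T10:43:00
```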


If you first want to restore the database and then apply the logs manually, use the following command.
RMAN> restore database;

Starting restore at 05-NOV-13
using channel ORA_DISK_1

channel ORA_DISK_1: restoring datafile 00001
input datafile copy RECID=351 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSTEM_FNO-1_73OO6RJ3
destination for restore of datafile 00001: C:\ORACLE\ORADATA\SLATE\SYSTEM01.DBF
channel ORA_DISK_1: copied datafile copy of datafile 00001
output file name=C:\ORACLE\ORADATA\SLATE\SYSTEM01.DBF RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00002
input datafile copy RECID=350 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SYSAUX_FNO-2_72OO6RI9
destination for restore of datafile 00002: C:\ORACLE\ORADATA\SLATE\SYSAUX01.DBF
channel ORA_DISK_1: copied datafile copy of datafile 00002
output file name=C:\ORACLE\ORADATA\SLATE\SYSAUX01.DBF RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00003
input datafile copy RECID=352 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-UNDOTBS1_FNO-3_74OO6RJI
destination for restore of datafile 00003: C:\ORACLE\ORADATA\SLATE\UNDOTBS01.DBF
channel ORA_DISK_1: copied datafile copy of datafile 00003
output file name=C:\ORACLE\ORADATA\SLATE\UNDOTBS01.DBF RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00004
input datafile copy RECID=353 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-USERS_FNO-4_76OO6RJS
destination for restore of datafile 00004: C:\ORACLE\ORADATA\SLATE\USERS01.DBF
channel ORA_DISK_1: copied datafile copy of datafile 00004
output file name=C:\ORACLE\ORADATA\SLATE\USERS01.DBF RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00005
input datafile copy RECID=348 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-EXAMPLE_FNO-5_75OO6RJP
destination for restore of datafile 00005: C:\ORACLE\ORADATA\SLATE\EXAMPLE01.DBF
channel ORA_DISK_1: copied datafile copy of datafile 00005
output file name=C:\ORACLE\ORADATA\SLATE\EXAMPLE01.DBF RECID=0 STAMP=0
channel ORA_DISK_1: restoring datafile 00006
input datafile copy RECID=349 STAMP=830699653 file name=C:\BACKUP1\STD_DATA_D-SLATE_I-3523425568_TS-SLATE_FNO-6_77OO6RJU
destination for restore of datafile 00006: C:\ORACLE\ORADATA\SLATE\SLATE
channel ORA_DISK_1: copied datafile copy of datafile 00006
output file name=C:\ORACLE\ORADATA\SLATE\SLATE RECID=0 STAMP=0
Finished restore at 05-NOV-13

RMAN> recover database;

Starting recover at 05-NOV-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: C:\ORACLE\ORADATA\SLATE\SYSTEM01.DBF
destination for restore of datafile 00002: C:\ORACLE\ORADATA\SLATE\SYSAUX01.DBF
destination for restore of datafile 00003: C:\ORACLE\ORADATA\SLATE\UNDOTBS01.DBF
destination for restore of datafile 00004: C:\ORACLE\ORADATA\SLATE\USERS01.DBF
destination for restore of datafile 00005: C:\ORACLE\ORADATA\SLATE\EXAMPLE01.DBF
destination for restore of datafile 00006: C:\ORACLE\ORADATA\SLATE\SLATE
channel ORA_DISK_1: reading from backup piece C:\BACKUP\STD_7BOO6T9E_1_1
channel ORA_DISK_1: errors found reading piece handle=C:\BACKUP\STD_7BOO6T9E_1_1
channel ORA_DISK_1: failover to piece handle=C:\BACKUP1\STD_7BOO6T9E_1_1 tag=INCR_UPDATE
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

unable to find archived log
archived log thread=1 sequence=168
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/05/2013 13:55:27
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 168 and starting SCN of 3806855

Place the archived logs at the location defined for the standby server, or catalog the archive log files, then issue the following command.

catalog start with 'c:/backup/';


For point-in-time recovery, do not issue the following command; it is only for resuming ordinary managed standby recovery.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Then check that the logs have been applied:

SELECT sequence#, first_time, next_time, applied
 FROM   v$archived_log
ORDER BY sequence#;

If all logs have been applied successfully, you can then switch over the standby to the primary role.

Read-Only Standby and Active Data Guard

Once a standby database is configured, it can be opened in read-only mode to allow query access. This is often used to offload reporting to the standby server, thereby freeing up resources on the primary server. When open in read-only mode, archive log shipping continues, but managed recovery is stopped, so the standby database becomes increasingly out of date until managed recovery is resumed.
To switch the standby database into read-only mode, do the following.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;
To resume managed recovery, do the following.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Failover

If the primary database is not available the standby database can be activated as a primary database using the following statements.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE ACTIVATE STANDBY DATABASE;


Thursday 31 October 2013

Recover an archive log gap between primary and standby using RMAN incremental backups

A physical standby database stays in sync with the primary by continuously applying archived logs shipped from the primary database. If an archived log goes missing or becomes corrupt, we have to bring the standby back into sync with the primary.

When the number of missing or corrupt logs is small (say, below 15), we can ship the missing logs from the primary site to the standby site (scp/sftp/ftp) and then register them on the standby so that the gap can be resolved.

Find the archives which are missing by issuing the following command.
SQL> select * from v$archive_gap;
This will give the gap sequences. Or you can use the v$managed_standby view to find where the log apply is stuck.

SQL> select sequence#,process,status from v$managed_standby;
Here you may see a status of waiting for log for, say, sequence# 100, while your primary has proceeded to sequence# 110.

At primary
SQL> select max(sequence#) from v$archived_log;      ---> This would show you 110

Copy the logs to the standby site from the primary site
$ scp log_file_name_n.arc oracle@standby:/log/file/location/log_file_name_n.arc

At standby site
SQL> alter database register logfile '/log/file/location/log_file_name_n.arc';
logfile registered
Repeat the log file registration at the standby site until all the missing log files are registered. The apply will then take place and your standby will be back in sync with the primary.
This is an easy process when the number of missing or corrupt logs is small. But when the difference is huge (say, around 500 logs), this method is very time consuming and not a proper approach.
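Since the shipping-and-registering step is repetitive, a small helper can generate the commands for a whole gap. This is a hypothetical sketch; the file name pattern and paths are illustrative placeholders, not from this post:

```python
# Hypothetical helper: emit the scp and register commands for a gap of
# missing sequences (names and paths are illustrative placeholders).
def gap_commands(low, high, name='log_file_name', loc='/log/file/location'):
    cmds = []
    for seq in range(low, high + 1):
        arc = f"{loc}/{name}_{seq}.arc"
        cmds.append(f"scp {name}_{seq}.arc oracle@standby:{arc}")
        cmds.append(f"alter database register logfile '{arc}';")
    return cmds

for c in gap_commands(100, 102):
    print(c)
```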

Otherwise, you would have to rebuild the standby database from scratch.
As an enhancement in 10g, an incremental backup created with BACKUP INCREMENTAL ... FROM SCN can be used to refresh the standby database with the changes made at the primary since the last SCN at the standby; managed recovery can then resume, compensating for the missing archived logs.

Let us see the steps involved.
Step 1: On the primary:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
144710998

On the standby:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
130158742

Clearly there is a difference, but this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how much it is lagging in terms of wall-clock time.

To find out, use the scn_to_timestamp function to translate the SCN to a timestamp:
SQL> select scn_to_timestamp(144710998) from dual;
SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-AUG-11 08.54.28.000000000 AM

Run the same query to find the timestamp associated with the SCN of the standby database as well:
SQL> select scn_to_timestamp(130158742) from dual;
SCN_TO_TIMESTAMP(1301571)
-------------------------------
13-AUG-11 07.19.27.000000000 PM
Note: run it on the primary database, since it will fail on the standby in mounted mode.
This shows that the standby is four and a half days behind!
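The lag quoted above is just the difference between the two timestamps; sketched in Python for illustration:

```python
from datetime import datetime

# Timestamps returned by scn_to_timestamp for the two SCNs above.
primary = datetime(2011, 8, 18, 8, 54, 28)   # SCN 144710998 (primary)
standby = datetime(2011, 8, 13, 19, 19, 27)  # SCN 130158742 (standby)

lag = primary - standby
print(lag)  # 4 days, 13:35:01 - roughly four and a half days
```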

Step 2: [Standby] Stop the managed standby apply process:
SQL> alter database recover managed standby database cancel;
Database altered.

Step 3: [Standby] Shutdown the standby database
SQL> shut immediate

Step 4: [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:
RMAN> run {
2> allocate channel c1 type disk format '/u01/backup/%U.bkp';
3> backup incremental from scn 130158740 database;
4> }

Step 5: [Primary] On the primary, create a new standby controlfile:
SQL> alter database create standby controlfile as '/u01/backup/for_standby.ctl';
Database altered.

Step 6: [Primary] Copy these files to standby host:
oracle@dba1 /u01/backup]$ scp * oracle@dba2:/u01/backup

Step 7: [Standby] Bring up the instance in nomount mode:
SQL> startup nomount

Step 8: [Standby] Check the location of the controlfile:
SQL> show parameter control_files
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_files string /u01/oradata/standby_cntfile.ctl

Step 9: [Standby] Replace the controlfile with the one you just created in primary.
 $ cp /u01/backup/for_standby.ctl /u01/oradata/standby_cntfile.ctl

Step 10: [Standby] Mount the standby database:
SQL> alter database mount standby database;

Step 11: [Standby] Connect to RMAN. RMAN does not know about these files yet; so you must let it know – by a process called cataloging. Catalog these files:
$ rman target=/
RMAN> catalog start with '/u01/backup';

Step 12: Recover these files:
RMAN> recover database;

Step 13: After some time, the recovery fails with the message:
archive log filename=/u01/oradata/1_18108_697108460.dbf thread=1 sequence=18109
ORA-00310: archived log contains sequence 18108; sequence 18109 required
This happens because we have come to the last of the archived logs. The expected archived log with sequence# 18108 has not been generated yet.

Step 14: At this point exit RMAN and start managed recovery process:
SQL> alter database recover managed standby database disconnect from session;
Database altered.

Step 15: Check the SCNs on the primary and standby:
[Standby] SQL> select current_scn from v$database;
CURRENT_SCN
-----------
144747125
[Primary] SQL> select current_scn from v$database;
CURRENT_SCN
-----------
144747111
Now they are very close to each other. The standby has now caught up.

Thursday 10 October 2013

Re-create the control file for a standby database

To create a control file, you have to do the following on the Primary server...

SQL> startup mount;
SQL> alter database create standby controlfile as '<file-name>';
SQL> alter database open;

Then copy the file you created to the standby server into the corresponding directory (the control_files parameter in the init.ora file) with the database in shutdown mode. Start it up and it will be a standby again.


Hope that helps!


Make as many copies of this control file as you have defined in the pfile or spfile.


Second method 

sql> startup nomount;

Take a backup of the control file with RMAN and copy it to the standby server.
RMAN> restore standby controlfile from '/tmp/control01.ctl';

Starting restore at 08-JUN-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1082 instance=biodb1 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+DATADG/control01.ctl
output filename=+FRADG/control02.ctl
output filename=+DATADG/control03.ctl
Finished restore at 08-JUN-10

RMAN> exit

SQL> alter database mount standby database;

Database altered.

Startup and shutdown physical standby

8.1 Starting Up and Shutting Down a Physical Standby Database

This section describes the procedures for starting up and shutting down a physical standby database.

8.1.1 Starting Up a Physical Standby Database

To start up a physical standby database, use SQL*Plus to connect to the database with administrator privileges, and then use the SQL*Plus STARTUP command with the NOMOUNT option. (You must use the NOMOUNT option with a standby database.)
If both the primary and standby databases are offline, then always (if possible) start the standby database before starting the primary database.
After the database is started, mount the database as a standby database. Once it is mounted, the database can receive archived redo data from the primary database.
You then have the option of either starting a managed recovery operation or opening the database for read-only access. Typically, you start a managed recovery operation. The following example shows how to start a standby database:
  1. Start the database:
    SQL> STARTUP NOMOUNT;
    
    
  2. Mount the standby database:
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    
    
  3. Start the managed recovery operation:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE 
      2> DISCONNECT FROM SESSION;
    
    
Once the database is performing managed recovery, log apply services apply the archived redo logs to the standby database.
See Also:
Section 6.2.2 for information about managed recovery and Section 8.2 for information on opening a standby database for read-only access

8.1.2 Shutting Down a Physical Standby Database

To shut down a physical standby database, use the SQL*Plus SHUTDOWN command. If the database is performing managed recovery, you must cancel managed recovery operations before issuing the SHUTDOWN command. Control is not returned to the session that initiates a database shutdown until shutdown is complete.
If the primary database is up and running, defer the archive log destination on the primary database and perform a log switch operation (to make the defer operation take effect) before shutting down the standby database. Otherwise, log transport services will not be able to transmit redo data to this standby site.
The following steps show you how to shut down a standby database:

  1. Find out if the standby database is performing managed recovery. If the MRP0 or MRP process exists, then the standby database is performing managed recovery.
    SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
    
    
  2. Cancel managed recovery operations.
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    
    
  3. Shut down the standby database.
    SQL> SHUTDOWN IMMEDIATE;

Tuesday 8 October 2013

About the Undo Retention Period

After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a current undo retention period, which is the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Old (committed) undo information that is older than the current undo retention period is said to be expired and its space is available to be overwritten by new transactions. Old undo information with an age that is less than the current undo retention period is said to be unexpired and is retained for consistent read and Oracle Flashback operations.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and system activity. You can optionally specify a minimum undo retention period (in seconds) by setting the UNDO_RETENTION initialization parameter. The exact impact of this parameter on undo retention is as follows:
  • The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database always tunes the undo retention period for the best possible retention, based on system activity and undo tablespace size. See "Automatic Tuning of Undo Retention" for more information.
  • For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor the minimum retention period specified by UNDO_RETENTION. When space is low, instead of overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is specified for an auto-extending undo tablespace, when the maximum size is reached, the database may begin to overwrite unexpired undo information. The UNDOTBS1 tablespace that is automatically created by DBCA is auto-extending.
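
As a sketch, a minimum undo retention of 15 minutes can be requested as follows; the 900-second value is only an example, and the query simply confirms whether the undo tablespace datafiles auto-extend (UNDOTBS1 is the DBCA default name and may differ in your environment):

    SQL> ALTER SYSTEM SET UNDO_RETENTION = 900 SCOPE=BOTH;
    SQL> SELECT tablespace_name, file_name, autoextensible
         FROM dba_data_files
         WHERE tablespace_name = 'UNDOTBS1';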

    Automatic Tuning of Undo Retention

    Oracle Database automatically tunes the undo retention period based on how the undo tablespace is configured.
    • If the undo tablespace is configured with the AUTOEXTEND option, the database dynamically tunes the undo retention period to be somewhat longer than the longest-running active query on the system. However, this retention period may be insufficient to accommodate Oracle Flashback operations. Oracle Flashback operations resulting in snapshot too old errors are the indicator that you must intervene to ensure that sufficient undo data is retained to support these operations. To better accommodate Oracle Flashback features, you can either set the UNDO_RETENTION parameter to a value equal to the longest expected Oracle Flashback operation, or you can change the undo tablespace to fixed size.
    • If the undo tablespace is fixed size, the database dynamically tunes the undo retention period for the best possible retention for that tablespace size and the current system load. This best possible retention time is typically significantly greater than the duration of the longest-running active query.
      If you decide to change the undo tablespace to fixed-size, you must choose a tablespace size that is sufficiently large. If you choose an undo tablespace size that is too small, the following two errors could occur:
      • DML could fail because there is not enough space to accommodate undo for new transactions.
      • Long-running queries could fail with a snapshot too old error, which means that there was insufficient undo data for read consistency.
      See "Sizing a Fixed-Size Undo Tablespace" for more information.
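
      Changing an auto-extending undo tablespace to fixed size, as suggested above, amounts to disabling auto-extension on its datafiles. A minimal sketch, assuming a single datafile (the path shown is illustrative only; query DBA_DATA_FILES for the real one):

      -- Hypothetical datafile path: adjust to your environment
      SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/DB1/undotbs01.dbf'
           AUTOEXTEND OFF;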
    Note:
    Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low, unexpired LOB undo information may be overwritten.

    Retention Guarantee

    To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.
    WARNING:
    Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
    You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement. Or, you can later specify this clause in an ALTER TABLESPACE statement. You disable retention guarantee with the RETENTION NOGUARANTEE clause.
    You can use the DBA_TABLESPACES view to determine the retention guarantee setting for the undo tablespace. A column named RETENTION contains a value of GUARANTEE, NOGUARANTEE, or NOT APPLY, where NOT APPLY is used for tablespaces other than the undo tablespace.
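
    A short sketch of enabling and verifying retention guarantee on an existing undo tablespace (the tablespace name is the DBCA default and may differ in your environment):

    SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
    SQL> SELECT tablespace_name, retention
         FROM dba_tablespaces
         WHERE contents = 'UNDO';
    -- Revert if transactions start failing for lack of undo space
    SQL> ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;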

    Undo Retention Tuning and Alert Thresholds

    For a fixed-size undo tablespace, the database calculates the best possible retention based on database statistics and on the size of the undo tablespace. For optimal undo management, rather than tuning based on 100% of the tablespace size, the database tunes the undo retention period based on 85% of the tablespace size, or on the warning alert threshold percentage for space used, whichever is lower. (The warning alert threshold defaults to 85%, but can be changed.) Therefore, if you set the warning alert threshold of the undo tablespace below 85%, this may reduce the tuned size of the undo retention period. For more information on tablespace alert thresholds, see "Managing Tablespace Alerts".
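
    One way to change the warning threshold for the undo tablespace is the DBMS_SERVER_ALERT package. A sketch under stated assumptions: the 70%/90% values are examples only, and the tablespace name is the DBCA default:

    BEGIN
      DBMS_SERVER_ALERT.SET_THRESHOLD(
        metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
        warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
        warning_value           => '70',
        critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
        critical_value          => '90',
        observation_period      => 1,
        consecutive_occurrences => 1,
        instance_name           => NULL,
        object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
        object_name             => 'UNDOTBS1');
    END;
    /

    Note that, per the text above, lowering the warning threshold below 85% can reduce the tuned undo retention period.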

    Tracking the Tuned Undo Retention Period

    You can determine the current retention period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4 days, the data is available in the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds.
    select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
    to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
    from v$undostat order by end_time;
    
    BEGIN_TIME      END_TIME        TUNED_UNDORETENTION
    --------------- --------------- -------------------
    04-FEB-05 00:01 04-FEB-05 00:11               12100
          ...                                          
    07-FEB-05 23:21 07-FEB-05 23:31               86700
    07-FEB-05 23:31 07-FEB-05 23:41               86700
    07-FEB-05 23:41 07-FEB-05 23:51               86700
    07-FEB-05 23:51 07-FEB-05 23:52               86700
    
    576 rows selected.
    
    See Oracle Database Reference for more information about V$UNDOSTAT.
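
    For intervals older than 4 days, an equivalent query can be run against DBA_HIST_UNDOSTAT; a sketch, assuming your AWR retention covers the period of interest:

    select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
    to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
    from dba_hist_undostat order by end_time;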