Oracle Data Pump 10g vs 11g

With the original Export utility running against a live database, the parameter CONSISTENT=Y is usually set so that the objects in the backup are consistent to a single point in time. For Data Pump Export, that parameter does not exist.

According to the 10g Oracle documentation (http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref181), a parameter comparable to CONSISTENT is not needed; use FLASHBACK_SCN or FLASHBACK_TIME for this functionality.

The default value of either FLASHBACK_SCN or FLASHBACK_TIME is none.

When you run a Data Pump export on a live database without setting FLASHBACK_SCN or FLASHBACK_TIME, the number of records in the parent table (parent1) may not match the number of records in the child table (child1_of_parent1).

This means:
a. The export dump file could capture data from different transactions, and
b. The dump file may be useless if the records in it violate referential integrity constraints on the tables.
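One quick way to see the inconsistency after importing such a dump is to compare row counts; a minimal sketch, using the example parent/child tables from this post:

```sql
-- Compare row counts between the parent and child tables after the import.
-- parent1 and child1_of_parent1 are the example tables from this post.
SELECT (SELECT COUNT(*) FROM parent1)           AS parent_rows,
       (SELECT COUNT(*) FROM child1_of_parent1) AS child_rows
FROM dual;
```

If the export was inconsistent, the counts (and foreign key relationships) may not line up the way they did on the source.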

Use the following during the export:

FLASHBACK_TIME="TO_TIMESTAMP('08-05-2014 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"
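In practice this is easiest to pass through a parameter file, since the quoting is awkward on the command line. A sketch (the directory, dump file, and schema names below are placeholders, not from the original post):

```text
# consistent_exp.par -- hypothetical Data Pump parameter file
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=scott_consistent.dmp
LOGFILE=scott_consistent.log
SCHEMAS=scott
FLASHBACK_TIME="TO_TIMESTAMP('08-05-2014 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"
```

It would then be invoked as: expdp system PARFILE=consistent_exp.par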

In conclusion:

a. The behavior of the Data Pump Export utility differs from the original Export utility in how active transactions are treated.
b. The original Export utility, with CONSISTENT=Y set, creates an export dump file of database objects as of the point in time at the beginning of the Export session.
c. Without setting values for FLASHBACK_SCN or FLASHBACK_TIME, Data Pump Export utility may create an inconsistent export.
d. To ensure a consistent export with Data Pump Export, either set the FLASHBACK_SCN or FLASHBACK_TIME parameter, or restart the database in restricted mode before the export session starts.

The parameter CONSISTENT=Y can be used with Oracle Data Pump in 11g. According to the 11g Oracle documentation (http://docs.oracle.com/cd/B28359_01/server.111/b28319/dp_export.htm), you can also continue to use FLASHBACK_SCN or FLASHBACK_TIME.


RMAN-06059: expected archived log not found

RMAN attempted to back up an archived log file but could not find it.

This can happen for a variety of reasons: the file has been manually moved or deleted, the archive log destination has recently been changed, the file has been compressed, etc.

Starting backup at 26-JUL-12
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=155 devtype=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 07/26/2012 22:55:32
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /app/oracle/product/10.2/dbs/arch1_1_781981525.dbf
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

Your options are either to restore the missing file(s), or to perform a crosscheck. To perform a crosscheck, run the following command from within RMAN:

change archivelog all crosscheck;

It is advisable to perform a full backup of the database at this point.

When an archive log crosscheck is performed, RMAN checks each archived log in turn to make sure it still exists on disk (or tape). Those that are missing are marked as expired. If logs are missing, the crosscheck will not bring them back, but it will allow you to get past this error and back up the database.
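Putting it together, the recovery sequence inside RMAN might look like the following sketch (whether you purge the expired records and which backup command you use depend on your configuration):

```text
RMAN> change archivelog all crosscheck;
RMAN> delete expired archivelog all;      # optional: purge records of the missing logs
RMAN> backup database plus archivelog;    # take a fresh full backup
```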

ERROR conn.ConnectionService verifyRepositoryEx.433 – Invalid Connection Pool. ERROR = ORA-28000: the account is locked

The issue below can occur on any platform with the Oracle Database 10g and 11g Database Console.

The Oracle Database Console started up without any error messages, but on access the EM Database Console showed the database status as down and the listener status as up. On reviewing the sysman log file, I found the following error:

2012-05-21 17:25:50,426 [HttpRequestHandler-15749845] ERROR

conn.ConnectionService verifyRepositoryEx.433 - Invalid Connection
Pool. ERROR = ORA-28000: the account is locked

Checked the SYSMAN account status in the database:

SQL> select username,account_status from dba_users where username in
('SYSMAN','DBSNMP');

USERNAME                       ACCOUNT_STATUS
------------------------------ ------------------------------
DBSNMP                         OPEN 
SYSMAN                         LOCKED(TIMED)

The SYSMAN password was unknown to me at this stage, and the configuration files held only the encrypted password.

Stopped the em console using

emctl stop dbconsole

Then on the database, changed the SYSMAN password as below:

SQL> alter user sysman identified by password account unlock;

User altered.

targets.xml

bash-3.00$ cd $ORACLE_HOME/<hostname_sid>/sysman/emd
bash-3.00$ ls -ltr targets.xml
-rw-r----- 1 oracle oinstall 894 May 21 17:25 targets.xml

bash-3.00$ cp targets.xml targets.xml.21052012

Change from:

<Property NAME="UserName" VALUE="ea8d8c4d8bb0ec12" ENCRYPTED="TRUE"/>
<Property NAME="password" VALUE="21a4a57992210188" ENCRYPTED="TRUE"/>

to:

<Property NAME="UserName" VALUE="SYSMAN" ENCRYPTED="FALSE"/>
<Property NAME="password" VALUE="password" ENCRYPTED="FALSE"/>

emoms.properties

bash-3.00$ cd $ORACLE_HOME/<hostname_sid>/config
bash-3.00$ cp emoms.properties emoms.properties.21052012

Change from:

oracle.sysman.eml.mntr.emdRepPwd=21a4a57992210188
oracle.sysman.eml.mntr.emdRepPwdEncrypted=TRUE

to:

oracle.sysman.eml.mntr.emdRepPwd=password
oracle.sysman.eml.mntr.emdRepPwdEncrypted=FALSE

During DB Console startup, the username and password in the above two files are re-encrypted.

bash-3.00$ emctl start dbconsole

Accessing the DB Console now showed the login page, and the new SYSMAN credentials worked!

ld.so.1: sqlplus: fatal: libsqlplus.so

Applies to Oracle Database 10g and higher versions, any platform.

The following settings are for Solaris 10 64-bit Operating system.

sqlplus cannot be invoked at the command line due to the following library error:

%sqlplus
ld.so.1: sqlplus: fatal: libsqlplus.so: open failed: No such file or directory
Killed

Print the shared library dependencies for sqlplus:

%ldd sqlplus
        libsqlplus.so =>         (file not found)
        libclntsh.so.10.1 =>     (file not found)
        libnnz10.so =>   (file not found)
        libkstat.so.1 =>         /lib/64/libkstat.so.1
        libnsl.so.1 =>   /lib/64/libnsl.so.1
        libsocket.so.1 =>        /lib/64/libsocket.so.1
        libgen.so.1 =>   /lib/64/libgen.so.1
        libdl.so.1 =>    /lib/64/libdl.so.1
        libsched.so.1 =>         /usr/lib/64/libsched.so.1
        libc.so.1 =>     /lib/64/libc.so.1
        libaio.so.1 =>   /lib/64/libaio.so.1
        libm.so.2 =>     /lib/64/libm.so.2
        libthread.so.1 =>        /lib/64/libthread.so.1
        libmp.so.2 =>    /lib/64/libmp.so.2
        libmd.so.1 =>    /lib/64/libmd.so.1
        libscf.so.1 =>   /lib/64/libscf.so.1
        libdoor.so.1 =>  /lib/64/libdoor.so.1
        libuutil.so.1 =>         /lib/64/libuutil.so.1
        /platform/SUNW,Sun-Fire-V240/lib/sparcv9/libc_psr.so.1
        /platform/SUNW,Sun-Fire-V240/lib/sparcv9/libmd_psr.so.1

Set the LD_LIBRARY_PATH to resolve the issue.

bash
export ORACLE_HOME=/u01/prod/rdbms/10.2.0
export ORACLE_SID=appscdg
export LD_LIBRARY_PATH=/u01/prod/rdbms/10.2.0/lib:/usr/dt/lib:/usr/openwin/lib:/u01/prod/rdbms/10.2.0/ctx/lib
export PATH=$ORACLE_HOME/bin:$PATH

Now check the binary executable

bash-3.00$ ldd $ORACLE_HOME/bin/sqlplus
        libsqlplus.so =>
/oracle02/appslos/rdbms/10.2.0/lib/libsqlplus.so
        libclntsh.so.10.1 =>
/oracle02/appslos/rdbms/10.2.0/lib/libclntsh.so.10.1
        libnnz10.so =>   /oracle02/appslos/rdbms/10.2.0/lib/libnnz10.so
        libkstat.so.1 =>         /lib/64/libkstat.so.1
        libnsl.so.1 =>   /lib/64/libnsl.so.1
        libsocket.so.1 =>        /lib/64/libsocket.so.1
        libgen.so.1 =>   /lib/64/libgen.so.1
        libdl.so.1 =>    /lib/64/libdl.so.1
        libsched.so.1 =>         /usr/lib/64/libsched.so.1
        libc.so.1 =>     /lib/64/libc.so.1
        libaio.so.1 =>   /lib/64/libaio.so.1
        libm.so.2 =>     /lib/64/libm.so.2
        libthread.so.1 =>        /lib/64/libthread.so.1
        libm.so.1 =>     /lib/64/libm.so.1
        librt.so.1 =>    /lib/64/librt.so.1
        libmp.so.2 =>    /lib/64/libmp.so.2
        libmd.so.1 =>    /lib/64/libmd.so.1
        libscf.so.1 =>   /lib/64/libscf.so.1
        libdoor.so.1 =>  /lib/64/libdoor.so.1
        libuutil.so.1 =>         /lib/64/libuutil.so.1
        /platform/SUNW,Sun-Fire-V240/lib/sparcv9/libc_psr.so.1
        /platform/SUNW,Sun-Fire-V240/lib/sparcv9/libmd_psr.so.1

Issue resolved.

How to check if Oracle Physical Standby is in sync with Primary

Check for the redo received and applied.

Execute the below query on the primary database:

SQL> SELECT THREAD# "Thread", SEQUENCE# "Last Sequence Generated"
     FROM V$ARCHIVED_LOG
     WHERE (THREAD#, FIRST_TIME) IN
           (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)
     ORDER BY 1;

    Thread Last Sequence Generated
---------- -----------------------
         1                      21
         1                      21

Execute the below query on the standby database:

SQL> SELECT ARCH.THREAD# "Thread",
            ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
     FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
           WHERE (THREAD#, FIRST_TIME) IN
                 (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
          (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
           WHERE (THREAD#, FIRST_TIME) IN
                 (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
     WHERE ARCH.THREAD# = APPL.THREAD#
     ORDER BY 1;

    Thread Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- ----------
         1                     21                    21          0

Compare the value of "Last Sequence Generated" from the query on the primary with "Last Sequence Received" from the query on the standby, for all threads.
If both values are the same, the standby has received the last sequence generated on the primary.
If the values differ, some archives are missing on the standby, and hence the standby is not in sync with the primary.

Check for GAP

On the physical standby, execute the below SQL. If it returns no rows (output "no rows selected"), there is no archive gap on the standby.

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

no rows selected
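As a further check, from 10g onwards the V$DATAGUARD_STATS view on the standby reports the lag directly; a quick sketch:

```sql
-- Run on the physical standby; shows how far redo transport and apply are behind
SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS
     WHERE NAME IN ('transport lag', 'apply lag');
```

Both values at or near zero indicate the standby is keeping up with the primary.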

Redo Log resizing

If you want to resize the redo log files, you must drop the current log file group and recreate it with the new size; a redo log cannot be resized in place.

First, force a log switch using this command:

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.

When an online redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping an online redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log file.

 To drop a member of an active group, you must first force a log switch.

 You cannot drop an active redo log group.

SQL> alter database drop logfile group 1;

SQL> alter system switch logfile;

By performing consecutive log file switches (sometimes more than two are needed), the next available log file group becomes active and the previously active group becomes inactive. At that stage you can drop the group that was previously active.

Use the rm command (or the platform-specific delete command) to delete the redo log files at the OS level.

SQL> !rm /redo1/romford/redo01a_romford.log /redo2/romford/redo01b_romford.log

Add a new log file group 1 with desired size

SQL> ALTER DATABASE ADD LOGFILE GROUP 1 ('/redo1/romford/redo01a_romford.log','/redo2/romford/redo01b_romford.log') SIZE 100M;

Query the V$LOG and V$LOGFILE views to check the active log file group and its members.
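For example, a sketch of such a verification query, joining the two views:

```sql
-- Show each group's status, size, and member files
SQL> SELECT l.GROUP#, l.STATUS, l.BYTES/1024/1024 AS size_mb, f.MEMBER
     FROM v$log l JOIN v$logfile f ON l.GROUP# = f.GROUP#
     ORDER BY l.GROUP#;
```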

FLASHBACK_TRANSACTION_QUERY gives null values in UNDO_SQL and OPERATION is set to 'UNKNOWN'

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      D:\romford\oradata
db_recovery_file_dest                string      D:\romford\flash_recovery_area
db_recovery_file_dest_size           big integer 3G
db_flashback_retention_target        integer     1440
recyclebin                           string      on
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

SQL> alter tablespace UNDOTBS1 retention guarantee;

Tablespace altered.

SQL> select tablespace_name,retention from dba_tablespaces;

TABLESPACE_NAME                RETENTION
------------------------------ -----------
SYSTEM                         NOT APPLY
SYSAUX                         NOT APPLY
UNDOTBS1                       GUARANTEE
TEMP                           NOT APPLY
USERS                          NOT APPLY
MPOOJARI                       NOT APPLY

To flash back a table to an earlier SCN or timestamp, the following are required:

1. The FLASHBACK object privilege on the table, or the FLASHBACK ANY TABLE system privilege.
2. The SELECT, INSERT, DELETE, and ALTER object privileges on the table.
3. Row movement must be enabled for all tables in the Flashback list.

Oracle has provided a new view, FLASHBACK_TRANSACTION_QUERY, to provide more information about the data versions. This includes the SQL required to reverse each change. Queries against this view are documented as “flashback transaction queries” and require the SELECT ANY TRANSACTION system privilege.

Error: Selecting from FLASHBACK_TRANSACTION_QUERY gives you null values in UNDO_SQL and OPERATION is set to ‘UNKNOWN’.

It is not that there was insufficient undo; the information was simply not logged, because supplemental logging was not enabled. This is a change in behavior from 11g onwards. To make sure that complete information is captured in the FLASHBACK_TRANSACTION_QUERY view, you must enable supplemental logging:

SQL> alter database add supplemental log data;

Database altered.
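With supplemental logging enabled, transactions committed from this point on will have UNDO_SQL populated. A sketch of verifying the setting and re-querying the view (the TABLE_OWNER filter is an illustrative placeholder):

```sql
-- Confirm minimal supplemental logging is now enabled (should return YES)
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

-- For transactions committed after enabling it, UNDO_SQL is populated
SQL> SELECT XID, OPERATION, TABLE_NAME, UNDO_SQL
     FROM FLASHBACK_TRANSACTION_QUERY
     WHERE TABLE_OWNER = 'SCOTT';   -- placeholder schema; adjust as needed
```

Note that transactions committed before supplemental logging was enabled will still show OPERATION = 'UNKNOWN' and a null UNDO_SQL.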