Data Pump

* Traditional exp|imp to data pump expdp|impdp command-line parameter conversion. The official lists (for import and export, respectively) are:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref389
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref181
[Update 2011-06] Beginning with 11gR2, data pump has a legacy mode for these parameters:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_legacy.htm
My parameter conversion:
exp|imp file -> expdp|impdp dumpfile, but only the filename, not the path
exp|imp log -> expdp|impdp logfile, but only the filename, not the path
exp compress -> no equivalent; some still mistakenly believe `exp compress=y' compresses the dump file. Note that expdp compression CAN really compress, metadata only in 10g and all data in 11g, and it may compress better than gzip (Ref: http://www.oracle.com/technology/oramag/oracle/09-jul/o49data.html?_template=/ocom/print).
exp|imp rows -> expdp|impdp content
exp owner -> expdp schemas
exp consistent=y -> expdp flashback_{time|scn} (see the section starting with "exp consistent=y can be simulated")
imp indexfile -> impdp content or sqlfile
imp fromuser touser -> impdp remap_schema (in its presence, the schemas parameter, if used, points to the original or source schema, not the remapped one)
imp ignore -> impdp table_exists_action, for tables only; imp ignore applies to any object and has no exact equivalent in impdp
exp|imp statistics=none -> expdp|impdp exclude=statistics (the documentation is wrong in saying "A parameter comparable to STATISTICS is not needed. Statistics are always saved for tables."). Before 11gR2 it's very beneficial to exclude stats, or you would contaminate the shared pool with lots of literal SQLs like INSERT INTO "SYS"."IMPDP_STATS"... during impdp (Bug 7185178, fixed in 11gR2). If you forgot, right after impdp, purge those SQLs by granting select on sys.impdp_stats to dba and then revoking it (the grant/revoke invalidates the dependent cursors), or flush the shared pool if you don't mind the impact. Note: it seems impdp exclude=(table_)statistics is available in 10g but not 11g; expdp is fine.
imp resumable -> not needed; impdp does it by default, and it shows "ORA-39171: Job is experiencing a resumable wait" on screen, except during the final index creation phase.
imp resumable_timeout -> none. If the 2-hour default is not suitable, after impdp starts, exec dbms_resumable.set_session_timeout to change it (which also works around Bug 6140308, the DB-level setting being ignored, and Bug 2968139, parallel sessions not taking the alter session setting).
exp|imp recordlength -> not supported by expdp|impdp
exp|imp trace=y -> the undocumented trace parameter in data pump accepts many more options than just y or n, and can do more than SQL trace, but requires higher privilege. Level 1 SQL trace can use e.g. the value 400301. See Note:286496.1.
NLS_LANG for exp|imp -> the environment variable is not needed because expdp|impdp uses the database nls_characterset on the server. But see Note:227332.1 Sec.7.
exp|imp through a pipe, usually to compress the dump file or to pass it to ssh and send it across servers without an intermediate dump file -> no equivalent; expdp|impdp does not work with pipes or any sequential-access device (Ref: Note:463336.1). Expdp compression does real compression already, and the dbms_datapump package allows for cross-server data pump export and import (Ref: http://www.freelists.org/post/oracle-l/Data-Pump-over-NETWORK-LINK,9).
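For instance, where exp used to be piped through gzip, a minimal sketch of relying on expdp's own compression instead (the directory name dp_dir and the schema are assumptions; compression=all needs 11g and the Advanced Compression option):

# old habit, not possible with data pump: mknod /tmp/exp_pipe p; gzip < /tmp/exp_pipe > exp.dmp.gz & exp ... file=/tmp/exp_pipe
expdp scott schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=scott.log compression=all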
Note:286496.1 has a list of data pump parameters not shown in help=y: trace, metrics, keep_master. Notes 469439.1, 752923.1 and 813396.1 have access_method, which can take the value direct_path or external_table (not external_tables). If a data pump session is unusually slow and it's on an IOT table (Bug 16586937) or a table with chained rows (Doc 2033737.1), try external_table.
* The 12c logtime parameter can be simulated in pre-12c data pump, e.g.:
expdp username... 2>&1 | perl -pe '$t=localtime;print "$t: "'
The Perl one-liner is generic; it can be used to prefix a timestamp to the output lines of any Linux/UNIX command. Adjust the time to any format you prefer per Perl syntax. On Windows, change " to \" and ' to " (that works on Linux/UNIX too).
* impdp remap_schema and remap_tablespace can do multiple remappings:
impdp remap_schema=(scott:adam,john:mary)
(where the parentheses are optional) or
impdp remap_schema=scott:adam remap_schema=john:mary
Remap_tablespace does such a thoroughly clean job that not only are LOBs, which exp|imp can't relocate, moved, but a partitioned table's default tablespace (xxx_part_tables.def_tablespace_name) is also remapped (so there's no need to `alter table ... modify default attributes tablespace ...').
* If you `kill -9' an expdp process on UNIX/Linux, your terminal may be messed up. Try `stty sane' to make it work again. But refrain from killing it that way; the correct way is ^C followed by kill_job. Otherwise, clean up manually by attaching to the aborted job (expdp|impdp attach=).
* A dump file created by expdp won't have the other-readable bit set, regardless of umask. This is different from a dump file created by exp.
* What's the use of the version parameter for expdp|impdp? Bugs such as 7413726 (POOR EXPDP PERFORMANCE WHEN DB COMPATIBLE=10.2.0.3 OR 10.2.0.4) can be worked around by setting this parameter to an Oracle version lower than the database compatible parameter (e.g. version=10.2.0.1). Otherwise you would have to change the DB parameter and bounce, and you can't lower DB compatible by a major version. Not sure what exactly is involved; 10g no longer has v$compatibility or v$compatseg.
* Evaluate and compare the performance of using network_link vs. NFS. If you map both the source and target servers to a common NFS location, which can be on the source or target server itself, remember to set the read/write buffer sizes to a higher value, e.g.
mount -t nfs -o hard,intr,rsize=32768,wsize=32768 nfssvr:/path /path
where hard is optional since it's the default, and intr allows for interruption on a stale mount. If impdp, the command you run to pick up the dump file, can't write its log to the directory on the NFS mount, direct the log to a different location, e.g. logfile=somelocaldir:impdp_from_nfs.log, or don't log, e.g. nologfile=yes.
* When you expdp from a remote DB, you can create a DB link in the local DB. In dbprod, as the user that expdp logs in as:
create database link dbstage connect to u identified by password123 using 'dbstage';
Make sure the export_dir directory object exists in dbprod.
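For example, a minimal sketch of creating that directory object in dbprod (the OS path is an assumption; it must exist on the dbprod server):

create directory export_dir as '/u01/app/oracle/export';
grant read, write on directory export_dir to u;

Then run expdp against dbprod and let the dbstage link pull the data over: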
expdp system/xxxxxxx schemas=u network_link=dbstage directory=export_dir dumpfile=expdp_dbstage_u.dmp logfile=expdp_dbstage_u.log
or, if the link is in the sys schema or is a public DB link,
expdp \'/ as sysdba\' schemas=u network_link=dbstage directory=export_dir dumpfile=expdp_dbstage_u.dmp logfile=expdp_dbstage_u.log
One limitation of network_link is that tables with LONG or LONG RAW columns will not have their data copied over [2016-11 Update: This limitation is lifted in Oracle 12.2.], or in older versions of Oracle the table is silently skipped. (The documentation only mentions LONG, and error ORA-31679 lumps LONG RAW in with LONG.) If it's just LONG, you can subsequently run the SQL*Plus command
copy from usr@srcdb to usr@destdb insert tablename using select * from tablename
to insert the data into the empty table (prefix tablename with the owner if needed), or copy ... create ... to create the table. If the data type is LONG RAW, the copy command won't work. Note this limitation does not apply to the RAW data type. Also, network_link doesn't work with the undocumented access_method parameter; you would get ORA-39115. See the section on access_method. In 10g, also be aware of Note:6630677.8 "IMPDP skips table exported just before table with LONG column using NETWORK_LINK", fixed in 11.1.0.6.
* Apart from the bug stated below, exp consistent=y can be simulated with expdp flashback_time=systimestamp (see Doc 464132.1) or flashback_time="to_timestamp(...)".
/* flashback_time=systimestamp first appears in the 12.2 documentation, but according to Docs 464132.1 and 2509092.1 it should work in 10g and 11g. The syntax using to_timestamp is very confusing, especially on UNIX. If you really don't like the simple systimestamp, or you use a parfile, follow this syntax (assuming you use sysdate). On UNIX:
flashback_time=\"to_timestamp\(to_char\(sysdate,\'yyyy-mm-dd hh24:mi:ss\'\),\'yyyy-mm-dd hh24:mi:ss\'\)\"
On Windows, remove all \:
flashback_time="to_timestamp(to_char(sysdate,'yyyy-mm-dd hh24:mi:ss'),'yyyy-mm-dd hh24:mi:ss')" */
/* But in 10g, due to Bug:6641104, sysdate cannot be specified just once on the command line for multiple tables; it would be re-evaluated for each table during the expdp run, so there would be no single read-consistent point in time. Instead, you must hardcode the actual time, or just use flashback_scn. So use sysdate only if you export one table. Here's my workaround: use one single line with two commands separated by a semicolon so there's minimal time gap; it's almost the same as flashback_time = current time or sysdate:
DATE=`date "+%Y-%m-%d %H:%M:%S"`; expdp username flashback_time=\"to_timestamp\(\'$DATE\',\'yyyy-mm-dd hh24:mi:ss\'\)\"
I think this bug is fixed in 11.1.0.7. See Note:601739.1. */
* However, expdp flashback_{time|scn} uses a completely different mechanism than exp consistent=y. Expdp uses flashback technology, while exp runs set transaction read only at the beginning of the session, which is (in case you don't know!) silently ignored if the login user is SYS or any user with the sysdba privilege. Besides, consistent=y is ignored when exporting partitions of a partitioned table.
* The power of flashback_{time|scn} is what the name suggests: you can export tables as of a previous point in time. Note that in spite of the undocumented restriction that the SQL "flashback table ... to {scn|timestamp}" throws ORA-604 and ORA-997 when the table has a LONG column, expdp flashback_{time|scn} works fine with LONG (as long as you don't also specify network_link; see Note:553337.1).
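For illustration only, a sketch of exporting a table as it looked an hour ago via flashback_scn (the names, directory and one-hour offset are made up; undo/flashback retention must still cover that time):

-- find an SCN corresponding to one hour ago
select timestamp_to_scn(systimestamp - interval '1' hour) as scn from dual;
-- then pass it to expdp, e.g.:
-- expdp scott tables=scott.emp flashback_scn=<scn from above> directory=dp_dir dumpfile=emp_1hr_ago.dmp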
* Old expdp has a number of memory leak bugs that make it almost unusable under certain conditions. For example, Bug 5095025 is triggered when you list too many schemas, and Bug ??? when you have too many grants. In one case, my expdp process depleted system memory at a rate of 10MB per second.
* Remapping schemas or tablespaces may cause server free memory to drop sharply, and the data pump session gets ORA-4030 if there's too much to remap. See Bugs 5464834, 5095025, 5292551, etc.
* Remap_schema and the new schema's password
Oracle documentation
http://docs.oracle.com/database/121/SUTIL/GUID-619809A6-1966-42D6-9ACC-A3E0ADC36523.htm#SUTIL927
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL927
says: "If the import operation does create the schema, then after the import is complete, you must assign it a valid password to connect to it." Tests show that if you use a client of version 11.2.0.3 or newer, you are able to connect to the newly created remapped user with the original user's password. Only if the client is 11.2.0.2 or older will you get ORA-01017 (invalid username/password; logon denied). But since Oracle officially states that the remapped user has an "unusable" password before you assign a valid one, the behavior of the newer client should be treated as unreliable or just lucky. In fact, this is not a data pump issue per se. You can simulate the test with:
create user usera identified by Today123;
select password, spare4 from sys.user$ where name='USERA';
create user userb identified by values ';';  <-- put the above query results in
From a client older than 11.2.0.3 and a client of 11.2.0.3 or newer:
sqlplus usera/Today123@thedatabase  <-- works for both clients
sqlplus userb/Today123@thedatabase  <-- works for the newer client only
According to my SR #3-13465425831 (Data pump remap_schema and the new schema's password), the requirement to set a password after a remap_schema import (assuming the new schema does not pre-exist) may be dropped from the documentation in the future.
* Parallel has problems. You may get errors like:
ORA-31693: Table data object "APPUSER"."FILE_ENTRY_T" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11010: unable to open at least one dump file for load
If you do use parallel, you must specify multiple dump files (e.g. with the %U substitution variable) in expdp and do the same in impdp; otherwise only one of the worker threads does real work.
* More syntax examples (remove \ if on Windows or in a parfile; on UNIX \ must prefix most punctuation characters):
expdp yong/yong dumpfile=yong.dmp exclude=table:\"= \'TABLEINCAPITAL\'\" logfile=yong.log schemas=yong directory=yongtest
rm yong.dmp  #11g has reuse_dumpfiles=y; no need to rm the logfile since it will be overwritten
expdp yong/yong dumpfile=yong.dmp exclude=table:\"like \'TEST%\'\" query=testpart:\"where x\>20\" logfile=yong.log schemas=yong
Since '_' is also a wildcard, representing any one single character, a syntax such as include=table:\"like \'ABC_DEF_%\'\" will include a table named e.g. ABCXDEF5. To make '_' literal, escape it. While the conventional escape character is '\', I can't get either
include=table:\"like \'ABC\\_DEF\\_%\' escape \'\\\'\"
or
include=table:\"like \'ABC\_DEF\_%\' escape \'\\'\"
to work. So use 'x' as the escape character:
include=table:\"like \'ABCx_DEFx_%\' escape \'x\'\"
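To see the LIKE semantics behind this in plain SQL (ABCXDEF5 is the example table name above; this is just an illustration):

select 1 from dual where 'ABCXDEF5' like 'ABC_DEF_%';               -- returns a row: each '_' matches any single character
select 1 from dual where 'ABCXDEF5' like 'ABCx_DEFx_%' escape 'x';  -- returns no row: '_' is now literal
select 1 from dual where 'ABC_DEF_1' like 'ABCx_DEFx_%' escape 'x'; -- returns a row: only real underscores match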
* Include or exclude does not accept the syntax OWNER.TABLE; e.g., exclude=table:\"=\'YONG.MYTAB\'\" would be interpreted as a table literally named "YONG.MYTAB", with the dot as part of the name. You can export the table structure without its data with query=YONG.MYTAB:\"WHERE 1=2\" (Ref: How To Exclude Data In One Table Owned By A Specified User During DataPump Export? (Doc ID 1535814.1)). To import or export specific tables owned by a specific user, combine the include and schemas parameters, e.g.
impdp yong schemas=scott include=table:\"in \(\'EMP\',\'DEPT\'\)\" dumpfile=...
* Include or exclude arguments (object types) are listed in the views database_export_objects, schema_export_objects or table_export_objects, depending on the impdp or expdp mode; there's no view called *import_objects. The object_path column is the object type. If the type contains /, specify it as is, e.g. exclude=index/statistics (though for this particular one, it's the same as exclude=index_statistics).
* Traditional import has no way to bypass redo, even if the table is nologging and the exp was done with direct=y. Impdp bypasses redo if the table is nologging. This can easily be checked by looking at v$datafile.unrecoverable_time for the tablespace where the table resides (if the DB is in archivelog mode and force_logging is off). If it shows the time the nologging operation was done, that operation generated no redo (or rather, minimal redo). (This feature was probably added as a result of the enhancement in Bug 4568451, except the create-table-as-nologging part doesn't seem to be implemented, even in 11.2.0.1.) [Update 2013-08] Oracle 12c has transform=disable_archive_logging:y, which works even for a table without the nologging attribute.
* If impdp immediately quits back to the shell prompt without any error or warning, check the privileges of the user you run impdp as. If still unsure of the cause, create a logon trigger and check the trace file in udump:
create trigger theuser.sqltrace_trig after logon on theuser.schema
begin
 execute immediate 'alter session set events ''10046 trace name context forever, level 4''';
end;
/
* In many cases impdp (imp as well) has its bottleneck at creating indexes. You may be able to speed that up with a logon trigger that changes some parameters:
grant alter session to impuser; --if he doesn't already have it
create trigger impuser.tmptrig after logon on impuser.schema
begin
 execute immediate 'alter session set workarea_size_policy=manual';
 execute immediate 'alter session set sort_area_size=300000000';
 execute immediate 'alter session set db_file_multiblock_read_count=128';
end;
/
Remember to drop the trigger after impdp is done (thanks to Kumar Madduri for the correction). For imp, you can drop it sooner, like right after imp starts.
* Minimum privilege to dump your own data
A non-DBA user can export his own data. Expdp requires read and write privilege on the directory object, the create session and create table privileges (or the connect role), and some quota on his default tablespace (at least 512k if db_block_size is 8k):
create user theuser identified by ... quota 512k on ...;
grant create session, create table to theuser;
grant read, write on directory ... to theuser;
The user only needs an Oracle client to run this:
expdp theuser@ORCL schemas=theuser directory=... dumpfile=...
See also: Without accessing or installing Oracle server, a non-DBA user with only an Oracle client backs up and restores data
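Before running that expdp, a quick sanity check the user can do himself (a sketch only; DP_DIR stands for whatever directory object was granted, and 524288 bytes is the 512k figure above):

select * from session_privs;                                        -- expect CREATE SESSION and CREATE TABLE
select tablespace_name, max_bytes from user_ts_quotas;              -- expect at least 524288, or -1 for unlimited
select privilege from user_tab_privs where table_name = 'DP_DIR';   -- expect READ and WRITE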
* If you get a completely irrelevant error such as
LRM-00112: multiple values not allowed for parameter 'directory'
when you did not specify multiple values, or
UDE-00014: invalid value for parameter, 'attach'
when you didn't even specify attach, make sure every parameter is followed by an equal sign and a value.
* If you get the errors
ORA-39065: unexpected master process exception in ...
ORA-39079: unable to enqueue message DG,KUPC$...
make sure aq_tm_processes is not explicitly set to 0:
select value, isdefault from gv$parameter where name='aq_tm_processes';
and correct it if it is:
alter system set aq_tm_processes=1 scope=memory;
(more info about this parameter at ./aq_tm_processes.txt)
* Parallel (degree greater than 1) is reported not to work when loading package bodies
http://www.freelists.org/post/oracle-l/11201-64bit-OEL-54-Performance-problem-with-impdp
or a table with a LOB
http://www.itpub.net/thread-1309692-1-1.html
* Parallel import on RAC may require the dump file to be readable from all nodes.
* Not importing all rows: ". . imported ... out of rows"
See Impdp Imports 0 Rows if Expdp has not completed (Doc ID 788571.1)
Bug 7703796: WRONG RESULT IN 'IMPORTED # ROWS' OF IMPDP OUTPUT WHEN DOING IMPDP VIA DB-LINK
Bug 9470999: IMPDP REPORTS INCORRECT AMOUNT OF ROWS IMPORTED IN PARALLEL MODE
http://manchev.org/index.php/database/104-datapump-characterset-cl8mswin1251-al32utf8
* Huge undo tablespace usage by import
Impdp no longer supports commit=y. According to http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_legacy.htm#sthref361
COMMIT
This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.
If you have a huge table, the only thing you can do to reduce the undo (v$transaction.used_ublk) is to see whether all the indexes on the table can be left out of the import with exclude=index. If the table needs to be precreated, import with
table_exists_action=(append|truncate) exclude=index
For some reason the truncate option may not work well (i.e. fully deallocate space). Manually truncate the tables first, preferably with the drop all storage clause.
* What is my impdp doing?
Sometimes it seems to not move, to be stuck, "locked up", "deadlocked" (as a layman database user calls it). The trace parameter doesn't help much, or the trace file is too verbose to read. Unfortunately, all the data pump related sessions have either idle wait events or the fairly generic "Datapump dump file I/O". Worse, their v$session.sql_id and even prev_sql_id are empty. Their v$open_cursor has dozens of SQLs, but you're not sure which one to focus on. Their 'consistent gets' statistic in v$sesstat keeps going up, and we need to find the SQL causing it. One way is to look at v$sql(area) where users_executing>0 with a very recent last_active_time, combined with select * from v$session_longops where sofar!=totalwork. If the buffer_gets of the SQL is high and still increasing rapidly, that may be it. That's how I identified the case affected by the bug documented in DataPump Import (IMPDP) Seems To Hang On Processing Object Type SCHEMA_EXPORT/TABLE/TABLE_DATA [ID 789700.1].
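Put together, a sketch of those two checks (the column choices and ordering are just suggestions):

-- candidate SQLs: currently executing, recently active; watch buffer_gets grow between runs
select sql_id, users_executing, buffer_gets, last_active_time, substr(sql_text, 1, 60) sql_text
from v$sqlarea
where users_executing > 0
order by last_active_time desc;

-- long-running operations that have not finished yet
select sid, opname, target, sofar, totalwork, time_remaining
from v$session_longops
where sofar != totalwork;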
* Without accessing or installing Oracle server, a non-DBA user with only an Oracle client refreshes data
Case 1: He refreshes data between two schemas in the same database, e.g. copies data in schema devdata to schema qadata in database ORCL.
(1) One-time work by the DBA in ORCL (at least 512k quota if db_block_size is 8k):
create user imp_db identified by ... quota unlimited on users;
grant create session, create table to imp_db;
grant datapump_imp_full_database to imp_db;
(2) The user drops all objects in qadata (better to drop tables first; see ./SchemaCleanup.html)
(3) The user runs:
impdp imp_db@ORCL schemas=devdata remap_schema=devdata:qadata network_link=ORCL nologfile=yes
Case 2: He refreshes data across databases, e.g. copies devdata in DEVDB to qadata in QADB.
(1) One-time work by the DBA in DEVDB:
create user exp_db identified by ... quota unlimited on users;
grant create session, create table to exp_db;
grant datapump_exp_full_database to exp_db;
In QADB:
create user imp_db identified by ... quota unlimited on users;
grant create session, create table to imp_db;
grant datapump_imp_full_database to imp_db;
conn imp_db
--The following can be omitted if you use the same username in DEVDB and QADB
--with the same password and use OID to resolve connect identifiers.
create database link DEVDB connect to exp_db identified by ... using 'DEVDB';
(2) The user drops all objects in qadata in QADB.
(3) The user runs:
impdp imp_db@QADB schemas=devdata remap_schema=devdata:qadata network_link=DEVDB nologfile=yes
The above uses network_link, so data of LONG or LONG RAW type must be copied separately. Note that the user logs in as imp_db or exp_db, who has the datapump_(imp|exp)_full_database role, which should not be granted lightly, probably not in production.
* Without accessing or installing Oracle server, a non-DBA user with only an Oracle client backs up and restores data
See the section "Minimum privilege to dump your own data" for the backup. To restore:
impdp theuser@ORCL schemas=theuser directory=... dumpfile=...
This works even if he doesn't have shell access to the server and can't (s)ftp, scp or rcp the dump file over.
* Estimate_only of expdp ignores compression=all
Unless the data is already compressed (jpg images, PDF documents etc.), compression=all works wonders. But estimate_only probably only checks dba_segments and ignores this expdp parameter.
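A sketch to see the gap for yourself (the directory, schema and file names are assumptions; compression=all needs the Advanced Compression option, and I haven't verified that every version accepts compression together with estimate_only):

expdp theuser@ORCL schemas=theuser directory=dp_dir estimate_only=yes compression=all logfile=estimate.log
expdp theuser@ORCL schemas=theuser directory=dp_dir dumpfile=est_test.dmp logfile=est_test.log compression=all
# compare the estimated size in estimate.log with the actual size of est_test.dmp; for compressible data the dump is far smaller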
* Possible undocumented parameters for data pump (* means exp also has it; only checked 10.2.0.1 expdp and exp):
ip_address (IP Address for PLSQL debugger)
keep_master (Retain job table upon completion)
log_entry (maybe insert a msg into the log as dbms_datapump.log_entry does)
metrics* (Enable/disable object metrics reporting)
mp_enable (Enable/disable multi-processing for current session)
package_load (Specify how to load PL/SQL objects)
parallel_threshold (Degree of DML Parallelism)
silent* (display [banner] information, default is NONE)
trace* (enable sql_trace and timed_stat, default is 0)
tts_closure_check (Enable/disable transportable containment check: def is Y)
In addition, exp has
point_in_time_recover (point-in-time recover option: default is N)
impparfile (file to create as paramfile for IMP for transportable tablespaces)
file_format (format of export file names)
Appendix:
* flashback_time syntax reference (it took me a while to get it right): Note:204334.1 (Parameters FLASHBACK_SCN And FLASHBACK_TIME: Point In Time Export)
exp system/manager file=exp.dmp flashback_time='"2004-06-19 13:24:26"'
exp system/manager parfile=exp.par
exp.par
-------
file=exp.dmp
flashback_time='2004-06-19 13:24:26'
exp system/manager file=exp.dmp flashback_time=\"TO_TIMESTAMP('19-06-2004 13:24:26', 'DD-MM-YYYY HH24:MI:SS')\"
exp system/manager file=exp.dmp flashback_time=\"TO_TIMESTAMP\(\'19-06-2004 13:24:26\', \'DD-MM-YYYY HH24:MI:SS\'\)\"
When using the TO_TIMESTAMP function in a parameter file in Oracle10g, use double quotes, e.g.:
FLASHBACK_TIME="TO_TIMESTAMP('19-06-2004 13:24:26', 'DD-MM-YYYY HH24:MI:SS')"
* Wrong documentation about expdp full=y
According to http://docs.oracle.com/en/database/oracle/oracle-database/19/sutil/oracle-data-pump-export-utility.html#GUID-BA07401C-6261-4B07-AD2C-06CD0A6E0BE9
"A full export does not, by default, export system schemas that contain Oracle-managed data and metadata. Examples of system schemas that are not exported by default include SYS, ORDSYS, and MDSYS."
But a test in a 19.3 database still shows
"SYS"."KU$_USER_MAPPING_VIEW"
"AUDSYS"."AUD$UNIFIED":"SYS_P221"
"SYSTEM"."REDO_DB"
"SYS"."DAM_CONFIG_PARAM$"
"MDSYS"."RDF_PARAM$"
being exported. Saying "system schemas that contain Oracle-managed data and metadata" is ambiguous. In newer versions of Oracle, dba_users has the column oracle_maintained, and the wording in the documentation comes remarkably close to saying that schemas whose oracle_maintained column shows 'Y' are not exported, which is misleading or simply wrong. Dba_objects also has oracle_maintained, and it's likewise not true that objects marked 'Y' will not be exported. It's false again to say that objects in sys.ku_noexp_view are not exported (Ref: 2114233.1). In any case, do not run impdp on a dumpfile exported with full=y thinking the dumpfile contains only your own data and none of the system data that comes with the Oracle software.
Ref: https://community.oracle.com/mosc/discussion/comment/16777158
Bug 20599735 : SCHEMAS OBJECTS NOT EXCLUDED WHEN PERFORMING FULL EXPDP
According to Some Data Dictionary Objects Are Unloaded In 12c by Datapump (Doc ID 2004442.1), the workaround is the undocumented EXCLUDE='NORMAL_OPTIONS' or the documented VERSION=11.2 parameter. With the former, the tables "SYS"."KU$_USER_MAPPING_VIEW" and "SYSTEM"."SYS_EXPORT_FULL_01" are still exported, but no other excluded table is. The total dump file size saved is only 2 MB with compression=all, so it's not worth it.
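If you're unsure what a full=y dump really contains, a sketch of previewing it without importing anything (the directory and file names are assumptions; sqlfile only writes the DDL, it imports nothing):

impdp system directory=dp_dir dumpfile=full.dmp sqlfile=full_preview.sql nologfile=yes
# full_preview.sql is written to dp_dir on the server; look for system-owned objects in it, e.g.
grep -Ei 'CREATE TABLE "(SYS|SYSTEM|MDSYS|AUDSYS)"' full_preview.sql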
* "Language Name" from DBMS_DATAPUMP.GET_DUMPFILE_INFO is the character set
The PL/SQL procedure incorrectly calls it "Language Name".
Ref: https://community.oracle.com/message/15577099 SR 3-22629308841
Example code using this procedure is at https://docs.oracle.com/database/121/SUTIL/GUID-5AAC848B-5A2B-4FD1-97ED-D3A048263118.htm
To run it in an 11g database, comment out the block beginning with the line "WHEN DBMS_DATAPUMP.KU$_DFHDR_COMPRESSION_ALG THEN".
* Library cache lock and deadlock
Ref: 'Library Cache Lock' (Cycle) Seen During DataPump Import in 12.2 RAC Environment (Doc ID 2407491.1)
In 12c and 18c RAC, impdp with parallel greater than 1 will hang forever because the DW* worker sessions stay in a deadlock loop during the alter/compile phase. Oracle's solution is either to run impdp serially, or to set _lm_share_lock_opt to false to disable this buggy optimization. But that parameter is not only static, it also requires the same value on all instances, so you have to bounce the whole database to set it. If you just kill one of the two sessions involved in the deadlock, the import continues to the end without further problems. But this is neither supported nor a guarantee that all objects are imported correctly; they may be, but it's not guaranteed. The following was done on Oracle 18.9:

SQL> select * from v$hang_info;  <-- resolution_status could be TransientHangWillProbablySelfResolve or ToBeResolvedByAnotherLayer but neither resolves; after hang_resolve_time, it just starts another hang

   HANG_ID HANG_CHAIN_SESSIONS TOTAL_HUNG_SESSIONS HANG_TYPE    HANG_CREATE_TIME     HANG_RESOLVE_TIME     I R G E RESOLUTION_STATUS                          VICTIM_INSTANCE
---------- ------------------- ------------------- ------------ -------------------- --------------------- - - - - ------------------------------------------ ---------------
VICTIM_SESSION_ID VICTIM_SERIAL# VICTIM_OSPID F PNAME WAIT_EVENT_TEXT    VICTIM_QOS_P VICTIM_QOS_PC_ITT VICTIM_QOS_PC_RTT VICTIM_QOS_PC_KEY
----------------- -------------- ------------ - ----- ------------------ ------------ ----------------- ----------------- -----------------
    CON_ID
----------
         8                   2                   2 Deadlock     30-Jun-2020 14:17:57 30-Jun-2020 14:23:47  N N N N IGNRD:TransientHangWillProbablySelfResolve               1
              261          57269        23816 N DW02  library cache lock NOT MANAGED                -1                -1                 0
         0

SQL> select * from v$hang_session_info;

HANG_ID INSTANCE SID SERIAL# OSPID F R PNAME WAIT_EVENT_TEXT     QOS_PC_RANK  QOS_PC_ITT QOS_PC_RTT QOS_PC_KEY CON_ID
------- -------- --- ------- ----- - - ----- ------------------- ------------ ---------- ---------- ---------- ------
      8        1 139   54490 23689 N N DW00  library cache lock  NOT MANAGED          -1         -1          0      0
      8        1 261   57269 23816 N Y DW02  library cache lock  NOT MANAGED          -1         -1          0      0

SQL> select inst_id, sid, saddr, program, event, p1, p1raw, sql_id, last_call_et from gv$session where wait_class!='Idle' order by 1, 3, 4;

INST_ID SID SADDR            PROGRAM                    EVENT                      P1 P1RAW            SQL_ID        LAST_CALL_ET
------- --- ---------------- -------------------------- ------------------ ---------- ---------------- ------------- ------------
      1 139 000000013113C7B0 oracle@doprlcora13a (DW00) library cache lock 9109124880 000000021EF23710 4azut3xyxbszv         5379
      1 261 000000014110D020 oracle@doprlcora13a (DW02) library cache lock 9322185992 000000022BA54508 g6tu301x8hbvz         5365
      1 995 000000013133E428 oracle@doprlcora13a (DW05) library cache lock 9322185992 000000022BA54508 gsz6tfwb6k090         5356

SQL> select * from dba_kgllock where kgllkhdl='000000021EF23710';  <-- v$session.p1raw when event is 'library cache lock'
KGLLKUSE         KGLLKHDL         KGLLKMOD KGLLKREQ KGLL
---------------- ---------------- -------- -------- ----
000000013113C7B0 000000021EF23710        0        3 Lock  <-- session 13113C7B0 or sid 139 waits for handle 21EF23710
000000014110D020 000000021EF23710        2        0 Lock  <-- session 14110D020 or sid 261 locks handle 21EF23710
000000014110D020 000000021EF23710        2        0 Pin

SQL> select * from dba_kgllock where kgllkhdl='000000022BA54508';  <-- the other library cache handle

KGLLKUSE         KGLLKHDL         KGLLKMOD KGLLKREQ KGLL
---------------- ---------------- -------- -------- ----
000000013133E428 000000022BA54508        0        2 Lock  <-- session 13133E428 waits for handle 22BA54508
000000014110D020 000000022BA54508        0        3 Lock  <-- session 14110D020 waits for handle 22BA54508
000000013113C7B0 000000022BA54508        2        0 Lock  <-- session 13113C7B0 locks handle 22BA54508
000000013113C7B0 000000022BA54508        2        0 Pin

SQL> alter system kill session '<139 or 261>,';
* Importing objects that use a DB link with a wrong password may lock the remote database account
Suppose you create a PL/SQL object (procedure, function or package) or a view with code like
select * from remotetab@remotedb
The link remotedb points to a user in the remote database but with a wrong password. You export this object and import it somewhere. The import tries to validate the code, which includes trying to connect to the remote database. If you have multiple objects that use this bad DB link (bad in the sense that remotedb can be reached but the user's password is wrong), and the remote database user is in a profile that allows only a few failed logon attempts, the import process will cause the remote database account to be locked. This is particularly bad if the remote database is production; if the relationship between it and the target database being imported into is not well known, the cause of the production account lockout becomes elusive. The best prevention is to recreate the DB link in the source database with the correct password before the export. If that's not possible, at least temporarily relax the remote database profile to allow unlimited failed logon attempts just for the import to run. To find such objects in the source database, select from dba_source where upper(text) like '%@REMOTEDB%' and from dba_views where upper(text_vc) like '%@REMOTEDB%'. (Dba_views has text_vc beginning with Oracle 12c; before 12c, you have to use some tricks to search views, such as those in ./ViewDefinitionSearch.txt. If the DB link occurs at a character position greater than 4000 on a very long line of code, dba_views.text_vc will miss it, but that's extremely unlikely.)
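For instance, a query sketch to find such objects before the export (REMOTEDB is a placeholder for the actual link name; text_vc requires 12c or later):

select owner, name, type, line from dba_source where upper(text) like '%@REMOTEDB%';
select owner, view_name from dba_views where upper(text_vc) like '%@REMOTEDB%';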
* OS free memory depletion
Like any program that uses a lot of page cache on Linux, expdp consumes a lot of page cache memory as soon as it starts to actually export tables. Unless it finishes quickly or you have tens of GB of free memory, you may see [kswapd] in the `top' output. Unlike other programs that use page cache, such as cp, gzip, etc., the freeware program nocache (https://github.com/Feh/nocache) does not solve the problem. For more, see https://community.oracle.com/tech/apps-infra/discussion/4499779/run-data-pump-without-using-os-page-cache and SR 3-29996935781.
* Multiple nodes of RAC can dump to NFS
On a RAC database, multiple nodes can run expdp to create dump files on an NFS location mounted by those nodes without corruption. A clustered file system is not needed, although it works as well. Unlike datafiles, control files or redo logs, which must be on a clustered file system (or in ASM), each data pump dump file is opened by only one process at any moment.
Ref: https://forums.oracle.com/ords/apexds/post/run-expdp-to-create-dump-files-from-2-rac-nodes-to-one-nfs-0385 and Parallel Capabilities of Oracle Data Pump (Doc ID 365459.1)
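Relatedly, when the dump directory is visible to all nodes, a single parallel job can also spread its workers across instances. A sketch only (11.2 or later; the schema, directory and degree are assumptions; cluster=y is the default):

expdp system schemas=app directory=nfs_dp_dir dumpfile=app_%U.dmp logfile=app.log parallel=4 cluster=y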