Monday, 8 July 2013

Oracle DBMS_DATAPUMP


General Information
Source:          {ORACLE_HOME}/rdbms/admin/dbmsdp.sql
                 {ORACLE_HOME}/rdbms/admin/dbmspump.sql
First Available: 10.1

Constants
Name                        Data Type       Value
KU$_DUMPFILE_TYPE_DISK      BINARY_INTEGER  0
KU$_DUMPFILE_TYPE_PIPE      BINARY_INTEGER  1
KU$_DUMPFILE_TYPE_TAPE      BINARY_INTEGER  2
KU$_DUMPFILE_TYPE_TEMPLATE  BINARY_INTEGER  3
KU$_FILE_TYPE_DUMP_FILE     BINARY_INTEGER  1
KU$_FILE_TYPE_BAD_FILE      BINARY_INTEGER  2
KU$_FILE_TYPE_LOG_FILE      BINARY_INTEGER  3
KU$_FILE_TYPE_SQL_FILE      BINARY_INTEGER  4
KU$_JOB_COMPLETE            BINARY_INTEGER  1
KU$_JOB_COMPLETE_ERRORS     BINARY_INTEGER  2
KU$_JOB_STOPPED             BINARY_INTEGER  3
KU$_JOB_ABORTED             BINARY_INTEGER  4
KU$_JOB_VIEW_ALL            BINARY_INTEGER  0
KU$_JOB_TTS_TABLESPACES     BINARY_INTEGER  1
KU$_STATUS_WIP              BINARY_INTEGER  1
KU$_STATUS_JOB_DESC         BINARY_INTEGER  2
KU$_STATUS_JOB_STATUS       BINARY_INTEGER  4
KU$_STATUS_JOB_ERROR        BINARY_INTEGER  8
KU$_STATUS_VERSION          BINARY_INTEGER  KU$_STATUS_VERSION_2
KU$_STATUS_VERSION_1        BINARY_INTEGER  1
KU$_STATUS_VERSION_2        BINARY_INTEGER  2

Dependencies
SELECT name
FROM dba_dependencies
WHERE referenced_name = 'DBMS_DATAPUMP'
UNION
SELECT referenced_name
FROM dba_dependencies
WHERE name = 'DBMS_DATAPUMP';

Exceptions
Error Code  Name               Reason
-39001      INVALID_ARGVAL     An invalid value was supplied for an input parameter
-39002      INVALID_OPERATION  Cannot be executed because of inconsistencies between the API and the job
-39004      INVALID_STATE      The state of the job precludes the execution of the API
-39005      INCONSISTENT_ARGS  Inconsistent arguments
-39006      INTERNAL_ERROR     Internal Data Pump exception
-31623      INVALID_HANDLE     Incorrect handle specified for the job
-31626      NO_SUCH_JOB        An invalid reference to a job that is no longer executing
-31627      SUCCESS_WITH_INFO  User-specified job parameters yielded informational messages
-31631      PRIVILEGE_ERROR    The necessary privileges are not available for the operation
-31634      JOB_EXISTS         Job creation or restart failed due to a duplicate job name
-39211      NO_DUMPFILE_INFO   User specified an invalid or inaccessible file

Bitmap Definitions used in DATA_OPTIONS parameter
Name                        Data Type  Value
KU$_DATAOPT_SKIP_CONST_ERR  NUMBER     1
KU$_DATAOPT_XMLTYPE_CLOB    NUMBER     2
KU$_DATAOPT_NOTYPE_EVOL     NUMBER     4

Filters
Name: INCLUDE_NAME_EXPR and EXCLUDE_NAME_EXPR (Object Type: named objects)
Defines which object names are included in (INCLUDE_NAME_EXPR), or excluded from (EXCLUDE_NAME_EXPR), the job. Use the object_type parameter to limit the filter to a particular object type.
For Table mode, identifies which tables are to be processed.

Name: SCHEMA_EXPR (Object Type: schema objects)
Restricts the job to objects whose owning schema name satisfies the expression.
For Table mode, only a single SCHEMA_EXPR filter is supported. If specified, it must specify only a single schema (for example, 'IN (''SCOTT'')').
For Schema mode, identifies which users are to be processed.

Name: TABLESPACE_EXPR (Object Type: Table, Cluster, Index, Rollback Segment)
Restricts the job to objects stored in a tablespace whose name satisfies the expression.
For Tablespace mode, identifies which tablespaces are to be processed. If a partition of an object is stored in the tablespace, the entire object is added to the job.
For Transportable mode, identifies which tablespaces are to be processed. If a table has a single partition in the tablespace set, all of its partitions must be in the tablespace set. An index is not included in the tablespace set unless all of its partitions are in the tablespace set. A domain index is not included in the tablespace set unless all of its secondary objects are included in the tablespace set.

Name: INCLUDE_PATH_EXPR and EXCLUDE_PATH_EXPR (Object Type: all)
Defines which object paths are included in, or excluded from, the job. Use these filters to select only certain object types from the database or dump file set. Objects whose paths satisfy the condition are included (INCLUDE_PATH_EXPR) or excluded (EXCLUDE_PATH_EXPR) from the operation. The object_path parameter is not supported for these filters.

Item Codes for entry in a dump file info table (of type ku$_dumpfile_info)
Name                           Data Type  Value
KU$_COMPRESS_NONE              NUMBER     1
KU$_COMPRESS_METADATA          NUMBER     2
KU$_DFHDR_FILE_VERSION         NUMBER     1
KU$_DFHDR_MASTER_PRESENT       NUMBER     2
KU$_DFHDR_GUID                 NUMBER     3
KU$_DFHDR_FILE_NUMBER          NUMBER     4
KU$_DFHDR_CHARSET_ID           NUMBER     5
KU$_DFHDR_CREATION_DATE        NUMBER     6
KU$_DFHDR_FLAGS                NUMBER     7
KU$_DFHDR_JOB_NAME             NUMBER     8
KU$_DFHDR_PLATFORM             NUMBER     9
KU$_DFHDR_INSTANCE             NUMBER     10
KU$_DFHDR_LANGUAGE             NUMBER     11
KU$_DFHDR_BLOCKSIZE            NUMBER     12
KU$_DFHDR_DIRPATH              NUMBER     13
KU$_DFHDR_METADATA_COMPRESSED  NUMBER     14
KU$_DFHDR_DB_VERSION           NUMBER     15
KU$_DFHDR_MASTER_PIECE_COUNT   NUMBER     16
KU$_DFHDR_MASTER_PIECE_NUMBER  NUMBER     17
KU$_DFHDR_DATA_COMPRESSED      NUMBER     18
KU$_DFHDR_METADATA_ENCRYPTED   NUMBER     19
KU$_DFHDR_DATA_ENCRYPTED       NUMBER     20
KU$_DFHDR_MAX_ITEM_CODE        NUMBER     20

Remaps
Name: REMAP_DATAFILE (Data Type: Text; Object Type: Library, Tablespace, Directory)
Any datafile reference in the job that matches the object_type parameter and referenced the old_value datafile will be redefined to use the value datafile.

Name: REMAP_SCHEMA (Data Type: Text; Object Type: schema objects)
Any schema object in the job that matches the object_type parameter and was located in the old_value schema will be moved to the value schema.

Name: REMAP_TABLESPACE (Data Type: Text; Object Type: Table, Index, Rollback Segment, Materialized View, Materialized View Log, Tablespace)
Any storage segment in the job that matches the object_type parameter and was located in the old_value tablespace will be relocated to the value tablespace.
Transforms
Name: SEGMENT_ATTRIBUTES (Data Type: Number; Object Type: Table, Index)
If nonzero (TRUE), emit storage segment parameters. Defaults to 1 (TRUE).

Name: STORAGE (Data Type: Number; Object Type: Table)
If nonzero (TRUE), emit the storage clause. Ignored if SEGMENT_ATTRIBUTES is zero. Defaults to nonzero (TRUE).

Related System Privileges
CREATE DIRECTORY
EXP_FULL_DATABASE
IMP_FULL_DATABASE

Security Model
Execute is granted to PUBLIC. Runs as AUTHID CURRENT_USER.
 
Defined Data Types

Job Description Types
CREATE TYPE sys.ku$_ParamValue1010 AS OBJECT (
param_name    VARCHAR2(30),
param_op      VARCHAR2(30),
param_type    VARCHAR2(30),
param_length  NUMBER,
param_value_n NUMBER,
param_value_t VARCHAR2(2000));
/

CREATE TYPE sys.ku$_ParamValues1010 AS TABLE OF sys.ku$_ParamValue1010;

CREATE TYPE sys.ku$_JobDesc1010 AS OBJECT (
job_name      VARCHAR2(30),
guid          RAW(16),
operation     VARCHAR2(30),
job_mode      VARCHAR2(30),
remote_link   VARCHAR2(4000),
owner         VARCHAR2(30),
instance      VARCHAR2(16),
db_version    VARCHAR2(30),
creator_privs VARCHAR2(30),
start_time    DATE,
max_degree    NUMBER,
log_file      VARCHAR2(4000),
sql_file      VARCHAR2(4000),
params        ku$_ParamValues1010);
/

Job Status Types
CREATE TYPE sys.ku$_DumpFile1010 AS OBJECT (
file_name          VARCHAR2(4000),
file_type          NUMBER,
file_size          NUMBER,
file_bytes_written NUMBER);
/

CREATE TYPE sys.ku$_DumpFileSet1010 AS TABLE OF sys.ku$_DumpFile1010;

CREATE TYPE sys.ku$_JobStatus1010 AS OBJECT (
job_name           VARCHAR2(30),
operation          VARCHAR2(30),
job_mode           VARCHAR2(30),
bytes_processed    NUMBER,
total_bytes        NUMBER,
percent_done       NUMBER,
degree             NUMBER,
error_count        NUMBER,
state              VARCHAR2(30),
phase              NUMBER,
restart_count      NUMBER,
worker_status_list ku$_WorkerStatusList1010,
files              ku$_DumpFileSet1010);
/

CREATE PUBLIC SYNONYM ku$_JobStatus1010
FOR sys.ku$_JobStatus1010;

Log Entry & Error Types
CREATE TYPE sys.ku$_LogLine1010 AS OBJECT (
logLineNumber NUMBER,
errorNumber   NUMBER,
LogText       VARCHAR2(2000) );
/

CREATE PUBLIC SYNONYM ku$_LogLine1010
FOR sys.ku$_LogLine1010;

CREATE TYPE sys.ku$_LogEntry1010 AS TABLE OF
sys.ku$_LogLine1010;
/
CREATE PUBLIC SYNONYM ku$_LogEntry1010
FOR sys.ku$_LogEntry1010;

Status Types
CREATE TYPE sys.ku$_Status1010 AS OBJECT (
mask            NUMBER,
wip             ku$_LogEntry1010,
job_description ku$_JobDesc1010,
job_status      ku$_JobStatus1010,
error           ku$_LogEntry1010);
/

Worker Status Types
CREATE TYPE sys.ku$_WorkerStatus1010 AS OBJECT (
worker_number     NUMBER,
process_name      VARCHAR2(30),
state             VARCHAR2(30),
schema            VARCHAR2(30),
name              VARCHAR2(4000),
object_type       VARCHAR2(200),
partition         VARCHAR2(30),
completed_objects NUMBER,
total_objects     NUMBER,
completed_rows    NUMBER,
completed_bytes   NUMBER,
percent_done      NUMBER)
/
CREATE PUBLIC SYNONYM ku$_WorkerStatus1010
FOR sys.ku$_WorkerStatus1010;

CREATE TYPE sys.ku$_WorkerStatusList1010 AS TABLE OF sys.ku$_WorkerStatus1010
/

CREATE PUBLIC SYNONYM ku$_WorkerStatusList1010
FOR sys.ku$_WorkerStatusList1010;
 
ADD_DEVICE
Adds a sequential device to the dump file set for Export, Import, or SQL_FILE operations

dbms_datapump.add_device(
handle      IN NUMBER,                  -- job handle
device_name IN VARCHAR2,                -- name of device being added
volume_size IN VARCHAR2 DEFAULT NULL);  -- device storage capacity
TBD
 
ADD_FILE (new 11g parameter)
Adds files to the dump file set for an Export, Import, or SQL_FILE operation, or specifies the log file or the output file for a SQL_FILE operation

dbms_datapump.add_file(
handle    IN NUMBER,
filename  IN VARCHAR2,
directory IN VARCHAR2 DEFAULT NULL,
filesize  IN VARCHAR2 DEFAULT NULL,
filetype  IN NUMBER   DEFAULT dbms_datapump.KU$_FILE_TYPE_DUMP_FILE,
reusefile IN NUMBER   DEFAULT NULL);
See Export Demo below
 
ATTACH
Used to gain access to a Data Pump job that is in the Defining, Executing, Idling, or Stopped state

dbms_datapump.attach(
job_name  IN VARCHAR2 DEFAULT NULL,
job_owner IN VARCHAR2 DEFAULT NULL)
RETURN NUMBER;
See LOG_ENTRY demo
 
CLIENT_LOB_APPEND (new in 11g)
Temporary home for CLOB helper routines

dbms_datapump.client_lob_append(
value    IN VARCHAR2,
position IN NUMBER );
TBD
 
CLIENT_LOB_DELETE (new in 11g)
Undocumented

dbms_datapump.client_lob_delete;
TBD
 
CLIENT_LOB_GET (new in 11g)
Undocumented

dbms_datapump.client_lob_get RETURN CLOB;
TBD
 
CREATE_JOB_VIEW
Create view into master table for a job

Overload 1
dbms_datapump.create_job_view(
job_schema IN VARCHAR2,
job_name   IN VARCHAR2,
view_name  IN VARCHAR2,
view_type  IN VARCHAR2 DEFAULT dbms_datapump.ku$_job_view_all);
TBD
Overload 2
dbms_datapump.create_job_view(
handle    IN NUMBER,
view_name IN VARCHAR2,
view_type IN VARCHAR2 DEFAULT dbms_datapump.ku$_job_view_all);
TBD
 
DATAPUMP_JOB

Returns TRUE if the current session is executing within a Data Pump job
dbms_datapump.datapump_job RETURN BOOLEAN;
set serveroutput on

BEGIN
  IF dbms_datapump.datapump_job THEN
    dbms_output.put_line('Yes');
  ELSE
    dbms_output.put_line('No');
  END IF;
END;
/
 
DATA_FILTER (new 11g overload)
Specifies restrictions on the rows that are to be retrieved

Overload 1
dbms_datapump.data_filter (
handle      IN NUMBER,
name        IN VARCHAR2,               -- filter name
value       IN NUMBER,
table_name  IN VARCHAR2 DEFAULT NULL,  -- if not specified = all
schema_name IN VARCHAR2 DEFAULT NULL);
TBD
Overload 2
dbms_datapump.data_filter(
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN CLOB,
table_name  IN VARCHAR2 DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL);
TBD
Overload 3
dbms_datapump.data_filter(
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN VARCHAR2,
table_name  IN VARCHAR2 DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL);
TBD
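A minimal sketch (not part of the original page) of the text overload, using the documented SUBQUERY filter to restrict exported rows. The EXPIMP directory and HR schema are taken from the demos below; the table and predicate are hypothetical.

```sql
DECLARE
  dph NUMBER;
BEGIN
  dph := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'FILTER_DEMO');
  dbms_datapump.add_file(dph, 'filter_demo.dmp', 'EXPIMP');

  -- export only rows satisfying the subquery (text overload)
  dbms_datapump.data_filter(dph, 'SUBQUERY',
    'WHERE department_id = 10', 'EMPLOYEES', 'HR');

  -- the numeric overload could instead skip row data entirely:
  -- dbms_datapump.data_filter(dph, 'INCLUDE_ROWS', 0);

  dbms_datapump.start_job(dph);
  dbms_datapump.detach(dph);
END;
/
```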
 
DATA_REMAP (new in 11g)

Modify the values of data in user tables
dbms_datapump.data_remap(
handle     IN NUMBER,
name       IN VARCHAR2,
table_name IN VARCHAR2,
column     IN VARCHAR2,
function   IN VARCHAR2,
schema     IN VARCHAR2 DEFAULT NULL);
TBD
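A sketch (assumptions flagged below) using the documented COLUMN_FUNCTION remap name: each value in the named column is passed through a user-supplied PL/SQL function as it is unloaded. The schema, table, column, and masking function (SCRUB_PKG.MASK_SSN) are all hypothetical.

```sql
DECLARE
  dph NUMBER;
BEGIN
  dph := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'REMAP_DATA_DEMO');
  dbms_datapump.add_file(dph, 'remap_data_demo.dmp', 'EXPIMP');

  -- rewrite every SSN value through a (hypothetical) masking function
  dbms_datapump.data_remap(
    handle     => dph,
    name       => 'COLUMN_FUNCTION',
    table_name => 'EMPLOYEES',
    column     => 'SSN',
    function   => 'SCRUB_PKG.MASK_SSN',
    schema     => 'HR');

  dbms_datapump.start_job(dph);
  dbms_datapump.detach(dph);
END;
/
```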
 
DETACH
Specifies that the user has no further interest in using the handle

dbms_datapump.detach(handle IN NUMBER);
See Export Demo below
 
ESTABLISH_REMOTE_CONTEXT
Establish remote Data Pump job context

dbms_datapump.establish_remote_context(remote_link IN VARCHAR2);
TBD
 
GET_DUMPFILE_INFO

Retrieves information about a specified dump file
dbms_datapump.get_dumpfile_info(
file_name  IN  VARCHAR2,
directory  IN  VARCHAR2,
info_table OUT dbms_datapump.ku$_dumpfile_info,
filetype   OUT NUMBER);
TBD
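A sketch (not from the original page) that reads the header of the export demo's dump file. It assumes each entry of the ku$_dumpfile_info collection exposes item_code and value fields (the item codes are listed in the table near the top of this page).

```sql
DECLARE
  info sys.ku$_dumpfile_info;
  ft   NUMBER;
BEGIN
  -- dump file and directory from the export demo below
  dbms_datapump.get_dumpfile_info('example5.dmp', 'EXPIMP', info, ft);
  dbms_output.put_line('File type: ' || ft);

  -- dump each header item code and its value
  FOR i IN 1 .. info.COUNT LOOP
    dbms_output.put_line('Item ' || info(i).item_code ||
                         ' = ' || info(i).value);
  END LOOP;
END;
/
```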
 
GET_STATUS
Monitors the status of a job or waits for the completion of a job or for more details on API errors

Overload 1
dbms_datapump.get_status(
handle  IN NUMBER,
mask    IN INTEGER,
timeout IN NUMBER DEFAULT NULL)
RETURN dbms_datapump.ku$_status;
TBD
Overload 2
dbms_datapump.get_status(
handle    IN  NUMBER,
mask      IN  BINARY_INTEGER,
timeout   IN  NUMBER DEFAULT NULL,
job_state OUT VARCHAR2,
status    OUT ku$_status1010);
TBD
Overload 3
dbms_datapump.get_status(
handle     IN  NUMBER,
mask       IN  BINARY_INTEGER,
timeout    IN  NUMBER DEFAULT NULL,
job_state  OUT VARCHAR2,
status     OUT ku$_status1020);
See Export Demo below
 
GET_STATUS_VERSION
Determines the ku$_Status object version to use for network operations

dbms_datapump.get_status_version(version IN NUMBER) RETURN NUMBER;
TBD
 
HAS_PRIVS

Checks whether the current user has the privileges required for the specified operation
dbms_datapump.has_privs(oper IN VARCHAR2) RETURN BOOLEAN;
set serveroutput on

BEGIN
  IF dbms_datapump.has_privs('EXPORT') THEN
    dbms_output.put_line('Yes');
  ELSE
    dbms_output.put_line('No');
  END IF;
END;
/
 
LOG_ENTRY

Inserts a message into the log file
dbms_datapump.log_entry (
handle        IN NUMBER,
message       IN VARCHAR2,
log_file_only IN NUMBER DEFAULT 0);
DECLARE
 dph NUMBER;
BEGIN
  dph := dbms_datapump.attach('EXAMPLE5', USER);
  dbms_output.put_line(dph);

  dbms_datapump.log_entry(dph, 'Log entry from DataPump API');
  dbms_datapump.detach(dph);
END;
/
 
LOG_ERROR

Undocumented
dbms_datapump.log_error(
handle        IN NUMBER,
message       IN VARCHAR2,
error_number  IN NUMBER DEFAULT 0,
fatal_error   IN NUMBER DEFAULT 0,
log_file_only IN NUMBER DEFAULT 0);
TBD
 
METADATA_FILTER (new overload in 11g)
Creates a filter that restricts the items that are included in a job

Overload 1
dbms_datapump.metadata_filter (
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN VARCHAR2,
object_path IN VARCHAR2 DEFAULT NULL,
object_type IN VARCHAR2 DEFAULT NULL);
See Export Demo below
Overload 2
dbms_datapump.metadata_filter (
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN CLOB,
object_path IN VARCHAR2 DEFAULT NULL,
object_type IN VARCHAR2 DEFAULT NULL);
TBD
 
METADATA_REMAP

Specifies a remapping to be applied to objects as they are processed in the specified job
dbms_datapump.metadata_remap (
handle      IN NUMBER,
name        IN VARCHAR2,
old_value   IN VARCHAR2,
value       IN VARCHAR2,
object_type IN VARCHAR2 DEFAULT NULL);
TBD
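A sketch (not from the original page) using the REMAP_SCHEMA remap from the Remaps table above to load one schema's objects into another during import. The dump file and directory come from the export demo below; the target schema HR_COPY is hypothetical.

```sql
DECLARE
  dph NUMBER;
BEGIN
  dph := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'REMAP_DEMO');
  dbms_datapump.add_file(dph, 'example5.dmp', 'EXPIMP');

  -- move every HR object into the (hypothetical) HR_COPY schema
  dbms_datapump.metadata_remap(dph, 'REMAP_SCHEMA', 'HR', 'HR_COPY');

  dbms_datapump.start_job(dph);
  dbms_datapump.detach(dph);
END;
/
```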
 
METADATA_TRANSFORM
Specifies transformations to be applied to objects as they are processed in the specified job

Overload 1
dbms_datapump.metadata_transform (
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN VARCHAR2,
object_type IN VARCHAR2 DEFAULT NULL);
TBD
Overload 2
dbms_datapump.metadata_transform (
handle      IN NUMBER,
name        IN VARCHAR2,
value       IN NUMBER,
object_type IN VARCHAR2 DEFAULT NULL);
TBD
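A sketch (not from the original page) of the numeric overload, using the SEGMENT_ATTRIBUTES transform from the Transforms table above: setting it to 0 suppresses physical storage attributes so imported objects pick up target tablespace defaults. Dump file and directory are borrowed from the export demo below.

```sql
DECLARE
  dph NUMBER;
BEGIN
  dph := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'TRANSFORM_DEMO');
  dbms_datapump.add_file(dph, 'example5.dmp', 'EXPIMP');

  -- suppress storage segment parameters in generated DDL
  dbms_datapump.metadata_transform(dph, 'SEGMENT_ATTRIBUTES', 0);

  dbms_datapump.start_job(dph);
  dbms_datapump.detach(dph);
END;
/
```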
 
OPEN

Declare a new job using the Data Pump API. The handle that is returned is used as a parameter for calls to all other procedures except ATTACH.
dbms_datapump.OPEN (
operation   IN VARCHAR2,
job_mode    IN VARCHAR2,
remote_link IN VARCHAR2 DEFAULT NULL,
job_name    IN VARCHAR2 DEFAULT NULL,
version     IN VARCHAR2 DEFAULT 'COMPATIBLE',
compression IN NUMBER DEFAULT dbms_datapump.ku$_compress_metadata)
RETURN NUMBER;
 
Job Mode       Description
FULL           Operates on the full database or full dump file set except for the SYS, XDB, ORDSYS, MDSYS, CTXSYS, ORDPLUGINS, and LBACSYS schemas.
SCHEMA         Operates on a set of selected schemas. Defaults to the schema of the current user. All objects in the selected schemas are processed. Users cannot specify the SYS, XDB, ORDSYS, MDSYS, CTXSYS, ORDPLUGINS, or LBACSYS schemas for this mode.
TABLE          Operates on a set of selected tables. Defaults to all of the tables in the current user's schema. Only tables and their dependent objects are processed.
TABLESPACE     Operates on a set of selected tablespaces. No defaulting is performed. Tables that have storage in the specified tablespaces are processed in the same manner as in Table mode.
TRANSPORTABLE  Operates on metadata for tables (and their dependent objects) within a set of selected tablespaces to perform a transportable tablespace export/import.
See Export Demo below
 
SETUP_REMOTE_CONTEXT

Sets up a remote Data Pump job context
dbms_datapump.setup_remote_context(
user_name      IN VARCHAR2,
job_name       IN VARCHAR2,
version        IN NUMBER,
status_xml     IN VARCHAR2,
status_xml_len IN NUMBER,
more           IN NUMBER);
TBD
 
SET_DEBUG (new 11g overload)

Sets the internal debug switch

Overload 1
dbms_datapump.set_debug(
on_off  IN NUMBER,
ip_addr IN VARCHAR2 DEFAULT NULL);
TBD

Overload 2
dbms_datapump.set_debug(
debug_flags  IN BINARY_INTEGER,
version_flag IN BINARY_INTEGER);
TBD
 
SET_PARALLEL
Adjusts the degree of parallelism within a job

dbms_datapump.set_parallel (
handle IN NUMBER,
degree IN NUMBER);
See Export Demo below
 
SET_PARAMETER
Specify job-processing options

Overload 1
dbms_datapump.set_parameter (
handle IN NUMBER,
name   IN VARCHAR2,
value  IN VARCHAR2);
TBD
Overload 2
dbms_datapump.set_parameter (
handle IN NUMBER,
name   IN VARCHAR2,
value  IN NUMBER);
TBD
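A sketch (not from the original page) showing both overloads on an import job: the text overload with TABLE_EXISTS_ACTION (also used in the import demo at the bottom of this page) and the numeric overload with the documented KEEP_MASTER parameter.

```sql
DECLARE
  dph NUMBER;
BEGIN
  dph := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'PARAM_DEMO');
  dbms_datapump.add_file(dph, 'example5.dmp', 'EXPIMP');

  -- text overload: replace tables that already exist
  dbms_datapump.set_parameter(dph, 'TABLE_EXISTS_ACTION', 'REPLACE');

  -- numeric overload: retain the master table after completion
  dbms_datapump.set_parameter(dph, 'KEEP_MASTER', 1);

  dbms_datapump.start_job(dph);
  dbms_datapump.detach(dph);
END;
/
```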
 
START_JOB (new 11g parameters)
Begins or resumes job execution

dbms_datapump.start_job (
handle       IN NUMBER,
skip_current IN NUMBER DEFAULT 0,
abort_step   IN NUMBER DEFAULT 0,
cluster_ok   IN NUMBER DEFAULT 1,
service_name IN VARCHAR2 DEFAULT NULL);
See Export Demo below
 
STOP_JOB
Terminates a job but, optionally, preserves the state of the job

dbms_datapump.stop_job (
handle      IN NUMBER,
immediate   IN NUMBER DEFAULT 0,
keep_master IN NUMBER DEFAULT NULL,
delay       IN NUMBER DEFAULT 60);
See Export Demo below
 
TEST_REMOTE_CONTEXT1010
Test remote Data Pump job context:
Is version 10.1?
dbms_datapump.test_remote_context1010;
TBD
 
TEST_REMOTE_CONTEXT1020
Test remote Data Pump job context:
Is version 10.2?
dbms_datapump.test_remote_context1020;
TBD
 
WAIT_FOR_JOB
Runs a job until it either completes normally or stops for some other reason

dbms_datapump.wait_for_job(
handle    IN  NUMBER,
job_state OUT VARCHAR2);
TBD
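A sketch (not from the original page) that starts a schema export and blocks until it finishes, instead of the polling loop used in the export demo below. Directory and schema are borrowed from that demo.

```sql
DECLARE
  dph       NUMBER;
  job_state VARCHAR2(30);
BEGIN
  dph := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'WAIT_DEMO');
  dbms_datapump.add_file(dph, 'wait_demo.dmp', 'EXPIMP');
  dbms_datapump.metadata_filter(dph, 'SCHEMA_EXPR', 'IN (''HR'')');
  dbms_datapump.start_job(dph);

  -- block until the job completes or stops, then report the final state
  dbms_datapump.wait_for_job(dph, job_state);
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(dph);
END;
/
```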
 
Datapump Demos

DataPump Export
conn / as sysdba

set linesize 121
col owner format a20
col directory_name format a15
col directory_path format a60

SELECT *
FROM dba_directories;

HOST mkdir c:\expimp
HOST mkdir c:\expimp\logs

EXIT

CREATE OR REPLACE DIRECTORY expimp AS 'c:\expimp';

CREATE OR REPLACE DIRECTORY expimp_log AS 'c:\expimp\logs';

GRANT READ,WRITE ON DIRECTORY expimp TO system;

GRANT READ,WRITE ON DIRECTORY expimp_log TO system;

HOST del c:\expimp\*.dmp

HOST del c:\expimp\logs\*.log

set serveroutput on

DECLARE
 ind       NUMBER;        -- loop index
 dph       NUMBER;        -- job handle
 pct_done  NUMBER;        -- percentage complete
 job_state VARCHAR2(30);  -- track job state
 le        ku$_LogEntry;  -- WIP and error messages
 js        ku$_JobStatus; -- job status from get_status
 jd        ku$_JobDesc;   -- job description from get_status
 sts       ku$_Status;    -- status object returned by get_status
BEGIN
  -- create job
  dph := dbms_datapump.open('EXPORT','SCHEMA',NULL,'EXAMPLE5','LATEST');

  -- specify dump file
  dbms_datapump.add_file(dph, 'example5.dmp', 'EXPIMP',
  filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- specify log file
  dbms_datapump.add_file(dph, 'example5.log', 'EXPIMP_LOG',
  filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- specify export schema
  dbms_datapump.metadata_filter(dph, 'SCHEMA_EXPR', 'IN (''HR'')');

  -- set parallelism
  dbms_datapump.set_parallel(dph, 2);

  -- start job
  dbms_datapump.start_job(dph);

  -- monitor job
  pct_done := 0;
  job_state := 'UNDEFINED';
  WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED') LOOP
    dbms_datapump.get_status(dph, dbms_datapump.ku$_status_job_error +
    dbms_datapump.ku$_status_job_status +
    dbms_datapump.ku$_status_wip, -1, job_state, sts);

    js := sts.job_status;

    -- If the percentage done changed, display the new value
    IF js.percent_done != pct_done THEN
      dbms_output.put_line('*** Job percent done = ' ||
      to_char(js.percent_done));
      pct_done := js.percent_done;
    END IF;

    -- If any work-in-progress (WIP) or error messages
    -- were received for the job, display them.

    IF (BITAND(sts.mask,dbms_datapump.ku$_status_wip) != 0) THEN
      le := sts.wip;
    ELSE
      IF (BITAND(sts.mask,dbms_datapump.ku$_status_job_error) != 0) THEN
        le := sts.error;
      ELSE
        le := NULL;
      END IF;
    END IF;

    IF le IS NOT NULL THEN
      ind := le.FIRST;
      WHILE ind IS NOT NULL LOOP
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      END LOOP;
    END IF;
  END LOOP;

  -- Indicate that the job finished and detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(dph);
EXCEPTION
  WHEN OTHERS THEN
    dbms_datapump.stop_job(dph);
END;
/
set serveroutput on size 1000000

DECLARE
 l_dp_handle      NUMBER;
 l_last_job_state VARCHAR2(30) := 'UNDEFINED';
 l_job_state      VARCHAR2(30) := 'UNDEFINED';
 l_sts            KU$_STATUS;
BEGIN
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT',
  job_mode => 'SCHEMA', remote_link => NULL, job_name =>
  'EMP_EXPORT', version => 'LATEST');

  dbms_datapump.add_file(handle => l_dp_handle, filename =>
  'SCOTT.dmp', directory => 'TEST_DIR');
  dbms_datapump.metadata_filter(handle => l_dp_handle,
name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
END;
/

-- check job status
system@db10g> SELECT * FROM dba_datapump_jobs;

DataPump Import
conn / as sysdba

set serveroutput on

DECLARE
 dph NUMBER;
BEGIN
  dph := dbms_datapump.open(operation => 'IMPORT', job_mode =>
  'TABLE',job_name => 'EMP_IMPORT');

  dbms_datapump.add_file(handle => dph,filename =>
  'EXPIMP%U.DMP', directory => 'EXPIMP',filetype=>1);

  dbms_datapump.add_file(handle => dph,filename =>
  'EXPIMP.LOG',directory => 'EXPIMP_LOG',filetype=>3);

  dbms_datapump.set_parameter(handle => dph,name =>
  'TABLE_EXISTS_ACTION', value =>'REPLACE');

  dbms_datapump.start_job(dph);

  dbms_datapump.detach(dph);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('Error:' || sqlerrm || ' on Job-ID:' || dph);
END;
/
