Running DataPump directly from a PDB as SYSDBA is a common task in Oracle multitenant architecture. However, there's a catch: the handy ORACLE_PDB_SID environment variable, introduced in Oracle 18c to ease direct access to specific PDBs, only works on Linux/Unix environments. So, what do you do if you’re stuck on a Windows machine?
Saturday 4 December 2021
Datapump Errors and Solutions
SCOPE
This article covers common and complex errors that occur during Datapump export and import operations.
Monday 25 June 2018
Oracle Datapump Scenarios | Interview Questions
Sunday 12 February 2017
What is the nohup command? How to Run Datapump Jobs in Background in 'nohup' Mode
nohup is used to run expdp/impdp jobs, or any other long-running jobs, in the background so they keep running after you log out of the session.

Step 1: Create a parameter file for the export or import.

$ cat /home/oracle/st/expdp.par
job_name=tab_export
directory=EXP_DIR
dumpfile=EMP_table.dmp
logfile=export_EMP.log
reuse_dumpfiles=y
Step 2: Create a shell script that runs expdp with the parameter file. Also, change the permissions of this file so it can be executed.

$ vi exp.sh
nohup expdp parfile=/home/oracle/st/expdp.par &
[Esc] :wq [ENTER]
$ chmod 744 exp.sh
Step 3: Now run the shell script in nohup using the command below. This releases the prompt immediately, and no running job stays attached to the terminal.

$ nohup ./exp.sh &
[1] 30221
$ nohup: appending output to `nohup.out'
[1]+  Done                    nohup ./exp.sh
Note: You can see the Datapump job running in the DBA_DATAPUMP_JOBS view.
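For example, a quick status check from SQL*Plus; a minimal query against the standard dictionary view:

SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM dba_datapump_jobs;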
Saturday 10 September 2016
Datapump Architecture: What is the Master Table in Datapump?
Master Table:
The Master Table is created in the schema of the user running the Data Pump export or import, and it keeps track of a lot of detailed information. It is used to track the detailed progress information of a Data Pump job.
It stores the following information:
- The status of every worker process involved in the operation.
- The current set of dump files involved.
- The job's user-supplied parameters.
- The state of the current job status and restart information.
- The current state of every object exported or imported and their locations in the dump file set.
Note: The Master Table is the key to Data Pump's restart capability in the event of a planned or unplanned job stoppage.
Behaviour of the Master Table:
This table is created at the beginning of a Data Pump operation and is dropped at the end of a successful completion. The Master Table is also dropped if the job is killed using the KILL_JOB interactive command. If a job is stopped using the STOP_JOB interactive command, or if the job terminates unexpectedly, the Master Table is retained.

The KEEP_MASTER parameter can be set to YES to retain the Master Table at the end of a successful job for debugging purposes.

The name of the Master Table is the same as the Data Pump job name, and you can describe it to see its columns:

SQL> DESC <job_name>;
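As an illustration, the retained Master Table of a stopped job can be located like any ordinary table in the job owner's schema (TAB_EXPORT is a hypothetical job name):

SQL> SELECT owner, table_name
     FROM dba_tables
     WHERE table_name = 'TAB_EXPORT';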
Processes in the Datapump Architecture

Master Control Process:
- Maintains job state, job description, restart, and dump file set information in the Master Table.
- Controls the execution and sequencing of a Data Pump job.
- Has two main functions:
1. To divide the loading and unloading of data and metadata into tasks and hand them to the worker processes.
2. To manage the information in the Master Table and record job activities in the log file.
Worker Process:
- Handles the requests assigned by the master control process and maintains the current status of the job, such as 'pending', 'completed', or 'failed'.
- Is responsible for loading and unloading data and metadata.
- The number of worker processes needed can be defined with the PARALLEL parameter, as in the example below.
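For example, a full export running with four worker processes (the directory and file names here are placeholders):

$ expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log full=y parallel=4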
Parallel Query Process:
- Used when Data Pump chooses the External Table API as the data access method for loading and unloading data.
- The worker process that uses the External Table API creates multiple parallel query processes for data movement, with the worker process acting as the query coordinator.
Shadow Process:
- Created when a client logs in to the Oracle server.
- The shadow process creates the job, which primarily consists of creating the Master Table, creating the queues in Advanced Queuing (AQ) used for communication among the various processes, and creating the master control process.
- Once a job is running, the shadow process' main task is to check the job status for the client process. If the client process detaches, the shadow process goes away; however, the remaining Data Pump job processes stay active.
- Another client process can create a new shadow process and attach to the existing job, as shown below.
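For instance, from a new session you can re-attach to the job and resume control of it. A minimal sketch, where TAB_EXPORT is a hypothetical job name:

$ expdp system attach=TAB_EXPORT
Export> STATUS
Export> CONTINUE_CLIENT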
Tuesday 9 August 2016
Datapump Scenarios - Frequently Asked
Q. How do you split the Datapump dump files into multiple files, across multiple directories?

The PARALLEL parameter is used to improve the speed of the export. It is even more effective when you also split the dump files across filesystems with the DUMPFILE parameter. Create two or three directories on different filesystems and combine them in the command:

$ expdp / dumpfile=dir1:test_1.dmp,dir1:test_2.dmp,dir2:test_3.dmp,dir3:test_4.dmp logfile=dir1:test.log full=y parallel=4
Q. How do you limit the size of a dump file in Datapump?

The FILESIZE parameter is used to limit the dump file size. For example, to cap each dump file at 5 GB, issue the command below:

$ expdp / directory=dir1 dumpfile=test_%U.dmp logfile=test.log filesize=5120m full=y

where %U automatically assigns a two-digit number from 01 to 99.
Q. How do you import a dump file when the database versions are not the same?

The VERSION parameter is used at export time when you want to create a dump file that can be imported into a database of a lower version than the source.

Example: if your source DB is 11g and your target DB is 10g, you can't import a dump file taken with the 11g expdp utility into the 10g DB. If you don't use the VERSION parameter, you will see an error like the one below:

ORA-39142: incompatible version number 3.1 in dump file "/u01/datapump/test.dmp"

VERSION={COMPATIBLE | LATEST | version_string}

$ expdp / directory=dir1 dumpfile=test_1.dmp logfile=test.log VERSION=10.2.0
Q. How do you improve the performance of Datapump while importing?

1) Split a full-database import into two passes: tables first, indexes next. Use the EXCLUDE parameter effectively to improve the speed of the import:

EXCLUDE=INDEX,STATISTICS

This skips the indexes and statistics and imports only the tables, which improves performance; the indexes can be handled in a second pass, as in the sketch below.

2) Set the init.ora parameter CURSOR_SHARING to EXACT, which has a good effect on import performance.
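A minimal sketch of the two-pass approach (directory, dump file, and log names are placeholders): import everything except indexes and statistics first, then extract the index DDL into a script with SQLFILE and run it separately. Note that SQLFILE writes the script into the Datapump directory (dir1 here), so run it from that path.

$ impdp / directory=dir1 dumpfile=test_%U.dmp logfile=imp_tables.log full=y exclude=index,statistics
$ impdp / directory=dir1 dumpfile=test_%U.dmp sqlfile=create_indexes.sql include=index
$ sqlplus / as sysdba @create_indexes.sql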
Q. How do you find the contents of a dump file?

If you are not sure which schemas or tablespaces are present inside a dump file, you can easily check the dump file for that information. Use the commands below:

$ grep -a "CREATE USER" test_1.dmp
$ grep -a "CREATE TABLESPACE" test_1.dmp

These commands print all the CREATE USER and CREATE TABLESPACE statements, which is useful in many cases.

Note: You can also get the INDEX and TABLE creation DDL from the dump file in the same way; see the examples below.
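The same technique works for the other DDL (the trailing space in "CREATE TABLE " avoids also matching CREATE TABLESPACE):

$ grep -a "CREATE INDEX" test_1.dmp
$ grep -a "CREATE TABLE " test_1.dmp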
I'll keep updating this post whenever I come across things that can help improve Datapump performance.