What is the difference between an Oracle SID and an Oracle service name?
The Oracle SID is the unique name that identifies your instance/database, whereas the service name is a TNS alias that can be the same as or different from the SID.
What are the steps to install Oracle on a Linux system? List two kernel parameters that affect Oracle installation.
First set up the disks and kernel parameters, then create the oracle user and the DBA group, and finally run the installer to start the installation process. SHMMAX and SHMMNI are two kernel parameters that must be set before the installation process.
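As a sketch, these parameters are typically set in /etc/sysctl.conf; the values below are illustrative only, not recommendations (SHMMAX should be sized to hold your planned SGA):

```
# /etc/sysctl.conf -- illustrative values only
kernel.shmmax = 4294967295    # largest single shared memory segment, in bytes
kernel.shmmni = 4096          # maximum number of shared memory segments
```

Run `sysctl -p` as root to apply the settings before starting the installer.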
What are bind
variables?
With bind variables in SQL, Oracle can cache a query once in the SQL cache area and reuse the parsed statement. This avoids a hard parse each time, which saves on the various locking and latching resources used to check object existence and so on.
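A minimal SQL*Plus sketch of the idea (the EMP table and DEPTNO column are illustrative, not from the original text):

```sql
-- Declare and set a bind variable in SQL*Plus
VARIABLE v_deptno NUMBER
EXEC :v_deptno := 10

-- Both executions below share one cursor in the library cache
SELECT ename FROM emp WHERE deptno = :v_deptno;

EXEC :v_deptno := 20
SELECT ename FROM emp WHERE deptno = :v_deptno;  -- soft parse, plan reused
```

With literals (10, 20) instead of the bind variable, each statement would be hard parsed separately.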
What is the
difference between data block/extent/segment?
A data block is
the smallest unit of logical storage for a database object. As objects grow
they take chunks of additional storage that are composed of contiguous data
blocks. These groupings of contiguous data blocks are called extents. All the
extents that an object takes when grouped together are considered the segment
of the database object.
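The block/extent/segment hierarchy can be inspected from the data dictionary; a short sketch (the segment name EMP is illustrative):

```sql
-- One row per segment: total blocks and number of extents
SELECT segment_name, segment_type, blocks, extents
FROM   user_segments
WHERE  segment_name = 'EMP';

-- One row per extent: each extent is a run of contiguous blocks
SELECT extent_id, block_id, blocks
FROM   user_extents
WHERE  segment_name = 'EMP';
```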
What is the
difference between PGA and UGA?
When you are running a dedicated server, process information is stored inside the process global area (PGA); when you are using a shared server, the process information is stored inside the user global area (UGA).
What is SGA?
Define structure of shared pool component of SGA?
The system global area (SGA) is a group of shared memory areas dedicated to an Oracle instance. All Oracle processes use the SGA to hold information. The SGA is used to store incoming data and the internal control information needed by the database. You can control the SGA memory by setting parameters such as db_cache_size, shared_pool_size and log_buffer.
The shared pool portion contains three major areas: the library cache (parsed SQL statements, cursor information and execution plans), the data dictionary cache (user account information, privileges, and segment and extent information), and buffers for parallel execution messages and control structures.
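The SGA component sizes can be checked from SQL*Plus, for example:

```sql
-- Summary of the SGA
SHOW SGA

-- Per-component breakdown
SELECT name, bytes FROM v$sgainfo ORDER BY bytes DESC;
```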
What is the
difference between SMON and PMON processes?
SMON (System Monitor) performs recovery after instance failure, monitors temporary segments and extents, cleans up temporary segments, and coalesces free space. It is a mandatory database process and starts by default.
PMON (Process Monitor) cleans up the resources of failed processes. In the shared server architecture it monitors and restarts any failed dispatcher or server process. It is also a mandatory database process and starts by default.
What is a system
change number (SCN)?
The SCN is a value that is incremented whenever database changes are made; it uniquely identifies a committed version of the database at a point in time.
What is the main
purpose of ‘CHECKPOINT’ in oracle database? How do you automatically force the
oracle to perform a checkpoint?
A checkpoint is a database event which synchronizes the database blocks in memory with the datafiles on disk. It has two main purposes: to establish data consistency and to enable faster database recovery.
The following parameters are used by the DBA to adjust how frequently checkpoints should occur in the database:
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
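Besides the timeout/interval parameters, a checkpoint can also be forced on demand:

```sql
-- Force an immediate checkpoint
ALTER SYSTEM CHECKPOINT;
```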
What happens
when we fire SQL statement in Oracle?
First Oracle checks the syntax and semantics of the statement in the library cache; if no matching cursor exists, it creates an execution plan.
If the data is already in the buffer cache, the server process returns it directly to the client.
If not, the server process fetches the data from the datafiles into the database buffer cache and then sends it to the client.
What is the use
of large pool, which case you need to set the large pool?
You need to set the large pool if you are using MTS (multi-threaded server) or RMAN backups. The large pool prevents RMAN and MTS from competing with other subsystems for the same memory. RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, Oracle allocates backup buffers from local process memory rather than shared memory, and the large pool is not used.
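A sketch of configuring the large pool (the 64M figure is illustrative only):

```sql
-- Reserve a large pool so RMAN/MTS do not compete with the shared pool
ALTER SYSTEM SET LARGE_POOL_SIZE = 64M SCOPE = BOTH;

-- Verify
SHOW PARAMETER large_pool_size
```

Note that when SGA_TARGET is set (ASMM enabled), this value acts as a minimum and Oracle sizes the pool automatically.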
What does
database do during the mounting process?
While mounting the database, Oracle reads the control file, which is used to verify the physical database files during the sanity check. The background processes are started before the database is mounted, at instance startup.
What are logfile
states?
“CURRENT” means that redo records are currently being written to that group; it remains current until a log switch occurs. Only one redo group can be current at a time.
A redo group that still contains the redo of a dirty buffer is in the “ACTIVE” state. The log files record the changes made to data blocks; those blocks are modified in the buffer cache (dirty blocks) and must still be written to disk (from RAM to permanent media).
When a redo log group contains no redo records belonging to a dirty buffer, it is in the “INACTIVE” state. Inactive redo logs can be overwritten.
There is one more state, “UNUSED”: when you create a new redo log group, its log file is initially empty and unused. Later it can move into any of the states above.
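The state of each redo log group can be checked directly:

```sql
-- CURRENT / ACTIVE / INACTIVE / UNUSED per redo log group
SELECT group#, sequence#, members, status
FROM   v$log;
```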
What is log
switch?
The point at which Oracle stops writing to one online redo log file and begins writing to another is called a log switch. You can also force a log switch manually:
ALTER SYSTEM
SWITCH LOGFILE;
How to check
Oracle database version?
SQL> Select *
from v$version;
Explain Oracle
Architecture?
Oracle Instance:
a means to access an Oracle database; it always opens one and only one database and consists of memory structures and background processes.
Oracle server:
a DBMS that provides an open, comprehensive, integrated approach to information management; it consists of an instance and a database.
Oracle database:
a collection of data that is treated as a unit; it consists of datafiles, control files and redo log files (plus the optional parameter file, password file and archived logs).
Instance memory
Structures:
System Global
Area (SGA):
Allocated at instance startup, and is a fundamental component of an Oracle Instance.
SGA Memory
structures:
Includes Shared Pool, Database Buffer Cache, Redo Log Buffer among others.
Shared Pool :
Consists of two key performance-related memory structures: the Library Cache and the Data Dictionary Cache.
Library Cache:
Stores information about the most recently used SQL and PL/SQL statements and enables the sharing of commonly used statements.
Data Dictionary
Cache :
Stores a collection of the most recently used definitions in the database, including information about datafiles, tables, indexes and columns. It improves performance: during the parse phase, the server process looks in the data dictionary cache for the information needed to resolve object names and validate access.
Database Buffer
Cache:
Stores copies of data blocks that have been retrieved from the datafiles; all reads and modifications of data blocks by server processes happen here.
Redo Log Buffer
:
Records all changes made to the database data blocks; its primary purpose is recovery. Redo entries contain the information needed to reconstruct or redo changes.
User process:
Started when a database user requests a connection to the Oracle server. It requests interaction with the Oracle server but does not interact with it directly.
Server process:
Connects to the Oracle instance; started when a user establishes a session.
Fulfills the calls generated by the user process and returns results.
Each server process has its own non-shared PGA when the process is started.
The server process parses and runs SQL statements issued through the application, reads the necessary data blocks from datafiles on disk into the shared database buffer cache of the SGA (if the blocks are not already present), and returns results in a form the application can process.
When the application and Oracle Database operate on the same computer, the user process and its corresponding server process can be combined into a single process to reduce system overhead.
Program Global
Area (PGA):
Memory area used by a single Oracle server process.
Allocated when the server process is started, deallocated when the process is terminated and used by only one process.
Used to process SQL statements and to hold logon and other session information.
Background
processes:
Started when an Oracle Instance is started.
Background processes maintain and enforce the relationships between physical and memory structures.
There are two
types of database processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
– DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn
DBWn writes
when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP
Log Writer
(LGWR) writes:
• At commit
• When the redo log buffer is one-third full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes
System Monitor
(SMON) Responsibilities:
• Instance recovery
– Rolls forward changes in redo logs
– Opens database for user access
– Rolls back uncommitted transactions
• Coalesces free space
• Deallocates temporary segments.
Process Monitor (PMON) Cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers
Checkpoint
(CKPT) Responsible for:
• Signaling DBWn at checkpoints
• Updating datafile headers with checkpoint information
• Updating control files with checkpoint information
Archiver (ARCn)
• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database
Why do you run orainstRoot.sh and root.sh once you finalize the installation?
orainstRoot.sh needs to be run to change the permissions of the oraInventory directory to 770 and its group to dba.
root.sh (run from the ORACLE_HOME location) needs to be run to create the oratab file in /etc/oratab (or /var/opt/oracle/oratab on Solaris) and to copy dbhome, oraenv and coraenv to /usr/local/bin.
orainstRoot.sh
[root@oracle11g ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
root.sh
[root@oracle11g ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script…
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
For an Oracle installation on Unix/Linux, we are prompted to run the script root.sh from the Oracle inventory directory. This script needs to be run only the first time any Oracle product is installed on the server.
It creates additional directories and sets the appropriate ownership and permissions on files for the root user.
Oracle Database 11g new features for DBAs?
1) Automatic
Diagnostic Repository [ADR]
2) Database Replay
3) Automatic Memory Tuning
4) Case sensitive password
5) Virtual columns and indexes
6) Interval Partition and System Partition
7) The Result Cache
8) ADDM RAC Enhancements
9) SQL Plan Management and SQL Plan Baselines
10) SQL Access Advisor & Partition Advisor
11) SQL Query Repair Advisor
12) SQL Performance Analyzer (SPA) New
13) DBMS_STATS Enhancements
14) Total Recall (Flashback Data Archive)
Note: The above are only top new features, there are other features as well introduced in 11g which will be included subsequently
What is the difference between the local inventory and the global inventory? What is oraInventory?
oraInventory is a repository (directory) that records the Oracle software products and their ORACLE_HOME locations on a machine. This inventory is nowadays in XML format (the XML inventory), whereas in the past it was in binary format (the binary inventory).
There are basically two kind of inventories,
One is Local Inventory (also called as Oracle Home Inventory) and other is Global Inventory (also called as Central Inventory).
What is Global
Inventory ?
The global inventory holds information about the Oracle products on a machine. These products can be various Oracle components such as the database, Oracle Application Server, Collaboration Suite, SOA Suite, Forms & Reports or Discoverer Server. The global inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (Solaris). To see the list of Oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory. (Note: if you have multiple global inventories on the machine, check all oraInventory directories.)
You will see an entry like:
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>
What is Local
Inventory ?
The inventory inside each Oracle home is called the local inventory, or oracle_home inventory. This inventory holds information relevant to that Oracle home only.
What is Oracle
Home Inventory?
Oracle home
inventory or local inventory is present inside each Oracle home. It only
contains information relevant to a particular Oracle home. This file is located
in the following location:
$ORACLE_HOME/inventory
It contains the following files and folders:
· Components File
· Home Properties File
· Other Folders
Can I have
multiple Global Inventory on a machine ?
A quite common question is whether you can have multiple global inventories on a machine, and the answer is yes, you can. However, if you are upgrading or applying a patch, first change the inventory pointer oraInst.loc to the respective location. If you follow a single global inventory and you wish to uninstall any software, remove it from the global inventory as well.
What to do if my Global Inventory is corrupted ?
No need to worry if your global inventory is corrupted; you can recreate the global inventory on the machine using the Universal Installer and attach the already installed Oracle homes with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name" CLUSTER_NODES="{}"
What is the Result Cache?
The result cache, new in 11g, is an area of the shared pool that stores the results of SQL queries and PL/SQL function calls, so that repeated executions with the same inputs can return the cached result instead of recomputing it.
11g Background Processes?
The following
process are added in 11g as new background processes.
1 dbrm DB resource manager
2 dia0 Diagnosability process
3 fbda Flashback data archiver process
4 vktm Virtual Timekeeper
5 w000 Space Management Co-ordination process
6 smc0 Space Manager process
NOTE : The above six are mandatory processes.
But 11g has 56 new processes in all, which can be queried from the dynamic performance views.
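The original text omits the query itself; one way to list the background processes (an assumption, not necessarily the query the author had in mind) is:

```sql
-- Background processes known to this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'   -- restrict to processes actually running
ORDER  BY name;
```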
If any one of
these 6 mandatory background processes is killed/not running, the instance will
be aborted ?
Background processes are started automatically when the instance is started.
The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON and RECO. All other processes are optional and are invoked only if the corresponding feature is activated.
If any one of these six mandatory background processes is killed or stops running, the instance will abort.
Any issues related to background processes should be monitored and analyzed from the generated trace files and the alert log.
What is
SGA_TARGET and SGA_MAX_SIZE ?
SGA_MAX_SIZE is the largest amount of memory that will be available for the SGA in the instance, and it is allocated from memory at startup. You do not have to use it all, but whatever you set aside and do not use is potentially wasted. It is not a dynamic parameter; essentially it gives the Oracle instance room to grow.
SGA_TARGET is the actual memory in use by the current SGA. This parameter is dynamic and can be increased up to the value of SGA_MAX_SIZE.
Both parameters are used to change the SGA size: SGA_MAX_SIZE sets the maximum value for SGA_TARGET, while SGA_TARGET (a 10g feature) changes the SGA size dynamically and specifies the total amount of SGA memory available to an instance.
This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size are sized automatically.
SGA_MAX_SIZE
& SGA_TARGET
SGA_MAX_SIZE
sets the overall amount of memory the SGA can consume but is not dynamic.
The SGA_MAX_SIZE parameter is the maximum allowable size for the SGA memory area parameters. If SGA_TARGET is set to some value, Automatic Shared Memory Management (ASMM) is enabled, and the SGA_TARGET value can be adjusted up to, but not beyond, the SGA_MAX_SIZE value.
For example, if SGA_MAX_SIZE=4GB and SGA_TARGET=2GB, you can later resize SGA_TARGET up to 4GB, but you cannot resize it to more than 4GB.
It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier releases in which memory for the internal and fixed SGA was added to the sum of the configured SGA memory parameters. Thus SGA_TARGET gives you precise control over the size of the shared memory region allocated by the database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, the latter is bumped up to accommodate SGA_TARGET.
Do not dynamically set or unset the SGA_TARGET parameter; this should be done only at startup.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.
SGA_TARGET provides the following:
§ Single parameter for total SGA size
§ Automatically sizes SGA components
§ Memory is transferred to where most needed
§ Uses workload information
§ Uses internal advisory predictions
§ STATISTICS_LEVEL must be set to TYPICAL
§ SGA_TARGET is dynamic
§ Can be increased till SGA_MAX_SIZE
§ Can be reduced till some component reaches minimum size
§ Change in value of SGA_TARGET affects only automatically sized components
If I keep SGA_TARGET = 0, then what will happen?
Setting SGA_TARGET = 0 disables automatic SGA tuning (ASMM); the individual SGA components must then be sized manually.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing. Its default value is 0 (SGA auto tuning is disabled).
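A short sketch of inspecting and adjusting these parameters (the 2G value is illustrative and must not exceed SGA_MAX_SIZE):

```sql
SHOW PARAMETER sga_max_size
SHOW PARAMETER sga_target

-- Resize the automatically managed SGA, up to SGA_MAX_SIZE
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;
```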
What happens
when you run ALTER DATABASE OPEN RESETLOGS ?
The current online redo logs are archived, the log sequence number is reset to 1, a new database incarnation is created, and the online redo logs are given a new time stamp and SCN.
The reason to open the database with RESETLOGS is that after an incomplete recovery, the datafiles and control files do not come to the same point as the redo log files. As long as the database is not consistent across all three (data, redo and control files), you cannot open it. The RESETLOGS clause resets the log sequence numbers in the log files and restarts them from 1, enabling you to open the database, at the cost of losing whatever was in the redo log files.
In what scenarios is OPEN RESETLOGS required?
An ALTER DATABASE OPEN RESETLOGS statement is required:
1. after incomplete recovery (point-in-time recovery),
2. after recovery with a backup control file, or
3. after recovery with a control file recreated with the RESETLOGS option.
Whenever you
perform incomplete recovery or recovery with a backup control file, you must
reset the online logs when you open the database. The new version of the reset
database is called a new incarnation.
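As a sketch, an incomplete (point-in-time) recovery followed by OPEN RESETLOGS might look like this in RMAN; the UNTIL TIME value is a placeholder, not from the source:

```sql
-- Hypothetical point-in-time recovery; the timestamp is an example only
RUN {
  SET UNTIL TIME "TO_DATE('2015-06-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
-- Opening with RESETLOGS creates a new incarnation
ALTER DATABASE OPEN RESETLOGS;
```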
Difference
between RESETLOGS and NORESETLOGS ?
After recover
database operation, open the database with:
ALTER DATABASE
OPEN [NO]RESETLOGS
NORESETLOGS:
The NORESETLOGS option does not clear the redo log files during startup, and the online redo logs are used for recovery. It is only used in the scenario where MANUAL RECOVERY is started, CANCEL is used, and then RECOVER DATABASE is started.
RESETLOGS:
CAUTION: Never use RESETLOGS unless necessary.
Once RESETLOGS is used, the existing redo log contents cannot be used again and any completed transactions in those redo logs are lost!
Before using the RESETLOGS option take an offline backup of the database.
What is SCN
(System Change Number) ?
The system
change number (SCN) is an ever-increasing value that uniquely identifies a
committed version of the database at a point in time. Every time a user commits
a transaction Oracle records a new SCN in redo logs.
Oracle uses SCNs in control files, datafile headers, and redo records. Every redo log file has a log sequence number as well as a low and a high SCN. The low SCN is the lowest SCN recorded in the log file, while the high SCN is the highest SCN in the log file.
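The current SCN can be queried from SQL*Plus, for example:

```sql
-- Both are standard ways to read the current SCN (10g+)
SELECT current_scn FROM v$database;
SELECT dbms_flashback.get_system_change_number FROM dual;
```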
What is Database
Incarnation ?
Database
incarnation is effectively a new “version” of the database that happens when
you reset the online redo logs using “alter database open resetlogs;”.
Database incarnations fall into the following categories: current, parent, ancestor, and sibling.
i) Current Incarnation : The database incarnation in which the database is currently generating redo.
ii) Parent Incarnation : The database incarnation from which the current incarnation branched following an OPEN RESETLOGS operation.
iii) Ancestor Incarnation : The parent of the parent incarnation is an ancestor incarnation. Any parent of an ancestor incarnation is also an ancestor incarnation.
iv) Sibling Incarnation : Two incarnations that share a common ancestor are sibling incarnations if neither one is an ancestor of the other.
How to view
Database Incarnation history of Database ?
Using SQL>
select * from v$database_incarnation;
Using RMAN>LIST INCARNATION;
However, you can use the RESET DATABASE TO INCARNATION command to specify that SCNs are to be interpreted in the frame of reference of another incarnation.
For example, if my current database incarnation is 3 and I run FLASHBACK DATABASE TO SCN 3000, then SCN 3000 will be searched for in the current incarnation, which is 3. However, if I want to get back to SCN 3000 of incarnation 2, I have to use:
RMAN> RESET
DATABASE TO INCARNATION 2;
RMAN> RECOVER DATABASE TO SCN 3000;
ORACLE – BACKUP AND RECOVERY
How would you decide your backup strategy and timing for backup?
A backup strategy depends purely on your organization's business needs.
If no downtime is acceptable, the database must run in ARCHIVELOG mode and you have to take frequent (daily) backups.
If sufficient downtime is available and loss of data would not affect your business, you can run the database in NOARCHIVELOG mode and take infrequent (weekly or monthly) backups.
In most organizations where no downtime is acceptable, you need frequent inconsistent (daily) backups, multiplexed online redo log files (multiple copies in different locations), the database running in ARCHIVELOG mode, and optionally Data Guard for an extra bit of protection.
What is
difference between Restoring and Recovery of database?
Restoring means
copying the database objects from the backup media to the destination where
they are actually required, whereas recovery means applying redo to the
restored files (rolling forward) in order to bring the database into a
consistent state.
What is the
difference between complete and incomplete recovery?
An incomplete
database recovery is a recovery that does not reach the point of failure.
The recovery can be to a point in time, a particular SCN, or a particular
archive log, typically in the case of a missing archive log or redo log
failure, whereas a complete recovery recovers to the point of failure, which
is possible when all archive log backups are available.
What is the
benefit of running the DB in archivelog mode over no archivelog mode?
When a database
is in NOARCHIVELOG mode, whenever a log switch happens the redo log is
overwritten and its information is lost. To avoid this, redo logs must be
archived, which is achieved by configuring the database in ARCHIVELOG mode.
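Enabling ARCHIVELOG mode follows the usual sequence (a sketch; run as SYSDBA):

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST   -- verify "Database log mode: Archive Mode"
```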
If an oracle
database is crashed?
How would you recover that transaction which is not in backup?
If the database
is in ARCHIVELOG mode, we can recover that transaction; otherwise we cannot
recover a transaction that is not in the backup.
What is the
difference between HOTBACKUP and RMAN backup?
For a hot backup we
have to put the database in begin-backup mode and then take the backup, whereas
RMAN does not put the database in begin-backup mode. RMAN is faster, can perform
incremental (changed-blocks-only) backups, and does not place tablespaces in hot-backup mode.
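A minimal user-managed hot backup sketch; from 10g onward the whole database can be placed in backup mode at once, and the datafile path shown is hypothetical:

```sql
ALTER DATABASE BEGIN BACKUP;
-- copy the datafiles at the OS level, e.g.:
-- !cp /u01/oradata/ORCL/users01.dbf /backup/
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;  -- archive the redo generated during the backup
```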
Can we use Same
target database as Catalog database?
No, the recovery
catalog should not reside in the target database (the database to be backed up),
because if the target is lost the catalog is lost with it, and the catalog
cannot be queried while the target database is only mounted for recovery.
Incremental
backup levels:
Level 0 – full backup that can be used for subsequent incrementals
RMAN> backup incremental level 0 database;
Differential Level 1 – only the blocks that have changed since the last backup (whether it is level 0 or level 1)
RMAN> backup incremental level 1 differential database;
Cumulative Level 1 – all changes since the last level 0 incremental backup
RMAN> backup incremental level 1 cumulative database;
A full backup cannot be used for a cumulative level 1 backup.
A cumulative level 1 backup must be done on top of an incremental level 0 backup.
Why RMAN
incremental backup fails even though full backup exists?
A level 0 backup
is physically identical to a full backup. The only difference is
that the level 0 backup is recorded as an incremental backup in the RMAN
repository, so it can be used as the parent for a level 1 backup. If you have
taken the full backup using the command BACKUP DATABASE (without LEVEL 0), it
simply cannot be considered a parent backup from which you can take a level 1
backup.
Can we perform
RMAN level 1 backup without level 0?
If no level 0 is
available, then the behavior depends upon the compatibility mode setting
(oracle version).
If the compatibility mode is less than 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup.
If the compatibility is 10.0.0 or greater, RMAN copies all block changes since the file was created and stores the result as a level 1 backup.
How to put
Manual/User managed backup in RMAN?
When using a recovery catalog, you can register a user-managed backup by using the CATALOG
command:
RMAN> CATALOG START WITH ‘/oracle/backup.ctl’;
How to check
RMAN version in oracle?
If you want to
check the RMAN catalog version, use the query below from SQL*Plus:
SQL> Select * from rcver;
What happens
actually in case of instance Recovery?
When an Oracle
instance fails, Oracle performs instance recovery when the associated
database is restarted. Instance recovery occurs in two steps:
Cache recovery: Changes made to a database are recorded in the database buffer cache as well as in the redo log files simultaneously. When enough data has accumulated in the database buffer cache, it is written to the data files. If an Oracle instance fails before this data is written to the data files, Oracle uses the online redo log files to recover the lost changes when the associated database is restarted. This process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before-image of the modified data is stored in an undo segment, which is used to restore the original values if the transaction is rolled back. At the time of an instance failure, the database may have uncommitted transactions, and it is possible that changes made by these uncommitted transactions were saved to the data files. To maintain read consistency, Oracle rolls back all uncommitted transactions when the associated database is restarted, using the undo data stored in undo segments. This process is called transaction recovery.
ORACLE
PATCHING,CLONING & UPGRADE
When you moved
oracle binary files from one ORACLE_HOME server to another server then which
oracle utility will be used to make this new ORACLE_HOME usable?
Relink all.
In which months
oracle release CPU patches?
JAN, APR, JUL,
OCT
When we applying
single Patch, can you use opatch utility?
Yes, you can use OPatch in the case of a single patch. The only type of patch that cannot be applied with OPatch is a patchset.
Is it possible
to apply OPATCH without downtime?
To apply a patch,
your database and listener must be down, because OPatch
updates your current ORACLE_HOME. So, to answer the question directly: with a
single instance, zero-downtime patching is not possible, but in RAC you can
apply a patch in a rolling fashion without downtime, as there are separate
ORACLE_HOMEs and separate instances (one instance running on each
ORACLE_HOME).
You have
collection of patch (nearly 100 patches) or patchset. How can you apply only
one patch from it?
With napply
itself (by providing the patch location and a specific patch ID) you can apply
only one patch from a collection of extracted patches. For more information,
check opatch util napply -help; it will give you a clear picture.
For Example:
opatch util napply -id 9 -skip_subset -skip_duplicate
This will apply only the patch id 9 from the patch location and will skip duplicate and subset of patch installed in your ORACLE_HOME.
If both CPU and
PSU are available for given version which one, you will prefer to apply?
Once you apply a PSU, the recommended way is to
apply only subsequent PSUs. In fact, there is no need to apply a CPU on top of
a PSU, as the PSU contains the CPU (applying a CPU over a PSU is treated as
rolling back the PSU and in fact requires more effort). So if you have not yet
decided on or applied either patch type, I suggest you use PSU
patches. For more details refer to: Oracle Products [ID 1430923.1], ID 1446582.1
PSU is superset
of CPU then why someone choose to apply a CPU rather than a PSU?
CPUs are smaller
and more focused than PSUs and deal mostly with security issues. A CPU is a
more conservative approach and can cause less trouble than a PSU, as
it changes less code. Thus, anyone who is concerned only with
security fixes and not functionality fixes may find a CPU a good approach.
How to Download
Patches, Patchset or Opatch from metalink?
If you are using latest support.oracle.com then
after login to metalink Dashboard
– Click on “Patches & Updates” tab
– On the left sidebar click on “Latest Patchsets” under “Oracle Server/Tools”.
– A new window will appear.
– Just mouse over your product on the “Latest Oracle Server/Tools Patchsets” page.
– The corresponding Oracle platform versions will appear. Simply choose the patchset version and click on it.
– You will be taken to the download page. From the download page you can also change your platform and patchset version.
REFERENCES:
Oracle®
Universal Installer and OPatch User’s Guide
11g Release 2 (11.2) for Windows and UNIX
Part Number E12255-11
What is the
recent Patch applied?
What is OPatch?
How to Apply
Opatch in Oracle?
1. You MUST read the README.txt file
included with the patch; look for any prerequisite steps, post-installation
steps, or DB-related changes. Also, make sure that you have the correct OPatch
version required by this patch.
2. Make sure you have a good backup of the database.
3. Make a note of all invalid objects in the database prior to the patch.
4. Shut down all the Oracle processes running from that Oracle Home, including the listener, database instance, management agent, etc.
5. You MUST back up your Oracle Home and inventory:
tar cvf - $ORACLE_HOME $ORACLE_HOME/oraInventory | gzip > Backup_Software_Version.tar.gz
6. Unzip the patch in $ORACLE_HOME/patches.
7. cd to the patch directory and run opatch apply to apply the patch.
8. Read the output/log file to make sure there were no errors.
Patching Oracle
Software with OPatch ?
opatch napply
-skip_subset -skip_duplicate
OPatch skips duplicate patches and subset patches (patches in the given location that are subsets of patches already installed in the Oracle home).
What is OPatch
in Oracle?
OPATCH Utility
(Oracle RDBMS Patching)
1. Download the required Patch from Metalink based on OS
Bit Version and DB Version.
2. Bring down the database before applying the patch.
3. Unzip and apply the patch using the “opatch apply” command. On successful application you will see the message “OPatch succeeded.”; cross-check that the patch is applied using the “opatch lsinventory” command.
4. Each patch has a unique ID; the command to roll back a patch is “opatch rollback -id ”.
5. The patch file format will be like “p__.zip”.
6. We can check the OPatch version using the “opatch -version” command.
7. It generally takes a couple of minutes to apply a patch.
8. To get latest Opatch version download “patch 6880880 – latest opatch tool”, it contains OPatch directory.
9. Contents of downloaded patches will be like “etc,files directories and a README file”
10. Log file for Opatch utility can be found at $ORACLE_HOME/cfgtoollogs/opatch
11. OPatch also maintains an index of the commands executed with OPatch and the log files associated with it in the history.txt file located in the /cfgtoollogs/opatch directory.
12. Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the Oracle Database software. This means that you do not need to install Oracle Database 11g Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).
13. Direct upgrade to Oracle 10g is only supported if your database is running one of the following releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the database to one of these releases or use a different upgrade option (like export/ import).
14. Direct upgrades to 11g are possible from existing databases with versions 9.2.0.4+, 10.1.0.2+ or 10.2.0.1+. Upgrades from other versions are supported only via intermediate upgrades to a supported upgrade version.
Oracle version
10.2.0.4.0 what does each number refers to?
Oracle version number refers:
10 – Major database release number
2 – Database Maintenance release number
0 – Application server release number
4 – Component Specific release number
0 – Platform specific release number
ORACLE – ASM
What is ASM in Oracle?
Oracle ASM is
Oracle’s volume manager specially designed for Oracle database data. It is
available since Oracle database version 10g and many improvements have been
made in versions 11g release 1 and 2.
ASM offers
support for Oracle RAC clusters without the requirement to install 3rd party
software, such as cluster aware volume managers or filesystems.
ASM is shipped
as part of the database server software (Enterprise and Standard editions) and
does not cost extra money to run.
ASM simplifies
administration of Oracle related files by allowing the administrator to
reference disk groups
rather than individual disks and files, which are managed by ASM.
The ASM
functionality is an extension of the Oracle Managed Files (OMF) functionality
that also includes striping and mirroring to provide balanced and secure
storage. The ASM functionality can be used in combination with existing raw
and cooked file systems, along with OMF and manually managed files.
Advantages of
ASM in Oracle?
Provides automatic load balancing over all the available disks, thus reducing hot spots in the file system
Prevents
fragmentation of disks, so you don’t need to manually relocate data to tune I/O
performance
Adding disks is
straight forward – ASM automatically performs online disk reorganization when
you add or remove storage
Uses redundancy
features available in intelligent storage arrays
The storage
system can store all types of database files
Using disk group
makes configuration easier, as files are placed into disk groups
ASM provides
striping and mirroring (fine and coarse grain – see below)
ASM and non-ASM
oracle files can coexist
Striping—ASM
spreads data evenly across all disks in a disk group to optimize performance
and utilization. This even distribution of database files eliminates the need
for regular monitoring and I/O performance tuning.
For example, if
there are six disks in a disk group, pieces of each ASM file are written to all
six disks. These pieces come in 1 MB chunks known as extents. When a database
file is created, it is striped (divided into extents and distributed) across
the six disks, and allocated disk space on all six disks grows evenly. When
reading the file, file extents are read from all six disks in parallel, greatly
increasing performance.
Mirroring—ASM
can increase availability by optionally mirroring any file. ASM mirrors at the
file level, unlike operating system mirroring, which mirrors at the disk level.
Mirroring means keeping redundant copies, or mirrored copies, of each extent of
the file, to help avoid data loss caused by disk failures. The mirrored copy of
each file extent is always kept on a different disk from the original copy. If
a disk fails, ASM can continue to access affected files by accessing mirrored
copies on the surviving disks in the disk group.
ASM supports
2-way mirroring, where each file extent gets one mirrored copy, and 3-way mirroring,
where each file extent gets two mirrored copies.
Online storage
reconfiguration and dynamic rebalancing—ASM permits you to add or remove disks
from your disk storage system while the database is operating. When you add a
disk, ASM automatically redistributes the data so that it is evenly spread
across all disks in the disk group, including the new disk. This redistribution
is known as rebalancing. It is done in the background and with minimal impact
to database performance. When you request to remove a disk, ASM first
rebalances by evenly relocating all file extents from the disk being removed to
the other disks in the disk group.
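For example, a disk can be added or dropped online and the background rebalance monitored; the disk group, disk path, and disk names below are hypothetical:

```sql
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK5';
ALTER DISKGROUP data DROP DISK DATA_0003;
-- monitor the background rebalance operation
SELECT operation, state, power, est_minutes FROM v$asm_operation;
```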
Managed file
creation and deletion—ASM further reduces administration tasks by enabling
files stored in ASM disk groups to be Oracle-managed files. ASM automatically
assigns filenames when files are created, and automatically deletes files when
they are no longer needed.
What is ASM
instance in Oracle?
The ASM
functionality is controlled by an ASM instance. This is not a full database
instance, just the memory structures and as such is very small and lightweight.
Characteristics
of Oracle ASM instance
1. do not have
controlfile and datafiles, do not have online redo logs
2. do have init.ora and a
passwordfile
3. for connecting remotely, create passwordfile and set following in init.ora
remote_login_passwordfile=exclusive
create a
password file:
$ORACLE_HOME/bin/orapwd file=orapw+ASM1 password=yourpw entries=10
4. An ASM instance cannot be in OPEN status, as there are no datafiles. It can be in MOUNT (although
there is no controlfile) or NOMOUNT status. When in MOUNT status, databases can use the
disk groups; the MOUNT status actually means the disk groups are mounted.
What are ASM
Background Processes in Oracle?
Both an Oracle
ASM instance and an Oracle Database instance are built on the same technology.
Like a database instance, an Oracle ASM instance has memory structures (System
Global Area) and background processes. Besides, Oracle ASM has a minimal
performance impact on a server. Rather than mounting a database, Oracle ASM
instances mount disk groups to make Oracle ASM files available to database
instances.
There are at
least two new background processes added for an ASM instance:
ASM Instance
Background Processes:
ARBx (ASM Rebalance working process): ARBn performs the
actual rebalance data extent movements in an Automatic Storage Management
instance. There can be many of these processes running at a time, named ARB0,
ARB1, and so on. These processes are managed by
the RBAL process. The number of ARBx processes invoked is directly influenced
by the asm_power_limit parameter.
RBAL (Re-balancer): RBAL runs in both database and ASM
instances. In the database instance, it does a global open of ASM disks. In an
ASM instance, it also coordinates rebalance activity for disk groups
controlled by ASM.
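The rebalance power can also be set per operation, overriding asm_power_limit for that rebalance (the disk group name is hypothetical):

```sql
ALTER DISKGROUP data REBALANCE POWER 8;
```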
Database
Instance ASM Background Processes:
In the database instances, there are three background process to support ASM, namely:
ASMB: this
process contacts CSS using the disk group name and acquires the associated ASM
connect string. The connect string is subsequently used to connect to the ASM
instance.
RBAL, which
performs global opens on all disks in the disk group.A global open means that
more than one database instance can be accessing the ASM disks at a time.
O00x: a group
of slave processes, with a numeric sequence starting at 000.
What are the
main components of ASM?
The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.
Failure groups
are defined within a disk group to support the required level of redundancy.
For two-way mirroring you would expect a disk group to contain two failure
groups so individual files are written to two locations.
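A sketch of creating a normal-redundancy (two-way mirrored) disk group with two failure groups; the disk group name and device paths are hypothetical:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/rdsk/c1t1d0s4', '/dev/rdsk/c1t2d0s4'
  FAILGROUP fg2 DISK '/dev/rdsk/c2t1d0s4', '/dev/rdsk/c2t2d0s4';
-- each file extent is mirrored across the two failure groups
```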
What are ASM
instance initialization parameters?
INSTANCE_TYPE –
Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
DB_UNIQUE_NAME –
Specifies a globally unique name for the database. This defaults to +ASM but
must be altered if you intend to run multiple ASM instances.
ASM_POWER_LIMIT –
The maximum power for a rebalancing operation on an ASM instance. The valid
values range from 0 to 11 (0 disables rebalancing), with 1 being the default. The higher the limit, the
more resources are allocated, resulting in faster rebalancing operations. This
value is also used as the default when the POWER clause is omitted from a
rebalance operation.
ASM_DISKGROUPS –
The list of disk groups that should be mounted by an ASM instance during
instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM
configuration changes are automatically reflected in this parameter.
ASM_DISKSTRING –
Specifies a value that can be used to limit the disks considered for discovery.
Altering the default value may improve the speed of disk group mount time and
the speed of adding a disk to a disk group. Changing the parameter to a value
which prevents the discovery of already mounted disks results in an error. The
default value is NULL allowing all suitable disks to be considered.
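Putting these parameters together, a minimal ASM instance parameter file might look like the following sketch (the disk string and disk group names are assumptions for illustration):

```
INSTANCE_TYPE   = ASM
ASM_POWER_LIMIT = 1
ASM_DISKGROUPS  = 'DATA', 'RECOVER'
ASM_DISKSTRING  = '/dev/oracleasm/disks/*'
```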
Advantages of
ASM in Oracle?
Provides
automatic load balancing over all the available disks, thus reducing hot spots
in the file system
Prevents
fragmentation of disks, so you don’t need to manually relocate data to tune I/O
performance
Adding disks is straightforward – ASM automatically performs online disk reorganization when you add or remove storage
Uses redundancy
features available in intelligent storage arrays
The storage
system can store all types of database files
Using disk group
makes configuration easier, as files are placed into disk groups
ASM provides striping and mirroring (fine- and coarse-grained – see below)
ASM and non-ASM
oracle files can coexist
Striping—ASM
spreads data evenly across all disks in a disk group to optimize performance
and utilization. This even distribution of database files eliminates the need
for regular monitoring and I/O performance tuning.
For example, if
there are six disks in a disk group, pieces of each ASM file are written to all
six disks. These pieces come in 1 MB chunks known as extents. When a database
file is created, it is striped (divided into extents and distributed) across
the six disks, and allocated disk space on all six disks grows evenly. When reading
the file, file extents are read from all six disks in parallel, greatly
increasing performance.
Mirroring – ASM
can increase availability by optionally mirroring any file. ASM mirrors at the
file level, unlike operating system mirroring, which mirrors at the disk level.
Mirroring means keeping redundant copies, or mirrored copies, of each extent of
the file, to help avoid data loss caused by disk failures. The mirrored copy of
each file extent is always kept on a different disk from the original copy. If
a disk fails, ASM can continue to access affected files by accessing mirrored
copies on the surviving disks in the disk group.
ASM supports
2-way mirroring, where each file extent gets one mirrored copy, and 3-way
mirroring, where each file extent gets two mirrored copies.
Online storage
reconfiguration and dynamic rebalancing—ASM permits you to add or remove disks
from your disk storage system while the database is operating. When you add a
disk, ASM automatically redistributes the data so that it is evenly spread
across all disks in the disk group, including the new disk. This redistribution
is known as rebalancing. It is done in the background and with minimal impact
to database performance. When you request to remove a disk, ASM first
rebalances by evenly relocating all file extents from the disk being removed to
the other disks in the disk group.
Managed file
creation and deletion—ASM further reduces administration tasks by enabling
files stored in ASM disk groups to be Oracle-managed files. ASM automatically
assigns filenames when files are created, and automatically deletes files when
they are no longer needed.
Why should we
use separate ASM home?
ASM should be
installed separately from the database software in its own ORACLE_HOME
directory. This will allow you the flexibility to patch and upgrade ASM and the
database software independently.
How many ASM
instances should one have?
Several
databases can share a single ASM instance. So, although one can create multiple
ASM instances on a single system, normal configurations should have one and
only one ASM instance per system.
For clustered
systems, create one ASM instance per node (called +ASM1, +ASM2, etc).
How many
diskgroups should one have?
Generally
speaking one should have only one disk group for all database files – and,
optionally a second for recovery files (see FRA).
Data with
different storage characteristics should be stored in different disk groups.
Each disk group can have different redundancy (mirroring) settings (high,
normal and external), different fail-groups, etc. However, it is generally not
necessary to create many disk groups with the same storage characteristics
(i.e. +DATA1, +DATA2, etc. all on the same type of disks).
To get started,
create 2 disk groups – one for data and one for recovery files. Here is an
example:
CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/d1', '/dev/d2', '/dev/d3', ...;
CREATE DISKGROUP recover EXTERNAL REDUNDANCY DISK '/dev/d10', '/dev/d11', '/dev/d12', ...;
Here is an
example how you can enable automatic file management with such a setup:
ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=SPFILE;
ALTER SYSTEM SET db_recovery_file_dest = '+RECOVER' SCOPE=SPFILE;
You may also
decide to introduce additional disk groups – for example, if you decide to put
historic data on low cost disks, or if you want ASM to mirror critical data
across 2 storage cabinets.
What is ASM
Rebalancing?
Rebalancing is the redistribution of file extents across all disks in a disk group whenever disks are added or removed. The rebalancing speed is controlled by the ASM_POWER_LIMIT initialization parameter. Setting it to 0 will disable disk rebalancing.
ALTER DISKGROUP
data REBALANCE POWER 11;
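The progress of a running rebalance can be watched from the ASM instance through the standard V$ASM_OPERATION view, for example:

```sql
-- Monitor a rebalance in flight; EST_MINUTES estimates the remaining time
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;
```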
What happens
when an Oracle ASM diskgroup is created?
When an ASM diskgroup is created, a hierarchical filesystem structure is created.
How does this
filesystem structure appear?
An Oracle ASM diskgroup's filesystem structure is similar to a UNIX filesystem hierarchy or Windows filesystem hierarchy.
Where are the
Oracle ASM files stored?
Oracle ASM files
are stored within the Oracle ASM diskgroup. If we dig into internals, oracle
ASM files are stored within the Oracle ASM filesystem structures.
How are the
Oracle ASM files stored within the Oracle ASM filesystem structure?
Oracle ASM files are stored within the Oracle ASM filesystem structures as objects that RDBMS instances/Oracle database instances access. The RDBMS/Oracle instance treats the Oracle ASM files as standard filesystem files.
What are the
Oracle ASM files that are stored within the Oracle ASM file hierarchy?
Files stored in Oracle ASM diskgroups/Oracle ASM file structures include:
1) Datafiles
2) Controlfiles
3) Server Parameter Files (SPFILE)
4) Redo Log files
What happens
when you create a file/database file in ASM?What commands do you use to create
database files?
Some common commands used for creating database files are:
1) CREATE TABLESPACE
2) ADD DATAFILE
3) ADD LOGFILE
For example:
SQL> CREATE TABLESPACE TS1 DATAFILE '+DATA1' SIZE 10G;
The above command creates a datafile in the DATA1 diskgroup.
How can you
access a databasefile in ASM diskgroup under RDBMS?
Once the ASM
file is created in ASM diskgroup, a filename is generated. This file is now
visible to the user via the standard RDBMS view V$DATAFILE.
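From the database instance, the generated ASM filenames can be listed through V$DATAFILE; the sample name in the comment below is purely illustrative:

```sql
-- List datafiles; ASM-managed files show fully qualified ASM names
SELECT file#, name FROM v$datafile;
-- e.g. +DATA1/orcl/datafile/ts1.259.813609765 (illustrative)
```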
What will be the
syntax of ASM filenames?
ASM filename syntax is as follows:
+diskgroup_name/database_name/database_file_type/tag_name.file_number.incarnation
where,
+diskgroup_name – name of the diskgroup that contains this file
database_name – name of the database that contains this file
database_file_type – can be one of roughly 20 different ASM file types (e.g. DATAFILE, ONLINELOG)
tag_name – corresponds to the tablespace name for datafiles, and to the group number for redo log files
file_number – the file number in the ASM instance, used to correlate filenames in the database instance
incarnation_number – derived from the timestamp; it is used to provide uniqueness
What is an
incarnation number?
An incarnation number is part of the ASM filename syntax. It is derived from the timestamp. Once the file is created, its incarnation number does not change.
What is the use
of an incarnation number in Oracle ASM filename?
The incarnation number distinguishes a new file that has been created using the same file number from another file that has been deleted.
ASM's SPFILE resides inside ASM itself. Its location can be found in a number of ways, for example by looking at the ASM alert log when ASM starts:
Machine: x86_64
Using parameter settings in server-side spfile +DATA/asm/asmparameterfile/registry.253.766260991
System parameters with non-default values:
large_pool_size = 12M
instance_type = "asm"
remote_login_passwordfile = "EXCLUSIVE"
asm_diskgroups = "FLASH"
asm_diskgroups = "DATA"
asm_power_limit = 1
diagnostic_dest = "/opt/app/oracle"
Or use asmcmd's spget command, which shows the spfile location registered in the GPnP profile:
ASMCMD> spget
+DATA/asm/asmparameterfile/registry.253.766260991
ORACLE – RAC
What is RAC?
What is the benefit of RAC over single instance database?
In Real
Application Clusters environments, all nodes concurrently execute transactions
against the same database. Real Application Clusters coordinates each node’s
access to the shared data to provide consistency and integrity.
Benefits:
Improved response time
Improved throughput
High availability
Transparency
What is Oracle
RAC One Node?
Oracle RAC One Node is a single instance running on one node of the cluster while the second node is in cold standby mode. If the instance fails, RAC One Node detects it and restarts the instance on the same node, or relocates the instance to the second node in case of a failure or fault on the first node. The benefit of this feature is that it provides a cold failover solution and automates instance relocation without manual intervention. Oracle introduced this feature with the release of 11gR2 (available with Enterprise Edition).
Real Application
Clusters
Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide a highly scalable and available database solution for all your business applications. Oracle RAC provides the foundation for enterprise grid computing.
Oracle’s Real
Application Clusters (RAC) option supports the transparent deployment of a
single database across a cluster of servers, providing fault tolerance from
hardware failures or planned outages. Oracle RAC running on clusters provides
Oracle’s highest level of capability in terms of availability, scalability, and
low-cost computing.
One database is opened by multiple instances, so the database remains highly available if an instance crashes.
Cluster software: Oracle Clusterware or products like Veritas Volume Manager are required to provide cluster support and allow each node to know which nodes belong to the cluster and are available. With Oracle Clusterware, the cluster also knows which nodes have failed, so they can be ejected from the cluster and errors on those nodes cleared.
Oracle Clusterware has two key components: the Oracle Cluster Registry (OCR) and the Voting Disk.
The cluster registry holds all information about nodes, instances, services and ASM storage (if used); it also contains state information, i.e. whether they are up and available.
The voting disk is used to determine whether a node has failed, i.e. become separated from the majority. If a node is deemed to no longer belong to the majority, it is forcibly rebooted and will, after the reboot, rejoin the surviving cluster nodes.
Advantages of
RAC (Real Application Clusters)
Reliability – if one node fails, the database won't fail
Availability – nodes can be added or replaced without having to shut down the database
Scalability – more nodes can be added to the cluster as the workload increases
What is a
virtual IP address or VIP?
A virtual IP address or VIP is an alternate IP address that the client connections use instead of the standard public IP address. To configure VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.
What is the use
of VIP?
If a node fails,
then the node’s VIP address fails over to another node on which the VIP address
can accept TCP connections but it cannot accept Oracle connections.
Give situations under which VIP address failover happens:
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.
Using a virtual IP avoids the TCP/IP timeout problem, because the Oracle Notification Service maintains communication between each node and the listeners.
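As a quick check on a standard clusterware installation, the VIPs and the other node applications can be inspected with srvctl:

```shell
# Show the status of the node applications (VIP, listener, ONS, GSD) on all nodes
srvctl status nodeapps
```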
What is the
significance of VIP address failover?
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid "connection refused" error; they don't have to wait for TCP connection timeout messages.
What is voting
disk?
The Voting Disk is a file that sits in the shared storage area and must be accessible by all nodes in the cluster. All nodes in the cluster register their heartbeat information in the voting disk to confirm that they are operational. If the heartbeat information of any node is not available in the voting disk, that node will be evicted from the cluster. The CSS (Cluster Synchronization Services) daemon in the clusterware maintains the heartbeat of all nodes to the voting disk. When a node is not able to send its heartbeat to the voting disk, it will reboot itself, thus helping to avoid split-brain syndrome.
For high availability, Oracle recommends that you have an odd number (3 or greater) of voting disks.
The voting disk is a file that resides on shared storage and manages cluster membership. The voting disk reassigns cluster ownership between the nodes in case of failure.
The voting disk files are used by Oracle Clusterware to determine which nodes are currently members of the cluster. They are also used in concert with other cluster components such as CRS to maintain the cluster's integrity.
Oracle Database
11g Release 2 provides the ability to store the voting disks in ASM along with
the OCR. Oracle Clusterware can access the OCR and the voting disks present in
ASM even if the ASM instance is down. As a result CSS can continue to maintain
the Oracle cluster even if the ASM instance has failed.
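The configured voting disks can be listed with crsctl (run from the Grid Infrastructure home; the output shape varies by version):

```shell
# List the voting disks currently registered with Cluster Synchronization Services
crsctl query css votedisk
```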
How many voting
disks are you maintaining ?
By default
Oracle will create 3 voting disk files in ASM.
Oracle expects
that you will configure at least 3 voting disks for redundancy purposes. You
should always configure an odd number of voting disks >= 3. This is because
loss of more than half your voting disks will cause the entire cluster to fail.
You should plan
on allocating 280MB for each voting disk file. For example, if you are using
ASM and external redundancy then you will need to allocate 280MB of disk for
the voting disk. If you are using ASM and normal redundancy you will need
560MB.
Why we need to
keep odd number of voting disks ?
Oracle expects that you will configure at least 3 voting disks for redundancy purposes. You should always configure an odd number of voting disks (3 or greater), because a node must be able to access more than half of the voting disks to remain in the cluster; loss of more than half of your voting disks will cause the entire cluster to fail. An odd number maximizes the number of disk failures you can tolerate for a given disk count.
What are Oracle
RAC software components?
Oracle RAC is composed of two or more database instances. They consist of memory structures and background processes, the same as a single-instance database. Oracle RAC instances use two services, the Global Enqueue Service (GES) and the Global Cache Service (GCS), that enable Cache Fusion. Oracle RAC instances include the following background processes:
ACMS – Atomic Controlfile to Memory Service
GTX0-j – Global Transaction Process
LMON – Global Enqueue Service Monitor
LMD – Global Enqueue Service Daemon
LMS – Global Cache Service Process
LCK0 – Instance Enqueue Process
RMSn – Oracle RAC Management Processes
RSMN – Remote Slave Monitor
What are Oracle
Clusterware processes for 10g ?
Cluster
Synchronization Services (ocssd) — Manages cluster node membership and runs as
the oracle user; failure of this process results in cluster restart.
Cluster Ready Services (crsd) — The CRS process manages cluster resources (a database, an instance, a service, a listener, a virtual IP (VIP) address, an application process, and so on) based on each resource's configuration information stored in the OCR. This includes start, stop, monitor and failover operations. This process runs as the root user.
Event Manager Daemon (evmd) — A background process that publishes the events that CRS creates.
Process Monitor Daemon (OPROCD) — This process monitors the cluster and provides I/O fencing. OPROCD performs its check, stops running, and if the wake-up is beyond the expected time, OPROCD resets the processor and reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux platforms.
RACG (racgmain, racgimon) — Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur.
What are Oracle
database background processes specific to RAC?
LMS—Global Cache
Service Process
LMD—Global Enqueue Service Daemon
LMON—Global Enqueue Service Monitor
LCK0—Instance Enqueue Process
Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances.
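These RAC-specific background processes can be seen from any instance via the cluster-wide GV$ views; note that the exact process names (LMD0, LMS0, ...) vary slightly by version, so the filter below is an assumption for illustration:

```sql
-- Show RAC background processes on every instance in the cluster
SELECT inst_id, name, description
FROM   gv$bgprocess
WHERE  paddr <> HEXTORAW('00')   -- only processes that are actually running
AND    name IN ('LMON', 'LMD0', 'LMS0', 'LCK0')
ORDER  BY inst_id, name;
```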
What is Cache
Fusion?
The transfer of data across instances through the private interconnect is called Cache Fusion. Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block, it is faster to ship the block image from the instance that has it in its SGA than to read it from disk. To enable inter-instance communication, Oracle RAC makes use of the interconnect. The Global Enqueue Service (GES) monitors, and the Instance Enqueue Process manages, Cache Fusion.
What is SCAN?
(11gR2 feature)
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle Database running in a cluster. The benefit is that clients using SCAN do not need to change if you add or remove nodes in the cluster.
SCAN provides a single domain name (via DNS), allowing end-users to address a RAC cluster as if it were a single IP address. SCAN works by replacing a hostname or IP list with virtual IP addresses (VIPs).
Single Client Access Name (SCAN) is meant to provide a single name for all Oracle clients to connect to the cluster database, irrespective of the number of nodes and node locations. Previously, we had to keep adding multiple address records in every client's tnsnames.ora whenever a node was added to or deleted from the cluster.
Single Client Access Name (SCAN) eliminates the need to change TNSNAMES entries when nodes are added to or removed from the cluster. RAC instances register with the SCAN listeners as remote listeners. Oracle recommends assigning 3 addresses to SCAN, which will create 3 SCAN listeners, even if the cluster has dozens of nodes. SCAN is a domain name registered to at least one and up to three IP addresses, either in DNS (Domain Name Service) or GNS (Grid Naming Service). The SCAN must resolve to at least one address on the public network. For high availability and scalability, Oracle recommends configuring the SCAN to resolve to three addresses.
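A client-side tnsnames.ora entry using SCAN might look like this sketch (the host and service names are illustrative assumptions):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```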
What are SCAN
components in a cluster?
1. SCAN Name
2. SCAN IPs (3)
3. SCAN Listeners (3)
What is FAN?
Fast Application Notification (FAN) relates to events concerning instances, services and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.
What is TAF?
TAF (Transparent
Application Failover) is a configuration that allows session fail-over between
different nodes of a RAC database cluster.
Transparent Application Failover (TAF). If a communication link failure occurs after a connection is established, the connection fails over to another active node. Any disrupted transactions are rolled back, and session properties and server-side program variables are lost. In some cases, if the statement executing at the time of the failover is a Select statement, that statement may be automatically re-executed on the new connection with the cursor positioned on the row on which it was positioned prior to the failover.
After an Oracle
RAC node crashes—usually from a hardware failure—all new application
transactions are automatically rerouted to a specified backup node. The
challenge in rerouting is to not lose transactions that were “in flight” at the
exact moment of the crash. One of the requirements of continuous availability
is the ability to restart in-flight application transactions, allowing a failed
node to resume processing on another server without interruption. Oracle’s
answer to application failover is a new Oracle Net mechanism dubbed Transparent
Application Failover. TAF allows the DBA to configure the type and method of
failover for each Oracle Net client.
The TAF architecture offers the ability to restart work at either the statement (SELECT) level or the session level.
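A tnsnames.ora entry with TAF enabled might be sketched as follows (SELECT-level failover via the BASIC method; the names and retry values are illustrative assumptions):

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 5)
      )
    )
  )
```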
What are the
requirements for Oracle Clusterware?
1. External
Shared Disk to store Oracle Cluster ware file (Voting Disk and Oracle Cluster
Registry – OCR)
2. Two network cards on each clusterware node (and three sets of IP addresses) -
Network Card 1 (with IP address set 1) for public network
Network Card 2 (with IP address set 2) for private network (for inter node communication between rac nodes used by clusterware and rac database)
IP address set 3 for Virtual IP (VIP) (used as Virtual IP address for client connection and for connection failover)
3. Storage Option for OCR and Voting Disk – RAW, OCFS2 (Oracle Cluster File System), NFS, …..
Which enable the load balancing of applications in RAC?
Oracle Net Services enable the load balancing of application connections across all of the instances in an Oracle RAC database.
How to find
location of OCR file when CRS is down?
If you need to find the location of the OCR (Oracle Cluster Registry) while CRS is down:
When the CRS is down:
Look into the "ocr.loc" file; the location of this file depends on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc
When CRS is UP:
Set ASM environment or CRS environment then run the below command:
ocrcheck
In a 2-node RAC, how many NICs are we using?
2 network cards
on each clusterware node
Network Card 1 (with IP address set 1) for public network
Network Card 2 (with IP address set 2) for private network (for inter node communication between rac nodes used by clusterware and rac database)
In a 2-node RAC, how many IPs are we using?
6 – 3 sets of IP addresses:
## eth1-Public: 2
## eth0-Private: 2
## VIP: 2
How to find IP’s
information in RAC ?
Look at the /etc/hosts file, for example:
# Do not remove the following line, or various programs
# that requires network functionality will fail.
127.0.0.1 localhost.localdomain localhost
## Public Node names
192.168.10.11 node1-pub.hingu.net node1-pub
192.168.10.22 node2-pub.hingu.net node2-pub
## Private Network (Interconnect)
192.168.0.11 node1-prv node1-prv
192.168.0.22 node2-prv node2-prv
## Private Network (Network Area storage)
192.168.1.11 node1-nas node1-nas
192.168.1.22 node2-nas node2-nas
192.168.1.33 nas-server nas-server
## Virtual IPs
192.168.10.111 node1-vip.hingu.net node1-vip
192.168.10.222 node2-vip.hingu.net node2-vip
What is
difference between RAC ip addresses ?
The public IP address is the normal IP address, typically used by the DBA and SA to manage storage, system and database. Public IP addresses are reserved for the Internet.
The private IP address is used only for internal cluster processing (Cache Fusion), also known as the interconnect. Private IP addresses are reserved for private networks.
The VIP is used by database applications to enable failover when one cluster node fails. The purpose of having a VIP is that client connections can fail over to surviving nodes in case of a failure.
Can an application
developer access the private IP?
No. The private IP
address is used only for internal cluster processing (Cache Fusion), i.e. the
interconnect.
ORACLE – DATAGUARD
What is
Dataguard?
Data Guard
provides a comprehensive set of services that create, maintain, manage, and monitor
one or more standby databases to enable production Oracle databases to survive
disasters and data corruptions. Data Guard maintains these standby databases as
copies of the production database. Data Guard can be used with traditional
backup, restoration, and cluster techniques to provide a high level of data
protection and data availability.
What is DG
Broker?
The DG Broker is
the management and monitoring tool for Data Guard.
The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration.
All management operations can be performed either through OEM, which uses the broker, or through the broker's command-line interface, DGMGRL.
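A minimal DGMGRL session might look like the following sketch; the credentials are placeholders, and the database names chicago/boston follow the parameter examples used later in this document:

```sql
-- Connect to the broker from the OS prompt (credentials are placeholders):
-- $ dgmgrl sys/password@chicago

-- Show the overall configuration, its members, and the protection mode:
SHOW CONFIGURATION;

-- Show details for one database in the configuration:
SHOW DATABASE 'boston';

-- Perform a planned role transition (switchover) to the standby:
SWITCHOVER TO 'boston';
```

Because the broker coordinates both sides, a switchover through DGMGRL is a single command rather than a sequence of ALTER DATABASE statements on each site.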
What is the
difference between Data Guard and a standby database?
Data Guard:
Data Guard is the mechanism/tool used to create and maintain standby databases.
Data Guard is set up between a primary and a standby instance.
Data Guard is only available in Enterprise Edition.
Standby Database:
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis.
Basic standby capability is available in Standard Edition.
What are the
differences between physical and logical standby databases? How would you decide
which one is best suited for your environment?
Physical standby DB:
As the name suggests, it is a physically identical copy (datafiles, schema, and other physical structures) of the primary database.
It is synchronized with the primary database through Redo Apply, which applies the redo received from the primary to the standby DB.
Logical standby DB:
As the name suggests, the logical information is the same as in the production database, but the physical structure can be different.
It is synchronized with the primary database through SQL Apply, which transforms the redo received from the primary database into SQL statements and then executes those SQL statements on the standby DB.
A physical standby DB can be opened read-only and made available to application users (only SELECT is allowed during this period), but redo logs received from the primary database cannot be applied during that time.
A logical standby database does not have this restriction: the database can be opened in normal mode and made available to users while, at the same time, archived logs received from the primary database are applied.
For a large OLTP
transaction database, it is often better to choose a logical standby database.
Explain Active
Dataguard?
11g Active Data
Guard
Oracle Active Data Guard enables read-only access to a physical standby database for queries, sorting, reporting, web-based access, etc., while continuously applying changes received from the production database.
Oracle Active Data Guard also enables the use of fast incremental backups when offloading backups to a standby database, and can provide additional benefits of high availability and disaster protection against planned or unplanned outages at the production site.
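Enabling real-time query on a physical standby is typically just a matter of opening the standby read-only and then restarting redo apply; a sketch, to be run on the standby (assumes the Active Data Guard option is licensed):

```sql
-- On the physical standby: stop redo apply, open read-only,
-- then restart apply so queries run while redo keeps flowing in.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

-- Verify: on 11g this should report READ ONLY WITH APPLY.
SELECT open_mode FROM v$database;
```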
What is a
Snapshot Standby Database?
11g Snapshot
Standby Database
Oracle 11g introduces the Snapshot Standby database, which is essentially an updatable standby database created from a physical standby database.
We can convert a physical standby database to a snapshot standby database, do testing on a database which is a read-write copy of the current primary or production database, and then finally revert it to its earlier state as a physical standby database.
While the snapshot standby database is open in read-write mode, redo is received from the primary database but is not applied.
After converting it back to a physical standby database, it is resynchronized with the primary by applying the accumulated redo data which was shipped from the primary database in the meantime but not applied.
Using a snapshot standby, we are able to do real-time application testing using near-real-time production data. Very often we are required to clone production for the purpose of testing; a snapshot standby database meets the same requirement while sparing the effort, time, resources, and disk space.
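The conversion described above comes down to a few statements on the standby; a sketch:

```sql
-- Convert a physical standby to a snapshot standby (run on the standby):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- ... perform read-write testing ...

-- Revert to a physical standby: local changes are discarded and the
-- redo accumulated from the primary is then applied.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```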
Snapshot Standby
Database (UPDATEABLE SNAPSHOT FOR TESTING)
A snapshot standby database is a fully updatable standby database that is created by converting a physical standby database into a snapshot standby database.
Like a physical
or logical standby database, a snapshot standby database receives and archives
redo data from a primary database. Unlike a physical or logical standby
database, a snapshot standby database does not apply the redo data that it
receives. The redo data received by a snapshot standby database is not applied
until the snapshot standby is converted back into a physical standby database,
after first discarding any local updates made to the snapshot standby database.
What is the
Default mode will the Standby will be, either SYNC or ASYNC?
ASYNC (the default protection mode is maximum performance, which uses asynchronous redo transport).
Dataguard
Architecture?
Data Guard
Configurations:
A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other.
Dataguard
Architecture
The Oracle 9i Data Guard architecture incorporates the following items:
• Primary
Database – A production database that is used to create standby databases. The
archive logs from the primary database are transferred and applied to standby
databases. Each standby can only be associated with a single primary database,
but a single primary database can be associated with multiple standby
databases.
• Standby Database – A replica of the primary database.
• Log Transport Services – Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.
• Network Configuration – The primary database is connected to one or more standby databases using Oracle Net.
• Log Apply Services – Apply the archived redo logs to the standby database. The Managed Recovery Process (MRP) actually does the work of maintaining and applying the archived redo logs.
• Role Management Services – Control the changing of database roles from primary to standby. The services include switchover, switchback and failover.
• Data Guard Broker – Controls the creation and monitoring of Data Guard. It comes with a GUI and command line interface.
Primary
Database:
A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.
Standby
Database:
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.
The types of standby databases are as follows:
Physical standby
database:
Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database, through Redo Apply, which recovers the redo data received from the primary database and applies the redo to the physical standby database.
Logical standby
database:
Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
What are the
services required on the primary and standby database ?
The services
required on the primary database are:
• Log Writer Process (LGWR) – Collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to standby databases.
• Archiver Process (ARCn) – One or more archiver processes make copies of online redo logs either locally or remotely for standby databases.
• Fetch Archive Log (FAL) Server – Services requests for archived redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.
The services required on the standby database are:
• Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.
• Remote File Server (RFS) – Receives archived and/or standby redo logs from the primary database.
• Archiver (ARCn) Processes – Archives the standby redo logs applied by the managed recovery process (MRP).
• Managed Recovery Process (MRP) – Applies archive redo log information to the standby database.
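These standby-side processes can be observed directly; a sketch, run as a privileged user on the standby:

```sql
-- Show which Data Guard processes are active on the standby and
-- which log thread/sequence each one is working on.
SELECT process, status, thread#, sequence#, block#
FROM v$managed_standby;

-- An MRP0 row with status APPLYING_LOG shows managed recovery applying redo;
-- RFS rows show redo being received from the primary;
-- ARCH rows show standby-side archiving.
```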
What is RTS
(Redo Transport Services) in Dataguard?
It controls the
automated transfer of redo data from the production database to one or more
archival destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the
primary database or another standby database.
What are the
Protection Modes in Dataguard?
Data Guard
Protection Modes
This section describes the Data Guard protection modes.
In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap.
Maximum
Availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Maximum
Performance
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.
This is the default protection mode.
Maximum
Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.
Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.
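The protection mode is changed on the primary with a single ALTER DATABASE statement; a sketch (note that maximum availability and maximum protection also require a SYNC redo transport destination and standby redo logs, which are not shown here):

```sql
-- Check the current mode and the level actually being enforced:
SELECT protection_mode, protection_level FROM v$database;

-- Switch modes on the primary (SYNC transport must already be configured;
-- maximum protection requires the database to be in MOUNT state on older releases):
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- or: ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
-- or: ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
```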
How to delay the
application of logs to a physical standby?
A standby database automatically applies redo logs when they arrive from the primary database. But in some cases, we want to create a time lag between the archiving of a redo log at the primary site, and the application of the log at the standby site.
Modify the
LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a
delay for the standby database.
Example, for a
60-minute delay:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc DELAY=60';
The DELAY attribute is expressed in minutes.
The archived redo logs are still automatically copied from the primary site to the standby site, but the logs are not immediately applied to the standby database. The logs are applied when the specified time interval expires.
Steps to create
Physical Standby database?
1. Take a full hot backup of the primary database.
2. Create a standby control file.
3. Transfer the full backup, init.ora, and standby control file to the standby node.
4. Modify the init.ora file on the standby node.
5. Restore the database.
6. Recover the standby database.
(Alternatively, RMAN DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER can also be used.)
7. Set up the FAL_CLIENT and FAL_SERVER parameters on both sides.
8. Put the standby database in managed recovery mode.
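The key SQL statements behind these steps can be sketched as follows; the control file path is a placeholder, and the FAL values are illustrative TNS aliases:

```sql
-- Step 2, on the primary: create a standby control file (path is a placeholder):
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';

-- Step 7, in each side's init.ora (illustrative TNS aliases):
--   FAL_SERVER=primary_tns
--   FAL_CLIENT=standby_tns

-- Step 8, on the standby: mount as standby and start managed recovery:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```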
What are the
DATAGUARD PARAMETERS in Oracle?
Set Primary
Database Initialization Parameters
On the primary
database, you define initialization parameters that control redo transport
services while the database is in the primary role. There are additional
parameters you need to add that control the receipt of the redo data and log
apply services when the primary database is transitioned to the standby role.
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=boston LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
Primary Database: Standby Role Initialization Parameters
FAL_SERVER=boston
FAL_CLIENT=chicago
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT='/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/'
STANDBY_FILE_MANAGEMENT=AUTO
Prepare an
Initialization Parameter File for the Standby Database
Create a text initialization parameter file (PFILE) from
the server parameter file (SPFILE) used by the primary database; a text
initialization parameter file can be copied to the standby location and
modified. For example:
CREATE PFILE='/tmp/initboston.ora' FROM SPFILE;
Modifying Initialization
Parameters for a Physical Standby Database.
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT='/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston
Oracle Performance Tuning
Application users are complaining that the database is slow. How would you find the performance issue of SQL queries?
High performance is a common expectation of end users. In fact, the database
itself is rarely slow or fast; in most cases, sessions connected to the database
slow down when they hit an unexpected wait. Thus, to solve this issue you need
to find those unexpected waits. To know exactly what a session is doing, join
v$session with v$session_wait in your query.
SELECT NVL(s.username,'(oracle)') AS username, s.sid, s.serial#, sw.event, sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid AND s.username = '&username'
ORDER BY sw.seconds_in_wait DESC;
1. Check the events that the session is waiting on.
2. Try to find the object locks held against that particular session.
3. Locking is not the only thing that affects performance; disk I/O contention is another. When a session retrieves data from the database datafiles on disk into the buffer cache, it has to wait until the disk sends the data. The wait event shows up for the session as "db file sequential read" (for an index scan) or "db file scattered read" (for a full table scan). When you see either event, you know that the session is waiting for I/O from the disk to complete. To improve session performance, you have to reduce that waiting period. The exact step depends on the specific situation, but the first technique, reducing the number of blocks retrieved by a SQL statement, almost always works: examine the SQL statement to see if it is doing a full table scan when it should be using an index, if it is using the wrong index, or if it can be rewritten to reduce the amount of data it retrieves.
4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql), providing the SQL_ID as input, to generate findings and recommendations.
SQL Tuning Advisor performs mainly logical optimization by checking your SQL structure and statistics. It can suggest indexes that might be very useful, query rewrites, and a SQL profile.
More:
1. Run the top command in Linux to check CPU usage.
2. Run vmstat, sar and prstat to get more information on CPU and memory usage and possible blocking.
3. Enable tracing before running your queries, then process the trace file with tkprof to create an output file.
4. According to the explain plan, check the elapsed time for each query, then tune them accordingly.
What is the use
of iostat/vmstat/netstat command in Linux?
iostat – reports on terminal, disk and tape I/O activity.
vmstat – reports on virtual memory statistics for processes, disk and CPU activity.
netstat – reports on the contents of network data structures.
If you are getting
high “Busy Buffer waits”, how can you find the reason behind it?
Buffer busy wait means that queries are waiting for blocks to be read into the db cache, or that a block is busy in the cache and the session is waiting for it. It could be an undo/data block or segment header wait.
Run the first query below to find the P1, P2 and P3 of a session causing buffer busy waits, then run the second query with those P1, P2 and P3 values.
SQL> select p1 "File #", p2 "Block #", p3 "Reason Code" from v$session_wait where event = 'buffer busy waits';
SQL> select owner, segment_name, segment_type from dba_extents
where file_id = &P1 and &P2 between block_id and block_id + blocks - 1;
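The dba_extents lookup above can be sketched in Python. The extent tuples below are illustrative stand-ins for dba_extents rows, and find_segment is a hypothetical helper mirroring the BETWEEN predicate:

```python
# Illustrative stand-ins for dba_extents rows:
# (owner, segment_name, segment_type, file_id, block_id, blocks)
extents = [
    ("SCOTT", "EMP",     "TABLE", 4, 128, 8),
    ("SCOTT", "EMP_PK",  "INDEX", 4, 256, 16),
    ("HR",    "REGIONS", "TABLE", 5, 64,  8),
]

def find_segment(extents, p1, p2):
    # Mirrors: file_id = &P1 and &P2 between block_id and block_id + blocks - 1
    for owner, name, seg_type, file_id, block_id, blocks in extents:
        if file_id == p1 and block_id <= p2 <= block_id + blocks - 1:
            return owner, name, seg_type
    return None

# Waiting on file 4, block 260: the EMP_PK extent covers blocks 256-271
print(find_segment(extents, 4, 260))
```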
What to Look for
in AWR Report and STATSPACK Report?
Many DBAs already know how to use STATSPACK but are not
always sure what to check regularly.
Remember to separate OLTP and Batch activity when you run STATSPACK, since they usually
generate different types of waits. The SQL script “spauto.sql” can be used to run STATSPACK
every hour on the hour. See the script in $ORACLE_HOME/rdbms/admin/spauto.sql for more
information (note that JOB_QUEUE_PROCESSES must be set > 0). Since every system is different, this is only a general list of things you should regularly check in your STATSPACK output:
- Top 5 wait events (timed events)
- Load profile
- Instance efficiency hit ratios
- Wait events
- Latch waits
- Top SQL
- Instance activity
- File I/O and segment statistics
- Memory allocation
- Buffer waits
What is the
difference between DB file sequential read and DB File Scattered Read?
DB file sequential read is associated with index reads, whereas db file scattered read occurs with full table scans. A sequential read reads a block into contiguous memory, while a scattered read fetches multiple blocks and scatters them into the buffer cache.
Which factors
are to be considered for creating index on Table? How to select column for
index?
Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and we need only a few rows for a select or a report, we need to create an index. The basic reasons for selecting a column for indexing are high cardinality and frequent usage in the WHERE condition of select queries. Business rules also force index creation: configuring a primary key or unique key automatically creates a unique index.
It is important to note that creating too many indexes affects the performance of DML on the table, because a single transaction then has to operate on various index segments and the table simultaneously.
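The cardinality criterion can be made concrete with a rough selectivity calculation; the row counts below are invented for illustration:

```python
def selectivity(distinct_values, total_rows):
    # Fraction of distinct values in the column: closer to 1 means
    # an index lookup filters out more rows per probe.
    return distinct_values / total_rows

# A near-unique column is a strong B-tree index candidate;
# a three-value status flag is a poor one.
emp_id_sel = selectivity(950_000, 1_000_000)   # high cardinality
status_sel = selectivity(3, 1_000_000)         # low cardinality
print(emp_id_sel, status_sel)
```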
Is creating
index online possible?
YES. You can
create and rebuild indexes online. This enables you to update base tables at
the same time you are building or rebuilding indexes on that table. You can
perform DML operations while the index building is taking place, but DDL
operations are not allowed. Parallel execution is not supported when creating
or rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
How to recover
password in oracle 10g?
You can query the dictionary table SYS.user_history$; the password history is stored in this table.
How can you
track the password change for a user in oracle?
Oracle only tracks the date that the password will expire, based on when it was last changed. So by listing DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME you can determine when the password was last changed. You can also check the last password change time directly from the PTIME column in the USER$ table (on which the DBA_USERS view is based). But if you have PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in a profile assigned to a user account, then you can reference the dictionary table USER_HISTORY$ for when the password was changed for this account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
What is Secure
External password Store (SEPS)?
Through the use of SEPS you can store password credentials for connecting to databases in a client-side Oracle wallet, which stores the signing credentials. This feature was introduced in Oracle 10g. Application code, scheduled jobs and scripts no longer need embedded usernames and passwords. This reduces risk because the passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever a username or password changes.
Why we need
CASCADE option with DROP USER command whenever dropping a user and why “DROP
USER” commands fails when we don’t use it?
If the user owns any objects then 'YES', in that case you cannot drop that user without using the CASCADE option. The DROP USER ... CASCADE command drops the user along with all its associated objects. Remember it is a DDL command; after execution it cannot be rolled back.
What is the
difference between Redo,Rollback and Undo?
I find there is
always some confusion when talking about Redo, Rollback and Undo. They all
sound like pretty much the same thing or at least pretty close.
Redo: Every Oracle database has a set of (two or more) redo log files. The redo log records all changes made to data, including both uncommitted and committed changes. In addition to the online redo logs Oracle also stores archive redo logs. All redo logs are used in recovery situations.
Rollback: More specifically rollback segments. Rollback segments store the data as it was before changes were made. This is in contrast to the redo log which is a record of the insert/update/deletes.
Undo: Rollback segments and undo are essentially one and the same, though undo data is stored in the undo tablespace. Undo is helpful in building a read-consistent view of data.
You have more
than 3 instances running on the Linux server? How can you determine which
shared memory and semaphores are associated with which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help command lists the commands available.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
Why drop table
is not going into Recycle bin?
If you use the SYS user to drop any table, that object will not go to the recycle bin, as there is no recycle bin for the SYSTEM tablespace, even if the recyclebin parameter is set to TRUE.
Select * from v$parameter where name = 'recyclebin';
Show parameter recyclebin;
Temp Tablespace
is 100% FULL and there is no space available to add datafiles to increase temp
tablespace. What can you do in that case to free up TEMP tablespace?
Closing some of the idle sessions connected to the database will help free some TEMP space. Otherwise you can also use 'Alter Tablespace <name> default storage (pctincrease 1)' followed by 'Alter Tablespace <name> default storage (pctincrease 0)', which prompts SMON to coalesce free extents.
What is Row
Chaining and Row Migration?
Row Migration:
A row migrates when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB blocksize for your database, and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces.
Some conditions that will cause row chaining are: tables whose rowsize exceeds the blocksize; tables with LONG and LONG RAW columns, which are prone to having chained rows; and tables with more than 255 columns, which will have chained rows as Oracle breaks wide tables up into pieces.
So, instead of just having a forwarding address on one block and the data on another we have data on two or more blocks.
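The 4KB-block/8KB-row arithmetic can be sketched as below; the per-block overhead figure is an illustrative assumption (block header, ITL slots), not an exact Oracle constant:

```python
import math

block_size = 4096       # 4KB database block
block_overhead = 100    # assumed bytes lost to block header/ITL (illustrative)
row_size = 8192         # 8KB row to insert

usable = block_size - block_overhead
pieces = math.ceil(row_size / usable)  # row pieces, i.e. blocks used
print(pieces)
```

Because each block stores slightly less than its full 4KB, the 8KB row needs three pieces rather than two, matching the example above.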
How to find out
background processes ?
SQL> select
SID,PROGRAM from v$session where TYPE=’BACKGROUND’;
SQL> select name,description from V$bgprocess;
How to find out background processes from the OS:
$ ps -ef|grep
ora_|grep SID
To Find and Delete bigger size and older files in Linux
–To find files larger than 5MB
find . -size +5000k -exec ls -ltr {} \;
–To remove files larger than 5MB
find . -size +5000k -exec rm -rf {} \;
–To find files older than 30 days
find . -mtime +30 -exec ls -ltr {} \;
–To remove files older than 30 days
find . -mtime +30 -exec rm -rf {} \;
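A rough Python analogue of the find commands above (find_files is our own illustrative helper; it only lists matches, it does not delete them):

```python
import os
import time

def find_files(root, min_bytes=None, older_than_days=None):
    # Walk `root` and yield paths filtered by size and/or modification age,
    # like `find . -size +Nk` and `find . -mtime +N`.
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if min_bytes is not None and st.st_size <= min_bytes:
                continue
            if older_than_days is not None and (now - st.st_mtime) < older_than_days * 86400:
                continue
            yield path

# Example: list files over 5MB under the current directory
# for p in find_files(".", min_bytes=5 * 1024 * 1024):
#     print(p)
```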
Here is the Collection and Answer to some Interesting ORACLE DBA Interview Questions
1. How many memory layers are in the shared
pool?
Ans: The shared pool portion of the SGA contains three major areas: the library cache (parsed SQL statements, cursor information, execution plans), the dictionary cache (user account information, privileges, datafile, segment and extent information), and buffers for parallel execution messages and control structures.
2. How do you find out from the RMAN catalog if a
particular archive log has been backed-up?
Ans: list backup of archivelog all;
3. How can you tell how much space is left on a
given file system and how much space each of the file system’s subdirectories
take-up?
Ans: df -kh and du -sh
4. Define the SGA and:
i) How you would configure SGA for a mid-sized OLTP environment?
ii) What is involved in tuning the SGA?
Ans: SGA: The
System Global Area (SGA) is a group of shared memory areas that are dedicated
to an Oracle “instance” (an instance is your database programs and RAM). All
Oracle processes use the SGA to hold information. The SGA is used to store
incoming data (the data buffers as defined by the db_cache_size parameter),
and internal control information that is needed by the database. You control
the amount of memory to be allocated to the SGA by setting some of the Oracle
“initialization parameters”. These might include db_cache_size,
shared_pool_size and log_buffer.
i) On a 64-bit machine, about 40% of RAM can be used for sizing the SGA; the rest is reserved for the OS and other processes. On a 32-bit machine the maximum SGA that can be configured is about 1.5GB.
ii) Check the statspack report. Check hit ratio of
Data buffer. If it is less than 90%, then we need to increase the Data buffer.
Check hit ratio of Shared pool. If it is less than 95%, then we need to
increase the Shared pool. Check log buffer. If redo buffer allocation
retries/redo entries is greater than 1%, then we need to increase log_buffer.
5. What is the cache hit ratio, what impact does it
have on performance of an Oracle database and what is involved in tuning
it?
Ans:
Buffer cache hit ratio: It calculates how often a requested block has been found in the buffer
cache without requiring disk access. This ratio is computed using data selected
from the dynamic performance view V$SYSSTAT. The buffer cache hit ratio can be
used to verify the physical I/O as predicted by V$DB_CACHE_ADVICE.
sql> select name, value from v$sysstat where name in ('db block gets', 'consistent gets', 'physical reads');
The cache-hit ratio can be calculated as follows:
Hit ratio = 1 – (physical reads / (db block gets + consistent gets)) If the
cache-hit ratio goes below 90% then: increase the initialisation
parameter DB_CACHE_SIZE.
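The hit-ratio arithmetic can be sketched with made-up counter values standing in for the v$sysstat output:

```python
# Invented counter values standing in for the v$sysstat query result
stats = {"db block gets": 15_000, "consistent gets": 85_000, "physical reads": 8_000}

hit_ratio = 1 - stats["physical reads"] / (stats["db block gets"] + stats["consistent gets"])
print(f"buffer cache hit ratio: {hit_ratio:.2%}")
```

Here 8,000 physical reads against 100,000 logical reads gives a 92% hit ratio, just above the 90% threshold mentioned above.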
Library cache hit ratio: It calculates how often the parsed representation of a statement can be reused; this is also known as a soft parse.
sql> select namespace, pins,
pinhits, reloads, invalidations from v$librarycache order by namespace;
Library Cache Hit Ratio = sum(pinhits) / sum(pins)
Dictionary cache hit ratio: It is a measure of the proportion of requests for information from the data
dictionary, the collection of database tables and views containing reference
information about the database, its structures, and its users. On instance
startup, the data dictionary cache contains no data, so any SQL statement
issued is likely to result in cache misses. As more data is read into the
cache, the likelihood of cache misses should decrease. Eventually the database
should reach a "steady state" in which the most frequently used
dictionary data is in the cache.
6. Other than making use of the statspack utility,
what would you check when you are monitoring or running a health check on an
Oracle 8i or 9i database?
Ans: Perform daily monitoring activities and check the different logs for any sort of errors.
7. How do you tell what your machine name is and
what is its IP address?
Ans: hostname, uname -n and ifconfig
8. How would you go about verifying the network
name that the local_listener is currently using?
Ans: lsnrctl stat or ps-eaf|grep tns
9. You have 4 instances running on the same UNIX
box. How can you determine which shared memory and semaphores are associated with
which instance?
Ans:
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
Also you can check the spfile. The parameters will follow instance_name.parameter_name naming.
10. What view(s) do you use to associate a user’s
SQLPLUS session with his o/s process?
Ans: v$process
and v$session
sql> select a.spid from v$process a, v$session b where a.addr = b.addr and b.audsid = userenv('sessionid');
11. What is the recommended interval at which to
run statspack snapshots, and why?
Ans: Should be in minutes (approximately 15-20), because when the time between two snapshots is measured in hours, the events that caused serious performance issues for 20 minutes during peak processing don't look so bad when they're spread out over an 8-hour window. It's also true with STATSPACK that measuring things over too long a period tends to level them off over time; nothing will stand out and strike you as being wrong.
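A back-of-the-envelope illustration of that leveling-off effect, with invented wait figures:

```python
# A 20-minute spike of heavy waits inside an 8-hour (480-minute) window
spike_minutes, spike_wait = 20, 100    # avg wait metric per minute during the spike
normal_minutes, normal_wait = 460, 5   # the rest of the window
window_avg = (spike_minutes * spike_wait + normal_minutes * normal_wait) / 480
snapshot_avg = spike_wait              # a 20-minute snapshot bracketing the spike
print(window_avg, snapshot_avg)
```

Averaged over eight hours the spike barely registers (about 9 against a baseline of 5), while a 20-minute snapshot shows the full value of 100.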
12. What spfile/init.ora file
parameter exists to force the CBO to make the execution path of a given
statement use an index, even if the index scan may appear to be calculated as
more costly?
Ans: OPTIMIZER_INDEX_COST_ADJ. Setting it to a low value (for example 10) makes index access paths look cheaper to the CBO, so an index scan is chosen even if it would otherwise be costed as more expensive.
13. Assuming today is Monday, how would you use the
DBMS_JOB package to schedule the execution of a given procedure owned by SCOTT
to start Wednesday at 9AM and to run subsequently every other day at 2AM.
Ans: dbms_job.submit(:jobno, 'scott.procedure_name;', next_day(trunc(SYSDATE),'WEDNESDAY')+9/24, 'trunc(SYSDATE+2)+2/24', TRUE, :instno);
14. How would you edit your CRONTAB to schedule the
running of /test/test.sh to run every other day at 2PM?
Ans: 00 14 */2 * * /test/test.sh (minute 00, hour 14 = 2PM, on every second day of the month)
15. In which dictionary table or view would you
look to determine at which time a snapshot or MVIEW last successfully
refreshed?
Ans: SQL> SELECT MVIEW_NAME,LAST_REFRESH_DATE from
USER_MVIEWS;
16. How would you best determine why your MVIEW
couldn’t FAST REFRESH?
Ans: Possibly
by checking the MVIEW LOG for errors.
20. How would you begin to troubleshoot an ORA-3113
error?
Ans: ORA-3113 is an end-of-file-on-communication-channel error. Check the alert logfile, check network latency, check that the sqlnet.ora file has expire_time = 0, delete unwanted files, and check the swap and temp spaces.
21. Which dictionary tables and/or views would you
look at to diagnose a locking issue?
Ans: v$lock,
v$session, v$process
22. An automatic job running via DBMS_JOB has
failed. Knowing only that “it’s failed”, how do you approach troubleshooting
this issue?
Ans: Check the job queue trace files and the alert log for the reason the job failed.
23. How would you extract DDL of a table without
using a GUI tool?
Ans: select
dbms_metadata.get_ddl('OBJECT','OBJECT_NAME') from dual;
24. You’re getting high “busy buffer waits” - how
can you find what’s causing it?
Ans: A buffer busy wait means that queries are waiting for blocks to be read into the db cache, or that a block is already busy in the cache while the session waits for it. It could be an undo, data block, or segment header wait.
Run the following query to find out the p1,p2 and p3 of a session causing buffer busy wait
sql> select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait where event = 'buffer busy waits';
After that running the following query to find the segment causing buffer busy wait:-
sql> select owner,segment_name,segment_type from dba_extents where file_id = &P1 and &P2 between block_id and block_id + blocks -1;
25. What query tells you how much space a
tablespace named “test” is taking up, and how much space is remaining?
Ans:
SET SERVEROUTPUT ON
SET LINESIZE 1000
SET FEEDBACK OFF
rem column dummy noprint
column pct_used format 999.9 heading "%|Used"
column name format a25 heading "Tablespace Name"
column Kbytes format 999,999,999 heading "MBytes"
column used format 999,999,999 heading "Used(MB)"
column free format 999,999,999 heading "Free(MB)"
column largest format 999,999,999 heading "Largest"
break on report
compute sum of kbytes on report
compute sum of free on report
compute sum of used on report
set pagesize 100
select nvl(b.tablespace_name,
nvl(a.tablespace_name,'UNKNOWN')) name,(kbytes_alloc/1024) kbytes,
((kbytes_alloc-nvl(kbytes_free,0))/1024) used,(nvl(kbytes_free,0)/1024) free,
((kbytes_alloc-nvl(kbytes_free,0))/kbytes_alloc)*100 "%used",
nvl(largest,0)/1024 largest
from ( select sum(bytes)/1024 Kbytes_free,
max(bytes)/1024 largest, tablespace_name
from sys.dba_free_space group by tablespace_name ) a,
( select sum(bytes)/1024 Kbytes_alloc, tablespace_name
from sys.dba_data_files group by tablespace_name )b
where a.tablespace_name (+) = b.tablespace_name
order by 1
/
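The core arithmetic of the script (used, free and %used in MB, derived from allocated and free kilobytes) can be sketched as:

```python
def tablespace_usage(kbytes_alloc, kbytes_free):
    # Same formulas as the dba_data_files / dba_free_space script above
    used_mb = (kbytes_alloc - kbytes_free) / 1024
    free_mb = kbytes_free / 1024
    pct_used = (kbytes_alloc - kbytes_free) / kbytes_alloc * 100
    return used_mb, free_mb, pct_used

# e.g. 2,048,000 KB allocated with 512,000 KB free
print(tablespace_usage(2_048_000, 512_000))
```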
26. Database is hung. Old and new user connections
alike hang on impact. What do you do? Your SYS SQLPLUS session is able to
connect.
Ans: Log into
the system and find whether there are any deadlocks in the system using the
following query.
select 'SID ' || l1.sid ||' is
blocking ' || l2.sid blocking
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2
/
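The blocker/waiter SID pairs returned by the v$lock query can be chained back to the root blocker; blocking_chains below is a hypothetical helper for illustration, not an Oracle API:

```python
def blocking_chains(pairs):
    # pairs: (blocker_sid, waiter_sid) tuples from the v$lock query.
    # For each waiter, follow blockers until the root; the `seen` set
    # stops the walk if sessions block each other in a cycle.
    blocker_of = {waiter: blocker for blocker, waiter in pairs}
    chains = {}
    for waiter in blocker_of:
        sid, seen = waiter, {waiter}
        while sid in blocker_of and blocker_of[sid] not in seen:
            sid = blocker_of[sid]
            seen.add(sid)
        chains[waiter] = sid
    return chains

# Session 40 waits on 30, which waits on 10: SID 10 is the root blocker
print(blocking_chains([(10, 30), (30, 40)]))
```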
If so, kill the sessions caught in the deadlock:
alter system kill session 'SID,SERIAL#' immediate;
Also find out which wait events exist in the system
using following commands and go in detail as to what events are causing these
waits and take appropriate actions.
select event,count(*) from v$session group by event
/
select u.sid,u.serial#,
u.username,p.spid,to_char(u.logon_time,'DD-MON-YYYY:HH24:MI:SS') from
v$session u, v$session w,v$process p where u.sid = w.sid and w.event like
'%&a%' and u.paddr = p.addr
/
27. Database crashes. Corruption is found scattered among the file system
neither of your doing nor of Oracle’s. What database recovery options are
available? Database is in archive log mode.
Ans: First of all, secure all the archives and all the backups you have on tape or another system. Then run fsck to check the filesystem. If the corruption is detected at the filesystem level and is not recoverable by fsck, format the file system and restore the database through RMAN.
28. Illustrate how to determine the amount of physical CPUs a Unix Box
possesses (LINUX and/or Solaris).
Ans: On Linux: grep "physical id" /proc/cpuinfo | sort -u | wc -l. On Solaris: psrinfo -p.
29. How do you increase the OS limitation for open files (LINUX and/or
Solaris)?
Ans: Set the fs.file-max parameter in /etc/sysctl.conf to the number you want, save the file, and apply it with the command sysctl -p.
30. Provide an example of a shell script which logs
into SQLPLUS as SYS, determines the current date, changes the date format to
include minutes & seconds, issues a drop table command, displays the date
again, and finally exits.
Ans:
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/ora10g
export ORACLE_SID=ora10g
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
sqlplus / as sysdba << EOF
@/oracle/date.sql
exit;
EOF
Now the contents of /oracle/date.sql:
select SYSDATE from dual;
select to_char(SYSDATE,'dd-mon-yyyy hh24:mi:ss') from dual;
drop table tablename cascade constraints;
select to_char(SYSDATE,'dd-mon-yyyy hh24:mi:ss') from dual;
31. Explain how you would restore a database using
RMAN to Point in Time?
Ans:
restore database until time "to_date('Aug 27 2001 02:00:00','Mon DD YYYY HH24:MI:SS')";
recover database until time "to_date('Aug 27 2001 02:00:00','Mon DD YYYY HH24:MI:SS')";
alter database open resetlogs;
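The Oracle date mask in that command maps roughly onto Python's strptime directives ('Mon' maps to %b, 'DD' to %d, 'YYYY' to %Y, 'HH24:MI:SS' to %H:%M:%S), as a quick check of what the until-time resolves to:

```python
from datetime import datetime

# Same timestamp as the RMAN until-time, parsed with equivalent directives
until = datetime.strptime("Aug 27 2001 02:00:00", "%b %d %Y %H:%M:%S")
print(until)
```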
32. How does Oracle guarantee data integrity of
data changes?
Ans: Oracle enables you to define and enforce data
integrity constraints like PRIMARY KEY CONSTRAINTS, FOREIGN KEY
CONSTRAINTS and UNIQUE CONSTRAINTS.
33. Which environment variables are absolutely critical in order to run the
OUI?
Ans: ORACLE_BASE, ORACLE_HOME, ORACLE_SID, PATH and the library path.
34. What SQL query from v$session can you run to
show how many sessions are logged in as a particular user account?
Ans: select count(1) from v$session where USERNAME='username';
A.Oracle
includes many performance tuning enhancements like:
1.Automatic
Performance Diagnostic and Tuning Features
2.Automatic Shared Memory Management - Automatic Shared Memory Management puts Oracle in control of allocating memory within the SGA
3.Wait Model Improvements - A number of views have been updated and added to improve the wait model.
4.Automatic Optimizer Statistics Collection - gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB
5.Dynamic Sampling - enables the server to improve performance
6.CPU Costing - default cost model for the optimizer (CPU+I/O), with the cost unit as time Optimizer Hints
7.Rule Based Optimizer Obsolescence - No more used
8.Tracing Enhancements - End to End Application Tracing which allows a client process to be identified via the client identifier rather than the typical session id
9.SAMPLE Clause Enhancements Hash Partitioned Global Indexes
Q.What is
proactive tuning and reactive tuning?
A.In proactive tuning, the application designers determine which combination of system resources and available Oracle features best meets the needs during design and development. In reactive tuning a bottom-up approach is used to find and fix the bottlenecks. The goal is to make Oracle run faster.
Q.Describe the
level of tuning in oracle
A.System-level tuning involves the following steps:
1.Monitoring the
operating system counters using a tool such as top, gtop, and GKrellM or the
VTune analyzer’s counter monitor data collector for applications running on
Windows.
2.Interpreting the counter data to locate system-level performance bottlenecks and opportunities for improving the way your application interacts with the system.
3.SQL-level tuning: Tuning the disk and network I/O subsystem to optimize I/O time, network packet size and dispatching frequency is called server kernel optimization.
The optimizer can study the distribution of data by collecting and storing optimizer statistics, which enables intelligent execution plans. The choice of db_block_size, db_cache_size, and OS-related parameters (db_file_multiblock_read_count, cpu_count, etc.) can also influence SQL performance, as can tuning SQL access paths for the workload with physical indexes and materialized views.
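As a sketch of how optimizer statistics are gathered (schema and table names here are placeholders):

```sql
-- Gather statistics for one table (hypothetical schema HR, table EMPLOYEES)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'HR',
    tabname => 'EMPLOYEES',
    cascade => TRUE);   -- also gather statistics for the table's indexes
END;
/

-- One of the OS-related parameters mentioned above, set at session level
ALTER SESSION SET db_file_multiblock_read_count = 16;
```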
Q.What is
Database design level tuning?
A.The steps involved in database design level tuning are:
1.Determination
of the data needed by an application (what relations are important, their
attributes and structuring the data to best meet the performance goals)
2.Analysis of data followed by normalization to eliminate data redundancy.
3.Avoiding data contention.
4.Localizing access to the data to the partition, process and instance levels.
5.Using synchronization points in Oracle Parallel Server.
6.Implementation of 8i enhancements that can help avoid contention:
a.Considering partitioning the data
b.Considering whether to use local or global indexes.
Q.Explain
rule-based optimizer and cost-based optimizer.
A.Oracle decides how to retrieve the necessary data whenever a valid SQL statement is processed.This decision can be made using one of two methods:
1.Rule Based
Optimizer
If the server has no internal statistics relating to the objects referenced by the statement then the RBO method is used.This method will be deprecated in the future releases of oracle.
2.Cost Based
Optimizer
The CBO method is used if internal statistics are present.The CBO checks several possible execution plans and selects the one with the lowest cost based on the system resources.
Q.What are
object datatypes? Explain the use of object datatypes.
A.Object data types are user-defined data types. Both a column and a row can be of an object type, and object-type instances can be stored in the database. Object data types make it easier to work with complex data, such as images, audio, and video, and provide higher-level ways to organize and access data in the database. The implicit cursor attributes for a SELECT INTO statement are SQL%NOTFOUND, SQL%FOUND, SQL%ISOPEN, and SQL%ROWCOUNT:
1.SQL%NOTFOUND: true if no rows were returned.
E.g. IF SQL%NOTFOUND THEN RETURN some_value;
2.SQL%FOUND: true if at least one row was returned.
E.g. IF SQL%FOUND THEN RETURN some_value;
3.SQL%ISOPEN: true if the cursor is open. For an implicit cursor this is always false, because the database opens and closes the cursor around the statement used to retrieve the data.
4.SQL%ROWCOUNT: the number of rows returned. It equals 0 if no rows were found (although the exception is raised first) and 1 if one or more rows are found (if more than one, an exception is raised).
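A minimal PL/SQL sketch of these attributes, assuming a hypothetical employee table:

```sql
-- Hypothetical table EMPLOYEE; shows implicit cursor attributes after SELECT INTO
DECLARE
  v_name employee.employee_name%TYPE;
BEGIN
  SELECT employee_name INTO v_name
  FROM employee
  WHERE employee_id = 10000;

  IF SQL%FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Rows fetched: ' || SQL%ROWCOUNT);  -- 1 for SELECT INTO
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND THEN  -- raised before SQL%NOTFOUND can be tested
    DBMS_OUTPUT.PUT_LINE('No such employee');
END;
/
```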
Q.What is
translate and decode in oracle?
A.
1.Translate: the TRANSLATE function replaces a set of characters in a string with another set of characters. The replacement is done one character at a time.
Syntax:
translate( string1, string_to_replace, replacement_string )
Example:
translate('1tech23', '123', '456');
2.Decode: The DECODE function compares one expression to one or more search expressions and, when the base expression equals a search expression, returns the corresponding result expression; when no match is found, it returns the default expression if one is specified, or NULL if it is not.
Syntax:
DECODE (expr , search, result [, search , result]... [, default])
Example:
SELECT employee_name, decode(employee_id, 10000, 'tom', 10001, 'peter', 10002, 'jack', 'Gateway') result FROM employee;
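As a sketch, DECODE behaves much like a simple CASE expression; the hypothetical employee table above could equivalently be queried as:

```sql
SELECT employee_name,
       CASE employee_id
         WHEN 10000 THEN 'tom'
         WHEN 10001 THEN 'peter'
         WHEN 10002 THEN 'jack'
         ELSE 'Gateway'   -- plays the role of DECODE's default expression
       END AS result
FROM employee;
```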
Q.What is oracle
correlated sub-queries? Explain with an example.
A.A query that uses values from the outer query is called a correlated subquery. The subquery is evaluated once for each row processed by the outer query. Example:
Here, the subquery references the employee_id of the outer query. Because the value of employee_id changes for each row of the outer query, the database must rerun the subquery for every row comparison. The outer query knows nothing about the inner query except its results.
select employee_id, appraisal_id, appraisal_amount
from employee o
where appraisal_amount < (select max(appraisal_amount)
                          from employee e
                          where o.employee_id = e.employee_id);
Q.Explain union
and intersect with examples.
A.
1.UNION: The UNION operator is used to combine the result sets of two or more SELECT statements. Both SELECT statements must have the same number of columns with compatible data types. UNION eliminates duplicates. Syntax:
SELECT column_name(s) FROM table_name1
UNION
SELECT column_name(s) FROM table_name2
Example:
SELECT emp_Name FROM Employees_india
UNION
SELECT emp_Name FROM Employees_USA
2.INTERSECT: allows combining the results of two or more SELECT queries. If a record exists in one query but not in the other, it is omitted from the INTERSECT result.
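Using the same hypothetical tables as the UNION example above, an INTERSECT query might look like:

```sql
-- Returns only the names present in both tables
SELECT emp_Name FROM Employees_india
INTERSECT
SELECT emp_Name FROM Employees_USA;
```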
Q.What is
difference between open_form and call_form? What is new_form built-in in oracle
form?
A.Open_form opens the indicated form. Call_form not only opens the indicated form but also keeps the parent form alive. When new_form is called, the newly indicated form is opened and the old one is exited, releasing its memory. The new form is run using the same Runform options as the parent form.
Q.What is
advantage of having disk shadowing/ Mirroring in oracle?
A.Fast recovery of data in case of disk failure. Improved performance, since most operating systems support volume shadowing that can direct file I/O requests to the shadow set of files instead of the main set of files.
1. Can Oracle's
Data Guard be used on Standard Edition, and if so how? How can you test that
the standby database is in sync?
Oracle's Data Guard technology is a layer of software and automation built on top of the standby database facility. In Oracle Standard Edition it is possible to be a standby database, and update it *manually*. Roughly, put your production database in archivelog mode. Create a hotbackup of the database and move it to the standby machine. Then create a standby controlfile on the production machine, and ship that file, along with all the archived redolog files to the standby server. Once you have all these files assembled, place them in their proper locations, recover the standby database, and you're ready to roll. From this point on, you must manually ship, and manually apply those archived redologs to stay in sync with production.
To test your standby database, make a change to a table on the production server, and commit the change. Then manually switch a logfile so those changes are archived. Manually ship the newest archived redolog file, and manually apply it on the standby database. Then open your standby database in read-only mode, and select from your changed table to verify those changes are available. Once you're done, shutdown your standby and startup again in standby mode.
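The manual test described above might look like this on the primary (the table name and standby steps are placeholders that depend on your environment):

```sql
-- On the primary: make and archive a change
UPDATE test_table SET col1 = 'changed' WHERE id = 1;  -- hypothetical table
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;

-- On the standby, after shipping the archived log:
-- RECOVER STANDBY DATABASE;
-- ALTER DATABASE OPEN READ ONLY;
-- SELECT col1 FROM test_table WHERE id = 1;
```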
2. What is the difference between Active Dataguard, and the Logical Standby implementation of 10g dataguard?
Active dataguard is mostly about the physical standby.
Use physical standby for testing without compromising protection of the production system. You can open the physical standby read/write - do some destructive things in it (drop tables, change data, whatever - run a test - perhaps with real application testing). While this is happening, redo is still streaming from production, if production fails - you are covered. Use physical standby for reporting while in managed recovery mode. Since physical standby supports all of the datatypes - and logical standby does not (11g added broader support, but not 100%) - there are times when logical standby isn’t sufficient. It also permits fast incremental backups when offloading backups to a physical standby database.
3. What is a Dataguard?
Oracle Dataguard is a disaster recovery solution from Oracle Corporation that has been utilized in the industry extensively at times of Primary site failure, failover, switchover scenarios.
4. What are the uses of Oracle Data Guard?
a) Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.
b) Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions.
c) With Data Guard, administrators can optionally improve production database performance by offloading resource-intensive backup and reporting operations to standby systems.
5. What is Redo Transport Services?
Redo transport services control the automated transfer of redo data from the production database to one or more archival destinations.
Redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the
primary database or another standby database.
6. What is apply services?
Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database. It also allows read-only access to the data.
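On a physical standby, for example, real-time apply is enabled when starting managed recovery:

```sql
-- Start Redo Apply with real-time apply from the standby redo logs,
-- returning control to the session immediately
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```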
7. What is the difference between physical and logical standby databases?
The main difference between physical and logical standby databases is the manner in
which apply services apply the archived redo data:
a) For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of
an Oracle database.
b) For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the
generated SQL statements on the logical standby database.
8. What is Data Guard Broker?
The Data Guard broker manages primary and standby databases using SQL command-line interfaces or the Data Guard broker interfaces: a command-line interface (DGMGRL) and a graphical user interface integrated in Oracle Enterprise Manager. It can be used to:
a) Create and enable Data Guard configurations, including setting up redo transport services and apply services
b) Manage an entire Data Guard configuration from any system in the configuration
c) Manage and monitor Data Guard configurations that contain Oracle RAC primary or standby databases
d) Simplify switchovers and failovers by allowing you to invoke them using either a single key click in Oracle Enterprise Manager or a single command in the DGMGRL command-line interface.
e) Enable fast-start failover to fail over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the Data Guard broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, with no need for DBA intervention.
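A sketch of a broker session from the DGMGRL command line (connect string and database names are placeholders):

```
DGMGRL> CONNECT sys/password@prod
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO 'standby_db';
DGMGRL> ENABLE FAST_START FAILOVER;
```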
9. What are the Data guard Protection modes and summarize each?
Maximum availability :
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.
Maximum performance :
This is the default protection mode. It provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.
Maximum protection :
This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.
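For example, the protection mode is set on the primary with a command such as:

```sql
-- Switch the configuration to maximum availability
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
```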
10. If you
didn't have access to the standby database and you wanted to find out what
error has occurred in a data guard configuration, what view would you check in
the primary database to check the error message?
You can check the v$dataguard_status view: SELECT message FROM v$dataguard_status;
11. In
Oracle 11g, what command in RMAN can you use to create the standby
database while the target database is active?
Oracle 11g has made it extremely simple to set up a standby database environment, because Recovery Manager (RMAN) now supports the ability to clone the existing primary database directly to the intended standby database site over the network via the DUPLICATE DATABASE command while the target database is active. RMAN automatically generates a conversion script in memory on the primary site and uses that script to manage the cloning operation on the standby site with virtually no DBA intervention required. You can execute this in a run block in RMAN:
duplicate target database for standby from active database dorecover;
12. What
additional standby database mode does Oracle 11g offer?
Oracle 11g has
introduced the Oracle Snapshot Standby Database. In Snapshot
Standby Database a physical standby database can easily open in read-write mode
and again you can convert it back to the physical standby database. This is
suitable for test and development environments and also maintains protection by
continuing to receive data from the production database and archiving it for
later use.
13. In Oracle 11g, how can you speed up backups on the standby database?
In Oracle 11g,
block change tracking is now supported in the standby database.
14. With the availability of Active Data Guard, what role does SQL Apply (logical standby) continue to play?
Use SQL Apply
for the following requirements: (a) when you require read-write access to a
synchronized standby database but do not modify primary data, (b) when you wish
to add local tables to the standby database that can also be updated, or (c)
when you wish to create additional indexes to optimize read performance.
The ability to handle local writes makes SQL Apply better suited to packaged
reporting applications that often require write access to local tables that
exist only at the target database. SQL Apply also provides rolling upgrade
capability for patchsets and major database releases. This rolling
upgrade functionality can also be used by physical standby databases beginning
with Oracle 11g using Transient Logical Standby.
15. Why
would I use Active Data Guard and not simply use SQL Apply (logical standby)
that is included with Data Guard 11g?
If read-only
access satisfies the requirement - Active Data Guard is a closer fit for the
requirement, and therefore is much easier to implement than any other
approach. Active Data Guard supports all datatypes and is very simple to
implement. An Active Data Guard replica can also easily support additional uses
- offloading backups from the primary database, serve as an open read-write
test system during off-peak hours (Snapshot Standby), and provide an exact copy
of the production database for disaster recovery - fully utilizing standby
servers, storage and software while in standby role.
16. Why do
I need the Oracle 11g Active Data Guard Option?
Previous
capabilities did not allow Redo Apply to be active while a physical standby
database was open read-only, and did not enable RMAN block change tracking on
the standby database. This resulted in (a) read-only access to data that
was frozen as of the time that the standby database was opened read-only, (b)
failover and switchover operations that could take longer to complete due to
the backlog of redo data that would need to be applied, and (c) incremental
backups that could take up to 20x longer to complete - even on a database with
a moderate rate of change. Previous capabilities are still included with Oracle Data Guard 11g; no additional license is required to use them.
17. If you
wanted to upgrade your current 10g physical standby data guard
configuration to 11g, can you upgrade the standby to 11g first then upgrade the
primary ?
Yes, in Oracle
11g, you can temporarily convert the physical standby database to a logical
standby database to perform a rolling upgrade. When you issue the convert
command you need to keep the identity:
alter database
recover logical standby keep identity;
18. If you
have a low-bandwidth WAN network, what can you do to improve the Oracle 11g
data guard configuration in a GAP detected situation?
Oracle 11g
introduces the capability to compress redo log data as it transports over the
network to the standby database. It can be enabled using the compression
parameter. Compression becomes enabled only when a gap exists and the
standby database needs to catch up to the primary database.
alter system set
log_archive_dest_1='SERVICE=DBA11GDR COMPRESSION=ENABLE';
19. In an
Oracle 11g Logical Standby Data Guard configuration, how can you tell the
dbms_scheduler to only run jobs in primary database?
Oracle 11g,
logical standby now provides support for DBMS_SCHEDULER. It is capable of
running jobs in both primary and logical standby database. You can
use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure to set the database_role.
You can specify that the jobs can run only when operating in that particular
database role.
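A sketch of the call, with a hypothetical job name:

```sql
-- Restrict the job to run only when the database is in the primary role
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'MY_NIGHTLY_JOB',   -- hypothetical job
    attribute => 'database_role',
    value     => 'PRIMARY');
END;
/
```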
20. How
can you control when an archive log can be deleted in the standby database in
oracle 11g ?
In Oracle 11g,
you can control it by using the log_auto_delete initialization parameter.
The log_auto_delete parameter must be coupled with the
log_auto_del_retention_target parameter to specify the number of minutes an
archivelog is maintained until it is purged. Default is 24 hours. For
archivelog retention to be effective, the log_auto_delete parameter must be set
to true.
21. Can
Oracle Data Guard be used with Standard Edition of Oracle ?
Yes and no. The automated features of Data Guard are not available in the Standard Edition of Oracle. You can still, however, perform log shipping manually and write scripts to perform the steps. If you are on a Unix platform, you can write shell scripts that identify the logs and then use the scp or sftp command to ship them to the standby server. Then on the standby server, identify which logs have not been applied, apply/recover them manually, and remove them once applied.
1. What is RMAN?
Recovery Manager (RMAN) is a utility that can manage your entire Oracle backup and recovery activities.
Which files must be backed up?
Database Files (with RMAN)
Control Files (with RMAN)
Offline Redolog Files (with RMAN)
INIT.ORA (manually)
Password Files (manually)
2.
When you take a hot backup putting Tablespace in begin backup mode, Oracle
records SCN # from header of a database file. What happens when you issue
hot backup database in RMAN at block level backup? How does RMAN mark the
record that the block has been backed up ? How does RMAN know what blocks
were backed up so that it doesn't have to scan them again?
Oracle 10g introduced the Block Change Tracking feature. Once enabled, this feature records the blocks modified since the last backup and stores the log in a block change tracking file. During backups RMAN uses this file to identify the specific blocks that must be backed up. This improves RMAN's performance, as it does not have to scan whole datafiles to detect changed blocks.
Logging of changed blocks is performed by the CTWR process, which is also responsible for writing data to the block change tracking file. RMAN uses SCNs at the block level and the archived redo logs to resolve any inconsistencies in the datafiles from a hot backup. What RMAN does not require is putting the tablespace in BACKUP mode, thus freezing the SCN in the header. Rather, RMAN keeps this information in either your control files or in the RMAN repository (i.e., the recovery catalog).
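For example, block change tracking can be enabled and verified with commands such as (the file path is a placeholder):

```sql
-- Enable block change tracking; the tracking file location is an example
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- Verify that tracking is active
SELECT status, filename FROM v$block_change_tracking;
```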
3. What
are the Architectural components of RMAN?
1.RMAN
executable
2.Server processes
3.Channels
4.Target database
5.Recovery catalog database (optional)
6.Media management layer (optional)
7.Backups, backup sets, and backup pieces
4. What
are Channels?
A channel is an
RMAN server process started when there is a need to communicate with an I/O
device, such as a disk or a tape. A channel is what reads and writes RMAN
backup files. It is through the allocation of channels that you govern I/O
characteristics such as:
Type of I/O
device being read or written to, either a disk or an sbt_tape
Number of
processes simultaneously accessing an I/O device
Maximum size of
files created on I/O devices
Maximum rate at
which database files are read
Maximum number
of files open at a time
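A sketch of channel configuration and allocation from the RMAN prompt (sizes and rates are illustrative):

```
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;   -- two concurrent disk channels
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK
        MAXPIECESIZE 2G RATE 50M;                 -- cap piece size and read rate
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE sbt;      -- manually allocate a tape channel
        BACKUP DATABASE;
      }
```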
5. Why is
the catalog optional?
Because RMAN
manages backup and recovery operations, it requires a place to store necessary
information about the database. RMAN always stores this information in the
target database control file. You can also store RMAN metadata in a recovery
catalog schema contained in a separate database. The recovery catalog
schema must be stored in a database other than the target database.
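A sketch of creating a recovery catalog and registering a target database (connect strings are placeholders):

```
-- Connect to the catalog database as the catalog owner and create the catalog
RMAN> CONNECT CATALOG rman/password@catdb
RMAN> CREATE CATALOG;

-- Then register the target database in the catalog
RMAN> CONNECT TARGET sys/password@prod
RMAN> REGISTER DATABASE;
```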
6. What does a complete RMAN backup consist of?
A backup of all or part of your database. This results from issuing an RMAN backup command. A backup consists of one or more backup sets.
7. What is a Backup set?
A logical grouping of backup files -- the backup pieces -- that are created when you issue an RMAN backup command. A backup set is RMAN's name for a collection of files associated with a backup. A backup set is composed of one or more backup pieces.
8. What is a Backup piece?
A physical binary file created by RMAN during a backup. Backup pieces are written to your backup medium, whether to disk or tape. They contain blocks from the target database's datafiles, archived redo log files, and control files. When RMAN constructs a backup piece from datafiles, there are several rules that it follows:
· A datafile cannot span backup sets
· A datafile can span backup pieces as long as it stays within one backup set
· Datafiles and control files can coexist in the same backup sets
· Archived redo log files are never in the same backup set as datafiles or control files
RMAN is the only tool that can operate on backup pieces. If you need to restore a file from an RMAN backup, you must use RMAN to do it; there is no way to manually reconstruct database files from the backup pieces.
9. What
are the benefits of using RMAN?
1. Incremental backups that only copy data blocks that have changed since the last backup.
2. Tablespaces are not put in backup mode, thus there is no extra redo log generation during online backups.
3. Detection of corrupt blocks during backups.
4. Parallelization of I/O operations.
5. Automatic logging of all backup and recovery operations.
6. Built-in reporting and listing commands.
10. What is the use of the PREVIEW option of the RESTORE command?
The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific restore operation. The output generated by the command is in the same format as the LIST command. In addition, the PREVIEW SUMMARY option can be used to produce a summary report with the same format as the LIST SUMMARY command. The following examples show how these commands are used:
# Spool output to a log file
SPOOL LOG TO c:\oracle\rmancmd\restorepreview.lst;
# Show what files will be used to restore the SYSTEM tablespace's datafile
RESTORE DATAFILE 2 PREVIEW;
# Show what files will be used to restore a specific tablespace
RESTORE TABLESPACE users PREVIEW;
# Show a summary for a full database restore
RESTORE DATABASE PREVIEW SUMMARY;
# Close the log file
SPOOL LOG OFF;
11. Where should
the catalog be created?
The recovery catalog to be used by RMAN should be created in a separate database, not in the target database, because the target database will be shut down while datafiles are restored.
12. How many times does oracle ask before dropping a catalog?
The default is two times: once for the actual command and once more for confirmation.
13. How to view the current defaults for the database.
RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/10.1.0/db_1/dbs/snapcf_test.f'; # default
14. Backup the database.
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}
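The script above is a template with shell variables ($level, ${level_keyword}, ${target_db}) substituted by a wrapper script. As a sketch, a concrete level-1 run (the tag name is an assumed example) would look like:

```sql
RUN {
  -- Level 1 incremental: copies only blocks changed since the last
  -- level 0 or level 1 backup
  BACKUP INCREMENTAL LEVEL 1
    TAG INCPROD_1 DATABASE INCLUDE CURRENT CONTROLFILE;
  -- Back up archived logs not yet backed up, then delete them from disk
  BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES DELETE INPUT;
}
```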
15. How to
resolve the ora-19804 error
Basically this error occurs because the flash recovery area is full. One way to solve it is to increase the space available to the flash recovery area:
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=5G; -- it can be set in K, M or G.
rman>backup database;
……………….
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 04-JUL-05
channel ORA_DISK_1: finished piece 1 at 04-JUL-05
piece handle=/u02/app/oracle/flash_recovery_area/TEST/backupset/2005_07_04/o1_mf_ncsnf_TAG20050704T205840_1dmy15cr_.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 04-JUL-05
Oracle Flashback
After taking a back up resync the database.
Restoring the whole database.
run {
shutdown immediate;
startup mount;
restore database;
recover database;
alter database open;
}
16. What are the various reports available with RMAN
rman>list backup;
RMAN> list archivelog all;
17. What does backup incremental level=0 database do?
Backup incremental level=0 is a full backup of the database: RMAN> backup incremental level=0 database;
You can also use backup full database; which means the same thing as level=0.
18. What is the difference between DELETE INPUT and DELETE ALL command in backup?
Generally speaking, LOG_ARCHIVE_DEST_n can point to multiple disk locations where we archive the files. When a command is issued through RMAN to back up archivelogs, it uses one of the locations to back up the data. With DELETE INPUT, only the copies that were backed up get deleted; with DELETE ALL, the copies in all LOG_ARCHIVE_DEST_n locations get deleted.
DELETE ALL applies only to archived logs: delete expired archivelog all;
19. How do I backup archive log?
In order to backup archivelog we have to do the following:-
run
{
allocate channel t1 type 'SBT_TAPE';
delete noprompt archivelog until time = 'sysdate-3/24';
delete noprompt obsolete;
release channel t1;
}
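Note that the run block above only deletes archived logs and obsolete backups. To actually back the archived logs up first, a minimal sketch (channel name is an arbitrary example) would be:

```sql
RUN {
  ALLOCATE CHANNEL t1 TYPE 'SBT_TAPE';
  -- Back up all archived logs and remove them from disk once backed up
  BACKUP ARCHIVELOG ALL DELETE INPUT;
  RELEASE CHANNEL t1;
}
```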
20. How do I do an incremental backup after a base backup?
run
{
backup incremental level $level ${level_keyword}
tag INC${target_db}_$level database include current controlfile;
backup archivelog all not backed up 1 times delete input;
}
21. In catalog
database, if some of the blocks are corrupted due to system crash, How will you
recover?
using RMAN BLOCK
RECOVER command
22. You have
taken a manual backup of a datafile using o/s. How RMAN will know about it?
You have to
catalog that manual backup in RMAN's repository by command
RMAN> catalog
datafilecopy '/DB01/BACKUP/users01.dbf';
restrictions:
> Accessible
on disk
> A complete
image copy of a single file
23. Where RMAN
keeps information of backups if you are using RMAN without Catalog?
RMAN keeps
information of backups in the control file.
CATALOG vs
NOCATALOG
the difference
is only who maintains the backup records like when is the last successful
backup incremental differential etc.
In CATALOG mode
another database (TARGET database) stores all the information.
In NOCATALOG
mode controlfile of Target database is responsible.
24. How do you
see information about backups in RMAN?
RMAN> List
Backup;
Use this SQL to check progress:
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;
Here give the SID; when the backup starts it will show the SID.
25. How RMAN
improves backup time?
RMAN backup time consumption is much less compared to a regular online backup, as RMAN copies only modified blocks.
26. What is the
advantage of RMAN utility?
Central
Repository
Incremental
Backup
Corruption
Detection
Advantage over
tradition backup system:
1). Copies only the filled blocks, i.e. even if 1000 blocks are allocated to a datafile but only 500 are filled with data, RMAN will only create a backup for those 500 filled blocks.
2). Incremental and cumulative backups.
3). catalog and
no catalog option.
4). detection of
corrupted blocks during backup;
5). can create
and store the backup and recover scripts.
6). Increased performance through automatic parallelization (allocating channels) and less redo generation.
27. List the
encryption options available with RMAN?
RMAN offers
three encryption modes: transparent mode, password mode and dual mode.
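A minimal sketch of enabling each mode (the password is a placeholder; transparent and dual modes additionally require a configured Oracle wallet):

```sql
-- Transparent mode: uses the Oracle wallet; no password needed to restore
CONFIGURE ENCRYPTION FOR DATABASE ON;

-- Password mode: the same password must be supplied at restore time
SET ENCRYPTION ON IDENTIFIED BY MyBackupPwd ONLY;

-- Dual mode: either the wallet or the password can decrypt the backup
SET ENCRYPTION ON IDENTIFIED BY MyBackupPwd;
```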
28. What are the
steps required to perform in $ORACLE_HOME for enabling the RMAN backups with
netbackup or TSM tape library software?
I can explain
what are all the steps to take a rman backup with TSM tape library as follows
1.Install TDPO
(default path /usr/tivoli/tsm/client/oracle/)
2. Once you have installed TDPO, one link is automatically created from the TDPO directory to /usr/lib. Now we need to create a soft link between the OS and ORACLE_HOME:
ln -s /usr/lib/libiobk64.a $ORACLE_HOME/lib/libobk.a (very important)
3.Uncomment and
Modify tdpo.opt file which in
/usr/tivoli/tsm/client/oracle/bin/tdpo.opt
as follows
DSMI_ORC_CONFIG
/usr/Tivoli/tsm/client/oracle/bin64/dsm.opt
DSMI_LOG
/home/tmp/oracle
TDPO_NODE backup
TDPO_PSWDPATH
/usr/tivoli/tsm/client/oracle/bin64
4.create dsm.sys
file in same path and add the entries
SErvername
<Server name >
TCPPort 1500
passwordaccess prompt
nodename backup
enablelanfree
yes
TCPSERVERADDRESS
<Server Address>
5.Create dsm.opt
file add an entry
SErvername
<Server name >
6.Then take
backup
RMAN>run
{
allocate channel
t1 type 'sbt_tape' parms
'ENV
(TDPO_OPTFILE /usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
backup database
include current controlfile;
release channel
t1;
}
29. What is the significance
of incarnation and DBID in the RMAN backups?
When you have
multiple databases you have to set your DBID (Database Id) which is unique to
each database. You have to set this before you do any restore operation from
RMAN.
There is a possibility that the incarnation of your database may be different, so it is advised to reset it to match the current incarnation. If you run the RMAN command ALTER DATABASE OPEN RESETLOGS, then RMAN resets the target database automatically so that you do not have to run RESET DATABASE. By resetting the database, RMAN considers the new incarnation as the current incarnation of the database.
30. List at least 6 advantages of RMAN backups compared to traditional hot backups?
RMAN has the
following advantages over Traditional backups:
1. Ability to
perform INCREMENTAL backups
2. Ability to
Recover one block of datafile
3. Ability to
automatically backup CONTROLFILE and SPFILE
4. Ability to delete the older ARCHIVE REDOLOG files automatically.
5. Ability to perform
backup and restore with parallelism.
6. Ability to
report the files needed for the backup.
7. Ability to
RESTART the failed backup, without starting from beginning.
8. Much faster
when compared to other TRADITIONAL backup strategies.
31. How do you enable
the autobackup for the controlfile using RMAN?
issue command at
rman prompt.....
RMAN>
configure controlfile autobackup on;
also we can
configure controlfile backup format......
RMAN>
configure controlfile autobackup format for device type disk to
2> '$HOME/BACKUP/RMAN/%F.bkp';
$HOME/BACKUP/RMAN/ can be any desired location.
32. How do you
identify what are the all the target databases that are being backed-up with
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to connect to the target database and run LIST BACKUP; this will give you the backup information with date and timing.
33. What is the
difference between cumulative incremental and differential incremental backups?
Differential
backup: This is the
default type of incremental backup which backs up all blocks changed after the
most recent backup at level n or lower.
Cumulative backup: Backs up all blocks changed after the most recent backup at level n-1 or lower.
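As a sketch, the two kinds are distinguished by the CUMULATIVE keyword:

```sql
-- Differential (the default): blocks changed since the most recent
-- level 1 or level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative: blocks changed since the most recent level 0 backup
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```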
34. How do you
identify the block corruption in RMAN database? How do you fix it?
Using the V$DATABASE_BLOCK_CORRUPTION view you can find which blocks are corrupted:
RMAN> block recover datafile <fileid> block <blockid>;
Using the above statement you recover the corrupted blocks.
First check whether the block is corrupted or not by using this command:
SQL> select file#, block# from v$database_block_corruption;
FILE#  BLOCK#
2      507
The above block is corrupted...
Connect to RMAN. To recover the block use this command:
RMAN> blockrecover datafile 2 block 507;
The above command recovers block 507. Now just verify it:
RMAN> blockrecover corruption list;
35. How do you
clone the database using RMAN software? Give brief steps? When do you use
crosscheck command?
Crosscheck is used to check whether backup pieces, proxy copies, or disk copies still exist.
Two commands are available in RMAN to clone a database:
1) Duplicate
2) Restore.
36. What is the
difference between obsolete RMAN backups and expired RMAN backups?
The term
obsolete does not mean the same as expired. In short obsolete means "not needed "
whereas expired means "not found."
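The related commands, as a sketch:

```sql
-- Obsolete: backups no longer needed under the retention policy
REPORT OBSOLETE;
DELETE OBSOLETE;

-- Expired: backups marked as missing by a crosscheck of the repository
CROSSCHECK BACKUP;
DELETE EXPIRED BACKUP;
```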
37. List some of
the RMAN catalog view names which contain the catalog information?
RC_DATABASE_INCARNATION
RC_BACKUP_COPY_DETAILS
RC_BACKUP_CORRUPTION
RC_BACKUP_DATAFILE_SUMMARY
to name a few
38. What is db_recovery_file_dest
? When do you need to set this value?
DB_RECOVERY_FILE_DEST specifies the location of the flash recovery area. You need to set it (together with DB_RECOVERY_FILE_DEST_SIZE) when you use the flash recovery area, for example when the Flashback Database option is on.
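A minimal sketch of setting it (the size and path are placeholder examples):

```sql
-- Size must be set before (or together with) the destination
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra';
```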
39. How do you
setup the RMAN tape backups?
RMAN target /
run
{
allocate channel ch1 device type sbt_tape maxpiecesize 4g
format '%d_%U_%T_%t';
sql 'alter system switch logfile';
backup database;
backup archivelog from time 'sysdate-7';
backup format '%d_CTLFILE_P_%U_%T_%t' current controlfile;
release channel ch1;
}
This is backup
script for Tivoli Backup Server
40. How do you
install the RMAN recovery catalog?
Steps to be
followed:
1) Create
connection string at catalog database.
2) At catalog
database create one new user or use existing user and give that user a
recovery_catalog_owner privilege.
3) Log in to RMAN with the connection string:
a) export ORACLE_SID
b) rman target / catalog <user>@<connection string>
4) RMAN> create catalog;
5) RMAN> register database;
41. When do you
recommend hot backup? What are the pre-reqs?
Database must be
Archivelog Mode
Archive
Destination must be set and LOG_ARCHIVE_START TRUE (EARLIER VERSION BEFORE 10G)
If you go
through RMAN then
CONFIGURE
RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP
OPTIMIZATION OFF; # default
CONFIGURE
DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE
CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/oracle/autobackup/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE
DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG
BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE
MAXSETSIZE TO UNLIMITED; # default
CONFIGURE
ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE
ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE
ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE
SNAPSHOT CONTROLFILE NAME TO
'/u01/app/oracle/product/10.2.0/db_2/dbs/snapcf_dba.f';
# default
42. What is the
difference between physical and logical backups?
In Oracle, a logical backup is one taken using either traditional Export/Import or the newer Data Pump, whereas a physical backup is when you take the physical OS database-related files as backup.
43. What is
RAID? What is RAID0? What is RAID1? What is RAID 10?
RAID: a redundant array of independent disks.
RAID0: concatenation and striping.
RAID1: mirroring.
RAID10: a combination of RAID1 and RAID0 (mirrored sets that are then striped).
44. What are
things which play major role in designing the backup strategy?
I believe a good backup strategy is not only simply backup but also a contingency plan. In this case you should consider the following:
1. How long is
the allowable down time during recovery? - If short you could consider using
dataguard.
2. How long is
the backup period? - If short I would advise to use RMAN instead of user
managed backup.
3. If limited
disk space for backup never use user managed backup.
4. If the database is large you could consider doing full RMAN backups on a weekend and an incremental backup on weekdays.
5. Schedule your backup at the time when there is least database activity; this is to avoid resource contention.
6. Backup scripts should always be automated via scheduled jobs. This way operators would never miss a backup period.
7. Retention period should also be considered. Try keeping at least 2 full backups (current and previous backup).
Cold backup:
shutdown the database and copy the datafiles with the help of
O.S. command.
this is simply copying of datafiles just like any other text file copy.
Hot backup:
backup process starts even though database in running. The process to take a
hot backup is
1) sql> alter
database begin backup;
2) copy the
datafiles.
3) after copying
sql> alter
database end backup;
The begin backup clause records a timestamp that is used for backup consistency: when begin backup is issued, the timestamp is generated; during a restore, the database restores the data from the backup up to that timestamp, and the remaining changes are recovered from the archived logs.
45. What is hot
backup and what is cold backup?
A hot backup is taken when the database is online; a cold backup is taken during the shutdown period.
46. How do you
test that your recovery was successful?
SQL> SELECT
count(*) FROM flashback_table;
47. How do you
backup the Flash Recovery Area?
A:RMAN>
BACKUP RECOVERY FILES;
The files on
disk that have not previously been backed up will be backed up. They are
full and incremental backup sets, control file auto-backups, archive logs and
datafile copies.
48. How to
enable Fast Incremental Backup to backup only those data blocks that have
changed?
A:SQL> ALTER
DATABASE enable BLOCK CHANGE TRACKING;
49. How do you
set the flash recovery area?
A: SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';
50. How can you
use the CURRENT_SCN column in the V$DATABASE view to obtain the current SCN?
A:SQL> SELECT
current_scn FROM v$database;
51. You have
taken a manual backup of a datafile using o/s. How RMAN will know about it?
You have to catalog
that manual backup in RMAN's repository by command
RMAN> catalog datafilecopy '/DB01/BACKUP/users01.dbf';
Restrictions:
> Accessible on disk
> A complete image copy of a single file
52. In catalog
database, if some of the blocks are corrupted due to system crash, How will you
recover?
using RMAN BLOCK
RECOVER command
53. List advantages of RMAN backups compared to traditional hot backups?
RMAN has the
following advantages over Traditional backups:
1. Ability to perform INCREMENTAL backups
2. Ability to recover one block of a datafile
3. Ability to automatically backup CONTROLFILE and SPFILE
4. Ability to delete the older ARCHIVE REDOLOG files
54. How do you
identify the expired, active, obsolete backups? Which RMAN command you use?
Use commands:
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> list backup;
RMAN> list archivelog all;
55. How do you
enable the autobackup for the controlfile using RMAN?
RMAN>
configure controlfile autobackup on;
Also we can configure the controlfile backup format:
RMAN> configure controlfile autobackup format for device type disk
56. How do you
identify what are the all the target databases that are being backed-up with
RMAN database?
You don't have any view to identify whether it is backed up or not. The only option is to connect to the target database and run LIST BACKUP; this will give you the backup information with date and timing.
57. What is the
difference between cumulative incremental and differential incremental backups?
Differential
backup: This is the default type of incremental backup which backs up all
blocks changed after the most recent backup at level n or lower.
Cumulative
backup: Backup all blocks changed after the most recent backup at level n-1 or
lower
58. Explain how
to setup the physical stand by database with RMAN?
$ export ORACLE_SID=TEST
$ rman target /
RMAN>
show all;
Using the target database controlfile instead of recovery catalog. RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE BACKUP
OPTIMIZATION
59. What is
auxiliary channel in RMAN? When do you need this?
An auxiliary channel is a link to the auxiliary instance. If you do not have automatic channels configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.
60. What is
backup set?
RMAN can also
store its backups in an RMAN-exclusive format which is called backup set. A
backupset is a collection of backup pieces, each of which may contain one or
more datafile backups
61. What is RMAN
and how does one use it?
Recovery Manager (RMAN) is an Oracle-provided utility for backing up, restoring and recovering Oracle databases. RMAN ships with the database server and doesn't require a separate installation. The RMAN executable is located in your ORACLE_HOME/bin directory.
62. What kind of
backup are supported by RMAN?
Backup sets, datafile copies, and OS backups.
What is the Flash Recovery Area?
A: It is a unified storage location for all recovery-related files and activities in an Oracle database. It includes the control file, archived log files, flashback logs, control file autobackups, datafiles, and RMAN files.
63. How do you
define a Flash Recovery Area?
A: To define a
Flash Recovery Area set the following Oracle Initialization Parameters.
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G;
SQL> ALTER SYSTEM SET db_recovery_file_dest = '/u10/oradata/school';
64. How do you
use the V$RECOVERY_FILE_DEST view to display information regarding the
flashrecovery area?
A: SQL> SELECT name, space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;
65. How can you
display warning messages?
A: SQL> SELECT object_type, message_type, message_level, reason, suggested_action FROM dba_outstanding_alerts;
66. How to use the best practice to use Oracle Managed Files (OMF) to let the Oracle database create and manage the underlying operating system files of a database?
A: SQL> ALTER SYSTEM SET db_create_file_dest = '/u03/oradata/school';
SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '/u04/oradata/school';
67. How to
enable Fast Incremental Backup to backup only those data blocks that have
changed?
A:SQL> ALTER
DATABASE enable BLOCK CHANGE TRACKING;
68. How do you
monitor block change tracking?
A:SQL> SELECT
filename, status, bytes FROM v$block_change_tracking;
It shows where
the block change-tracking file is located, the status of it and the size.
69. How do you use the V$BACKUP_DATAFILE view to display how effective the block change tracking is in minimizing the incremental backup I/O?
A: SQL> SELECT file#, AVG(datafile_blocks), AVG(blocks_read),
AVG(blocks_read/datafile_blocks), AVG(blocks)
FROM v$backup_datafile
WHERE used_change_tracking = 'YES' AND incremental_level > 0
GROUP BY file#;
If the AVG(blocks_read/datafile_blocks) column is high then you may have to decrease the time between the incremental backups.
70. How do you
backup the entire database?
A:RMAN>
BACKUP DATABASE;
71. How do you
backup an individual tablespaces?
A:RMAN>
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> BACKUP
TABLESPACE system;
72. How do you
backup datafiles and control files?
A:RMAN>
BACKUP DATAFILE 3;
RMAN> BACKUP
CURRENT CONTROLFILE;
Use a fast recovery without restoring all backups from their backup location to the location specified in the controlfile.
A:RMAN>
SWITCH DATABASE TO COPY;
73. RMAN will adjust the control file so that the data files point to the backup file location and then starts recovery. Why use RMAN?
A. 1. No Extra
Costs.. It is available free.
2.RMAN
introduced in Oracle 8 it has become simpler with new version and easier that
user managed backups.
3.Proper
Security
4.You are 100%
sure your database has been backed up .
5. It contains details of backups taken in the central repository.
6. Facility of testing the validity of backups, with commands like crosscheck to check the status of backups.
7. Oracle 10g has got further optimized incremental backups, which has resulted in improvement of performance during backup and recovery time.
9. Parallel operations are supported.
10. Better querying facility for knowing different details of a backup.
11. No extra redo is generated when a backup is taken, compared to an online backup without RMAN, which results in saving of space on the hard disk.
13.RMAN is an
intelligent tool
14.Maintains
repository of backup metadata.
15.Remembers
backup locations
16.Knows what
needs backup set locations
17.Knows what
needs to be backed up
18.Knows what is
required for recovery
19.Know what
backups are redundant
20.It handles
database corruptions
74. Oracle
Enhancement for Rman in 10g
A. 1. Flash Recovery Area
2. Incrementally Updated Backups
3. Faster Incremental Backups
4. SWITCH DATABASE command
5. Binary Compression
6. Global Scripting
7. Duration Clause
8. Configure This
9. Automatic Channel Failover
10. Compress Backup Sets
11. Recovery Through Resetlogs
12. Cross-Backup Sets
75. Global
Scripting
A. RMAN> print script full_backup to file 'my_script_file.txt'
Oracle Database 10g provides a new concept of global scripts, which you can execute against any database registered in the recovery catalog, as long as your RMAN client is connected to the recovery catalog and a target database simultaneously.
RMAN> create global script global_full_backup
76. Outline the
steps for recovery of missing data file?
Losing Datafiles
Whenever you are in NoArchivelog Mode:
###################################################
If you are in noarchivelog mode and you lose any datafile, then whether it is a temporary or permanent media failure, the database will automatically shut down. If the failure is temporary, then correct the underlying hardware and start the database. Usually crash recovery will perform recovery of the committed transactions of the database from the online redo log files. If you have a permanent media failure, then restore the whole database from a good backup. How to restore a database is as follows:
If a media failure damages datafiles in a NOARCHIVELOG database, then the only option for recovery is usually to restore a consistent whole database backup. As you are in noarchivelog mode, you have to understand that changes made after the backup was taken are lost.
If you have a logical backup, that is an export file, you can import that also.
In order to recover a database in noarchivelog mode you have to follow the following procedure.
1)If the database is open shutdown it.
SQL>SHUTDOWN IMMEDIATE;
2)If possible, correct the media problem so that the backup database files can be restored to their original locations.
3)Copy all of the backup control files, datafiles to the default location if you corrected media failure. However you can restore to another location. Remember that all of the files not only the damaged files.
4)Because online redo logs are not backed up, you cannot restore them with the datafiles and control files. In order to allow the database to reset the online redo logs, you must have to do incomplete recovery:
RECOVER DATABASE UNTIL CANCEL
CANCEL
5)Open the database in RESETLOGS mode:
ALTER DATABASE OPEN RESETLOGS;
In order to rename your control files or in case of media damage you can copy it to another location and then by setting (if spfile)
STARTUP NOMOUNT
alter system set control_files='+ORQ/orq1/controlfile/control01.ctl','+ORQ/orq1/controlfile/control02.ctl' scope=spfile;
STARTUP FORCE MOUNT;
In order to rename data files or online redo log files first copy it to new location and then point control file to new location by,
ALTER DATABASE RENAME FILE '+ORQ/orq1/datafile/system01.dbf'
TO '+ORQ/orq1/datafile/system02.dbf';
Losing Datafiles When You Are in ARCHIVELOG Mode:
###################################################
If the lost datafile belongs to the SYSTEM tablespace, or contains active undo segments, the database shuts down. If the failure is temporary, correct the underlying hardware and start the database; crash recovery will recover the committed transactions from the online redo log files.
If the lost datafile is not in the SYSTEM tablespace and does not contain active undo segments, the affected datafile is taken offline and the database remains open. To fix the problem, take the affected tablespace offline and then recover the tablespace.
77. Outline the steps for recovery with missing online redo logs?
Redo log is CURRENT (DB was shut down cleanly)
If the CURRENT redo log is lost and the DB was closed consistently, OPEN RESETLOGS can be issued directly without any transaction loss. It is advisable to take a full backup of the DB immediately after startup.
Redo log is CURRENT (DB was not shut down cleanly)
When a current redo log is lost, the transactions in that log file are lost before they make it to the archived logs. Since a DB startup can no longer perform crash recovery (the now-available online log files are not sufficient to start the DB in a consistent state), incomplete media recovery is the only option. Restore the DB from a previous backup, recover to the point just before the lost redo log file, and open the DB in RESETLOGS mode. There is some transaction loss in this scenario:
RMAN> RESTORE CONTROLFILE FROM '<backup tag location>';
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL TIME "to_date('MAR 05 2009 19:00:00','MON DD YYYY HH24:MI:SS')";
RMAN> ALTER DATABASE OPEN RESETLOGS;
78. Outline the steps for recovery with missing archived redo logs?
If a redo log file is already archived, its loss can safely be ignored. Since all the changes in the DB are already archived, and the online log file is only waiting for its turn to be re-written by LGWR (redo log files are written circularly), the loss of the redo log file doesn't matter much. It may be re-created using:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP <group#>;
This re-creates the log file group (repeat the command for each affected group) and no transactions are lost. The database can be opened normally after this.
79. What is FRA? When do you use it?
The flash recovery area is where you can store not only the traditional components of a backup strategy, such as control files, archived log files, and Recovery Manager (RMAN) datafile copies, but also a number of other file components such as flashback logs. The flash recovery area simplifies backup operations, and it increases the availability of the database because many backup and recovery operations that use the flash recovery area can be performed while the database is open and available to users.
Because the space in the flash recovery area is limited by the initialization parameter DB_RECOVERY_FILE_DEST_SIZE, the Oracle database keeps track of which files are no longer needed on disk so that they can be deleted when there is not enough free space for new files. Each time a file is deleted from the flash recovery area, a message is written to the alert log.
A message is written to the alert log in other circumstances as well. If no files can be deleted and the recovery area's used space is at 85 percent, a warning message is issued. When the space used reaches 97 percent, a critical warning is issued. These warnings are recorded in the alert log file, are viewable in the data dictionary view DBA_OUTSTANDING_ALERTS, and are available on the main page of EM Database Control.
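The 85/97 percent thresholds above amount to simple arithmetic on the recovery area's size limit and usage, which could be sketched like this (a minimal illustration, not how Oracle computes its alerts internally):

```python
def fra_alert(space_limit_bytes: int, space_used_bytes: int) -> str:
    """Mimic the flash recovery area alert thresholds: a warning at
    85% usage and a critical warning at 97%, as recorded in the alert
    log and DBA_OUTSTANDING_ALERTS."""
    pct_used = 100.0 * space_used_bytes / space_limit_bytes
    if pct_used >= 97:
        return "CRITICAL"
    if pct_used >= 85:
        return "WARNING"
    return "OK"
```

In practice you would read the limit and usage from V$RECOVERY_FILE_DEST rather than pass them in by hand.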
80. What is a Channel? How do you enable parallel backups with RMAN?
A channel is the connection that RMAN requires to the target database. This link is required when backup and recovery operations are performed and recorded. A channel can be allocated manually or preconfigured by using automatic channel allocation.
The number of allocated channels determines the maximum degree of parallelism used during backup, restore, or recovery. For example, if you allocate 4 channels for a backup operation, 4 background processes for the operation can run concurrently.
Parallelization of backup sets allocates multiple channels and assigns files to specific channels. You can configure parallel backups by setting the PARALLELISM option of the CONFIGURE command to a value greater than 1, or by manually allocating multiple channels.
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
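The idea of "assigning files to specific channels" can be sketched as a simple round-robin distribution. This is an illustration of the concept only; RMAN's actual channel scheduling is more sophisticated (it also balances by file size and disk affinity).

```python
def assign_files_to_channels(datafiles, parallelism):
    """Distribute backup files across channels round-robin. The number
    of channels caps the degree of parallelism: with PARALLELISM 2,
    two channel processes can run concurrently."""
    channels = [[] for _ in range(parallelism)]
    for i, f in enumerate(datafiles):
        channels[i % parallelism].append(f)
    return channels
```

With 5 datafiles and PARALLELISM 2, one channel backs up 3 files and the other 2, and the two sets are written concurrently.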
81. What are RTO, MTBF, and MTTR?
RTO (Recovery Time Objective): the maximum amount of time that the database can be unavailable and still satisfy SLAs.
MTBF (Mean Time Between Failures): the average time between failures; a measure of reliability.
MTTR (Mean Time To Recover): the average time needed to recover after a failure; fast recovery solutions aim to minimize MTTR.
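These metrics relate through the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR), which shows why both high reliability and fast recovery matter (this formula is general reliability engineering, not Oracle-specific):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is
    up. Higher MTBF (reliability) and lower MTTR (fast recovery) both
    push availability toward 1.0."""
    return mtbf_hours / (mtbf_hours + mttr_hours)
```

For example, a system that fails on average every 999 hours and takes 1 hour to recover is available 99.9% of the time ("three nines").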
82. How do you enable encryption for RMAN backups?
If you wish to modify your existing backup environment so that all RMAN backups are encrypted, perform the following steps:
· Set up the Oracle Encryption Wallet.
· Issue the following RMAN commands:
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256'; -- use 256-bit encryption
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON; -- encrypt backups
83. What is the difference between restoring and recovering?
Restoring involves copying backup files from secondary storage (backup media) to disk. This can be done to replace damaged files or to copy/move a database to a new location.
Recovery is the process of applying redo logs to the database to roll it forward. One can roll forward until a specific point in time (before the disaster occurred), or roll forward until the last transaction recorded in the log files.
SQL> CONNECT SYS AS SYSDBA
SQL> RECOVER DATABASE UNTIL TIME '2001-03-06:16:00:00' USING BACKUP CONTROLFILE;
RMAN> run {
  set until time "to_date('04-Aug-2004 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
  restore database;
  recover database;
}
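The restore-then-roll-forward sequence above can be modeled in miniature: start from a restored backup image, then apply redo records in order up to the UNTIL TIME bound. This is a conceptual sketch only; the record format and names here are invented for illustration.

```python
from datetime import datetime

def recover_until_time(restored_state: dict, redo_records, until: datetime):
    """Apply redo records in timestamp order, stopping at the UNTIL
    TIME bound, the way RECOVER DATABASE UNTIL TIME rolls a restored
    backup forward. Records after the bound are discarded (incomplete
    recovery), so later transactions are lost."""
    state = dict(restored_state)       # start from the restored backup image
    applied = 0
    for ts, key, value in sorted(redo_records):
        if ts > until:                 # past the recovery target: stop
            break
        state[key] = value             # roll the change forward
        applied += 1
    return state, applied
```

Applied to the RMAN example: everything committed before 04-Aug-2004 00:00:00 is rolled forward; anything after that timestamp never reaches the recovered database, which is why the database must then be opened with RESETLOGS.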