Thursday 16 March 2017

Working with Redo Logs



Script to View Online Redo Log Information:


SQL> SELECT * FROM V$LOGFILE;
OR

SQL> SELECT a.group#, a.thread#, a.status grp_status,
            b.member member, b.status mem_status,
            a.bytes/1024/1024 mbytes
     FROM   v$log a, v$logfile b
     WHERE  a.group# = b.group#
     ORDER BY a.group#, b.member;


Status Values of Online Redo Log Groups in the V$LOG View:

CURRENT The log group is currently being written to by the log writer.

ACTIVE The log group is required for crash recovery and may or may not have been archived.

CLEARING The log group is being cleared out by an ALTER DATABASE CLEAR LOGFILE command.

CLEARING_CURRENT The current log group is being cleared of a closed thread.

INACTIVE The log group isn’t required for crash recovery and may or may not have been archived.

UNUSED The log group has never been written to; it was recently created.
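
To see which status each group is currently in, you can query V$LOG directly; a minimal sketch using standard columns (the output will of course depend on your database):

SQL> SELECT group#, thread#, sequence#, archived, status
     FROM   v$log
     ORDER BY group#;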




Status Values of Online Redo Log File Members in the V$LOGFILE View:


INVALID The log file member is inaccessible or has been recently created.

DELETED The log file member is no longer in use.

STALE The log file member’s contents aren’t complete.

NULL The log file member is being used by the database.
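
Similarly, member-level status can be checked with a simple query against V$LOGFILE (standard columns; a NULL status simply appears as a blank in the output):

SQL> SELECT group#, member, status, type
     FROM   v$logfile
     ORDER BY group#;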



How to Rename the Redo Log Files:

1. Shut down the database.
2. Rename the redo log files at OS level (using the 'mv' command in UNIX).
3. Startup mount.

SQL> alter database rename file '<location with previous file name>' to '<location with new file name>';
Repeat this for all renamed redo log files.
SQL> alter database open;
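
As a concrete illustration, here is a sketch of the full sequence, assuming a single redo log file is moved from /u01/oradata/ORCL/redo01.log to /u02/oradata/ORCL/redo01.log (both paths and the ORCL database name are hypothetical):

SQL> shutdown immediate;
SQL> host mv /u01/oradata/ORCL/redo01.log /u02/oradata/ORCL/redo01.log
SQL> startup mount;
SQL> alter database rename file '/u01/oradata/ORCL/redo01.log' to '/u02/oradata/ORCL/redo01.log';
SQL> alter database open;
SQL> select member from v$logfile;

The final query simply verifies that V$LOGFILE now shows the new path.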


How to Multiplex the Redo Log Files:

To protect against a failure involving the redo log itself, Oracle Database allows a multiplexed redo log, meaning that two or more identical copies of the redo log can be automatically maintained in separate locations. For the most benefit, these locations should be on separate disks. Even if all copies of the redo log are on the same disk, however, the redundancy can help protect against I/O errors, file corruption, and so on. When redo log files are multiplexed, LGWR concurrently writes the same redo log information to multiple identical redo log files, thereby eliminating a single point of redo log failure.
Multiplexing is implemented by creating groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of the group. Each redo log group is defined by a number, such as group 1, group 2, and so on.
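
For example, a new multiplexed group with two members on separate disks can be created in a single statement; the group number, size, and file paths below are hypothetical:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4
     ('/u01/oradata/ORCL/redo04a.log', '/u02/oradata/ORCL/redo04b.log')
     SIZE 200M;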

FOR EXAMPLE:

In group 1 all members contain the same data; if one member is corrupted, another member is used for recovery.


SQL> SELECT MEMBER FROM V$LOGFILE WHERE GROUP# = 1;

SQL> ALTER DATABASE ADD LOGFILE MEMBER '<location with redo log file name>' TO GROUP <group_num>;
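
A concrete sketch, assuming two existing groups and hypothetical file locations on a second disk; note that newly added members show a status of INVALID in V$LOGFILE until LGWR writes to them for the first time, which is expected:

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo01b.log' TO GROUP 1;
SQL> ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/ORCL/redo02b.log' TO GROUP 2;
SQL> SELECT group#, member, status FROM v$logfile ORDER BY group#;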







Thursday 2 March 2017

Decommission of Oracle Database




Decommission: the process of removing, retiring, or making a database inactive.
Every organization maintains its own documents (SOP – Standard Operating Procedure) for removing or decommissioning active databases.
I am presenting some general steps below.
1.      Requirement: the Business/Customer requests to proceed with database decommissioning.

2.    Notify the Business/Customer/Application team that "we are going to decommission the database at _ _ specific time".

3.      Take a fresh backup: raise a ticket, assign it to the Storage/Backup team, and coordinate with them to take the required backup and to keep it for the specified time (retention period).

4.      Make a list of the parameter file, datafiles, controlfiles, and redo log files, as sketched below the following queries:

-          Details of DB links using query: select * from dba_db_links;
-          List of all the data files using query: select * from dba_data_files;
-          List of log files using query: select * from v$logfile;
-          List of control files using query: select * from v$controlfile;
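
To keep a record of this inventory before the database is removed, the output of these queries can be spooled to a file; a minimal sketch, with a hypothetical spool file location:

SQL> spool /tmp/ORCL_decom_inventory.lst
SQL> select * from dba_db_links;
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
SQL> show parameter spfile
SQL> spool off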

5.      Bring down the database and stop the listener.

6.      Take a whole database backup and make sure the backup completes without any errors or warnings.


7.    Make sure the Storage team has taken the backup as specified in the ticket raised against them and has set the retention period correctly. You can take a sign-off mail from the Storage/Backup team in case it is required.

8.   Remove the monitoring job entries from crontab, remove any monitoring jobs running from third-party tool(s) against the database, and comment out the database entry in the oratab file located in the /etc directory.


9.   Mark in your database repository/inventory or your DB records that this specific database is being decommissioned under this specific request at this specific time. (This will help notify other DBAs in your team.)
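
Before issuing the DROP DATABASE command in the next step, it is worth confirming that you are connected to the intended database and instance; a quick sanity-check sketch using standard dynamic views:

SQL> SELECT name, dbid, created FROM v$database;
SQL> SELECT instance_name, host_name, status FROM v$instance;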

10.   Start up the database in restricted mode and issue the DROP DATABASE command.
                        SQL> startup restrict mount;
                        SQL> drop database;

11.  Check whether the instance is still running; shut it down if it is. Check and verify the alert log for the list of files deleted.

12. Now remove the trace files, archive log files, dump files, old backup files, and the respective database directories.


13.  Be careful before deleting the physical files; crosscheck and verify before deleting them.

14.  Now remove the backup entry through which the backup was getting triggered.

15.  Notify the customer/Application team about the backup ticket and backup information such as the retention period and location of the backup.

Note: The retention period may vary based on your business needs and SLA.


See Also :

- Dataguard Creation


- Pre-Check for Dataguard SwitchOver  

- Common Wait events in AWR and Solutions 

- Most Common Daily Datapump Scenarios