Saturday 26 March 2016

Monitoring ASM Instance and Database

Guidelines for Shared Pool Size in an ASM Instance


Increase shared pool size based on the following guidelines:
·         For disk groups using external redundancy: Every 100 GB of space needs 1 MB of extra shared pool plus a fixed amount of 2 MB of shared pool.

·         For disk groups using normal redundancy: Every 50 GB of space needs 1 MB of extra shared pool plus a fixed amount of 4 MB of shared pool.

·         For disk groups using high redundancy: Every 33 GB of space needs 1 MB of extra shared pool plus a fixed amount of 6 MB of shared pool.
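As a quick worked example (an illustration only, not an exact sizing rule): a disk group using external redundancy that holds roughly 1,200 GB would need about 1200/100 + 2 = 14 MB of extra shared pool. The same arithmetic expressed in SQL:

SELECT CEIL(1200/100) + 2 AS extra_shared_pool_mb FROM dual;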


How to check database Size?
 

To obtain the current database storage size that is either already on ASM or will be stored in ASM:

SELECT d+l+t DB_SPACE
FROM
(SELECT SUM(bytes)/(1024*1024*1024) d FROM v$datafile),
(SELECT SUM(bytes)/(1024*1024*1024) l FROM v$logfile a, v$log b
 WHERE a.group#=b.group#),
(SELECT SUM(bytes)/(1024*1024*1024) t FROM v$tempfile
 WHERE status='ONLINE');

The result (DB_SPACE) is reported in GB.
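This figure can be fed straight into the shared pool guidelines above. A sketch that estimates the extra shared pool (in MB) for each redundancy type, reusing the same subqueries:

SELECT CEIL((d+l+t)/100) + 2 AS external_redundancy_mb,
       CEIL((d+l+t)/50)  + 4 AS normal_redundancy_mb,
       CEIL((d+l+t)/33)  + 6 AS high_redundancy_mb
FROM
(SELECT SUM(bytes)/(1024*1024*1024) d FROM v$datafile),
(SELECT SUM(bytes)/(1024*1024*1024) l FROM v$logfile a, v$log b
 WHERE a.group#=b.group#),
(SELECT SUM(bytes)/(1024*1024*1024) t FROM v$tempfile
 WHERE status='ONLINE');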

 
How to check Disk Group Details?

select group_number, name, total_mb, free_mb, state, type from v$asm_diskgroup;
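If you also want to see how much of the free space is actually usable once redundancy is taken into account, a variation like this (a sketch using the USABLE_FILE_MB and REQUIRED_MIRROR_FREE_MB columns) can help:

select name, type, total_mb, free_mb, usable_file_mb, required_mirror_free_mb from v$asm_diskgroup;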


How to Check ASM Disk Details?

SELECT group_number, disk_number, mount_status, header_status, state, path FROM v$asm_disk;

GROUP_NUMBER  DISK_NUMBER  MOUNT_STATUS  HEADER_STATUS  STATE   PATH
           0            0  CLOSED        CANDIDATE      NORMAL  C:\ASMDISKS\_FILE_DISK1
           0            1  CLOSED        CANDIDATE      NORMAL  C:\ASMDISKS\_FILE_DISK2
           0            2  CLOSED        CANDIDATE      NORMAL  C:\ASMDISKS\_FILE_DISK3
           0            3  CLOSED        CANDIDATE      NORMAL  C:\ASMDISKS\_FILE_DISK4


Note:

Note the value of zero in the GROUP_NUMBER column for all four disks: a group number of 0 indicates that a disk has been discovered but has not yet been assigned to a disk group.
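To list only such unassigned (candidate) disks, a filter on GROUP_NUMBER = 0 can be used (a sketch; OS_MB shows the size reported by the operating system):

SELECT disk_number, path, header_status, os_mb FROM v$asm_disk WHERE group_number = 0;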


Dynamic Performance Views

V$ASM_DISKGROUP
This view provides information about a disk group. In a database instance, this view contains one row for every ASM disk group mounted by the ASM instance.

V$ASM_CLIENT
This view identifies all the client databases using various disk groups. In a Database instance, the view contains one row for the ASM instance if the database has any open ASM files.

V$ASM_DISK
This view contains one row for every disk discovered by the ASM instance. In a database instance, the view will only contain rows for disks in use by that database instance.

V$ASM_FILE
This view contains one row for every ASM file in every disk group mounted by the ASM instance.

V$ASM_TEMPLATE
This view contains one row for every template present in every disk group mounted by the ASM instance.
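As an example of combining these views, the following sketch (run on the ASM instance) summarises the space allocated per file type in each mounted disk group, using V$ASM_DISKGROUP and V$ASM_FILE:

SELECT dg.name AS diskgroup, f.type, COUNT(*) AS files,
       ROUND(SUM(f.space)/1024/1024) AS allocated_mb
FROM v$asm_diskgroup dg, v$asm_file f
WHERE dg.group_number = f.group_number
GROUP BY dg.name, f.type
ORDER BY dg.name, f.type;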



See also: Rename Disk / Delete Disk in ASM | Moving Control Files in ASM

Thursday 24 March 2016

Pre-check For SWITCHOVER using DG Broker

1. Verify the primary database instance is open (READ/WRITE Mode) and the standby database instance is mounted.

SELECT NAME, OPEN_MODE FROM GV$DATABASE;

2. Verify there are no active users connected to the databases.

SET LINES 10000 pages 10000
SELECT SID, SCHEMANAME, OSUSER, MACHINE, STATUS FROM GV$SESSION WHERE USERNAME IS NOT NULL;
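A quick per-instance summary can also be handy on RAC (a sketch):

SELECT INST_ID, STATUS, COUNT(*) AS SESSIONS
FROM GV$SESSION
WHERE USERNAME IS NOT NULL
GROUP BY INST_ID, STATUS
ORDER BY INST_ID;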

3. Check for any active jobs running

SELECT * FROM DBA_JOBS_RUNNING;

SELECT OWNER, JOB_NAME, START_DATE, END_DATE, ENABLED
FROM DBA_SCHEDULER_JOBS WHERE ENABLED='TRUE' AND OWNER <> 'SYS';

4) Check that the primary database has standby redo log files.

SELECT GROUP#, BYTES/1024, STATUS FROM GV$STANDBY_LOG;
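As a rule of thumb, each thread should have at least one more standby redo log group than it has online redo log groups; a quick sketch to compare the two counts:

SELECT (SELECT COUNT(*) FROM V$LOG) AS ONLINE_GROUPS,
       (SELECT COUNT(*) FROM V$STANDBY_LOG) AS STANDBY_GROUPS
FROM DUAL;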

5) Check that the standby database has a tempfile and that it matches the size of the tempfile on the primary database.

SELECT NAME, BYTES FROM V$TEMPFILE;

6) If backups are taken from the primary database, plan to switch the backup process to the new primary database after the switchover.

7) Check the redo transport and apply related properties - LogXptMode, NetTimeout, LogShipping, StandbyFileManagement, FastStartFailoverTarget, StandbyArchiveLocation, AlternateLocation and DelayMins - on both the primary and standby databases.

DGMGRL> SHOW DATABASE VERBOSE 'PRIMARY_DB_UNIQUE_NAME';
DGMGRL> SHOW DATABASE VERBOSE 'STANDBY_DB_UNIQUE_NAME';

Check whether StaticConnectIdentifier is configured on ALL NODES, so you have to run this command four times if you have a 2-node RAC primary and a 2-node RAC standby database.

DGMGRL> SHOW INSTANCE VERBOSE 'INSTANCE_NAME' ON DATABASE 'DB_NAME';

Check start options for the primary & standby database

srvctl config database -d PRIMARY_DB_NAME -a
srvctl config database -d STANDBY_DB_NAME -a


Check DelayMins; if it is set to a non-zero value, reduce it to zero and apply all the outstanding logs before starting the switchover.

DGMGRL> SHOW DATABASE 'PRIMARY_DB_UNIQUE_NAME' DELAYMINS;
DGMGRL> EDIT DATABASE 'PRIMARY_DB_UNIQUE_NAME' SET PROPERTY 'DELAYMINS'='0';

8)  Check the datafile status in the standby database.

SELECT DISTINCT STATUS FROM V$DATAFILE;

Note: The status should be SYSTEM or ONLINE. If any files are in RECOVER status, identify the reason before proceeding.

Check the offline datafiles in the primary & standby database

SELECT DISTINCT STATUS FROM V$DATAFILE_HEADER WHERE STATUS <> 'ONLINE';
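If anything shows up, the affected files can be identified with a sketch like:

SELECT FILE#, STATUS, ERROR, NAME
FROM V$DATAFILE_HEADER
WHERE STATUS <> 'ONLINE' OR ERROR IS NOT NULL;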

9) Perform a log switch from the primary DB and verify logs are applied on the standby database.

IF RAC

ALTER SYSTEM ARCHIVE LOG CURRENT;

NON – RAC

ALTER SYSTEM SWITCH LOGFILE;
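To confirm the logs reached and were applied on the standby, a sketch like the following (run on the standby database) can be used:

SELECT THREAD#,
       MAX(SEQUENCE#) AS LAST_RECEIVED,
       MAX(DECODE(APPLIED, 'YES', SEQUENCE#)) AS LAST_APPLIED
FROM V$ARCHIVED_LOG
GROUP BY THREAD#;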

10) Make sure the database is healthy on both the primary (all instances) and the standby (all instances). Check the alert logs and trace files on all nodes to confirm the databases are running without issues.

11) Check that the listener used by DG Broker is up and running, and verify the REMOTE_LISTENER, LISTENER_NETWORKS and LOCAL_LISTENER settings.

lsnrctl status <LSNR_NAME>

12) Check whether FAST-START FAILOVER is enabled and identify the preferred (target) standby database.
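In DGMGRL, this can be checked with:

DGMGRL> SHOW FAST_START FAILOVER;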

13) Once the switchover completes, DG Broker keeps the same protection mode and redo transport settings for the new primary database.

14) Check the following
The primary database is enabled and is in the TRANSPORT-ON state.

DGMGRL> show database 'PRIMARY_DB_UNIQUE_NAME';

Note: Check for “Intended State” and it should be “TRANSPORT-ON”


 The target standby database is enabled and is in the APPLY-ON state.

DGMGRL> show database 'STANDBY_DB_UNIQUE_NAME';
Note: Check for “Intended State” and it should be “APPLY-ON”

15) Check whether Flashback Database is enabled on the primary and standby databases. If it is not enabled, enable it.

SELECT FLASHBACK_ON FROM GV$DATABASE;
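If it is OFF, it can be enabled with a command like the one below (a sketch; DB_RECOVERY_FILE_DEST must be configured, and on an 11g physical standby redo apply typically has to be stopped before enabling flashback):

SQL> ALTER DATABASE FLASHBACK ON;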


TIPS :

"ORA-12514 during the switchover" - check whether StaticConnectIdentifier is set correctly.

TO CHECK (NEED TO CHECK ON ALL THE INSTANCES)

DGMGRL> SHOW INSTANCE VERBOSE 'INSTANCE_NAME' ON DATABASE 'DB_NAME';

TO CHANGE

DGMGRL> edit instance dg112i1 on database dg112i_prm set PROPERTY StaticConnectIdentifier='';

Example:

DGMGRL> edit instance dg112i1 on database dg112i_prm set PROPERTY StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.11.225)(PORT=1555))(CONNECT_DATA=(SERVICE_NAME=DG112I_PRM_DGMGRL.au.oracle.com)(INSTANCE_NAME=dg112i1)(SERVER=DEDICATED)))';

What Happens Behind a SQL Statement

Working behind a DML (INSERT) statement

This is a very important concept from a DBA perspective and is helpful in all aspects of troubleshooting and tuning.


1. A user requests a connection to the Oracle server through a 3-tier or an n-tier web-based client using Oracle Net Services.

2. Upon validating the request, the server starts a new dedicated server process for that user.

3. The user executes a statement to insert a new row into a table.

4. Oracle checks the user’s privileges to make sure the user has the necessary rights to perform the insertion. If the user’s privilege information isn’t already in the library cache, it will have to be read from disk into that cache.

5. If the user has the requisite privileges, Oracle checks whether a previously executed SQL statement that’s similar to the one the user just issued is already in the shared pool. If there is, Oracle executes this version of the SQL; otherwise Oracle parses and executes the user’s SQL statement. Oracle then creates a private SQL area in the user session’s PGA.

6. Oracle first checks whether the necessary data is already in the data buffer cache. If not, the server process reads the necessary table data from the datafiles on disk.

7. Oracle immediately applies row-level locks, where needed, to prevent other processes from trying to change the same data simultaneously.

8. The server process writes the change vectors to the redo log buffer.

9. The server process modifies the table data (inserts the new row) in the data buffer cache.

10. The user commits the transaction, making the insertion permanent. Oracle releases the row locks after the commit is issued.

11. The log writer process immediately writes out the changed data in the redo log buffers to the online redo log file.

12. The server process sends a message to the client process to indicate the successful completion of the INSERT operation. The message would be “COMMIT COMPLETE” in this case. (If it couldn’t complete the request successfully, it sends a message indicating the failure of the operation.)

13. Changes made to the table by the insertion may not be written to disk right away. The database writer process writes the changes in batches, so it may be some time before the inserted information is actually written permanently to the database files on disk.
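A minimal illustration of the flow above (EMP here is just a hypothetical table; the comments map the statements to the steps):

-- steps 3-9: privilege check, parse (or reuse of a cached cursor), row locks, redo vectors in the log buffer, change in the buffer cache
INSERT INTO emp (empno, ename) VALUES (9999, 'TEST');

-- steps 10-12: LGWR flushes the redo log buffer; the client receives the commit confirmation
COMMIT;

-- step 13: DBWR writes the modified blocks to the datafiles later, in batches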


Wednesday 23 March 2016

Change ASM Diskgroup from normal redundancy to external redundancy



Step 1 :
SQL> shut immediate;

Step 2:
SQL> startup mount

Step 3:
Open RMAN and take a backup of the whole database.

SQL> !rman target /

RMAN> backup device type disk format '/u01/asm1/database_format%u' database;

Starting backup at 22-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/myasm/datafile/system.302.907085491
input datafile file number=00002 name=+DATA/myasm/datafile/sysaux.303.907085493
input datafile file number=00003 name=+DATA/myasm/datafile/undotbs1.304.907085495
input datafile file number=00005 name=+DATA/myasm/datafile/users.320.907107373
input datafile file number=00006 name=+DATA/myasm/datafile/demo.319.907106321
input datafile file number=00004 name=+DATA/myasm/datafile/users.305.907085495
channel ORA_DISK_1: starting piece 1 at 22-MAR-16
channel ORA_DISK_1: finished piece 1 at 22-MAR-16
piece handle=/u01/asm1/database_format09r148in tag=TAG20160322T122141 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:37
Finished backup at 22-MAR-16

Starting Control File and SPFILE Autobackup at 22-MAR-16
piece handle=+DATA/myasm/autobackup/2016_03_22/s_907157976.272.907158211 comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAR-16

RMAN> backup device type disk format '/u01/asm1/arch_format%u' archivelog all;

Starting backup at 22-MAR-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=907086880
input archived log thread=1 sequence=3 RECID=2 STAMP=907105575
input archived log thread=1 sequence=4 RECID=3 STAMP=907110892
input archived log thread=1 sequence=5 RECID=4 STAMP=907115995
input archived log thread=1 sequence=6 RECID=5 STAMP=907116001
input archived log thread=1 sequence=7 RECID=6 STAMP=907116020
input archived log thread=1 sequence=8 RECID=7 STAMP=907116024
input archived log thread=1 sequence=9 RECID=8 STAMP=907116037
input archived log thread=1 sequence=10 RECID=9 STAMP=907116045
input archived log thread=1 sequence=11 RECID=10 STAMP=907116050
input archived log thread=1 sequence=12 RECID=11 STAMP=907116066
input archived log thread=1 sequence=13 RECID=12 STAMP=907116177
input archived log thread=1 sequence=14 RECID=13 STAMP=907116218
input archived log thread=1 sequence=15 RECID=14 STAMP=907116267
input archived log thread=1 sequence=16 RECID=15 STAMP=907116290
input archived log thread=1 sequence=17 RECID=16 STAMP=907116297
input archived log thread=1 sequence=18 RECID=17 STAMP=907153331
input archived log thread=1 sequence=19 RECID=18 STAMP=907154322
input archived log thread=1 sequence=20 RECID=19 STAMP=907154422
input archived log thread=1 sequence=21 RECID=20 STAMP=907155366
input archived log thread=1 sequence=22 RECID=21 STAMP=907155677
channel ORA_DISK_1: starting piece 1 at 22-MAR-16
channel ORA_DISK_1: finished piece 1 at 22-MAR-16
piece handle=/u01/asm1/arch_format0br148ns tag=TAG20160322T122427 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 22-MAR-16

Starting Control File and SPFILE Autobackup at 22-MAR-16
piece handle=+DATA/myasm/autobackup/2016_03_22/s_907157976.289.907158289 comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAR-16


Step 4:
Create a pfile for the database and back up the controlfile.

SQL> create pfile='/u01/asm1/initnew.ora' from spfile;
File created.

SQL> alter database backup controlfile to '/u01/asm1/control01.ctl';
Database altered.

SQL> shut immediate;


Step 5:
Run ASMCA, drop the existing disk group, and create a new disk group with external redundancy.

$ asmca
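If you prefer the command line to ASMCA, the same can be done in SQL*Plus connected to the ASM instance as SYSASM (a sketch only; the disk paths are placeholders for your environment, and the disk group must not be in use when it is dropped):

SQL> DROP DISKGROUP data INCLUDING CONTENTS;
SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
     DISK '/dev/oracleasm/disks/DISK1', '/dev/oracleasm/disks/DISK2';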

[oracle@oracleasm1 Desktop]$ . oraenv
ORACLE_SID = [myasm] ? myasm
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@oracleasm1 Desktop]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 22 12:49:17 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='/u01/asm1/initnew.ora';
ORACLE instance started.

Total System Global Area  313860096 bytes
Fixed Size                                1336232 bytes
Variable Size                        197135448 bytes
Database Buffers               109051904 bytes
Redo Buffers                         6336512 bytes

SQL> create spfile ='+DATA' from pfile='/u01/asm1/initnew.ora' ;
File created.

SQL> !rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Tue Mar 22 12:50:34 2016
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: MYASM (not mounted)

RMAN> restore controlfile from '/u01/asm1/control01.ctl';

Starting restore at 22-MAR-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/myasm/controlfile/current.257.907160017
output file name=+DATA/myasm/controlfile/current.258.907160019
Finished restore at 22-MAR-16

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> restore database;

Starting restore at 22-MAR-16
Starting implicit crosscheck backup at 22-MAR-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 device type=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 22-MAR-16

Starting implicit crosscheck copy at 22-MAR-16
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 22-MAR-16

searching for all files in the recovery area
cataloging files...
no files cataloged

using channel ORA_DISK_1

datafile 4 not processed because file is offline
datafile 5 not processed because file is offline
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to +DATA/myasm/datafile/system.302.907085491
channel ORA_DISK_1: restoring datafile 00002 to +DATA/myasm/datafile/sysaux.303.907085493
channel ORA_DISK_1: restoring datafile 00003 to +DATA/myasm/datafile/undotbs1.304.907085495
channel ORA_DISK_1: restoring datafile 00006 to +DATA/myasm/datafile/demo.319.907106321
channel ORA_DISK_1: reading from backup piece /u01/asm1/database_format09r148in
channel ORA_DISK_1: piece handle=/u01/asm1/database_format09r148in tag=TAG20160322T122141
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:03:16
Finished restore at 22-MAR-16

RMAN> recover database ;

Starting recover at 22-MAR-16
using channel ORA_DISK_1
datafile 4 not processed because file is offline
datafile 5 not processed because file is offline

starting media recovery

archived log for thread 1 with sequence 23 is already on disk as file +NEWDATA/myasm/onlinelog/group_3.258.907115921
archived log file name=+NEWDATA/myasm/onlinelog/group_3.258.907115921 thread=1 sequence=23
media recovery complete, elapsed time: 00:00:01
Finished recover at 22-MAR-16



RMAN> alter database open resetlogs;
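Once the database is open, the redundancy change can be verified from the ASM instance; the TYPE column should now show EXTERN for the new disk group:

SQL> select group_number, name, type, total_mb, free_mb from v$asm_diskgroup;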

Difference between Conventional path Export & Direct path Export

Conventional path Export
Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file.

 Direct path Export

When using a direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file.

The parameter DIRECT specifies whether you use the direct path Export (DIRECT=Y) or the conventional path Export (DIRECT=N).


 You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export.  Your exact performance gain depends upon the following factors:
- DB_BLOCK_SIZE
- the types of columns in your table
- your I/O layout (the drive receiving the export file should be separate from the disk drive where the database files reside)

For example, invoking a direct path Export with a maximum I/O buffer of 64 KB can improve the performance of the Export by almost 50%. This can be achieved by specifying the additional Export parameters DIRECT and RECORDLENGTH.
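For instance, a full direct path Export with a 64 KB buffer could be invoked as follows (a sketch; the credentials and file names are placeholders):

exp system/password FULL=y DIRECT=y RECORDLENGTH=65535 FILE=full_direct.dmp LOG=full_direct.log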

LIMITATIONS

1) A direct path Export does not influence the time it takes to import the data. That is, an export file created using direct path Export or conventional path Export will take the same amount of time to import.

2) You cannot use the DIRECT=Y parameter when exporting in transportable tablespace mode.  You can use the DIRECT=Y parameter when exporting in full, user or table mode

3) The parameter QUERY applies ONLY to conventional path Export. It cannot be specified in a direct path export (DIRECT=Y).

4) A direct path Export can only export the data when the NLS_LANG environment variable of the session that invokes the export is equal to the database character set. If NLS_LANG is not set (the default is AMERICAN_AMERICA.US7ASCII) or is different, Export will display the warning EXP-41 and abort with EXP-0


Tuesday 22 March 2016

ORA-01565: error in identifying file /dbs/spfile@.ora



When creating a pfile from the spfile of a database whose spfile is stored in ASM, you may face the errors below:

SQL> create pfile='/u01/inittestdb.ora' from spfile;
create pfile='/u01/inittestdb.ora' from spfile
*
ERROR at line 1:
ORA-01565: error in identifying file '?/dbs/spfile@.ora'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

Solution:
Specify the ASM path of the spfile explicitly.

SQL> create pfile='/u01/inittestdb.ora' from spfile='+DATA/testdb/spfiletestdb.ora';

File created.
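If you are not sure of the spfile location in ASM, it can be read from the running instance first:

SQL> show parameter spfile
SQL> select value from v$parameter where name = 'spfile';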

How to View Disk Group Clients using V$ASM_CLIENT view ?



Use the query below:

SQL> SELECT dg.name AS diskgroup,
            SUBSTR(c.instance_name,1,12) AS instance,
            SUBSTR(c.db_name,1,12) AS dbname,
            SUBSTR(c.software_version,1,12) AS software,
            SUBSTR(c.compatible_version,1,12) AS compatible
       FROM v$asm_diskgroup dg, v$asm_client c
      WHERE dg.group_number = c.group_number;

Multiplex of redolog files in ASM

If you have two disk groups and want to multiplex the redo logs across them, you just need to add the second disk group as a redo log destination.

Step 1:
Check the current disk group of the redo logs. (In my case, it is '+DATA'.)

So now I will move them to another disk group (i.e. +NEWDATA).


SQL> select l.group# , l.bytes , l.status , lf.member from v$logfile lf , v$log l where lf.group# = l.group#;

    GROUP#         BYTES STATUS
---------- ---------- ----------------
MEMBER
--------------------------------------------------------------------------------
                 3   52428800 INACTIVE
+DATA/myasm/onlinelog/group_3.312.907086019

                 3   52428800 INACTIVE
+DATA/myasm/onlinelog/group_3.313.907086039

                 2   52428800 CURRENT
+DATA/myasm/onlinelog/group_2.310.907085987


    GROUP#         BYTES STATUS
---------- ---------- ----------------
MEMBER
--------------------------------------------------------------------------------
                 2   52428800 CURRENT
+DATA/myasm/onlinelog/group_2.311.907086001

                 1   52428800 INACTIVE
+DATA/myasm/onlinelog/group_1.308.907085955

                 1   52428800 INACTIVE
+DATA/myasm/onlinelog/group_1.309.907085973

6 rows selected.



Step 2:
Edit the two parameters as below, create a pfile, and bounce the database.

SQL> alter system set db_create_online_log_dest_1='+NEWDATA' scope=spfile;
System altered.

SQL> alter system set db_create_online_log_dest_2='+NEWDATA' scope=spfile;
System altered.

SQL> create pfile='/u01/asm1/initmyasm.ora' from spfile;
File created.

SQL> shut immediate
Database closed.

Start the database back up, then verify the new settings:

SQL> show parameter db_create_online_log_dest;

NAME                                                        TYPE               VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_1          string               +NEWDATA
db_create_online_log_dest_2          string               +NEWDATA
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

Step 3:
Drop and re-create the redo log groups as follows.

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database add logfile group 3 size 50m;
Database altered.


SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance myasm (thread 1)
ORA-00312: online log 2 thread 1: '+DATA/myasm/onlinelog/group_2.310.907085987'
ORA-00312: online log 2 thread 1: '+DATA/myasm/onlinelog/group_2.311.907086001'


SQL> alter system switch logfile;
System altered.

SQL> alter database drop logfile group 2;
Database altered.

SQL> alter database add logfile group 2 size 50m;
Database altered.


SQL> alter database drop logfile group 1;
Database altered.

SQL> alter database add logfile group 1 size 50m;
Database altered.
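Alternatively, instead of relying on the db_create_online_log_dest_n parameters, each group can be multiplexed explicitly across the two disk groups with a command of this form (a sketch; the group number and size are just examples):

SQL> alter database add logfile group 4 ('+DATA','+NEWDATA') size 50m;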


Step 4:
Check the view again; the redo log members are now in the new disk group (+NEWDATA).

SQL> select l.group# , l.bytes , l.status , lf.member from v$logfile lf , v$log l where lf.group# = l.group#;

    GROUP#         BYTES STATUS
---------- ---------- ----------------
MEMBER
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                 3   52428800 INACTIVE
+NEWDATA/myasm/onlinelog/group_3.258.907115921

                 3   52428800 INACTIVE
+NEWDATA/myasm/onlinelog/group_3.259.907115933

                 2   52428800 UNUSED
+NEWDATA/myasm/onlinelog/group_2.262.907116331


    GROUP#         BYTES STATUS
---------- ---------- ----------------
MEMBER
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                 2   52428800 UNUSED
+NEWDATA/myasm/onlinelog/group_2.263.907116339

                 1   52428800 CURRENT
+NEWDATA/myasm/onlinelog/group_1.260.907116199

                 1   52428800 CURRENT
+NEWDATA/myasm/onlinelog/group_1.261.907116207


6 rows selected.


SQL>