
October 25, 2017

Auto Optimizer Stats Collection in 12c

Filed under: Uncategorized — srivenu @ 10:22 am

Environment – Oracle 12c EE 12.1.0.2.0, 64bit on RHEL 7.3
The task “auto optimizer stats collection” is created by catmwin.sql (run by catproc.sql during database creation). catmwin.sql also creates 2 other Automatic Maintenance clients – auto space advisor & sql tuning advisor.
It creates a MAINTENANCE_WINDOW_GROUP with 7 maintenance windows – 5 windows named MONDAY_WINDOW through FRIDAY_WINDOW starting at 10PM with a 4-hour duration, and 2 windows named SATURDAY_WINDOW & SUNDAY_WINDOW starting at 6AM with a 20-hour duration. It also creates a program named gather_stats_prog which runs dbms_stats.gather_database_stats_job_proc.
In 10g the stats collection was created as a separate job and appeared in DBA_SCHEDULER_JOBS as GATHER_STATS_JOB. From 11g onwards it appears in DBA_AUTOTASK_WINDOW_CLIENTS, and jobs only appear in DBA_SCHEDULER_JOBS with system-generated names (like ORA$AT_OS_OPT_SY_nnn) when they are executed.
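Once these jobs have run, you can see their history in the scheduler run details by name pattern – a small sketch, assuming the default ORA$AT_OS_OPT_SY prefix for the optimizer stats jobs:

select	job_name, status, to_char(actual_start_date, 'DD-MON-YY HH24:MI') start_time
from	dba_scheduler_job_run_details
where	job_name like 'ORA$AT_OS_OPT_SY%'
order	by actual_start_date
/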
To check on all auto task clients and their status, query *_AUTOTASK_CLIENT

col con_id head "Con|tai|ner" form 999
col client_name form a33
col status form a8
col service_name form a20
col window_group form a15
col attributes form a55
col last_change form a15

select	con_id, client_name, status, service_name, window_group, 
	attributes, to_char(last_change, 'DD-MON-YY HH24:MI') last_change
from   	cdb_autotask_client
order	by 1, 2
/

Don’t query *_AUTOTASK_TASK to check the status. A task can be used by multiple clients, so even though we disable the client, CDB_AUTOTASK_TASK may still show the status as enabled.
I did not find any information on the relationship between *_AUTOTASK_CLIENT and *_AUTOTASK_OPERATION, but looking at the view definitions, I think there is a one-to-many relationship from clients to operations. Right now the client “auto optimizer stats collection” has only one operation, “auto optimizer stats job”.
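To list the operations registered under each client, you can run something like this against the operation view:

select	client_name, operation_name, status
from	dba_autotask_operation
order	by client_name, operation_name
/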

To disable all automatic maintenance tasks you can execute

execute DBMS_AUTO_TASK_ADMIN.DISABLE;

To disable just the auto optimizer stats collection, you can execute

 exec dbms_auto_task_admin.disable(
       client_name => 'auto optimizer stats collection',
       operation => NULL, window_name => NULL);
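To re-enable it later, dbms_auto_task_admin.enable takes the same arguments:

 exec dbms_auto_task_admin.enable(
       client_name => 'auto optimizer stats collection',
       operation => NULL, window_name => NULL);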

To get a summary of client job executions in each maintenance window, query *_AUTOTASK_CLIENT_HISTORY

col con_id head "Con|tai|ner" form 999
col window_name head "Window" form a16
col wst head "Window|Start|Time" form a12
col window_duration head "Window|Duration|Hours" form 999999
col jobs_created head "Jobs|Created" form 999
col jobs_started head "Jobs|Started" form 999
col jobs_completed head "Jobs|Completed" form 999
col wet head "Window|End|Time" form a12

select 	con_id, window_name, to_char(window_start_time, 'DD-MON HH24:MI') wst,
	extract(hour from window_duration) + round(extract(minute from window_duration)/60) window_duration,
	jobs_created, jobs_started, jobs_completed, 
	to_char(window_end_time, 'DD-MON HH24:MI') wet
from 	cdb_autotask_client_history
where 	client_name = 'auto optimizer stats collection'
order	by window_start_time, con_id
/

 Con                  Window         Window                           Window
 tai                  Start        Duration    Jobs    Jobs      Jobs End
 ner Window           Time            Hours Created Started Completed Time
---- ---------------- ------------ -------- ------- ------- --------- ------------
   3 SUNDAY_WINDOW    15-OCT 03:00       20       1       0         0 15-OCT 23:00
   1 SUNDAY_WINDOW    15-OCT 06:00       20       6       6         6 16-OCT 02:00
   1 MONDAY_WINDOW    16-OCT 22:00        4       2       2         2 17-OCT 02:00

As seen above, the client “auto optimizer stats collection” can run multiple times per window (if the prior job completes and the window is still open).
From my observation, the second job is scheduled 1 hour after the first and the rest every 4 hours.
To get the details of each of these jobs – job name, status etc. – query *_AUTOTASK_JOB_HISTORY

col con_id head "Con|tai|ner" form 999
col window_name head "window" form a16
col wst head "window|start|time" form a12
col window_duration head "window|dura|tion|hours" form 999999
col job_name head "job name" form a22
col jst head "job|start|time" form a12
col job_duration head "job|dura|tion|mins" form 999999
col job_status head "job|status" form a10
col job_error head "job error" form 99
col job_info head "job info" form a40

select 	con_id, window_name, to_char(window_start_time, 'DD-MON HH24:MI') wst,
	extract(hour from window_duration) + round(extract(minute from window_duration)/60) window_duration,
	job_name, to_char(job_start_time, 'DD-MON HH24:MI') jst, job_status,
	extract(hour from job_duration)*60 + round(extract(minute from job_duration)) job_duration,
	job_error, job_info
from 	cdb_autotask_job_history
where 	client_name = 'auto optimizer stats collection'
order	by job_start_time, con_id
/

                                    window                                                    job
 Con                  window          dura                        job                        dura
 tai                  start           tion                        start        job           tion
 ner window           time           hours job name               time         status        mins job error job info
---- ---------------- ------------ ------- ---------------------- ------------ ---------- ------- --------- ----------------------------------------
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1543  15-OCT 06:00 SUCCEEDED        5         0
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1555  15-OCT 07:07 SUCCEEDED        2         0
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1546  15-OCT 11:14 SUCCEEDED        2         0
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1557  15-OCT 15:18 SUCCEEDED        4         0
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1548  15-OCT 19:25 SUCCEEDED        3         0
   1 SUNDAY_WINDOW    15-OCT 06:00      20 ORA$AT_OS_OPT_SY_1559  15-OCT 23:29 SUCCEEDED        4         0
   1 MONDAY_WINDOW    16-OCT 22:00       4 ORA$AT_OS_OPT_SY_1561  16-OCT 22:00 SUCCEEDED        9         0
   1 MONDAY_WINDOW    16-OCT 22:00       4 ORA$AT_OS_OPT_SY_1564  16-OCT 23:03 SUCCEEDED        3         0

The statistics operations performed by each of these jobs are recorded in *_OPTSTAT_OPERATIONS & *_OPTSTAT_OPERATION_TASKS

                                                                                      
col con_id head "Con|tai|ner" form 999
col id head "Opera|tion|ID" form 9999999
col operation head "Operation" form a30
col job_name head "job name" form a22
col target head "Target" form a10
col jst head "Operation|start|time" form a12
col duration head "Operation|dura|tion|mins" form 999999
col status head "Operation|status" form a10

select 	con_id, id, operation, job_name, target, to_char(start_time, 'DD-MON HH24:MI') jst,
	extract(hour from (end_time - start_time))*60 + extract(minute from (end_time - start_time)) duration,
	status
from  	cdb_optstat_operations
where	operation = 'gather_database_stats (auto)'
order 	by  start_time, con_id
/

                                                                                            Operation
 Con    Opera                                                                  Operation         dura
 tai     tion                                                                  start             tion Operation
 ner       ID Operation                      job name               Target     time              mins status
---- -------- ------------------------------ ---------------------- ---------- ------------ --------- ----------
   1     2042 gather_database_stats (auto)   ORA$AT_OS_OPT_SY_1602  AUTO       23-OCT 23:04         1 COMPLETED
   1     2047 gather_database_stats (auto)   ORA$AT_OS_OPT_SY_1612  AUTO       24-OCT 22:00         4 COMPLETED
   1     2065 gather_database_stats (auto)   ORA$AT_OS_OPT_SY_1604  AUTO       24-OCT 23:10         1 COMPLETED
   3    35049 gather_database_stats (auto)   ORA$AT_OS_OPT_SY_1032  AUTO       25-OCT 03:00       239 TIMED OUT

Details of the parameters used during the operation can be obtained from the "notes" column (not shown above).
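For example, to see the parameters used by one of the operations above (substitute an id from your own output):

select notes from dba_optstat_operations where id = 2042;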

Details of stats gathering on individual objects can be viewed in *_OPTSTAT_OPERATION_TASKS (using the OPID of the task from *_OPTSTAT_OPERATIONS).
For example, to see all object stats gathered by operation 35049, use the query below

col con_id head "Con|tai|ner" form 999
col jst head "Operation|start|time" form a12
col target head "Target" form a60
col target_type head "Target Type" form a15
col status head "Operation|status" form a10
col duration head "Dura|tion|mins" form 999

select	con_id, to_char(start_time, 'DD-MON HH24:MI') jst,
	target, target_type, status, 
        extract(hour from (end_time - start_time))*60 + extract(minute from (end_time - start_time)) duration
from	cdb_optstat_operation_tasks
where	opid=35049
order	by start_time, con_id
/

 Con Operation                                                                                            Dura
 tai start                                                                                     Operation  tion
 ner time         Target                                                       Target Type     status     mins
---- ------------ ------------------------------------------------------------ --------------- ---------- ----
   3 25-OCT 03:00 USER1.TABLE12345678                                          TABLE           COMPLETED     0
   3 25-OCT 03:00 USER1.TABLE_123                                              TABLE           COMPLETED     0
   3 25-OCT 03:00 SYS.OPATCH_XML_INV                                           TABLE           FAILED        0
   3 25-OCT 03:00 SYS.MON_MODS$                                                TABLE           COMPLETED     0
   3 25-OCT 03:00 SYS.I_MON_MODS$_OBJ                                          INDEX           COMPLETED     0
   3 25-OCT 03:00 SYS.WRI$_HEATMAP_TOP_TABLESPACES                             TABLE           COMPLETED     0
   3 25-OCT 03:00 SYS.SEQ$                                                     TABLE           COMPLETED     0

Details of the column stats and extended stats gathered by each task, and the errors encountered, can be seen in the "notes" column (not shown above).
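For example, to see why the SYS.OPATCH_XML_INV task of operation 35049 failed:

select	target, notes
from	cdb_optstat_operation_tasks
where	opid = 35049
and	status = 'FAILED'
/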

Auto optimizer stats collection jobs not starting as an existing job is hung in RUNNING (IN PROGRESS) status

Filed under: Uncategorized — srivenu @ 10:08 am

Recently we observed that stats collection was not happening in one of the PDBs.
Environment – Oracle 12c EE 12.1.0.2.0, 64bit on RHEL 7.3

col con_id head "Con|tai|ner" form 999
col window_name head "Window" form a16
col wst head "Window|Start|Time" form a12
col window_duration head "Window|Duration|Hours" form 999999
col jobs_created head "Jobs|Created" form 999
col jobs_started head "Jobs|Started" form 999
col jobs_completed head "Jobs|Completed" form 999
col wet head "Window|End|Time" form a12

select 	con_id, window_name, to_char(window_start_time, 'DD-MON HH24:MI') wst,
	extract(hour from window_duration) + round(extract(minute from window_duration)/60) window_duration,
	jobs_created, jobs_started, jobs_completed, 
	to_char(window_end_time, 'DD-MON HH24:MI') wet
from 	cdb_autotask_client_history
where 	client_name = 'auto optimizer stats collection'
and	con_id = 3
order	by window_start_time, con_id
/


 Con                  Window         Window                           Window
 tai                  Start        Duration    Jobs    Jobs      Jobs End
 ner Window           Time            Hours Created Started Completed Time
---- ---------------- ------------ -------- ------- ------- --------- ------------
......................................
......................................
   3 WEDNESDAY_WINDOW 11-OCT 03:00        4       1       0         0 11-OCT 07:00
   3 THURSDAY_WINDOW  12-OCT 03:00        4       1       0         0 12-OCT 07:00
   3 FRIDAY_WINDOW    13-OCT 03:00        4       1       0         0 13-OCT 07:00
   3 SATURDAY_WINDOW  14-OCT 03:00       20       1       0         0 14-OCT 23:00
   3 SUNDAY_WINDOW    15-OCT 03:00       20       1       0         0 15-OCT 23:00
   3 TUESDAY_WINDOW   17-OCT 03:00        4       1       0         0 17-OCT 07:00
   3 WEDNESDAY_WINDOW 18-OCT 03:00        4       1       0         0 18-OCT 07:00

Auto stats had not been running since 18-OCT.
One more thing to notice: the jobs_started column remains 0 even when a job was started but TIMED OUT.
I checked if a job was currently running:

sys@mes01pc1> select * from DBA_AUTOTASK_CLIENT_JOB;

CLIENT_NAME                       job name               JOB_SCHEDULER_STATUS
--------------------------------- ---------------------- ---------------------------------------------
TASK_NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TASK_TARGET_TYPE
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TASK_TARGET_NAME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TASK_PRIORITY
---------------------
TASK_OPERATION
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
auto optimizer stats collection   ORA$AT_OS_OPT_SY_977   RUNNING
gather_stats_prog
System
system
HIGH
auto optimizer stats job

It is in RUNNING status.
Next, query DBA_OPTSTAT_OPERATIONS:

col OPERATION form a40
col job_name form a21
col st form a15
col max_duration format 9999
col status form a15

select 	id, operation, JOB_NAME, target, to_char(start_time, 'DD-MON HH24:MI') st,
	extract(hour from (end_time - start_time)) * 60 + extract(minute from (end_time - start_time))     max_duration,
	status
from  	dba_optstat_operations
where	operation = 'gather_database_stats (auto)'
order 	by  start_time
/

OPERATION                                START_TIME                               MAX_DURATION STATUS
---------------------------------------- ---------------------------------------- ------------ ---------------
gather_database_stats (auto)             04-OCT-17 10.00.11.484514 PM -07:00               239 TIMED OUT
gather_database_stats (auto)             06-OCT-17 03.00.11.302874 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             07-OCT-17 03.00.03.656205 AM -07:00              1200 TIMED OUT
gather_database_stats (auto)             08-OCT-17 03.00.11.408271 AM -07:00              1199 TIMED OUT
gather_database_stats (auto)             09-OCT-17 03.00.07.521434 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             10-OCT-17 03.00.11.757108 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             11-OCT-17 03.00.05.698675 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             12-OCT-17 03.00.12.582480 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             13-OCT-17 03.00.11.758035 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             14-OCT-17 03.00.06.752382 AM -07:00              1199 TIMED OUT
gather_database_stats (auto)             15-OCT-17 03.00.12.278580 AM -07:00               256 TIMED OUT
gather_database_stats (auto)             17-OCT-17 03.00.05.702189 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             18-OCT-17 03.00.04.534989 AM -07:00               239 TIMED OUT
gather_database_stats (auto)             19-OCT-17 03.00.12.160346 AM -07:00                 0 IN PROGRESS

The 19-OCT operation is in IN PROGRESS status.
Querying DBA_OPTSTAT_OPERATION_TASKS shows that the job is processing SYS.X$KQLFSQCE. Looks like we are hitting BUG:23022578 (Metalink Doc 2294247.1).
I grepped the trace directory for AT_OS_OPT_SY_977 and found the j001 trace file:

*** MODULE NAME:(DBMS_SCHEDULER) 2017-10-19 03:00:30.990
*** CLIENT DRIVER:() 2017-10-19 03:00:30.990
*** ACTION NAME:(ORA$AT_OS_OPT_SY_977) 2017-10-19 03:00:30.990
*** CONTAINER ID:(3) 2017-10-19 03:00:30.990

CursorDiagnosticsNodes:  
  ChildNode:  ChildNumber=23 ID=45 reason=NLS Settings(2) size=2x568 NLS_DATE_FORMAT='DD-MON-RR'->'YYYYMMDD HH24:MI:SS' 
.................
.................
  ChildNode:  ChildNumber=0 ID=3 reason=Optimizer mismatch(12) size=2x356 advanced_queuing_internal_cursor= 1
0
kgskupdbalance:  time_used[263179579] greater than threshold[30000000] current_consumed_cpu_time[1858258774] exec_start_consumed_time[1595079195]exec_start_elapsed_time[3782739545093]

*** 2017-10-19 04:53:37.048
kgskupdbalance:  time_used[156486102] greater than threshold[30000000] current_consumed_cpu_time[2075822666] exec_start_consumed_time[1919336564]exec_start_elapsed_time[3783063847416]

I did not find the job queue process j001 currently running. It looks like the process died or was somehow killed, leaving the job stuck in RUNNING (IN PROGRESS) status. This was preventing new jobs from being started.
So I decided to drop the job manually

exec DBMS_SCHEDULER.drop_job(job_name => 'ORA$AT_OS_OPT_SY_977');
BEGIN DBMS_SCHEDULER.drop_job(job_name => 'ORA$AT_OS_OPT_SY_977'); END;

*
ERROR at line 1:
ORA-27478: job "SYS"."ORA$AT_OS_OPT_SY_977" is running
ORA-06512: at "SYS.DBMS_ISCHED", line 243
ORA-06512: at "SYS.DBMS_SCHEDULER", line 778
ORA-06512: at line 1

That did not work, so I decided to “force” it

exec DBMS_SCHEDULER.drop_job(job_name => 'ORA$AT_OS_OPT_SY_977',force =>TRUE);

PL/SQL procedure successfully completed.

Checked for running jobs in DBA_AUTOTASK_CLIENT_JOB

select * from DBA_AUTOTASK_CLIENT_JOB;

no rows selected

This resolved the issue, and the auto optimizer stats collection jobs started running from the next day.

 Con                  Window         Window                           Window
 tai                  Start        Duration    Jobs    Jobs      Jobs End
 ner Window           Time            Hours Created Started Completed Time
---- ---------------- ------------ -------- ------- ------- --------- ------------
   3 THURSDAY_WINDOW  19-OCT 03:00        4       1       0         0 19-OCT 07:00
   3 WEDNESDAY_WINDOW 25-OCT 03:00        4       1       0         0 25-OCT 07:00

To fix the issue permanently, I copied the stats for the table SYS.X$KQLFSQCE from the root container to the PDB.
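A minimal sketch of one way to do such a copy, via a stats table (the stats table name FIXSTATS is hypothetical; since X$ tables are fixed objects, the fixed-object variants of the dbms_stats export/import procedures apply):

-- in the root container: stage the fixed object stats in a stats table
exec dbms_stats.create_stat_table('SYS', 'FIXSTATS');
exec dbms_stats.export_fixed_objects_stats(stattab => 'FIXSTATS', statown => 'SYS');

-- move the FIXSTATS table to the PDB (e.g. via Data Pump), then in the PDB:
exec dbms_stats.import_fixed_objects_stats(stattab => 'FIXSTATS', statown => 'SYS');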

October 2, 2017

RMAN restore does not create GUID directory during Active Database Duplication even in 12cR2

Filed under: Uncategorized — srivenu @ 11:54 am

Bug 16874123 – “RMAN restore does not create GUID directory” – affects pre-12c databases, and as per Oracle it was supposedly fixed in 12c (Metalink Doc ID 16874123.8).
But it looks like this bug is not fixed for Active Database Duplication. It’s only fixed for backup-based duplication.

Example –
I have a primary with these PDBs and data files


    CON_ID NAME       GUID
---------- ---------- ----------------------------------------
         2 PDB$SEED   4FC03F9A813CD865E0530F921F0A7669
         3 PDB1       4FC1001DB1E656BDE0530F921F0AE21B
         4 PDB2       4FC102D6030C4E11E05310921F0A76FA

 
Con
tai  File                                                                                                                                Size
ner    No File Name                                                                                            Status     Enabled       In GB
---- ----- ---------------------------------------------------------------------------------------------------- ---------- ---------- --------
   1     3 +DATA2/TEMP/DATAFILE/sysaux.347.944238237                                                            ONLINE     READ WRITE        1
   1     1 +DATA2/TEMP/DATAFILE/system.348.944232527                                                            SYSTEM     READ WRITE        1
   1     5 +DATA2/TEMP/DATAFILE/undotbs1.342.944232553                                                          ONLINE     READ WRITE        0
   1     7 +DATA2/TEMP/DATAFILE/undotbs2.352.944232587                                                          ONLINE     READ WRITE        0
   1     8 +DATA2/TEMP/DATAFILE/users.356.944238259                                                             ONLINE     READ WRITE        0
 
   2     4 +DATA2/TEMP/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/sysaux.343.944232551                           ONLINE     READ WRITE        0
   2     2 +DATA2/TEMP/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/system.339.944232537                           SYSTEM     READ WRITE        0
   2     6 +DATA2/TEMP/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/undotbs1.331.944232557                         ONLINE     READ WRITE        0
 
   3    10 +DATA2/TEMP/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/sysaux.334.944238287                           ONLINE     READ WRITE        0
   3     9 +DATA2/TEMP/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/system.340.944235739                           SYSTEM     READ WRITE        0
   3    12 +DATA2/TEMP/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/undo_2.335.944235777                           ONLINE     READ WRITE        0
   3    11 +DATA2/TEMP/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/undotbs1.341.944235739                         ONLINE     READ WRITE        0
   3    13 +DATA2/TEMP/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/users.337.944238295                            ONLINE     READ WRITE        0
 
   4    15 +DATA2/TEMP/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/sysaux.351.944238311                           ONLINE     READ WRITE        0
   4    14 +DATA2/TEMP/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/system.332.944235785                           SYSTEM     READ WRITE        0
   4    17 +DATA2/TEMP/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/undo_2.350.944235815                           ONLINE     READ WRITE        0
   4    16 +DATA2/TEMP/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/undotbs1.333.944235785                         ONLINE     READ WRITE        0
   4    18 +DATA2/TEMP/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/users.338.944238321                            ONLINE     READ WRITE        0
 
18 rows selected.

I will try to do an Active Database Duplication.

I tried creating the standby for this on atx6 with a DB_UNIQUE_NAME of test.

I created an spfile for test with these parameters

 
db_domain=xyz.com
db_name="temp"
db_unique_name="test"
log_archive_format=%t_%s_%r.dbf
db_block_size=16384
db_recovery_file_dest="+FRA1"
db_recovery_file_dest_size=1024g
compatible=12.2.0
diagnostic_dest=/oracle/app/oracle
enable_pluggable_database=true
nls_language="AMERICAN"
nls_territory="AMERICA"
sga_target=40g
audit_file_dest="/oracle/app/oracle/admin/test/adump"
audit_trail=db
remote_login_passwordfile=exclusive
dispatchers="(PROTOCOL=TCP) (SERVICE=testXDB)"
pga_aggregate_target=76800m
db_file_multiblock_read_count=64
fast_start_mttr_target=300
log_checkpoint_timeout=0
open_cursors=512
processes=5000
sessions=5000
cluster_database=false
instance_number=1
CONTROL_FILES='+DATA1/test/CONTROLFILE/control01.ctl'

I ran the following RMAN commands

 
rman target sys/manager@temp  auxiliary sys/manager@test
 
set echo on
spool log to /home/oracle/active_duplicate_temp.log
 
run
{
allocate channel c1 device type disk;
allocate channel c2 device type disk;
allocate auxiliary channel ac1 device type disk;
allocate auxiliary channel ac2 device type disk;
allocate auxiliary channel ac3 device type disk;
allocate auxiliary channel ac4 device type disk;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
USING COMPRESSED BACKUPSET
SECTION SIZE 500M
NOFILENAMECHECK
DORECOVER;
}
 

This is the structure that I get

 
Con
tai  File                                                                                                                                Size
ner    No File Name                                                                                            Status     Enabled       In GB
---- ----- ---------------------------------------------------------------------------------------------------- ---------- ---------- --------
   1     3 +DATA2/TEST/DATAFILE/sysaux.289.944296087                                                            ONLINE     READ WRITE        1
   1     1 +DATA2/TEST/DATAFILE/system.291.944296085                                                            SYSTEM     READ WRITE        1
   1     5 +DATA2/TEST/DATAFILE/undotbs1.287.944296121                                                          ONLINE     READ WRITE        0
   1     7 +DATA2/TEST/DATAFILE/undotbs2.285.944296125                                                          ONLINE     READ WRITE        0
   1     8 +DATA2/TEST/DATAFILE/users.284.944296129                                                             ONLINE     READ WRITE        0
 
   2     4 +DATA2/TEST/DATAFILE/sysaux.288.944296087                                                            ONLINE     READ ONLY         0
   2     2 +DATA2/TEST/DATAFILE/system.290.944296085                                                            SYSTEM     READ ONLY         0
   2     6 +DATA2/TEST/DATAFILE/undotbs1.286.944296125                                                          ONLINE     READ ONLY         0
 
   3    10 +DATA2/TEST/DATAFILE/sysaux.282.944296131                                                            ONLINE     UNKNOWN           0
   3     9 +DATA2/TEST/DATAFILE/system.283.944296131                                                            SYSTEM     READ WRITE        0
   3    12 +DATA2/TEST/DATAFILE/undo_2.280.944296155                                                            ONLINE     UNKNOWN           0
   3    11 +DATA2/TEST/DATAFILE/undotbs1.281.944296147                                                          ONLINE     UNKNOWN           0
   3    13 +DATA2/TEST/DATAFILE/users.279.944296155                                                             ONLINE     UNKNOWN           0
 
   4    15 +DATA2/TEST/DATAFILE/sysaux.277.944296157                                                            ONLINE     UNKNOWN           0
   4    14 +DATA2/TEST/DATAFILE/system.278.944296157                                                            SYSTEM     READ WRITE        0
   4    17 +DATA2/TEST/DATAFILE/undo_2.275.944296163                                                            ONLINE     UNKNOWN           0
   4    16 +DATA2/TEST/DATAFILE/undotbs1.276.944296159                                                          ONLINE     UNKNOWN           0
   4    18 +DATA2/TEST/DATAFILE/users.274.944296163                                                             ONLINE     UNKNOWN           0
 
18 rows selected.

I tried several things (changing the file name convert parameters), but the PDB GUID directory does not get created.

Let’s try RMAN backup-based duplication.

This is what I did –
– Created the standby spfile and controlfile and mounted the standby (TEST on atx6)
– Took a level 0 backup of the primary (database TEMP on atx8)
– Copied the backup files to the standby
– Started RMAN on the standby and cataloged the backups copied to the standby
– Restored the database
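The catalog and restore steps on the standby look roughly like this (the backup location is hypothetical):

rman target /

catalog start with '/backup/TEMP/' noprompt;
restore database;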


select CON_ID, NAME, GUID from v$pdbs order by 1;
 
    CON_ID NAME       GUID
---------- ---------- ----------------------------------------
         2 PDB$SEED   4FC03F9A813CD865E0530F921F0A7669
         3 PDB1       4FC1001DB1E656BDE0530F921F0AE21B
         4 PDB2       4FC102D6030C4E11E05310921F0A76FA
 
Con
tai  File                                                                                                                                Size
ner    No File Name                                                                                            Status     Enabled       In GB
---- ----- ---------------------------------------------------------------------------------------------------- ---------- ---------- --------
   1     3 +DATA2/TEST/DATAFILE/sysaux.288.944309695                                                            ONLINE     READ WRITE        1
   1     1 +DATA2/TEST/DATAFILE/system.289.944309695                                                            SYSTEM     READ WRITE        1
   1     5 +DATA2/TEST/DATAFILE/undotbs1.287.944309695                                                          ONLINE     READ WRITE        0
   1     7 +DATA2/TEST/DATAFILE/undotbs2.286.944309695                                                          ONLINE     READ WRITE        0
   1     8 +DATA2/TEST/DATAFILE/users.285.944309695                                                             ONLINE     READ WRITE        0
 
   2     4 +DATA2/TEST/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/sysaux.274.944309723                           ONLINE     READ ONLY         0
   2     2 +DATA2/TEST/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/system.256.944309723                           SYSTEM     READ ONLY         0
   2     6 +DATA2/TEST/4FC03F9A813CD865E0530F921F0A7669/DATAFILE/undotbs1.273.944309723                         ONLINE     READ ONLY         0
 
   3    10 +DATA2/TEST/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/sysaux.284.944309709                           ONLINE     UNKNOWN           0
   3     9 +DATA2/TEST/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/system.283.944309709                           SYSTEM     READ WRITE        0
   3    12 +DATA2/TEST/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/undo_2.281.944309709                           ONLINE     UNKNOWN           0
   3    11 +DATA2/TEST/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/undotbs1.282.944309709                         ONLINE     UNKNOWN           0
   3    13 +DATA2/TEST/4FC1001DB1E656BDE0530F921F0AE21B/DATAFILE/users.280.944309709                            ONLINE     UNKNOWN           0
 
   4    15 +DATA2/TEST/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/sysaux.279.944309717                           ONLINE     UNKNOWN           0
   4    14 +DATA2/TEST/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/system.278.944309717                           SYSTEM     READ WRITE        0
   4    17 +DATA2/TEST/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/undo_2.276.944309717                           ONLINE     UNKNOWN           0
   4    16 +DATA2/TEST/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/undotbs1.277.944309717                         ONLINE     UNKNOWN           0
   4    18 +DATA2/TEST/4FC102D6030C4E11E05310921F0A76FA/DATAFILE/users.275.944309717                            ONLINE     UNKNOWN           0
 
 

Now we see that the GUID directories are created for the PDBs on the standby.

Cross platform Database upgrade to 12c using Data Pump with NETWORK_LINK

Filed under: Uncategorized — srivenu @ 11:30 am

This post is about my recent experience in migrating a 15TB database running on Oracle 11.2.0.3 Enterprise Edition on Solaris 11 to Oracle 12.2.0.1.0 on Oracle Linux Release 7.3 using NETWORK_LINK.
I was asked to perform the initial load using Data Pump; the delta catch-up was going to be performed by Golden Gate. The source data was stored using OLTP basic compression with an effective compression ratio of about 50%.
The objective was to perform the initial load in the shortest possible time. The CPU and memory resources on the source and target were plentiful and not a bottleneck. There was a 1Gb (please note the small “b”) dedicated network pipe between the source and target systems. The effective transfer rate was 1Gbps – which is 128MB/sec, 7680MB (or 7.5GB)/minute or 450GB/hour. So if I could use the network capacity in full, I should complete the 15TB data transfer in about 34 hours.

I made a simple test with a select across the dblink on a large table and observed that I could only get 15MB/second. If I used a parallel create with a parallel query across a database link (using CTAS – “Create Table As Select” – or INSERT APPEND), only the query on the source and the create or insert on the target are parallelized. The data transfer between these slaves across the database link is still serial. I wonder why Oracle has not provided a means to parallelize the network transfer, even in 12c. So using a parallel CTAS or INSERT APPEND, I found that the effective transfer rate was still only around 15MB/second. This means I was using only around 10-15% of the network bandwidth for 1 session. If I used the CTAS method, the only option to parallelize was to start multiple sessions, with each session operating on a set of tables.

That’s when I decided to check out Data Pump’s NETWORK_LINK option in 12c. Using Data Pump NETWORK_LINK is pretty simple and has advantages.
No dump file, no disk writes or file transfers. You just create a database link from the target to the source and specify it in the NETWORK_LINK parameter of Data Pump. The data is pulled from the source to the target across the network.
One more big advantage is that the data is moved in compressed format. My source is around 50% compressed, so if the data were not moved in a compressed format, I would need double the network time.
But the key question is parallelism. The wonderful thing is that Data Pump parallelizes the data transfer. But there are a few caveats, which are elaborated at the end.

Let’s say my source database is MYAPP111 and the target is MYAPP122.
I use the userid srivenu (for simplicity I grant dba to it).
I create a database link called INITLOAD on MYAPP122 to connect to MYAPP111

create public database link INITLOAD connect to srivenu identified by "…" using 'MYAPP111_TNS';

My MYAPP111 database has 2 schemas, HR & SALES, that I want to copy over.
I create a parameter file called impdp_hrsales.par with the following contents (even though no dump file is created, you need to specify a DIRECTORY where the LOGFILE is generated)

SCHEMAS=HR,SALES
REMAP_SCHEMA=HR:HR
REMAP_SCHEMA=SALES:SALES
DIRECTORY=DATA_PUMP_INITLOAD
LOGFILE=impdp_hrsales.log
NETWORK_LINK=INITLOAD
PARALLEL=16
LOGTIME=ALL
METRICS=YES
CLUSTER=NO
TRANSFORM=SEGMENT_CREATION:N
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
TRANSFORM=TABLE_COMPRESSION_CLAUSE:"COMPRESS BASIC"
EXCLUDE=INDEX
EXCLUDE=CONSTRAINT
EXCLUDE=TRIGGER
EXCLUDE=STATISTICS
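With the parameter file in place, the import is kicked off on the target with something like (credentials elided):

impdp srivenu parfile=impdp_hrsales.par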

Logically, I was faced with a lot of questions:
What operations in the import are parallelized?
What is the unit of parallelization, or distribution of work among these Data Pump processes?
What is the order in which the table data is loaded? Can I see it? How do I monitor which tables/objects have completed and what comes next?
How do we monitor this Data Pump import job and see what each of the 16 Data Pump processes is doing?

These are the answers I could come up with

What operations are parallelized?
The following is from the Oracle doc:
• The first worker begins to load all the metadata – the tablespaces, schemas, etc., until all the tables are created – this is done in serial
• Once the tables are created, the first worker starts loading data instead of metadata, and the rest of the workers start loading data too – data load is parallel
• Once the table data is loaded, the first worker returns to loading metadata again. The rest of the workers are idle until the first worker loads all the metadata up to package bodies – this is serial
• Multiple workers load package bodies in parallel – package bodies are loaded in parallel
• One worker loads metadata up to and including secondary tables – serial
• Multiple workers load secondary table data – parallel
• One worker loads the remaining metadata – serial
• One worker creates all the indexes but uses PX processes up to the PARALLEL value – serial with PX slaves

Index definitions on OLTP systems will, by default, have no parallelism, so Data Pump will create the indexes in serial with no slaves. This is the reason I excluded indexes and constraints. We will see at the end how I handled indexes and constraints.

What is the unit of parallelization during table data load?
– The unit of parallelization is “table_data objects”.
Data Pump considers the following items to be individual “table_data objects”:
– Tables, if the table is not partitioned or subpartitioned
– Partitions, if the table is partitioned but not subpartitioned
– Subpartitions, if the table is subpartitioned
During a network mode import, each “table_data object” is assigned its own worker process, up to the value specified for the PARALLEL parameter.
So if the table SALES_DETAIL has 4 partitions JAN17, FEB17, MAR17 & APR17, then each of these partitions is assigned to one Data Pump process.
Each of these Data Pump processes opens a separate network connection to the source (MYAPP111) database and uses a serial query to get data from its corresponding partition.
No parallel query (PQ) slaves are assigned on the source to the Data Pump sessions, because network mode import does not use PQ slaves.
So multiple table_data objects can be unloaded at the same time, but each table_data object is unloaded by a single process.
So if you have a table with 100 partitions and you specified PARALLEL=10, each of these partitions is handled by one of the 10 parallel processes.

What is the order in which the table data is loaded? Can I see it? How do I see what is completed and what comes next?
You can refer to this good article by Carl Dudley on the Data Pump master table.
When a schema-level import job is running, Data Pump creates a master table named SYS_IMPORT_SCHEMA_nn (nn are numerals). If there is only 1 job it will be SYS_IMPORT_SCHEMA_01.
If it is a FULL-level import, the master table is named SYS_IMPORT_FULL_nn.
The “table_data objects” in a schema are loaded in descending order of their size, and objects of the same size in ascending order of OBJECT_NAME.
The size estimate is based on the parameter ESTIMATE.
If it is BLOCKS (the default) – it is based on the existing segment size of the object in the source database.
If it is STATISTICS – it is calculated using the statistics for each table.

You can query the master table using the following sql to see the order in which the “table_data objects” are loaded

set lines 200 pages 100
col PROCESS_NAME head "Pro|cess" form a4
col OBJECT_TYPE_PATH head "Object Type Path" form a30
col OBJECT_NUMBER head "Obj|ect|Num|ber" form 99,999
col OBJECT_SCHEMA head "Object|Schema" form a10
col OBJECT_NAME head "Object" form a30
col PARTITION_NAME head "Partition Name" form a30
col PARALLELIZATION head "Pa|ra|ll|el" form 999
col PROCESS_ORDER head "Pro|ce|ss|Or|der" form 99,999
col PROCESSING_STATE head "Proce|ssing|State" form a10
col PROCESSING_STATUS head "Proce|ssing|Status" form a10
col se head "Total|Size|Esti|mate|(MB)" form 99,999,999
col tmb head "Total|Size|(MB)" form 99,999,999
col st head "Start Time" form a15
 
select	PROCESS_NAME, OBJECT_NUMBER, OBJECT_SCHEMA, OBJECT_NAME, PARTITION_NAME,
	PARALLELIZATION, PROCESS_ORDER, trunc(SIZE_ESTIMATE/1048576) se, trunc(TOTAL_BYTES/1048576) tmb,
	to_char(START_TIME, 'ddMon hh24:mi') st,
	to_char(LAST_UPDATE, 'ddMon hh24:mi') lu
from	SYS_IMPORT_SCHEMA_01
where	OBJECT_TYPE_PATH='SCHEMA_EXPORT/TABLE/TABLE_DATA'
order	by 3, 7
/

You will see that the process order matches the descending SIZE_ESTIMATE and ascending OBJECT_NAME within a schema. Also, for a FULL-level import, you will see that jobs in another schema will not be started, even if they have a lower PROCESS_ORDER, until all the jobs in the first schema have completed.

How do we monitor this Data Pump import job and see what each of these 16 Data Pump processes is doing?
Of course you can use import’s Interactive-Command Mode, which shows what “table_data objects” each of the Data Pump processes is working on.
But I find it to be of limited use; it does not answer all my questions.

You can use the following sql to monitor the data pump jobs

col PROCESS_NAME head "Pro|cess" form a4
col OBJECT_TYPE_PATH head "Object Type Path" form a30
col OBJECT_NUMBER head "Obj|ect|Num|ber" form 99,999
col OBJECT_SCHEMA head "Object|Schema" form a10
col OBJECT_NAME head "Object" form a30
col PARTITION_NAME head "Partition Name" form a25
col PROCESS_ORDER head "Pro|ce|ss|Or|der" form 99,999
col TMB head "Total|Size|(MB)" form 99,999,999
col PARALLELIZATION head "Pa|ra|ll|el" form 999
col ST head "Start Time" form a15
col LU head "Last Updated" form a15

select PROCESS_NAME, OBJECT_NUMBER, OBJECT_SCHEMA, OBJECT_NAME, PARTITION_NAME,
PARALLELIZATION, PROCESS_ORDER, trunc(TOTAL_BYTES/1048576) TMB,
to_char(START_TIME, 'ddMon hh24:mi') ST,
to_char(LAST_UPDATE, 'ddMon hh24:mi') LU
from SYS_IMPORT_SCHEMA_01
where STATE='EXECUTING'
order by 2 nulls first
/

This is the sample output, showing which “table_data object” each Data Pump process is working on

                                                                                                                                                                                    Pro
         Obj                                                                       Pa      ce
         ect                                                                       ra      ss       Total
Pro      Num Object                                                                ll      Or        Size
cess     ber Schema     Object                         Partition Name              el     der        (MB) Start Time      Last Updated
---- ------- ---------- ------------------------------ ------------------------- ---- ------- ----------- --------------- ---------------
                                                                                           -2   3,685,569 26Apr 15:01
DW02       1 HR         TABLEXXX                                                    1     -42     486,617 26Apr 15:07     26Apr 15:07
DW01       2 HR         TABLEXXXYYYYYYYY                                            1     -42      91,142 26Apr 15:07     26Apr 15:07
DW0A     833 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_JUL06                   1     -42         904 26Apr 19:49     26Apr 19:49
DW09     834 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_JUN06                   1     -42         904 26Apr 19:49     26Apr 19:49
DW07     835 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_OCT06                   1     -42         904 26Apr 19:49     26Apr 19:49
DW05     842 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_AUG05                   1     -42         896 26Apr 19:50     26Apr 19:50
DW0B     843 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_AUG09                   1     -42         896 26Apr 19:50     26Apr 19:50
DW0E     844 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_DEC05                   1     -42         896 26Apr 19:50     26Apr 19:50
DW0F     846 HR         TABLEAAAAAAAAAAAAAAA           JKHKJJHJKHKJHKJHJK_APR09     1     -42         896 26Apr 19:52     26Apr 19:52
DW06     847 HR         TABLEAAAAAAAAAAAAAAA           JKHKJJHJKHKJHKJHJK_APR15     1     -42         896 26Apr 19:51     26Apr 19:51
DW08     848 HR         TABLEAAAAAAAAAAAAAAA           JKHKJJHJKHKJHKJHJK_JUN10     1     -42         896 26Apr 19:51     26Apr 19:51
DW00     849 HR         TABLEAAAAAAAAAAAAAAA           JKHKJJHJKHKJHKJHJK_MAR14     1     -42         896 26Apr 15:07     26Apr 19:51
DW04     850 HR         COMDATA_DETAIL_FILE_DT         CD_DDF_OCT10                 1     -42         888 26Apr 19:51     26Apr 19:51
DW0C     851 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_APR06                   1     -42         888 26Apr 19:52     26Apr 19:52
DW0D     852 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_DEC10                   1     -42         888 26Apr 19:52     26Apr 19:52
DW03     853 HR         TABLEXXXYYYYYYYYYZZZZZZZZZZ    CCCC_JUL04                   1     -42         888 26Apr 19:52     26Apr 19:52

During monitoring I observed the following:
Rows with a NULL value in START_TIME mean that the data load on that object has not yet started.
Rows with a NOT NULL value in START_TIME mean that the data load on that object has started.
Rows with a NOT NULL value in LAST_UPDATE mean that the data load on that object is ongoing.
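You can also tie the Data Pump worker processes back to their database sessions – a small sketch using the standard dictionary views:

select	d.job_name, s.sid, s.serial#, s.status, s.module
from	dba_datapump_sessions d, v$session s
where	s.saddr = d.saddr
/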

Even without starting the job, you can see the order in which Data Pump will fetch the data with the SQL below (for subpartitions you will need to modify it)

col owner head "Owner" form a10
col segment_name head "Segment Name" form a30
col segment_type head "Segment Type" form a18
col partition_name head "Partition Name" form a30
col tablespace_name head "Tablespace" form a20
col extents head "Extents" form 9999999
col blocks head "Blocks" form 9,999,999,999
col siz head "Size (MB)" form 999,999,999
col rwnm head "Rownum" form 999,999,999
 
select    segment_type, owner, segment_name, partition_name, blocks, siz, rownum rwnm
from      (
                select    /*+no_merge no_unnest*/
                                segment_type, owner, segment_name, partition_name, blocks, round(bytes/1048576) siz
                from      dba_segments
                where   owner = 'HR'
                and        segment_type not in ('INDEX', 'INDEX PARTITION')
                order     by 5 desc, 2, 3, 4
                ) iv
/

So what use is this monitoring? Does it have any practical value?
Of course. It gives us an understanding of how the import job works.
Better monitoring always allows better analysis of the problem, and possibly better ideas for problem resolution or optimization.
Now you can see that I have one single table of around 486GB. This is the first table picked for processing, as it has the largest SIZE_ESTIMATE.
It determines the time taken by the Data Pump job.
So even when all the other “table_data objects” in the HR schema complete and 15 of the Data Pump processes sit idle, Data Pump will not start on the SALES schema until this table in the HR schema completes.

You can monitor the transfer rate in the database by issuing this query on the target

select sum(bytes)/(1024*1024*1024) from dba_segments where segment_name='TABLE1';

On Solaris you can monitor the network transfer rate using dlstat
(I use net0 for my transfer)

dlstat -t -i 60|while read line ; do print `date` : $line; done|grep net0

In my environment I observed that a single process transfers at the rate of 15MB/second (roughly around 1GB/min).
So the table TABLE1, at 486,617 MB, will take around 9 hours to complete.
How to handle such tables and parallelize them?

When all the data pump processes were working in parallel, this was the dlstat output

Wed Apr 26 23:44:10 PDT 2017 : net0 2.40M 7.04G 0
Wed Apr 26 23:45:10 PDT 2017 : net0 2.51M 7.03G 0
Wed Apr 26 23:46:10 PDT 2017 : net0 2.59M 7.04G 0
Wed Apr 26 23:47:10 PDT 2017 : net0 2.60M 7.04G 0
Wed Apr 26 23:48:10 PDT 2017 : net0 2.58M 7.03G 0
Wed Apr 26 23:49:10 PDT 2017 : net0 2.59M 7.04G 0
Wed Apr 26 23:50:10 PDT 2017 : net0 2.59M 7.02G 0
Wed Apr 26 23:51:10 PDT 2017 : net0 2.59M 7.04G 0
Wed Apr 26 23:52:10 PDT 2017 : net0 2.60M 7.03G 0
Wed Apr 26 23:53:10 PDT 2017 : net0 2.59M 7.03G 0
Wed Apr 26 23:54:10 PDT 2017 : net0 2.58M 7.01G 0

When I had only 1 session, this was the output

Thu Apr 27 08:09:29 PDT 2017 : net0 110.42K 1.20G 0
Thu Apr 27 08:10:29 PDT 2017 : net0 111.50K 1.21G 0
Thu Apr 27 08:11:29 PDT 2017 : net0 106.08K 1.15G 0
Thu Apr 27 08:12:29 PDT 2017 : net0 107.22K 1.15G 0
Thu Apr 27 08:13:29 PDT 2017 : net0 117.67K 1.25G 0

So our goal during the data transfer should be to fully utilize the network, i.e. make all the processes work and push 7GB/min.
We need to minimize or avoid serial processing, as it results in suboptimal usage of the network and an increase in the time taken.

You can exclude those big skewed tables from the main job (create these tables manually before you start the Data Pump job and specify TABLE_EXISTS_ACTION=SKIP when you run it).
For these big tables, you can start multiple TABLE-level Data Pump jobs using the QUERY clause, specifying a range of values for each of those jobs – see the sketch below.
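A sketch of one such table-level parameter file (the table name, column and range are hypothetical):

TABLES=HR.TABLE1
NETWORK_LINK=INITLOAD
TABLE_EXISTS_ACTION=APPEND
QUERY=HR.TABLE1:"WHERE id >= 1 AND id < 100000000"
DIRECTORY=DATA_PUMP_INITLOAD
LOGFILE=impdp_table1_part1.log

Each such job covers one slice of the table, so running several of them concurrently gives manual parallelism over a single large table.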

Also if you have multiple major schemas, you can split the data pump jobs across these schemas

Now for the caveats of parallelization

– For parallel transfer to work, a partitioned table should not be pre-created; the Data Pump process has to create it during the import (I checked this myself).
If you pre-created the partitioned table (either manually or in a prior Data Pump job with METADATA_ONLY), only 1 Data Pump process will load the data, even if you specified PARALLEL while loading data into the table.
The rest of the Data Pump processes will remain idle.

– If you want to skip some partitions/subpartitions of a partitioned table, it’s better to specify the partitions/subpartitions explicitly instead of using the QUERY clause.

– Data Pump does not parallelize across schemas.
Example – you specified multiple schemas in the parameter file, like SCHEMAS=HR,SALES, and specified PARALLEL=16.
Let’s say Data Pump started the import of the HR schema, and towards the end 8 large “table_data objects” in the HR schema are still running while the rest (8) of the Data Pump processes are idle.
Still, the SALES schema will not be started until all the “table_data objects” in the HR schema complete.
So if your data is spread across multiple schemas, run a separate Data Pump job for each.

– The total time taken for the Data Pump job to complete is determined by the size of the largest “table_data object”.

– NETWORK_LINK does not work with tables that have columns of type LONG/LONG RAW. For other such restrictions, refer to the Data Pump documentation.

For indexes & constraints, I did this –
– After the table data import, I ran another import with INCLUDE for INDEX and CONSTRAINT to generate a SQLFILE
– Modified the PARALLEL attributes in the SQLFILE
– Split the indexes SQLFILE into parts (8 or 16 or whatever you want), making sure all indexes on a table remain in the same file
– Ran the scripts in parallel
For indexes associated with constraints, only the following order will ensure a parallel operation during each step (see the sketch below) –
– Create the index in parallel
– Create the constraint with this index in the ENABLE NOVALIDATE state
– Alter the constraint to ENABLE VALIDATE
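A minimal sketch of that sequence (table, column and index names are hypothetical):

create index hr.table1_pk_ix on hr.table1 (id) parallel 16;

alter table hr.table1 add constraint table1_pk primary key (id)
using index hr.table1_pk_ix enable novalidate;

alter table hr.table1 enable validate constraint table1_pk;

-- optionally reset the degree once the build is done
alter index hr.table1_pk_ix noparallel;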

June 5, 2016

Analyze index validate structure “online” does not populate “index_stats”

Filed under: Uncategorized — srivenu @ 6:35 pm

Using the “online” clause on analyze index … validate structure does not populate index_stats. This hasn’t changed even in the latest version of Oracle Database 12c. If you don’t use the online clause, the session running the analyze blocks other sessions performing DMLs on the index, on a TM enqueue.

To reduce the locking and still get the index_stats, I run the analyze command twice: first with the online clause, to fill the cache with the index blocks, immediately followed by the same analyze command without the online clause.

For example –

analyze index ix validate structure online;

immediately followed by

analyze index ix validate structure;
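index_stats holds a single row, visible only to the session that ran the analyze, so query it right away – for example:

select	height, lf_rows, del_lf_rows, pct_used
from	index_stats
/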

