Oracle

December 20, 2009

FCF

Filed under: Uncategorized — srivenu @ 9:37 am

I configured and tested an FCF (Fast Connection Failover) setup for use by a TIBCO application. I was using Metalink note 433827.1. (I also found one example code snippet on http://www.idevelopment.info)

Since I did not want to install ONS on the middle tier, I was using “remote subscription” (this limits the debugging options on the client side). For testing I registered a new service in CRS and used it in the JDBC connection URL. I tested with both the VIP and the physical IP in the JDBC URL and both worked. I had the ONS ports opened on the server firewall.

One issue (or I thought it was an issue) I faced was related to the minimum number of connections in the cache after an instance shutdown. While testing, I used a value of 10 for both “MinLimit” & “InitialLimit”. On startup, the pool had a total of 10 connections, 5 each from instances 1 & 2. On shutdown abort of instance 1, I expected the JDBC application to open 5 new connections on instance 2 in lieu of the 5 closed connections on instance 1. But the connection count remained at 5. I scratched my head till I had a small “aha” moment on reading this in the Oracle manual (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/concache.htm)

MinLimit
………….

Connections can fall below the minimum limit set on the connection pool when JDBC Fast Connection Failover DOWN events are processed. The processing removes affected connections from the pool. MinLimit will be honored as requests to the connection pool increase and the number of connections get past the MinLimit value.
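(To see where the surviving pool connections actually live after such an event, a quick per-instance count from the database side helps. This is just a sketch – the service name TIBCO_SRV is a placeholder for whatever service the pool connects through.)

select inst_id, count(*) connection_count
from   gv$session
where  service_name = 'TIBCO_SRV'   -- placeholder service name
and    program like 'JDBC%'         -- adjust to however your pool sessions identify themselves
group by inst_id;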

One more test case I performed was to use a service name which wasn't registered in CRS. The code just failed with exception 17410 when it tried to use a connection from instance 1 (which was shut down some time after application startup). This was expected, as no FAN events would be received by the JDBC application from ONS for this unregistered service name.
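A quick way to cross-check which service names are actually registered with the instances (and hence get FAN events published for them) is to query gv$active_services:

select inst_id, name
from   gv$active_services
order by inst_id, name;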

December 14, 2009

Hash Join – LAST_EXECUTION in V$SQL_WORKAREA and NUMBER_PASSES in V$SQL_WORKAREA_ACTIVE

Filed under: Uncategorized — srivenu @ 1:39 pm

Most of us wouldn’t like to see an n-pass hash join in V$SQL_WORKAREA. We always strive to see that our hash joins complete in OPTIMAL or 1-PASS mode. So does a 12-PASS hash join mean that we iterated over the same data 12 times? Or that we used the hash memory 12 times? As a corollary, does a 12-PASS hash join mean it was 12 times as costly, resource-consuming or time-consuming as a 1-PASS hash join?

This note is just to explain how the number of passes is recorded for a hash join in V$SQL_WORKAREA & V$SQL_WORKAREA_ACTIVE.

LAST_EXECUTION in V$SQL_WORKAREA

The LAST_EXECUTION column in V$SQL_WORKAREA records the number of passes required by the hash join. For a newly loaded SQL, LAST_EXECUTION starts out with the value OPTIMAL. This is how the number of passes is evaluated.

1) If the build table fits completely in memory, then the hash table is built from the build table and the probe table is scanned once to probe this in-memory hash table. End of story. This type of execution is recorded as OPTIMAL. In this ideal case, we have processed the build and probe data only once.

If this is not the case, then

2) The build table and probe table data are split into partitions. If, for each pair of spilled build/probe partitions, the smaller of the two fits completely in memory, the join completes in 1-PASS, i.e. for each of the spilled partitions, the hash table is built from the smaller partition and the other one is used to probe into it. In this case, we have processed the data (or at least some of it) 2 times. The reason I said "some of it" is that if some partitions of the build table fit in memory during the initial hash table build phase, we build the hash table with those partitions and process those rows while scanning the probe data. We only partition and spill to disk those partitions of the probe table that do not have a matching build-table partition in memory. So only the partitions that spilled to disk are processed twice.

If this is not the case, then

3) The worst case is the nested-loops hash join. In this case, there is at least 1 partition where the data from both the build & probe tables is too large to fit in memory and hence has to be processed multiple times, in parts. I will use an example to illustrate this case.

Let's say we have 2 tables, X1 & X2. X1 is our build table and X2 is the probe. Our hash join is fanned out 4 ways (i.e. 4 partitions) into partitions 0, 1, 2 & 3. We start scanning X1 and let's say all partitions except partition 1 spill to disk. We build the hash table for partition 1 and spill the other partitions to disk.

Let's say these are the partition sizes for X1

200 blocks for partition 0,

300 blocks for partition 2 &

50 blocks for partition 3.

We start scanning X2 and we process all rows from X2 that hash to partition 1.

So processing of partition 1 data completes in OPTIMAL mode.

The data from X2 that hashes to partitions 0, 2 & 3 now spills to disk.

Let's say these are the partition sizes for X2

150 blocks for partition 0,

200 blocks for partition 2 &

300 blocks for partition 3.

Let's say our hash area fits 80 blocks. It will now start processing the smallest of the spilled partitions, so it will start with partition 3. It will be able to fit the 50 blocks in memory, so it will use partition 3 of X1 to create the hash table and will probe this hash table by scanning partition 3 of X2.

So processing of partition 3 completes in 1-Pass.

But this is not the end of the story. It will now start processing partition 0. Since X2's partition 0 (150 blocks) is smaller than X1's (200 blocks), a role reversal will occur and it will use partition 0 of X2 to build the hash table. But it will be able to fit only 80 blocks in memory, so it will create a hash table out of 80 blocks and will probe this hash table with X1's partition 0. After this it will build a hash table with the remaining 70 blocks of X2's partition 0 and will probe this hash table by scanning X1's partition 0 once more.

So processing of partition 0 completes in 2-Passes.

Similarly, processing of partition 2 completes in 3 passes (X2's 200 blocks are consumed in three 80-block chunks, and X1's partition 2 is scanned once for each chunk).

So this is the final status –

Processing of partition 1 completed in OPTIMAL mode.

Processing of partition 3 completed in 1-Pass mode.

Processing of partition 0 completed in 2-Pass mode.

Processing of partition 2 completed in 3-Pass mode.

But what is recorded for LAST_EXECUTION in V$SQL_WORKAREA ?

It is 3.

What is recorded is not the sum of passes across all partitions but the maximum number of passes faced across all partitions. So the number of passes is not the number of times we reused the hash area or the number of times we scanned the whole data. I think we can say it is the number of times we had to iterate over the largest partition's data.
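A query along these lines shows how the last execution of each hash join workarea went, together with the cumulative optimal/one-pass/multi-pass counts (just a sketch – plug in the sql_id you are interested in):

select sql_id, operation_type, operation_id,
       last_execution, last_memory_used, last_tempseg_size,
       optimal_executions, onepass_executions, multipasses_executions
from   v$sql_workarea
where  operation_type = 'HASH-JOIN'
and    sql_id = '&sql_id';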

NUMBER_PASSES in V$SQL_WORKAREA_ACTIVE

If you have monitored a long-running hash join, you might have observed that the value in NUMBER_PASSES changes during the execution. It starts with 0 and steadily increases before falling back to 1 or 2 (once it crosses the value 2 it doesn't go back below 2), then steadily increases again till it falls back to 1 or 2. This cycle is repeated till the join completes, and the high value reached in each cycle increases.

For example, you might see NUMBER_PASSES progress like this: 0…1…2…3…2…3…4…5…2…3…4…5…6.

What is recorded in V$SQL_WORKAREA_ACTIVE is the Number of Passes within each partition. Since large partitions are processed at the end, you will see values getting larger towards the end of the join.
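If you want to watch this yourself, something like the following, run repeatedly from another session while the join executes, shows the pass count climbing and resetting (a sketch – &sid is the session running the hash join):

select sid, operation_type, number_passes,
       round(actual_mem_used/1024)   mem_used_kb,
       round(tempseg_size/1024/1024) temp_used_mb
from   v$sql_workarea_active
where  sid = &sid
and    operation_type = 'HASH-JOIN';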

December 7, 2009

Hash Join

Filed under: Uncategorized — srivenu @ 8:18 am

I wrote this note while running some test cases trying to understand the hash join as implemented by Oracle. I was running 11.1.0.6.0 on Windows XP 32-bit. I used events 10046 & 10104 to gather traces while running my test cases. For reference I was using Jonathan Lewis's excellent book on CBO fundamentals, Steve Adams' notes on hash joins and an Oracle technical document on the hash join.

I was using this simple test case.
(I played with the data volume and values while testing out various scenarios)

*******************************************
drop table x1;
drop table x2;
create table x1 (a char(…)) tablespace data1;

begin
for i in 1…. loop
insert into x1 values(i);
end loop;
end;
/

commit;

create table x2 (a char(…)) tablespace data2;

begin
for i in 1…. loop
insert into x2 values(i);
end loop;
end;
/

commit;

exec dbms_stats.gather_table_stats('TEST','X1');
exec dbms_stats.gather_table_stats('TEST','X2');

alter session set workarea_size_policy = manual;
alter session set hash_area_size=….;
alter session set "_hash_multiblock_io_count"=..;
alter system flush shared_pool;
alter system flush buffer_cache;

alter session set events '10104 trace name context forever';
alter session set events '10046 trace name context forever, level 12';

set pause on

select /*+leading(x2) use_hash(x1)*/
*
from x1, x2
where x1.a=x2.a
/

*******************************************
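(Since this was 11.1, the trace file produced by the run above can be located easily – a small convenience query, not part of the original script.)

select value
from   v$diag_info
where  name = 'Default Trace File';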

In the trace
kxhfInit() seems to be the entry point for the 10104 event.

Hash Initialization
*************
*** RowSrcId: 1 HASH JOIN STATISTICS (INITIALIZATION) ***
Here we get to see the memory allocation details.

Join Type: INNER join
Original hash-area size: 971776
Memory for slot table: 884736
Calculated overhead for partitions and row/slot managers: 87040
Hash-join fanout: 8
Number of partitions: 8
Number of slots: 12
Multiblock IO: 9
Block size(KB): 8
Cluster (slot) size(KB): 72
Minimum number of bytes per block: 8160
Bit vector memory allocation(KB): 32
Per partition bit vector length(KB): 4
Maximum possible row length: 413
Estimated build size (KB): 1028
Estimated Build Row Length (includes overhead): 117
# Immutable Flags:
Not BUFFER(execution) output of the join for PQ
Evaluate Left Input Row Vector
Evaluate Right Input Row Vector
# Mutable Flags:
IO sync

Out of the total allocated hash area, some is used as overhead and appears as “Calculated overhead for partitions and row/slot managers”. (I think this includes, among other things, the bitmap vectors and, as the name suggests, the memory for the row/slot managers, which increases with the row count.) The remaining hash area after overhead (“Memory for slot table”) is split up into clusters, each of size “_hash_multiblock_io_count” * block size. Most of these clusters are used for partition data and some for asynchronous I/O.
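The numbers in the trace above hang together nicely. A quick check of the arithmetic, using the values from this particular trace:

select 9 * 8 * 1024      cluster_size_bytes,   -- Multiblock IO * block size = 72K ("Cluster (slot) size")
       12 * 9 * 8 * 1024 slot_table_bytes,     -- 12 slots of 72K each = 884736 ("Memory for slot table")
       971776 - 884736   overhead_bytes        -- hash-area size minus slot table = 87040 ("Calculated overhead")
from   dual;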

Build table scan
************
kxhfSetPhase: phase=BUILD
The above line in the trace file seems to mark the scan of the build table.

One of the decisions made early on is the number of partitions. This, I think, is determined by the cluster size, the hash area size and the estimated build table size. Actually, if the build table fits in memory there is no need to partition the data at all. But there is no way to know that for sure up front, and it wouldn't be optimal to start partitioning the build table only after reading it once and finding that it doesn't fit in memory. So I think the build table is always partitioned, whether it fits in memory or not. For each row read from the build table, a hash function (hash function 1) is applied to the hash join key and the row is placed in the appropriate partition. At the same time another hash function, hash function 2, is also applied to the join key; this determines the hash bucket that the row goes into when the hash table is built. Steve Adams suggests that this hash value is obtained by applying the dbms_utility.get_hash_value function. This value is stored along with the row. (I tested this by making the build table spill to disk and dumping the tempfile blocks. An 8-byte string, which I think is the hash key, was attached to each row.) I think the bit vector (separate for each partition?) is also built at this time. I couldn't find info about the internals of the bit vector; maybe it's a Bloom filter(?). The basic purpose of the bit vector is to provide a cost-effective way to eliminate as many rows as possible from the probe table during the partitioning phase.
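Whether the kernel really calls dbms_utility.get_hash_value internally is only Steve Adams' suggestion, but the function is a convenient way to picture the two-level idea – one hash to pick a partition, another to pick a bucket. This is purely an illustration, not a claim about what the join code actually does:

select a,
       dbms_utility.get_hash_value(a, 0, 8)    part_no,     -- "hash function 1": 8-way fanout
       dbms_utility.get_hash_value(a, 0, 8192) bucket_no    -- "hash function 2": bucket within the hash table
from   x1
where  rownum <= 5;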

kxhfWrite: hash-join is spilling to disk
If the build table is too big to fit in memory it starts spilling to disk and you see the above line in the trace (for trivia, this line actually appears 1 row before it starts to spill). At this stage, not all partitions are spilled to disk. Initially it spills only one partition and keeps the rest completely in memory. If more data keeps coming in, it will continue to spill more and more partitions, but it tries to keep at least 1 partition completely in memory as far as possible. Let's say your hash area (slot memory) is 160K, your cluster size is 16K and the partition count is 4. It keeps 4 clusters (one for each partition) in memory and at least 1 for async I/O, so 5 of the 10 clusters are used up. Let's say it decided to retain partition 2's clusters in memory; it will retain them in memory till they exceed 4 clusters. If more data from the build table keeps going into that partition, it spills that partition onto disk as well.

kxhfSetPhase: phase=PROBE_1
This line appears just after the end of the build table input. It reads the header and some blocks from the probe table at this stage.
I wonder why it's reading some of the probe table now? It then marks the end of build phase 1.

*** RowSrcId: 1 END OF BUILD (PHASE 1) ***
This marks the end of data input from build table.

It prints out new stats of the build table
Revised row length: 115
Revised build size: 1009KB

If any of the partitions spilled to disk, you can see that info in the trace
kxhfFlush(): pid=0 nRows=… build= topQ= (this means partition 0 spilled to disk)

*** RowSrcId: 1 HASH JOIN RESIZE BUILD (PHASE 1) ***
I wonder what RESIZE BUILD means? Does it adjust memory requirements as per the partition sizes?

You see the following lines in the trace file
Total number of partitions: 8
Number of partitions which could fit in memory: 3
Number of partitions left in memory: 3
Total number of slots in in-memory partitions: 6

If the build table fit completely in memory (the optimal case), you see the number of partitions left in memory equal to the total number of partitions. If all partitions spilled to disk, you will see "Number of partitions left in memory: 0".

*** RowSrcId: 1 HASH JOIN BUILD HASH TABLE (PHASE 1) ***
The partitioning phase of the build table is complete. Now it starts building a hash table using the hash keys (generated for each row during the partitioning phase using hash function 2).

You see it dumping the following info
Total number of partitions: 8
Number of partitions left in memory: 3
Total number of rows in in-memory partitions: 3365

The number of rows determines the bucket count for the hash table. The bucket count is the next power of 2 greater than the row count. For example, in the above case the number of buckets would be 4096 (the power of 2 just above 3365). But I saw that sometimes it goes for the next one, i.e. 8192 (under memory pressure I saw it getting reduced, but I don't see any point in having more than 4096).
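So for the 3365 in-memory rows above, the "next power of 2" rule works out to 4096 (and the trace shows Oracle actually sized the table one step higher, at 8192):

select power(2, ceil(log(2, 3365))) buckets_by_rule
from   dual;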

Estimated max # of build rows that can fit in avail memory: 15336
I did not understand this line. For example, in my test the row length was 115, so 15336 * 115 is around 1760K. How could it fit this in my slot memory of around 860K?

It shows data in the in-memory partitions

### Partition Distribution ###
Partition:0 rows:0 clusters:0 slots:0 kept=0
Partition:1 rows:0 clusters:0 slots:0 kept=0
Partition:2 rows:0 clusters:0 slots:0 kept=0
Partition:3 rows:0 clusters:0 slots:0 kept=0
Partition:4 rows:0 clusters:0 slots:0 kept=0
Partition:5 rows:1128 clusters:2 slots:2 kept=1
Partition:6 rows:1120 clusters:2 slots:2 kept=1
Partition:7 rows:1117 clusters:2 slots:2 kept=1
*** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***

Revised number of hash buckets (after flushing): 3365
Allocating new hash table.
*** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***

Requested size of hash table: 1024
Actual size of hash table: 1024
Number of buckets: 8192
Match bit vector allocated: FALSE
*** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
Total number of rows (may have changed): 3365
Number of in-memory partitions (may have changed): 3
Final number of hash buckets: 8192
Size (in bytes) of hash table: 32768

The size of the hash table is 4 * the number of hash buckets (4 bytes per bucket on my 32-bit Windows system), i.e. 8192 * 4 = 32768 here.
It is here that I see it spill the remaining clusters of the non-memory-resident partitions to disk.
I don't know why it says "may have changed" for the total number of rows and the number of in-memory partitions. Maybe this is one more place where it could resize the workarea (in auto PGA mode) and the number of in-memory partitions might change?

Now the hash table is built if and only if there is at least 1 partition (of the build table input) completely in memory. If there is more than one partition completely in memory, it builds the hash table for all the memory-resident partitions. (In the above example it built the hash table for partitions 5, 6 & 7.) If there are no memory-resident partitions, it doesn't build the hash table now.

If a hash table is built, you will see the following lines
kxhfIterate(end_iterate): numAlloc=12, maxSlots=12
*** (continued) HASH JOIN BUILD HASH TABLE (PHASE 1) ***
### Hash table ###
# NOTE: The calculated number of rows in non-empty buckets may be smaller
# than the true number.
Number of buckets with 0 rows: 5437
Number of buckets with 1 rows: 2217
Number of buckets with 2 rows: 472
Number of buckets with 3 rows: 61
Number of buckets with 4 rows: 4
Number of buckets with 5 rows: 1
Number of buckets with 6 rows: 0
Number of buckets with 7 rows: 0
Number of buckets with 8 rows: 0
Number of buckets with 9 rows: 0
Number of buckets with between 10 and 19 rows: 0
Number of buckets with between 20 and 29 rows: 0
Number of buckets with between 30 and 39 rows: 0
Number of buckets with between 40 and 49 rows: 0
Number of buckets with between 50 and 59 rows: 0
Number of buckets with between 60 and 69 rows: 0
Number of buckets with between 70 and 79 rows: 0
Number of buckets with between 80 and 89 rows: 0
Number of buckets with between 90 and 99 rows: 0
Number of buckets with 100 or more rows: 0
### Hash table overall statistics ###
Total buckets: 8192 Empty buckets: 5437 Non-empty buckets: 2755
Total number of rows: 3365
Maximum number of rows in a bucket: 5
Average number of rows in non-empty buckets: 1.221416

This shows the distribution of rows across the hash table buckets. This is the place where you can check for skew in the data distribution, which can cause a lot of CPU consumption during the join.
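If the bucket histogram does show long chains, the usual culprit is a popular join key value. A simple check on the build input (sketched here against the test tables used above) will show it:

select a, count(*) occurrences
from   x1                  -- the build input in this test
group by a
having count(*) > 1
order by count(*) desc;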

It now starts reading the probe table. What it does with the probe data depends on whether the hash table is built or not.
If there is no hash table – It partitions the probe table data into the same number of partitions as the build table, using the same hash function 1. But it now uses the bit vector (which it built during the scan of the build table) to determine whether the row is retained or thrown away. The bit vector doesn't do a complete elimination but does provide a very good estimate as to whether a matching key could exist. A hash key is then generated (using hash function 2) and the row is added to the corresponding partition's cluster.
If there is a hash table – Then I think the next step depends on whether the hash table covers all of the build table data or only part of it.
If the hash table is only for 1 or more (but not all) partitions of the build table, then I think the probe table row is first partitioned; if it corresponds to a partition in memory, a hash key is generated (using hash function 2) and checked against the matching hash bucket in memory (no need for bit-vector filtering here, is there?). If a matching row is found, the data is passed to the next step in the plan. If the probe row corresponds to a partition which is not memory-resident, it is evaluated against the bit vector for that partition to see if it can be retained.
If the hash table corresponds to all partitions (i.e. the build table fit completely in memory), then the partitioning and bit vector steps could be skipped. (I couldn't confirm this detail for Oracle, but I saw a MySQL patent for this optimization.) Hash function 2 is applied to the row's key values and the corresponding hash bucket is searched for matching rows.

You can see data being fetched starting at this phase, if the hash table (for the build table) exists.
An optimal hash join ends here. No further steps are necessary.

After the end of probe input, you can see it dumping the probe partitions to disk
kxhfSetPhase: phase=PROBE_2
kxhfFlush(): pid=0 nRows=131 build=0 topQ=0
kxhfFlush(): pid=1 nRows=135 build=0 topQ=1
kxhfFlush(): pid=2 nRows=124 build=0 topQ=2
kxhfFlush(): pid=3 nRows=119 build=0 topQ=3
kxhfFlush(): pid=4 nRows=124 build=0 topQ=4

So in the above lines, it evaluated and fetched all rows corresponding to partitions 5, 6 & 7 (a hash table exists in memory for these partitions). Probe table data for partitions 0, 1, 2, 3 & 4 is written to disk, as no in-memory hash table exists for these build table partitions.

qerhjFetchPhase2(): building a hash table
It now starts iterating over the remaining partitions on disk (partitions 0, 1, 2, 3 & 4 in this case). It picks the partitions based on size, starting from the smaller ones and moving to the larger ones (it has this data in the partition histogram). It picks matching partitions from the probe and build tables and takes the smaller of the two to build a hash table in memory, using the other one to probe into this hash table. For example, if partition 0 of the build table fills 2 clusters and partition 0 of the probe table fills 1 cluster, it will build the hash table using the probe table data. This is called Dynamic Role Reversal.

*** RowSrcId: 1 HASH JOIN GET FLUSHED PARTITIONS (PHASE 2) ***
Getting a pair of flushed partions.
BUILD PARTION: nrows:1169 size=(2 slots, 144K)
PROBE PARTION: nrows:131 size=(1 slots, 72K)
ROLE REVERSAL OCCURRED

You can see it reading the data that it previously spilled to disk. It builds the hash table first
*** RowSrcId: 1 HASH JOIN BUILD HASH TABLE (PHASE 2) ***

Number of blocks that may be used to build the hash hable 63
Number of rows left to be iterated over (start of function): 131
Number of rows iterated over this function call: 131
Number of rows left to be iterated over (end of function): 0

You see here that "Number of rows left to be iterated over (end of function):" is 0. This is the next best case (the first being the optimal join where the build table fits completely in memory): the smaller of the build or probe partition fits in memory, so the other table's partition needs to be scanned only once.

At each stage of this hash table build and probe, you will see the hash table stats in the trace like this
### Hash table ###
# NOTE: The calculated number of rows in non-empty buckets may be smaller
# than the true number.
Number of buckets with 0 rows: 8062
Number of buckets with 1 rows: 129
Number of buckets with 2 rows: 1
Number of buckets with 3 rows: 0
………….
### Hash table overall statistics ###
Total buckets: 8192 Empty buckets: 8062 Non-empty buckets: 130
Total number of rows: 131
Maximum number of rows in a bucket: 2
Average number of rows in non-empty buckets: 1.007692

After each partition, you start the iteration on the next partition
kxhfResetIter(0C7F9700)

It picks another partition
*** RowSrcId: 1 HASH JOIN GET FLUSHED PARTITIONS (PHASE 2) ***
Getting a pair of flushed partions.
BUILD PARTION: nrows:1071 size=(2 slots, 144K)
PROBE PARTION: nrows:124 size=(1 slots, 72K)
ROLE REVERSAL OCCURRED

The worst-case scenario for the hash join results in a nested-loops hash join. This will occur if, even after partitioning, the probe or build table partitions are too big to fit in memory.

In such a case, for each partition, the smaller of the 2 inputs is picked, a hash table is built with as many of its rows as fit in memory, and the other input is scanned to probe into this hash table. This process is repeated till all the data is processed. In such a nested-loops hash join, you will see lines like these in the trace

*** RowSrcId: 1 HASH JOIN GET FLUSHED PARTITIONS (PHASE 2) ***
Getting a pair of flushed partions.
BUILD PARTION: nrows:2984 size=(20 slots, 320K)
PROBE PARTION: nrows:1492 size=(10 slots, 160K)
ROLE REVERSAL OCCURRED

*** RowSrcId: 1 HASH JOIN BUILD HASH TABLE (PHASE 2) ***
Number of blocks that may be used to build the hash hable 2
Number of rows left to be iterated over (start of function): 1492
Number of rows iterated over this function call: 151
Number of rows left to be iterated over (end of function): 1341

So for this partition's data, it has picked the probe table to build the hash table (ROLE REVERSAL OCCURRED), and it was able to build the hash table for 151 rows (out of the total 1492). It now scans all the build table data for that partition to find the matching rows. Then you see the following lines.

kxhfResetIter(0CA476F8)
qerhjFetchPhase2(): building a hash table

*** RowSrcId: 1 HASH JOIN BUILD HASH TABLE (PHASE 2) ***
Number of blocks that may be used to build the hash hable 2
Number of rows left to be iterated over (start of function): 1341
Number of rows iterated over this function call: 151
Number of rows left to be iterated over (end of function): 1190

It picked up the next 151 rows of the probe table to build the hash table and again iterates over the build table data to find matching rows. This is called a nested-loops hash join and is the worst-case scenario for a hash join.
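To put a number on how expensive this particular partition is: at roughly 151 hash table rows per iteration, the 1492 spilled rows need about 10 iterations, and the other input's data for that partition is re-scanned on every one of them:

select ceil(1492 / 151) iterations_needed
from   dual;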

The following line, I think, marks the end of the hash join.
*** RowSrcId: 1, qerhjFreeSpace(): free hash-join memory

Significance of table order in a Hash Join

One of the important decisions taken by the CBO in a hash join is the choice of the build table – which of the 2 tables in the hash join should be used to build the hash table. I was testing the impact of this decision on the resource usage and performance of the join. Leaving other contention aside, the major components we need to look at are CPU & I/O.

Case 1) Either of the join tables fits in available memory

Even if you have a hash area large enough to hold either of the tables in memory, it would be wiser to choose the smaller one as the build table. Even though the I/O cost is going to be the same, the cost of building a hash table is proportional to the input data size. The hash join cost is the cost of building the hash table plus the cost of probing it, and building the hash table is more costly than probing, given inputs of the same size. But things could turn around if the collision chains get longer, which increases the probe cost.

Case 2) Only the smaller of the two inputs fits in memory.

It's ideal to pick the smaller one to build the hash table in memory and use the larger one to probe it. This has the minimum CPU & I/O cost.

Case 3) Neither of the join tables fits in available memory.

I think several factors come into play here. Dynamic Role Reversal, which automatically kicks in in this case, would choose the smaller partition to build the hash table. I think the one major factor you have to consider here is bit-vector filtering. (I'm assuming no major data skew.)

Choosing the smaller table as the probe table and applying bit-vector filtering to it could reduce the size of the spilled partitions. This would reduce the number of passes in a nested-loops hash join. In some borderline cases, the join might even complete in 1-pass.

At the same time, taking advantage of bit-vector filtering on the larger table could reduce the probe cost.

The cost difference between either choice might not be significant.
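One simple way to test both choices on your own data is to force each join order with hints and compare the workarea statistics afterwards. A sketch based on the test tables above (the hints only fix the build table; everything else is left to the optimizer):

select /*+ leading(x1) use_hash(x2) */ count(*) from x1, x2 where x1.a = x2.a;
select /*+ leading(x2) use_hash(x1) */ count(*) from x1, x2 where x1.a = x2.a;

select sql_id, last_execution, last_memory_used, last_tempseg_size, active_time
from   v$sql_workarea
where  operation_type = 'HASH-JOIN';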

December 6, 2009

One more window into Oracle Internals

Filed under: Uncategorized — srivenu @ 3:14 pm

I use this site to dig up some Oracle internals – http://www.freepatentsonline.com. You can use the search field for whatever you want. Example – search for “oracle automatic memory management” and this is one article you get – Dynamic and Automatic Memory Management. The PDF in the article gives more info.
