Thursday, October 9, 2014

Tuning PGA Memory Allocation

Problem
You’ve decided to set a specific minimum memory size for the PGA_AGGREGATE_TARGET parameter, although you’re using Oracle’s automatic memory management to allocate memory. You’d like to know the best way to figure out the optimal PGA memory allocation.

Solution
There are no hard-and-fast rules for sizing the PGA_AGGREGATE_TARGET parameter. That said, if you're operating a data warehouse, you're likely to need much larger amounts of memory set aside for the PGA component of Oracle's memory allocation. You can follow these basic steps to allocate PGA memory:

  1. Choose a starting allocation based on a reasonable estimate of how much memory your workload might need for the PGA.
  2. Let the database run through one or two full cycles of representative workload. You can then query various Oracle views to determine whether your initial PGA estimate was on target.
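For the first step, here is a minimal sketch of setting an initial target. The 1 GB value is purely illustrative, not a recommendation; choose a starting figure appropriate to your workload and available memory.

```sql
-- Illustrative starting point only; 1G is an assumed value, not advice.
-- PGA_AGGREGATE_TARGET is dynamic, so no restart is needed.
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE = BOTH;

-- Verify the current setting (SQL*Plus command).
SHOW PARAMETER pga_aggregate_target
```

After the representative workload has run, the views discussed below tell you whether this first stab was adequate.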

How It Works
Although automatic memory management is designed to optimally allocate memory between the two major components of Oracle memory, the SGA and the PGA, it's certainly not uncommon for DBAs to set their own values for both. They may do so under automatic shared memory management, the alternative mode in which you set the SGA_TARGET and PGA_AGGREGATE_TARGET parameters explicitly, or even under the newer automatic memory management system. Unlike with the SGA_TARGET parameter, where cache hit ratios can mislead you about the efficacy of the instance, you'll find that an analysis of the hit ratios for the PGA_AGGREGATE_TARGET parameter is not only valid but also highly useful in configuring an appropriate size for this parameter.

The Oracle database uses PGA memory to perform operations such as sorting and hashing. The memory you allocate to the PGA component is used by the various SQL work areas running in the database (along with other consumers of PGA memory, such as PL/SQL and Java programs). Ideally, you'd want every work area to have an optimal PGA memory allocation. When the allocation is optimal, a work area performs its entire operation in memory. For example, if a SQL operation involves sorting, then under optimal PGA allocation all of the sorting is done within the PGA memory allocated to that process. If the PGA memory allocation isn't optimal, the work areas make one or more passes over the data, which means they have to perform part of the operation on disk, involving time-consuming I/O. The more passes the database is forced to make, the more I/O it performs and the longer the work takes. Oracle computes the PGA cache hit percentage with the following formula:

Bytes Processed * 100 / (Bytes Processed + Extra Bytes Processed)

Bytes Processed is the total number of bytes processed by all PGA memory-using SQL operations since the instance started, and Extra Bytes Processed is the additional bytes read and written by one-pass and multi-pass work areas. You should seek to get this ratio as close to 100 as possible; if your PGA cache hit percentage is something like 33.37%, it's definitely time to increase PGA memory by raising the value of the PGA_AGGREGATE_TARGET parameter. Fortunately, PGA_AGGREGATE_TARGET is dynamic, so you can adjust it on the fly, without a database restart, to address a sudden slowdown in database performance due to heavy sorting and hashing activity.
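The arithmetic behind the hit percentage is easy to sanity-check outside the database. This short Python sketch (the function name is illustrative, not an Oracle API) applies the formula to made-up byte counts:

```python
def pga_cache_hit_pct(bytes_processed: int, extra_bytes_processed: int) -> float:
    """Oracle's PGA cache hit formula:
    Bytes Processed * 100 / (Bytes Processed + Extra Bytes Processed)."""
    return bytes_processed * 100 / (bytes_processed + extra_bytes_processed)

# With no extra (one-pass/multi-pass) bytes, the ratio is a perfect 100%.
print(pga_cache_hit_pct(1_000_000, 0))                      # 100.0

# If every byte is reprocessed twice on disk, the ratio collapses to ~33.33%,
# the kind of figure that signals PGA_AGGREGATE_TARGET is far too small.
print(round(pga_cache_hit_pct(1_000_000, 2_000_000), 2))    # 33.33
```

Note how quickly even moderate spilling drags the percentage down: the extra bytes sit in the denominator, so every on-disk pass is counted against you.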
You can issue the following simple query to find out the PGA cache hit percentage as well as a number of PGA performance-related values.

SQL> select * from v$pgastat;

NAME                                          VALUE UNIT
---------------------------------------- ---------- ------------
aggregate PGA target parameter            570425344 bytes
aggregate PGA auto target                 481397760 bytes
total PGA inuse                            35661824 bytes
total PGA allocated                        70365184 bytes
maximum PGA allocated                     195681280 bytes
over allocation count                             0
extra bytes read/written                          0 bytes
cache hit percentage                            100 percent

SQL>

Since we’re using our test database here, the cache hit percentage is a full 100%, but don’t expect that in a real-life database, especially if it is processing a lot of data warehouse–type operations!
You can also use the V$SQL_WORKAREA_HISTOGRAM view to find out how much of its work the database is performing in an optimal fashion. If a work area performs its work optimally, that is, entirely within
PGA memory, it’s counted as part of the OPTIMAL_COUNT column. If it makes one or more passes, it will go under the ONEPASS_COUNT or the MULTIPASS_COUNT columns. Here’s a query that shows how to do this:

SQL> select optimal_count, round(optimal_count*100/total, 2) optimal_perc,
            onepass_count, round(onepass_count*100/total, 2) onepass_perc,
            multipass_count, round(multipass_count*100/total, 2) multipass_perc
     from
       (select decode(sum(total_executions), 0, 1, sum(total_executions)) total,
               sum(optimal_executions) optimal_count,
               sum(onepass_executions) onepass_count,
               sum(multipasses_executions) multipass_count
        from v$sql_workarea_histogram
        where low_optimal_size > (64*1024));

OPTIMAL_COUNT OPTIMAL_PERC ONEPASS_COUNT ONEPASS_PERC MULTIPASS_COUNT MULTIPASS_PERC
------------- ------------ ------------- ------------ --------------- --------------
         8069          100             0            0               0              0

One pass is slower than none at all, but a multi-pass operation is a sure sign of trouble in your database, especially if it involves large work areas. You’ll most likely find that your database has slowed
to a crawl and is unable to scale efficiently when the database is forced to make even a moderate amount of multi-pass executions that involve large work areas, such as those that are sized 256 MB to 2 GB. To make sure that you don’t have any huge work areas running in the multi-pass mode, issue the following query:

SQL> select low_optimal_size/1024 low,
            (high_optimal_size+1)/1024 high,
            optimal_executions, onepass_executions, multipasses_executions
     from v$sql_workarea_histogram
     where total_executions != 0;

You can also execute simple queries against views such as V$SYSSTAT and V$SESSTAT, as shown here, to find out exactly how many work areas the database has executed with an optimal memory size (in the PGA), a one-pass memory size, and a multi-pass memory size.

SQL> select name profile, cnt, decode(total, 0, 0, round(cnt*100/total)) percentage
     from (select name, value cnt, (sum(value) over ()) total
           from v$sysstat where name like 'workarea exec%');

Remember that this query shows the total number of work areas executed under each of the three execution modes (optimal, one-pass, and multi-pass) since the database was started. To get the same information for a specific period of time, you can use queries against the Active Session History (ASH).
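As a sketch of that approach, the following query samples per-session PGA consumption from V$ACTIVE_SESSION_HISTORY over the last hour; the PGA_ALLOCATED column assumes Oracle 11g or later, and the one-hour window is an arbitrary example:

```sql
-- Sketch only: peak PGA allocation per session sampled by ASH in the last hour.
select session_id, session_serial#,
       max(pga_allocated)/1024/1024 max_pga_mb
from   v$active_session_history
where  sample_time > sysdate - 1/24
group  by session_id, session_serial#
order  by max_pga_mb desc;
```

Because ASH samples active sessions once per second, this gives you a time-bounded view of PGA pressure rather than the since-startup totals reported by V$SYSSTAT.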

You can also view the contents of the Automatic Workload Repository (AWR) for information on how the database used its PGA memory during any interval you choose. If you regularly create these reports and save them, you'll have a historical record of how well PGA memory has been allocated over time. You can also view the ADDM report for a specific time period to evaluate what proportion of its work the database is executing in each of the three modes of execution discussed earlier. In a data warehouse environment, where the database processes huge sorts and hashes, the optimal allocation of PGA memory is one of the most critical tasks a DBA can perform.
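To generate such an AWR report interactively, you can run the awrrpt.sql script shipped with the database; the path below assumes the standard ORACLE_HOME layout, and the script prompts you for the report format and the snapshot range to cover:

```sql
-- In SQL*Plus, '?' expands to ORACLE_HOME.
SQL> @?/rdbms/admin/awrrpt.sql
```

The "PGA Memory Advisory" and workarea sections of the resulting report summarize the same optimal/one-pass/multi-pass breakdown for the chosen interval.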
