In Part 1 of this series, I introduced the concept of querying the underlying Advisor views with the purpose of scaling up your tuning capability.
In Part 2, I explained what the Advisor Framework is and why you would want to scale-up your use of the Advisor Framework.
Part 3 covered the view structure of the Advisor framework and described what is stored in the Advisor tables.
In Part 4 I described and provided various useful queries against various Advisor views. This was intended to give you the essence of the content of these key Advisor views.
Part 5: Case Studies / Advisor framework in use
In Part 5a (this post), I will provide several case studies of the Advisor framework in use and hopefully show how useful these custom queries are. N.B. Most of the SQL code was developed to run from TOAD using “:” bind variables. I have some versions modified a bit to run from SQL*Plus; it’s mostly a formatting exercise due to SQL*Plus line size limitations.
In Part 5a, the use cases I cover are:
· Tune Top ADDM Flagged SQL
· Accept Profiles en masse
· Evaluate Index Recommendations
· Evaluate Gather Statistics Recommendations
There were too many use cases to cover in one post, so I’ll finish off the use cases in Part 5b.
In part 5b, the use cases I will cover are:
· Evaluate Parameter Recommendations
· Segment Tuning en masse
· Evaluating “Worthy” SQL Access Advisor Recommendations
Use Case: Tune Top ADDM Flagged SQL
Tuning from a top-SQL perspective is not my first go-to method; in fact, I usually avoid tuning top SQL, because in my experience the part of the application that is not meeting the performance requirements is usually somewhere else: the top SQL tend to come from scheduled jobs or other batch workloads that are expected to take a long time. Not that batch workloads don’t need tuning, but from my perspective, if the SQL is already meeting its performance requirements and you tune it, you become a hero in your own eyes and the users are no happier. Generally, I approach SQL tuning by getting a good problem description and then diving into ASH (the Active Session History) to identify the bottleneck SQL; once the SQL is identified, I pull the related metrics to help identify root cause and solutions.
There are cases, though, when a “top SQL” type of tuning is called for. For example, users are complaining about “everything” (i.e., there is no specific application function or specific time of day to pin the problem on). Before I discovered how to use the historic data in the Advisor framework, I would look in DBA_HIST_SQLSTAT for the top SQL (a minimal sketch of that approach appears after the list below), but this presents a bit of a challenge in deciding which SQL to focus on. For example, you could get the top SQL by sorting the SQL stats in descending order by:
elapsed time per execution
overall total execution time
CPU time per execution
Buffer gets
IO wait time
Number of executions
etc.
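For reference, here is a minimal sketch of that DBA_HIST_SQLSTAT approach. It is not the method I use in this post, and the :begin_snap/:end_snap bind variables for the snapshot range are my own assumption; swap the ORDER BY column for whichever metric you want to rank by:
-- hedged sketch: rank historical SQL by total elapsed time over a snapshot range
select sql_id
, sum(executions_delta) executions
, round(sum(elapsed_time_delta)/1000000) elapsed_sec
, round(sum(elapsed_time_delta)/nullif(sum(executions_delta),0)/1000000, 3) elapsed_sec_per_exec
, sum(buffer_gets_delta) buffer_gets
from dba_hist_sqlstat
where snap_id between :begin_snap and :end_snap
group by sql_id
order by elapsed_sec desc
fetch first 20 rows only
;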
Since ADDM runs every snapshot interval and picks the top SQL for me, I decided to mine this information to get the top SQL flagged by the automatic ADDM runs. The following SQL aggregates by sql_id and surfaces the SQL that appear most often in the automated ADDM reports:
with sql_tuning_actions as
(
select aa.attr1 sql_id
, (Select min(plan_hash_value) from dba_hist_SQL_plan p where p.sql_id = aa.attr1) as plan_hash_value
, (Select max(plan_hash_value) from dba_hist_SQL_plan p where p.sql_id = aa.attr1) as max_plan_hash_value
, (Select count(distinct plan_hash_value) from dba_hist_SQL_plan p where p.sql_id = aa.attr1) as num_plans
, count(*) cnt
, round(avg(round(num_attr1/1000000)),4) avg_elap_sec_per_Exec
, round(avg(num_attr2), 2) avg_est_benefit_aas
, (select max(DBMS_LOB.SUBSTR(sql_text, 3800)) from dba_hist_sqltext st where st.sql_id = aa.attr1) sql_text
, min(substr(substr(description, instr(description, '[')+1, length(description) - instr(description, '[')+1), 1, instr( substr(description, instr(description, '[')+1, length(description) - instr(description, '[')+1), ',', 1)-1)) min_start_snap_id
, max(substr(substr(description, instr(description, ', ')+1, length(description) - instr(description, ', ')+1), 1, instr( substr(description, instr(description, ', ')+1, length(description) - instr(description, ', ')+1), ']', 1)-1)) max_end_snap_id
from dba_advisor_actions aa
, dba_advisor_tasks at
where 1=1
and aa.task_id = at.task_id
and aa.command = 'RUN SQL TUNING ADVISOR'
and decode(:days_back_only_Y_N,'Y', created, trunc(sysdate-:days_back) ) >= trunc(sysdate-:days_back)
and to_number(to_char(created, 'HH24')) between
nvl(:begin_hour, 0) and nvl(:end_hour, 24)
group by aa.attr1
)
select sql_id
, (select sum(executions_delta) from dba_hist_sqlstat s where s.sql_id = a.sql_id and snap_id between min_start_snap_id and max_end_snap_id) executions
, plan_hash_value
, max_plan_hash_value
, num_plans
, cnt
, avg_elap_sec_per_exec
, avg_est_benefit_aas
, sql_text
from sql_tuning_actions a
where 1=1
-- exclude SQL that can't be profiled
and case when (sql_text like '%dbms_stats%' or sql_text like '/* SQL Analyze%' or sql_text like 'SELECT /* DS_SVC */%') then 'EXCLUDE' else 'INCLUDE' end = 'INCLUDE'
order by avg_elap_sec_per_exec desc, avg_est_benefit_aas desc, cnt desc
fetch first 20 rows only
;
This minimalist version of the query accepts a few parameters [of course, there are many ways to subset the data, so you can get creative as per your individual needs]:
:days_back_only_Y_N
:days_back
:begin_hour
:end_hour
For example, if I’m running this late on a Friday afternoon and I want to see the weekday workload during business hours, I’ll say 4 days back, between 8 AM and 4 PM, as follows (a SQL*Plus sketch for setting these binds follows the values):
:days_back_only_Y_N = Y
:days_back = 4
:begin_hour = 08
:end_hour = 16
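If you are running from SQL*Plus instead of TOAD, a minimal sketch of supplying those binds would look like the following (the script name at the end is hypothetical; it stands in for wherever you saved the query above):
-- SQL*Plus sketch: declare the binds, set them, then run the saved query
variable days_back_only_y_n varchar2(1)
variable days_back number
variable begin_hour number
variable end_hour number
begin
  :days_back_only_y_n := 'Y';
  :days_back          := 4;
  :begin_hour         := 8;
  :end_hour           := 16;
end;
/
-- @top_addm_flagged_sql.sql   -- hypothetical file holding the query above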
The output from running the above query follows.
Column descriptions:
EXECUTIONS – the number of executions of the SQL during the snapshot intervals (obtained by subquery from DBA_HIST_SQLSTAT)
SQL_ID – you know (hopefully) – the unique identifier Oracle creates for the SQL statement
PLAN_HASH_VALUE – the minimum plan hash value recorded for the SQL (obtained by subquery from DBA_HIST_SQL_PLAN); MAX_PLAN_HASH_VALUE is the corresponding maximum
NUM_PLANS – the count of distinct plan hash values for this sql_id (also from DBA_HIST_SQL_PLAN)
CNT – the count of times this finding appeared in an ADDM run
AVG_ELAP_SEC_PER_EXEC – the average elapsed time per execution of this query (from DBA_ADVISOR_ACTIONS)
AVG_EST_BENEFIT_AAS – the average expected benefit, measured in Average Active Sessions
SQL_TEXT – the SQL text as obtained from DBA_HIST_SQLTEXT
The output example has been edited to focus in on a few findings.
You may have noticed that the output is ordered by the average elapsed time per execution and then by the average estimated benefit in AAS; this lets you focus on the SQL that is most likely to be high value in terms of beneficial impact if tuned.
Also, I highlighted with the arrows a possible problem with not using bind variables: several SQL have the same plan hash value and their SQL text, at least at a glance, looks the same.
If I’m thinking of tuning these SQL, the first thing I’ll do is cut and paste the SQL_IDs into SQL*Plus commands that pull the SQL Monitor report and run them through the SQL Tuning Advisor (all in one shot) to see if there is any low-hanging fruit.
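As an illustration only (the spool file name and task name pattern here are hypothetical, and this assumes the statement is still in the cursor cache; for purely historical SQL there is a CREATE_TUNING_TASK overload that takes a snapshot range instead), the per-sql_id commands I paste together look roughly like this:
-- hedged sketch: SQL Monitor report plus a SQL Tuning Advisor run for one sql_id
set long 10000000 longchunksize 10000000 linesize 1000 pagesize 0 trimspool on
spool sqlmon_&&sql_id..html
select dbms_sqltune.report_sql_monitor(sql_id => '&&sql_id', type => 'HTML') from dual;
spool off
declare
  l_task varchar2(128);
begin
  l_task := dbms_sqltune.create_tuning_task(sql_id => '&&sql_id', task_name => 'STA_&&sql_id');
  dbms_sqltune.execute_tuning_task(task_name => l_task);
end;
/
select dbms_sqltune.report_tuning_task(task_name => 'STA_&&sql_id') from dual;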
Use Case: Accept Profiles en masse
Let’s say, for example, that a load of sql_ids has been run through the SQL Tuning Advisor. What I will do is query the persisted runs from the Advisor framework to get this information condensed in one shot (rather than picking through the SQL Tuning Advisor reports one by one). This is especially helpful when there are lots of STA runs; I run the following SQL statement to extract lots of SQL Profiles automatically. The query provides the sql_id, the percent benefit [very useful to gauge the expected benefit of the SQL Profile], the accept-SQL-Profile command [which can easily be used to run a large set of accepts], along with the recommendation type [which shows other recommendations that may be useful].
with advisor_objects as
(
select
owner,object_id,type,type_id,task_id,task_name
,attr1 sql_id
,attr2 plan_hash_value
,attr3 username
,attr4 sql_text
,attr5 dunno1
,other
, (select max(created) from dba_advisor_tasks at where AT.TASK_ID = ao.task_id) created
from dba_advisor_objects ao
where 1=1
and task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
and decode(:days_back_only_Y_N,'Y', created, trunc(sysdate-:days_back) ) >= trunc(sysdate-:days_back)
)
)
, advisor_actions as
(
select aa.owner,aa.task_id,aa.task_name,aa.rec_id,aa.action_id,aa.object_id,aa.command,aa.message
, case when aa.attr3 = 'PARALLEL EXECUTION' then 'DBMS_SQLTUNE.PX_PROFILE'
else NULL
end profile_type
, round(ar.benefit/100) benefit
, (select distinct listagg( ar2.type, ',') within group (order by ar2.type desc) from dba_advisor_recommendations ar2 where ar2.task_id = aa.task_id) rec_type -- a value of 'ALTERNATE PLAN' lets me know the SQL has other execution plans that can possibly be baselined
from dba_advisor_actions aa
, dba_advisor_recommendations ar
where 1=1
and ar.task_id = aa.task_id
and ar.rec_id = aa.rec_id
and aa.task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
)
and command = 'ACCEPT SQL PROFILE'
)
, accept_sql_profile_cmds as
(
select sql_id
, plan_hash_value
, message
, 'execute dbms_sqltune.accept_sql_profile('||
'task_name => ''' || aa.task_name ||
''', name => ''' || sql_id || case when profile_type is null
then '_PHV_'
|| (select min(plan_hash_value) from dba_advisor_sqlplans p where p.task_name = ao.task_name and p.sql_id = ao.sql_id and p.attribute = 'Using SQL profile' )
else '_PXP_'
|| (select min(plan_hash_value) from dba_advisor_sqlplans p where p.task_name = ao.task_name and p.sql_id = ao.sql_id and p.attribute = 'Using parallel execution')
end ||
''', replace => TRUE ' ||
', force_match => TRUE' ||
case when profile_type is not null then ', profile_type => ' || profile_type else ' ' end ||
' ); ' cmd
, aa.profile_type
, aa.benefit
, aa.rec_type
, ao.task_id
, ao.owner
, ao.task_name
, DBMS_LOB.SUBSTR(ao.sql_text, 3600) sql_text
, ao.created
from dual
, advisor_objects ao
, advisor_actions aa
where 1=1
and aa.task_id = ao.task_id
and aa.owner = ao.owner
and aa.task_name = ao.task_name
)
, snaps as
(
select snap_id
from dba_hist_snapshot
where 1=1
and decode(:days_back_only_Y_N,'Y', begin_interval_time, trunc(sysdate-:days_back) ) >= trunc(sysdate-:days_back)
)
/* main query */
select sql_id, asp.benefit || '%' benefit
, asp.cmd
, asp.rec_type
, max(sql_text)
, (select sum(executions_delta) from dba_hist_sqlstat stat where stat.sql_id = asp.sql_id and stat.snap_id in (select snap_id from snaps)) executions
, (select sum(rows_processed_delta) from dba_hist_sqlstat stat where stat.sql_id = asp.sql_id and stat.snap_id in (select snap_id from snaps)) rows_processed
, (select round(sum(elapsed_time_delta)/1000000) from dba_hist_sqlstat stat where stat.sql_id = asp.sql_id and stat.snap_id in (select snap_id from snaps)) as elapsed_sec
, owner, max(created) created
, (select min(sql_profile) from dba_hist_sqlstat stat where stat.sql_id = asp.sql_id and stat.snap_id in (select snap_id from snaps)) as SQL_PROFILE
from accept_sql_profile_cmds asp
where 1=1
group by owner, sql_id, asp.benefit , asp.cmd , asp.rec_type
order by asp.benefit desc
;
Note: DBA_ADVISOR_OBJECTS has the SQL id's for the SQL Tuning Advisor tasks.
This minimalist version of the query will accept two parameters:
:days_back_only_Y_N => Use ‘Y’
:days_back => 0=today; 1=yesterday and today; …
The following output is redacted for [hopefully] obvious reasons, but I hope you can get the idea:
Note: For the SQL Profile commands I construct in the above SQL statement:
The SQL Profile name has the plan hash value that the SQL Profile is intended to generate embedded in it; this helps me decide whether the proper plan hash value is being generated [useful in future troubleshooting efforts].
I always use “force_match => TRUE”, so that the SQL Profile will apply to all SQL text that parses out to the same force matching signature (a hypothetical example of one generated command follows these notes).
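For illustration only (the task name, sql_id, and plan hash value below are invented), a command generated by the CMD column comes out looking roughly like this:
-- hypothetical example of a generated accept command
execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_12345', name => 'f9x1kq2abc123_PHV_3456789012', replace => TRUE , force_match => TRUE );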
Note on the REC_TYPE column: there can be many recommendations from a single SQL Tuning Advisor run, and their types are concatenated into the REC_TYPE column:
SQL PROFILE: means that there was a SQL Profile finding
INDEX: means that there was an index finding as well
STATISTICS: means that there was a statistics finding as well
ALTERNATE PLAN: if REC_TYPE contains a value of 'ALTERNATE PLAN', that lets me know the SQL has other execution plans that can possibly be baselined, or have those SQL Profiles pinned with the COE script (a hedged sketch of loading such a baseline follows).
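If I decide an alternate plan is worth baselining, a minimal sketch (assuming the plan is still in the cursor cache; the sql_id and plan hash value here are hypothetical) is:
-- hedged sketch: load one specific plan from the cursor cache as a SQL plan baseline
set serveroutput on
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(
               sql_id          => 'f9x1kq2abc123',
               plan_hash_value => 3456789012);
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/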
Use Case: Evaluate Index Recommendations
The following query aggregates index recommendations from multiple STA runs. If the same index is recommended for more than one SQL statement, it appears only once in this query’s output [basically, I grew tired of hand-picking index recommendations from the STA reports]. The query also provides details on the benefit and the applicable SQLs, and generates the CREATE INDEX commands, which can be cut and pasted into a script for implementation:
with advisor_objects as
(
select
owner,object_id,type,type_id,task_id,task_name
,attr1 sql_id
,attr2 plan_hash_value
,attr3 username
,attr4 sql_text
,attr5 dunno1
,other
, (select max(created) from dba_advisor_tasks at where AT.TASK_ID = ao.task_id) created
from dba_advisor_objects ao -- advisor objects has the SQL id's for the SQL Tuning Advisor tasks
where 1=1
and task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
and decode(:days_back_only_Y_N,'Y', created, trunc(sysdate-:days_back) ) >= trunc(sysdate-:days_back)
)
)
, advisor_actions as
(
select aa.owner,aa.task_id,aa.task_name,aa.rec_id,aa.action_id,aa.object_id,aa.command,aa.message
, attr1 index_name
, attr4 index_type
, attr3 table_name
, to_char(substr(attr5,1, 3000)) index_columns
, round(ar.benefit/100) benefit
, (select min(type) from dba_advisor_recommendations ar2 where ar2.task_id = aa.task_id) rec_type -- a value of 'ALTERNATE PLAN' lets me know the SQL has other execution plans that can possibly be baselined
--, aa.*
from dba_advisor_actions aa
, dba_advisor_recommendations ar
where 1=1
and ar.task_id = aa.task_id
and ar.rec_id = aa.rec_id
and aa.task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
)
and command = 'CREATE INDEX'
)
, index_sql_benefits as
(
select sql_id
, plan_hash_value
--, command
, message
, command || ' ' || index_name || ' on ' || table_name || ' ' || index_columns || ' ; ' cmd
, index_type
, aa.benefit
, table_name , index_columns
, aa.rec_type
, ao.task_id
, ao.owner
, ao.task_name
, DBMS_LOB.SUBSTR(ao.sql_text, 3600) sql_text
, ao.created
from dual
, advisor_objects ao
, advisor_actions aa
where 1=1
and aa.task_id = ao.task_id
and aa.owner = ao.owner
and aa.task_name = ao.task_name
)
, all_actions as
(
select owner, max(created) created, sql_id, benefit
, cmd
, rec_type -- a value of 'ALTERNATE PLAN' lets me know the SQL has other execution plans that can possibly be baselined
, substr(table_name, instr(table_name, '.')+1) table_name
, table_name || ' ' || index_columns index_tab_cols
, max(sql_text) sql_text
, ', ''' || sql_id || '''' sqlid
, 'select ''' || sql_id || ''' sql_id from dual union ' sql_list
from index_sql_benefits isb
where 1=1
group by owner, sql_id, benefit , cmd , rec_type, index_columns, table_name
)
/* main query */
select count( distinct sql_id) || ': ' || listagg(sql_id,', ') within group(order by sql_id) sql_list
, min(benefit) || '% - ' || max(benefit) || '%' benefit
, min(cmd) cmd
, table_name, index_tab_cols
, min(sql_id) min_sql_id
, max(sql_id) max_sql_id
, min(created) min_created
, max(created) max_created
, min(sql_text) min_sql_text
, max(sql_text) max_sql_text
from all_actions
where 1=1
group by table_name, index_tab_cols
order by 1, 2 desc
;
The following output is redacted for [hopefully] obvious reasons.
Note: I always recommend that:
· app teams test new indexes in non-prod environments and follow their usual change control practices
· app teams use their own index naming conventions (a purely hypothetical renaming example follows)
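Purely as a hypothetical illustration (the owner, table, columns, and advisor-style index name are all invented), a generated command and its renamed, convention-friendly version might look like:
-- hypothetical command as generated from the CMD column
CREATE INDEX "APP"."IDX$$_0BF10001" on "APP"."ORDERS" ("STATUS","ORDER_DATE") ;
-- the same index renamed per the app team's naming convention before implementation
CREATE INDEX app.orders_status_dt_ix on app.orders (status, order_date);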
Use Case: Evaluate Gather Statistics Recommendations
Similarly, we can condense the stats-gathering recommendations. [I forgot to mention: if no rows are returned, it means there were no recommendations of that kind for the subset queried.] The following SQL aggregates all the stats-gathering recommendations. If the same stats-gathering recommendation appeared for multiple SQL IDs, it appears in this output only once.
with advisor_objects as
(
select
owner,object_id,type,type_id,task_id,task_name
,attr1 sql_id
,attr2 plan_hash_value
,attr3 username
,attr4 sql_text
,attr5 dunno1
,other
, (select max(created) from dba_advisor_tasks at where AT.TASK_ID = ao.task_id) created
from dba_advisor_objects ao -- advisor objects has the SQL id's for the SQL Tuning Advisor tasks
where 1=1
and task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
and decode(:days_back_only_Y_N,'Y', created, trunc(sysdate-:days_back) ) >= trunc(sysdate-:days_back)
)
)
, advisor_actions as
(
select aa.owner,aa.task_id,aa.task_name,aa.rec_id,aa.action_id,aa.object_id,aa.command,aa.message
, attr1 object_owner
, attr2 object_name
, decode (aa.command, 'GATHER INDEX STATISTICS', 'INDEX', 'GATHER TABLE STATISTICS', 'TABLE', aa.command) object_type
, round(ar.benefit/100) benefit
, (select min(type) from dba_advisor_recommendations ar2 where ar2.task_id = aa.task_id) rec_type -- a value of 'ALTERNATE PLAN' lets me know the SQL has other execution plans that can possibly be baselined
--, aa.*
from dba_advisor_actions aa
, dba_advisor_recommendations ar
where 1=1
and ar.task_id = aa.task_id
and ar.rec_id = aa.rec_id
and aa.task_name in(
select task_name from dba_advisor_tasks
where 1=1
and advisor_name = 'SQL Tuning Advisor'
)
and command like 'GATHER%'
)
, stats_sql_benefits as
(
select sql_id
, plan_hash_value
, message
, case when object_type = 'TABLE' then 'execute dbms_stats.gather_table_stats(ownname => '''
|| object_owner || ''', tabname => ''' || object_name || ''', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE); '
when object_type = 'INDEX' then 'execute dbms_stats.gather_index_stats(ownname => '''
|| object_owner || ''', indname => ''' || object_name || ''', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE); '
end as cmd
, object_owner, object_name, object_type
, aa.benefit
, aa.rec_type
, ao.task_id
, ao.owner
, ao.task_name
, DBMS_LOB.SUBSTR(ao.sql_text, 3600) sql_text
, ao.created
from dual
, advisor_objects ao
, advisor_actions aa
where 1=1
and aa.task_id = ao.task_id
and aa.owner = ao.owner
and aa.task_name = ao.task_name
)
, all_actions as
(
select owner, max(created) created, sql_id, benefit
, cmd
, rec_type -- a value of 'ALTERNATE PLAN' lets me know the SQL has other execution plans that can possibly be baselined
, object_owner, object_name, object_type
, max(sql_text) sql_text
from stats_sql_benefits isb
where 1=1
group by owner, sql_id, benefit , cmd , rec_type, object_owner, object_name, object_type
)
/* main query */
select object_owner, object_name, object_type
, min(cmd) cmd
, min(benefit) min_benefit
, max(benefit) max_benefit
, count( distinct sql_id) cnt_sql
, listagg(sql_id,', ') within group(order by sql_id) sql_list
, min(sql_id) min_sql_id
, max(sql_id) max_sql_id
, min(created) min_created
, max(created) max_created
, min(sql_text) min_sql_text
, max(sql_text) max_sql_text
from all_actions
where 1=1
group by object_owner, object_name, object_type
order by 1, 2 desc
;
The following output is redacted for [hopefully] obvious reasons.
I will usually turn over the entire set of stats recommendations to the app team for implementation. I will also check with the application DevOps team whether there is a mismatch between the data life cycle of the objects and the stats-gathering cycle. For example, after bulk loads it is usually better to gather stats immediately rather than wait for the scheduled maintenance job.
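As a sketch of that post-load pattern (the schema, table, and degree here are hypothetical), the gather would look something like:
-- hedged sketch: gather stats right after a bulk load instead of waiting for the auto task
begin
  dbms_stats.gather_table_stats(
    ownname          => 'APP',
    tabname          => 'ORDERS_STAGE',
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => true,   -- also refresh dependent index stats
    degree           => 4);
end;
/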
What’s next:
I hope these use cases and queries have piqued your interest in mining the Advisor framework.
There were too many use cases to cover in one post, so I’ll finish off the use cases in Part 5b.
In Part 5b, the use cases I will cover are:
· Evaluate Parameter Recommendations
· Segment Tuning en masse
· Evaluating “Worthy” SQL Access Advisor Recommendations
In the upcoming Part 6 I will recap and provide a catalog of the SQL tools I’ve developed. The SQL I use is in the public domain.
Series Outline
If you have been following along, there is a lot of content in this topic, so I’m breaking the blog posts up into a more easily digestible 6-part series.
Part 1: Series Introduction
The Series Introduction is Part 1, where I introduce the concept of querying the underlying Advisor views with the purpose of scaling up your tuning capability.
Part 2: What is the Advisor Framework?
In Part 2, I wrote about what the Advisor Framework is and why you would want to scale up your use of the Advisor Framework.
Part 3: Advisor tables structure.
Part 3 covered the view structure of the Advisor framework and described what is stored in the Advisor tables.
Part 4: Querying the advisor framework
In Part 4, I described and provided various useful queries against various Advisor views. This was intended to give you the essence of the content of several key Advisor views.
Part 5a/b: Case Studies / Advisor framework in use
In Part 5a, (this post) I will provide several case studies of the Advisor framework in use, and hopefully show how useful these custom queries are.
In Part 5a, (this post) the use cases I cover are:
· Tune Top ADDM Flagged SQL
· Accept Profiles en masse
· Evaluate Index Recommendations
· Evaluate Gather Statistics Recommendations
In Part 5b, the use cases I will cover are:
· Evaluate Parameter Recommendations
· Segment Tuning en masse
· Evaluating “Worthy” SQL Access Advisor Recommendations
Part 6: Jet Tour Through the Catalog of SQL Tools
In the last part of the series, I plan to provide a catalog of the SQL tools I’ve developed. The SQL I use is in the public domain.
Update Your Bag-Of-Tricks
Again, stick around for this month’s series and get actual working SQL code to put in your bag-of-tricks, and hopefully use it to improve your Oracle database tuning skills.
I like the bag-of-tricks motif because it reminds me that, as a kid, I was impressed by Felix the Cat (the cartoon character) [I guess I’m dating myself and admitting I watched too much TV]. Felix would get himself into some kind of trouble, and at the last moment he would reach into his bag-of-tricks and come up with a solution to save the day. Early in my Oracle career [which goes back to Oracle V4 in 1985], I learned various SQL queries against the data dictionary and other ways to work out what’s going on under the covers in Oracle. I was struggling with what to call the directory where I kept this code when a light bulb went on, and I decided to use Felix’s bag-of-tricks as a metaphor. So all new capabilities for introspecting the database, documents, and so on go into my bag-of-tricks.