SQLTXPLAIN (SQLT) 11.4.4.4 was made available on April 2, 2012. Find this tool under MOS note
215187.1. It contains 2 fixes and 27 enhancements:
Fixes:
1. Peeked and Captured Binds in Execution Plan of MAIN was showing :B1 where the predicate was
"COL = :B10" (false positives). Fixed display of bind peeking and bind capture when SQL contains
binds like :b1 and :b10.
2. Metadata script now includes creation of Statistics Extensions.
Enhancements:
1. New HC when derived stats are detected on a Table or Index (GLOBAL_STATS = ‘NO’ and
PARTITIONED = ‘YES’).
2. New HC when SQL Plan Baseline contains non-reproducible Plans.
3. New HC indicating sizes of SQL Plan History and SQL Plan Baseline (enabled and accepted
Plans).
4. New HC when there is an enabled SQL Profile and there are VPD policies affecting your SQL.
Plan may be unstable.
5. New HC when there is more than 1 CBO environment in memory or AWR for given SQL.
6. New HC when Indexes or their Partitions/Subpartitions have UNUSABLE segments.
7. New HC when Indexes are INVISIBLE.
8. New HC when an Index is referenced in a Plan and the index or its partitions/subpartitions
are now UNUSABLE.
9. New HC when an Index is referenced in a Plan and the index is now INVISIBLE.
10. New HC when Table has locked statistics.
11. New HC when INTERNAL_FUNCTION is present in a Filter Predicate since it may denote an
undesired implicit data_type conversion.
12. New HC when Plan Operations have a Cost of 0 and Cardinality of 1. Possible incorrect
Selectivity.
13. New HC when SREADTIM differs from the actual "db file sequential read" time by more than 10%.
14. New HC when MREADTIM differs from the actual "db file scattered read" time by more than 10% (see the sketch after this list for one way to compare these values).
15. New HC when BLEVEL has changed for an Index, an Index Partition or an Index Subpartition
according to statistics versions.
16. New HC when NUM_ROWS has changed more than 10% for a Table, a Table Partition or a
Table Subpartition according to statistics versions.
17. New HC when Index is redundant because its leading columns are a subset of the leading
columns of another Index on same Table.
18. New HC when leaf blocks on a normal non-partitioned index are greater than estimated leaf
blocks with a 70% efficiency.
19. Active Session History sections on the MAIN report now include up to 20 sessions and 20
snapshots (it was 10 and 10).
20. Parameter _optimizer_fkr_index_cost_bias has been added to SQLT XPLORE.
21. SQLTPROFILE and script coe_xfr_sql_profile.sql now take SQL statements with SQL Text larger
than 32767 characters.
22. Add metrics similar to what we have now on summary tables/indexes on SQLHC.
23. Tables and Indexes sections on MAIN now contain links showing object counts instead of a
constant, similar to SQLHC.
24. Execution Plans on SQLT now show, on mouse-over, schema statistics both current and as of
the plan timestamp.
25. Add new columns on 11.2.0.3 for all V$, GV$ and DBA views accessed by SQLT.
26. Include reason WHY a cursor is not shared (out of XML “reason” column on
gv$sql_shared_cursor).
27. MAIN report heading now includes a link to the MOS SQLT document.
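As an illustration of checks 13 and 14, here is a minimal sketch of how the CBO system statistics could be compared with the actual average wait times recorded by the instance (the views and columns are standard, but the query itself is an assumption, not the SQLT implementation):
-- Compare SREADTIM/MREADTIM (ms) with the average single-block and multi-block read times
SELECT s.pname,
       s.pval1 AS cbo_value_ms,
       ROUND(e.time_waited_micro / e.total_waits / 1000, 2) AS actual_avg_ms
FROM sys.aux_stats$ s
JOIN v$system_event e
  ON (s.pname = 'SREADTIM' AND e.event = 'db file sequential read')
  OR (s.pname = 'MREADTIM' AND e.event = 'db file scattered read')
WHERE s.sname = 'SYSSTATS_MAIN'
  AND s.pval1 IS NOT NULL;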
Written by Carlos Sierra
April 2, 2012 at 2:40 pm
Tracing an Oracle Session
1) Check instance parameters to make sure the instance is able to trace:
select name, value
from v$parameter
where name in ('timed_statistics','max_dump_file_size','user_dump_dest');
TIMED_STATISTICS - Should be TRUE.
MAX_DUMP_FILE_SIZE - Should be UNLIMITED or something really large (> 10000).
USER_DUMP_DEST - Should already be set.
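If either of the first two parameters needs adjusting, it can be changed at the system (or session) level. A minimal sketch, assuming you have the ALTER SYSTEM privilege:
-- Enable timed statistics instance-wide (can also be set per session)
ALTER SYSTEM SET timed_statistics = TRUE;
-- Remove the dump file size limit so the trace file is not truncated
ALTER SYSTEM SET max_dump_file_size = 'UNLIMITED';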
2) Turn on SQL trace for the user:
2a) If you have direct access to the session:
This is good if you just need basic trace info, no waits or binds.
ALTER SESSION SET tracefile_identifier='MYTRACE';
ALTER SESSION SET SQL_TRACE=TRUE;
-- or, to include Wait and Bind info (4=Binds, 8=Waits, 12=Binds+Waits) --
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- or --
exec dbms_support.start_trace(waits=>true,binds=>false);
2b) Or use this method if no direct access to the session:
SELECT OSUSER,
       SID,
       SERIAL#,
       USERNAME,
       STATUS,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',true);' AS sql_start,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',false);' AS sql_end
FROM v$session
WHERE UPPER(OSUSER) = '&osuser'
AND status = 'ACTIVE';
-- or --
SELECT status,
       'exec DBMS_SUPPORT.START_TRACE_IN_SESSION('||sid||','||serial#||',true,false);' AS enable,
       'exec DBMS_SUPPORT.STOP_TRACE_IN_SESSION('||sid||','||serial#||');' AS disable
FROM v$session
WHERE username = UPPER('&username');
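Either query prints ready-to-run commands. For a hypothetical session with SID 123 and SERIAL# 4567, the generated sql_start command would look like this:
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 4567, true);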
2c) If you are using connection pooling
If you are dealing with transient connections or connection pooling you may not know the SID because the user hasn't logged in yet.
You can create a logon trigger that will set tracing on for all new connections of a given user:
create or replace trigger temp_sql_trace after logon
on database
begin
  if user = UPPER('&USERNAME.') then
    execute immediate 'alter session set events ''10046 trace name context forever, level 8''';
  end if;
end;
/
For sessions of that user that are already connected, enable tracing directly:
begin
  for i in (select sid, serial# from v$session
            where username = UPPER('&USERNAME.'))
  LOOP
    SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(i.sid, i.serial#, TRUE);
  END LOOP;
end;
/
2d) Using DBMS_MONITOR (Alternate Connection Pooling Method)
The following is a low-impact logon trigger that will set the v$session.client_identifier column after a user connects.
CREATE OR REPLACE TRIGGER LOGON_TRIGGER
AFTER LOGON ON DATABASE
DECLARE
  my_username          varchar2(30);
  my_osuser            varchar2(30);
  my_machine           varchar2(64);
  my_client_identifier varchar2(64);
BEGIN
  -- Gather information used to build the Client Identifier string
  my_username := user;
  SELECT TRIM(UPPER(SYS_CONTEXT('USERENV', 'OS_USER'))),
         TRIM(UPPER(SYS_CONTEXT('USERENV', 'HOST')))
    INTO my_osuser,
         my_machine
    FROM dual;
  -- Build the Client Identifier string
  my_client_identifier := my_username;
  -- Alternate client_identifier string if you need more granularity
  -- my_client_identifier := SUBSTR(my_username || '|' || my_osuser || '|' || my_machine, 1, 64);
  -- Set the session's Client Identifier
  DBMS_SESSION.SET_IDENTIFIER(my_client_identifier);
END;
/
The client identifier string can be up to 64 bytes. Since just the v$session.client_identifier is set, you can toggle tracing on and off like this (use the client identifier value your trigger sets, e.g. the username, in place of 'TRACEME'):
-- Turn tracing on
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => 'TRACEME', waits => TRUE, binds => FALSE);
-- Turn tracing off
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'TRACEME');
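To see which connected sessions carry a given client identifier (and so would be picked up by the trace), a quick check against v$session works; 'TRACEME' is the same placeholder value used above:
SELECT sid, serial#, username, client_identifier
FROM v$session
WHERE client_identifier = 'TRACEME';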
2e) SET_EV method
exec dbms_system.set_ev([sid],[serial#],10046,[Trace Level],'');
Where trace level is 4 (binds), 8 (waits), or 12 (binds and waits).
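For example, with hypothetical values of SID 123 and SERIAL# 4567, enabling level 12 (binds and waits) would be:
exec dbms_system.set_ev(123, 4567, 10046, 12, '');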
Check to make sure trace is running:
select * from dba_enabled_traces;
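DBA_ENABLED_TRACES only lists traces enabled through DBMS_MONITOR. For the other methods, a rough per-session check is possible through the SQL_TRACE columns of v$session (available from 10g onward; note they may not reflect every way of setting event 10046):
SELECT sid, serial#, username, sql_trace, sql_trace_waits, sql_trace_binds
FROM v$session
WHERE sql_trace = 'ENABLED';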
3) Run the SQL Statement
Wait, and watch disk space in UDUMP. A trace file will appear and start growing.
4) Turn off SQL Trace
4a) If you have direct access to the session:
ALTER SESSION SET SQL_TRACE=FALSE;
-- or --
ALTER SESSION SET EVENTS '10046 trace name context off';
-- or --
exec dbms_support.stop_trace;
4b) Or use this method if no direct access to the session:
SELECT OSUSER,
       SID,
       SERIAL#,
       STATUS,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',false);' AS sql_end
FROM v$session
WHERE UPPER(OSUSER) = '&osuser'
AND status = 'ACTIVE';
4c) If you are using connection pooling
Drop the logon trigger so that new connections are no longer traced, then turn tracing off for the sessions that are already being traced:
drop trigger temp_sql_trace;
begin
  for i in (select sid, serial# from v$session
            where username = UPPER('&USERNAME.'))
  LOOP
    SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(i.sid, i.serial#, FALSE);
  END LOOP;
end;
/
4d) Using DBMS_MONITOR package
-- Turn tracing off
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'TRACEME');
4e) SET_EV method
exec dbms_system.set_ev([sid],[serial#],10046,0,'');
Set Trace Level = 0 to stop tracing.
5) Generate TKProf output (OS)
Change to the UDUMP directory:
cd /u01/app/oracle/admin/qlatl/udump
You run tkprof manually:
tkprof [input file name] [output file name] EXPLAIN=[user/password]
Here is a Unix command to TKProf the most recent trace file in the dump directory:
tkprof `ls -tr *.trc | tail -n -1` `ls -tr *.trc | tail -n -1`.tkprof EXPLAIN=[user]/[password]
TKProf has command line parameters to help narrow down results as well. Try adding this to the tkprof command to limit results to the five SQL statements containing the most elapsed time:
sort=prsela,exeela,fchela print=5
Install DBMS_SUPPORT by running $ORACLE_HOME/rdbms/admin/dbmssupp.sql:
-- waits=>true, binds=>false would be equivalent to level 8 tracing;
-- waits=>true, binds=>true is equivalent to level 12 tracing.
execute sys.dbms_support.start_trace_in_session(&&SID, &&SERIAL, waits=>true, binds=>true);
-- end tracing
execute sys.dbms_support.stop_trace_in_session(&&SID, &&SERIAL);
TKProf
Old Standby, supplied by Oracle, installed with each RDBMS by default. 11.2 Documentation
TRCANLZR
Metalink Doc 224270.1. Oracle-supplied utility that creates an HTML report from a trace file.
Here are some video overviews of the latest Oracle tuning methodologies available to you. Mind you, Oracle Database 11gR2 has advanced automatic tuning methods available, and these can be used as recommended by Oracle, although manual tuning is still possible and in certain situations the preferred way to go. Tuning is an iterative process and will probably never be finished during the life of a database.
Oracle Performance Tuning using Sql Trace and TKPROF
Oracle Performance Tuning using Sql Trace and TKPROF examples
Oracle Performance Tuning Plan Stability and Stored Outlines
Oracle Performance Tuning - Wrapping it all Up
Some useful links to consider:
The database market is highly competitive, and enterprise database systems come packed with features from hot backups to high availability. These database systems range in price from free to tens of thousands of dollars. There's no single correct answer for every data problem. Nor is there a perfect database system; each has its own set of features and shortcomings. Got data? Need a database server? Chances are you'll be considering at least one of these 10 to meet your needs.
Here is a shortcut to
the research you need to determine which solution is best for you.
1. Oracle
Oracle began its
journey in 1979 as the first commercially available relational database
management system (RDBMS). Oracle's name is synonymous with enterprise database
systems, unbreakable data delivery and fierce corporate competition from CEO
Larry Ellison. Powerful but complex database solutions are the mainstay of this
Fortune 500 company (currently 105th but 27th in terms of profitability).
2. SQL Server
Say what you will about Microsoft and its interesting collection of officers. Its profitability exceeds that of all other tech companies, and SQL Server helped put it there. Sure, Microsoft's desktop operating system is everywhere, but if you're running a Microsoft Server, you're likely running SQL Server on it. SQL Server's ease of use, availability and tight Windows operating system integration make it an easy choice for firms that choose Microsoft products for their enterprises.
Currently, Microsoft touts SQL Server 2008 as the platform for business
intelligence solutions.
3. DB2
Big Blue puts the big
into data centers with DB2. DB2 runs on Linux, UNIX, Windows and mainframes.
IBM pits its DB2 9.7 system squarely in competition with Oracle's 11g, via the
International Technology Group, and shows significant cost savings for those
that migrate to DB2 from Oracle. How significant? How does 34 percent to 39
percent for comparative installations over a three-year period sound?
4. Sybase
Sybase is still a
major force in the enterprise market after 25 years of success and improvements
to its Adaptive Server Enterprise product. Although its market share dwindled
for a few years, it's returning with powerful positioning in the
next-generation transaction processing space. Sybase has also thrown a
considerable amount of weight behind the mobile enterprise by delivering
partnered solutions to the mobile device market.
5. MySQL
MySQL began as a niche
database system for developers but grew into a major contender in the
enterprise database market. Sold to Sun Microsystems in 2008, MySQL is
currently part of the Oracle empire (January 2010). More than just a niche
database now, MySQL powers commercial websites by the hundreds of thousands and
a huge number of internal enterprise applications. Although MySQL's community
and commercial adopters had reservations about Oracle's ownership of this
popular open source product, Oracle has publicly declared its commitment to
ongoing development and support.
6. PostgreSQL
PostgreSQL, the
world's most advanced open source database, hides in such interesting places as
online gaming applications, data center automation suites and domain
registries. It also enjoys some high-profile duties at Skype, Yahoo! and
MySpace. PostgreSQL is in so many strange and obscure places that it might
deserve the moniker, "Best Kept Enterprise Database Secret." Version
9.0, currently in beta, will arrive for general consumption later this year.
7. Teradata
Have you ever heard of
Teradata? If you've built a large data warehouse in your enterprise, you
probably have. As early as the late 1970s, Teradata laid the groundwork for the
first data warehouse -- before the term existed. It created the first terabyte
database for Wal-Mart in 1992. Since that time, data warehousing experts almost
always say Teradata in the same sentence as enterprise data warehouse.
8. Informix
Another IBM product in
the list brings you to Informix. IBM offers several Informix versions -- from
its limited Developer Edition, to its entry-level Express Edition, to a
low-maintenance online transaction processing (OLTP) Workgroup Edition all the
way up to its high-performance OLTP Enterprise Edition. Often associated with
universities and colleges, Informix made the leap to the corporate world to
take a No. 1 spot in customer satisfaction. Informix customers often speak of
its low cost, low maintenance and high reliability.
9. Ingres
Ingres is the parent
open source project of PostgreSQL and other database systems, and it is still
around to brag about it. Ingres is all about choice and choosing might mean
lowering your total cost of ownership for an enterprise database system. Other
than an attractive pricing structure, Ingres prides itself on its ability to
ease your transition from costlier database systems. Ingres also incorporates
security features required for HIPAA and Sarbanes-Oxley compliance.
10. Amazon's SimpleDB
Databases and
Amazon.com seem worlds apart, but they aren't. Amazon's SimpleDB offers
enterprises a simple, flexible and inexpensive alternative to traditional
database systems. SimpleDB boasts low maintenance, scalability, speed and
Amazon services integration. As part of Amazon's EC2 offering, you can get
started with SimpleDB for free.
Drs. Albert Spijkers
DBA Consulting
web: http://www.dbaconsulting.nl
blog: DBA Consulting blog
profile: DBA Consulting profile
Facebook : DBA Consulting on Facebook
email: info@dbaconsulting.nl