• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions. And of course also for the great Red Hat products like Red Hat Enterprise Linux, JBoss middleware and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites and adaptive websites.

26 April 2013

Modern file access, classic IT control
File Access and Sharing

Novell Filr provides enterprise-level mobile file access and sharing. Novell Filr allows your users to access their home directories and network folders on any mobile device, as well as synchronize their files to their PC and Mac computers. They can also share the files internally and externally and comment on shared files.

Rather than move your data to the cloud, Filr lets you keep files and data where they are today and where they belong: in established, secure company file storage that already meets your regulatory requirements, your file servers.

Users love mobile file access and sharing, but most cloud solutions give the IT department a headache. Not Novell Filr.

Top Five Novell Filr Features That Will Make IT Smile:

  • IT doesn't have to provision new storage for Filr users. It's already there.

  • The backup and security systems that IT has built (and you've invested in) continue to protect your data on Filr.

  • Filr fits nicely into both Windows Server and Novell Open Enterprise Server environments.

  • IT can control file access and sharing so public material can spread while intellectual property stays put.

  • Filr is easy to use and will make users happy, so they'll stop bothering the helpdesk and be more productive.

Novell Filr will be released in the first half of 2013.

Top Ten: File and Networking Services

The Top Ten Differences That Let Novell® Filr Offer True Enterprise File Sharing and Access

There are many vendors claiming to offer “Dropbox for the enterprise,” but true mobile enterprise file sharing demands more than a pithy slogan. It requires a high level of security and accountability, thorough controls and other features that empower IT to protect an organization’s interests. Novell® Filr is a different approach to the problem, one whose differences mean it offers true enterprise file sharing and mobile access.

We don’t have to explain the downsides to popular cloud file sharing or mobile access solutions. We also don’t have to explain their appeal. IT departments everywhere want to give their users simple mobile access without opening their organization up to the many downsides of these cloud solutions. Novell Filr offers the same user experience but through a completely different design. Users get what they want, while IT retains control and corporate assets remain well protected in your data center.

Because it is built differently, Novell Filr:

1. Gives users access to their home directories. Novell Filr gives your users access to their home directories and network folders from any device or location. That means users can access a familiar environment where they can quickly get to work.

2. Won’t make you move files around. You don’t need to move files into a special folder or onto a specific piece of hardware to push them to mobile devices. IT does not need to provision new storage and users don’t need to create or recreate folders. You already have everything set up nicely. Filr simply helps you share it across devices.

3. Always looks familiar. Not only does a user’s file structure look the same in Filr, but Filr itself looks the same no matter what device the user chooses. From phone to desktop to tablet, users see the same thing.

4. Lets users choose the device. Novell Filr works with Windows and Mac desktops and mobile devices running iOS, Android or BlackBerry. That means whether they’re on a Dell laptop, an iPhone, an Android tablet or on another device, users can connect to their files from wherever they roam.

5. Helps users control their workflow. When everyone shows up to a meeting with different versions of a file, nothing gets done. Filr eliminates the email file versioning syndrome and also allows users to leave comments for each other so everyone stays on the same page…or file.

6. Deploys easily. Filr is a virtual appliance-based solution, which makes installation and deployment easy and fast for IT. Adoption is easy for users as well. They can download the Filr clients and mobile device apps in just seconds.

7. Leverages your current data protection. Because Filr connects to users’ home directories and shared network folders, the data backup and security systems your IT department has built and you’ve invested in remain in force.

8. Doesn’t make you buy extra storage. Yes, many cloud services give away a few gigs of storage, but for the amount of storage most organizations need, you have to pay. Why buy cloud storage when you already have a data center? With Filr you get mobile access without an increase in storage expenses.

9. Extends your current user access rules. Filr works with Active Directory or NetIQ® eDirectory to extend the user access controls you’ve already developed. Whatever group and user access rights govern your home and network folders will also govern those folders and files as users access them on their mobile devices.

10. Provides control over sharing. In addition to the access control already provided by your identity management system and file systems, Filr also gives you the ability to determine which files and folders your users can share either internally or externally.

For more information please contact me at:
Drs. Albert Spijkers
DBA Consulting
web:            http://www.dbaconsulting.nl
blog:            DBA Consulting blog
profile:         DBA Consulting profile
Facebook:    DBA Consulting on Facebook
email:          info@dbaconsulting.nl

Cyber attacks are more prominent and more criminal these days. Protect yourself with Citrix NetScaler 10!

A versatile cloud network for advanced app, desktop and data delivery services

Citrix NetScaler is the industry's most advanced cloud network platform. It enables the datacenter network to become an end-to-end service delivery fabric to optimize the delivery of all web applications, cloud-based services, virtual desktops, enterprise business apps and mobile services. It ensures the performance, availability and security SLAs for any service to any user, anywhere.

Download the NetScaler data sheet for a complete list of features and hardware options.

App delivery with advanced load balancing

NetScaler L4-7 load balancing brings 100% availability to all applications and services, while improving the efficiency of expensive server and network resources. Acceleration capabilities, such as AppCache, AppCompress and TCP optimizations, improve the user experience by making applications faster and more responsive. NetScaler load balancing capabilities include:

  • Advanced load balancing – Comprehensive L4-7 traffic management for web servers and application servers in a single datacenter.
  • Content and app caching – NetScaler AppCache provides high performance caching of both static content and dynamically generated web content.
  • Database load balancing – Intelligent, SQL-aware load balancing of database servers to scale the data tier and deliver better database performance.

Next-generation security

NetScaler secures applications and networks from a wide variety of threats and attacks, prevents the leakage of confidential data, and protects sensitive communications with SSL and SSL VPN capabilities.

  • Application Firewall – Blocks 100% of attacks targeting vulnerabilities in web and web services applications.
  • Secure Remote Access – Fully integrated SSL VPN protects mobile users.
  • FIPS Compliance – FIPS compliant appliances with 4.5+ Gbps of SSL throughput.



Built-in cloud connectivity

NetScaler transparently brings external and internal clouds together to maintain control and security even when apps move into the cloud. NetScaler acceleration capabilities can also extend into the cloud to optimize delivery of personalized app data to any user, anywhere.

  • CloudBridge – Seamlessly extends the enterprise datacenter to third-party clouds with IPSec VPN security, proven WAN optimization and advanced networking to ensure full compatibility.
  • CloudConnector – Leverages NetScaler datacenter benefits all the way to points-of-presence around the world to further speed app delivery.

Citrix TriScale technology brings cloud scale to enterprise networks

Citrix TriScale drives unprecedented network scalability, enabling IT teams to build enterprise cloud networks that can Scale Up performance 5x, Scale Out capacity by 32x, and Scale In consolidation by running up to 40 appliances in a single platform.

  • Scale Up for cloud elasticity – Buy only what you need.
  • Scale In for greater simplicity – Consolidate to end appliance sprawl.
  • Scale Out for expanded capacity – Start small — grow forever.

Full application visibility and control

Meeting strict performance SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence.

  • NetScaler Insight Center – Provides a 360-degree view for all web application and virtual desktop traffic, with unmatched visibility for Citrix HDX.
  • NetScaler ActionAnalytics – Integrated real-time performance analysis capabilities and adaptive policy controls for on-the-fly NetScaler policy optimization.

Centralized policy management

Application delivery and load balancing policies can be defined and managed using a simple declarative policy engine, with no programming expertise required.

  • NetScaler Command Center – Comprehensive management and monitoring solution for centralized configuration and control of all NetScaler platforms.
  • AppExpert Policy Management – Single policy management for all aspects of load balancing and app delivery, including pre-defined AppExpert templates for rapid deployment with popular enterprise business applications.


NetScaler Editions

Purpose-built with features and functions to meet any networking need

Citrix NetScaler application delivery controller appliances are available in three editions. Additionally, VPX virtual appliances are available in various configurations to meet any performance needs customers may have.

Download the NetScaler data sheet for a complete list of features and hardware options.

Standard Edition

Delivers 100 percent application availability with comprehensive L4-7 load balancing and optimizes performance to reduce expensive server and network costs.

Enterprise Edition

A powerful Web application delivery solution providing advanced traffic management and powerful application acceleration to provide the best application experience for users.

Platinum Edition

The industry’s most powerful web application delivery solution designed to deliver mission-critical applications with the best web application firewall security, fastest performance, and lowest cost.

Next: Explore NetScaler editions


15 April 2013

SAP HANA and IBM DB2 10 and SolidDB

Below you will find a link to a video on SAP HANA and SAP's focus on the Sybase IQ in-memory database as their new preferred platform.

SAP HANA overview

SAP, Hana and a future without Oracle or DB2?

SAP’s flagship Business Suite software now runs on SAP's own Hana database in a move that enables customers to drop Oracle out of the picture.

“We’re dramatically challenging the database market with a new value proposition and a next-generation technology,”

said Jim Hagemann Snabe, co-Chief Executive Officer, calling the new Business Suite the biggest breakthrough in applications since SAP released the R/3 software in 1992.

"Our target for this year is a high three-digit number of customers, but there is the potential for 40,000 Business Suite users," Snabe added.

Around 40,000 SAP customers run the firm’s applications on Oracle or IBM databases. But SAP co-founder and chairman Hasso Plattner insisted SAP customers don't have to switch to Hana and that SAP will not force anyone to move.

"We do not abandon the database vendors who carry us to success,"

he said. "Customers have a choice."

Clients that make use of in-memory analytics such as SAP HANA can benefit in a number of ways:

- They can analyze massive amounts of data almost immediately
- They can make decisions based on real-time information, and
- They can keep costs down by leveraging systems with extreme memory and processing power that can easily scale.

But first, what is SAP HANA?

It's the next generation of SAP's in-memory computing technology. HANA is a multi-purpose appliance that combines SAP software components optimized on proven IBM hardware, and it's delivered by IBM Global Services.

You might be wondering how it works!

This graphic shows the basic architecture.


On the left side, data from an SAP ERP system is replicated into an In-memory computing engine. This engine is SAP HANA. At the bottom you'll see that data from other sources can also be Extracted, Transformed, and Loaded into HANA. This includes data from an IBM DB2 database, which can easily replicate data into HANA. This all takes place in near real-time. At the top we see that several front-end tools can be used to explore the results, including tools from SAP Business Objects, IBM, and Microsoft.

Analytics at this level require the best performing hardware. This is where IBM System x servers come in. eX5 technology decouples memory from processors. This allows you to scale CPU and memory independently of each other without the need to buy additional hardware. In fact, IBM and SAP recently announced the first official performance results of SAP HANA.

Results show that the software easily handles 10,000 queries per hour against 1.3 terabytes of data, returning results within seconds. This test was performed by an independent third party and proves that IBM eX5 servers are the industry leader for HANA!

So all this technology is great. But how can you apply it in your business? How can you use it to compete better and win in the market?

This is where IBM Global Business Services can help! Our consultants understand SAP business processes and the technology underlying them. We offer rapid deployment solutions for HANA including:

• Discovery and assessment services to maximize business impact
• Architecture assessment and benchmark services
• Proof of concept services, and
• Express deployment offerings, including industry best practices

With IBM and SAP you can get a handle on your data and start using it to win in the marketplace.

Get an understanding of how IBM DB2 and HANA work together from Chetan Chaturvedi, IBM Worldwide Program Director for IBM DB2 and SAP Initiatives.

And if DB2 10 is not fast enough for you, take a look at IBM solidDB:

 IBM solidDB Universal Cache feature demo

This illustrated demo shows you the business impact of accessing and capturing performance-critical data assets at extreme speed.

 SolidDB Demo

Overview of solidDB in-memory features

The solidDB® main memory engine combines the high performance of in-memory tables with the nearly unlimited capacity of disk-based tables. Pure in-memory databases are fast, but strictly limited by the size of memory. Pure disk-based databases allow nearly unlimited amounts of storage, but their performance is dominated by disk access. Even if the computer has enough memory to store the entire database in memory buffers, database servers designed for disk-based tables can be slow because the data structures that are optimal for disk-based tables are far from optimal for in-memory tables.

The solidDB solution is to provide a single database server that contains two optimized servers inside it: one server is optimized for disk-based access and the other is optimized for in-memory access. Both servers coexist inside the same process, and a single SQL statement may access data from both engines.

In-memory versus disk-based tables

If a table is an in-memory table (M-table), the entire contents of the table are stored in memory so that the data can be accessed as quickly as possible. If a table is disk-based (D-table), the data is stored primarily on disk, and usually the server copies only small pieces of data at a time into memory.
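As a sketch of how the two table types are declared (assuming solidDB's STORE clause syntax; the table and column names are illustrative, not taken from the original text):

```sql
-- M-table: entire contents kept in memory for fastest access
CREATE TABLE hot_orders (
    order_id INTEGER PRIMARY KEY,
    amount   DECIMAL(10,2)
) STORE MEMORY;

-- D-table: data lives primarily on disk, paged into memory as needed
CREATE TABLE order_history (
    order_id INTEGER PRIMARY KEY,
    archived DATE
) STORE DISK;
```

Because both engines live in the same server process, a single SQL statement such as a join between hot_orders and order_history can touch both table types transparently.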

Types of in-memory tables

There are two basic types of in-memory tables: persistent tables and non-persistent tables.

Persistent tables provide recoverability of data; non-persistent tables provide fast access.
Considerations for developing applications with in-memory tables

Before starting to develop applications with in-memory tables, review the following considerations on performance, memory usage, transaction isolation, and using M-tables with HotStandby or shared memory (SMA) and linked library (LLA) access methods.

Table types and referential integrity

Persistent and non-persistent tables differ with respect to referential integrity.
The following table shows which table types are allowed to refer to other types. For example, if a transient table is allowed to have a foreign key that references a persistent table, you will see "YES" in the cell at the intersection of the row "Transient Child" and the column "Persistent Parent". If the foreign key constraint is not allowed, you will see a dash (-).
Every type of table may reference itself. In addition, transient tables may reference persistent tables (but not vice-versa). All other combinations are invalid.

Child \ Parent        Persistent Parent    Transient Parent
Persistent Child      YES                  -
Transient Child       YES                  YES

Nonetheless the announcement is a throwing down of a gauntlet to Oracle. “With this move SAP is extending its revenue potential by making an assault on the database market,” commented Angela Eager of research firm Techmarketview. “It is not disruptive but will make waves. It certainly raises the intensity in the long-running Oracle-SAP confrontation because customers will no longer have to look to Oracle (and other providers) for the database on which to run Business Suite.”

But Snabe insisted that HANA offered a new model for business applications. “This will change how our customers do business and they are very excited,” he said. “We are even winning over start-ups, they are saying that they never thought they would be working with SAP.”

The need for speed

According to analysts, the HANA-enabled approach has appeal. "With the ever-increasing pace of business and ever shorter decision cycles, there is a growing need to reduce the time needed to capture, analyse and act on information," said Henry Morris, senior vice-president for IDC's Worldwide Software, Services, Big Data as well as Sales and Marketing Executive Advisory research groups.

"SAP Business Suite running on SAP HANA is a response to this need. By combining both transaction processing and analytics on a single platform, SAP HANA supports a blended system of record and of decisions. The integration of transaction management with real-time decision management eliminates the delays and inefficiencies inherent in parallel operational and business intelligence systems. This enables employees to make better tactical, operational and strategic decisions based on relevant, granular, up-to-the-moment data."

Fellow research firm Ovum also declared the move to be a potential game changer. “Potential benefits of SAP Business Suite on HANA are numerous. Obviously, HANA’s in-memory architecture accelerates routine reporting functions, such as the ability to run end-of-period reports in seconds or minutes instead of tying up a database with a batch run for hours,” noted analyst Tony Baer.

“But processing speed is the least of HANA’s potential benefits. The in-memory architecture allows data views to be generated on-the-fly, a benefit that not only reduces database footprint and storage requirements, but also potentially simplifies the modeling and deployment of data and the design of analytics or other complementary applications that run atop Business Suite.

“HANA’s in-memory architecture also allows analytics to be embedded with transaction processing, enabling companies to become more agile. For instance, SAP customer John Deere achieved positive ROI on its HANA investment based solely on the benefits of implementing it for pricing optimization.”

A difficult sell?

But while there are technology selling points, SAP will need to work on its communications around HANA, warned Ovum. “SAP’s challenge is brand and messaging,” noted Baer. “HANA has evolved over the last two years from a database to an analytics platform to simply a "platform." How SAP avoids confusing the market will be a key factor in driving competitive advantage.”

The strategy is unlikely to convince many existing SAP users to make a change in the near future, the firm argued.  “SAP Business Suite on HANA has the potential to be a game changer by making SAP much more relevant to its customers. But in the near term, SAP Business Suite on HANA should be seen as opportunistic upgrade for existing customers or greenfield opportunity for new ones,” suggested Baer. 

“Few if any enterprises currently rank replacement of enterprise systems as top priority. Convincing customers that the "transformative" benefits of Suite on HANA will be non-disruptive technically is the challenge SAP faces with an entrenched Business Suite customer base. Companies don't swap out their database and ERP investments overnight.”

HANA is set to be the eventual convergence point for all SAP offerings and an alternative to existing architectures. SAP is also building out a set of Platform as a Service (PaaS) capabilities for the database. Over the next few months, SAP’s SuccessFactors and Ariba cloud offerings will also run on Hana.



09 April 2013

R version 3 released

Big Data and Big Analytics open an entirely new opportunity for data-driven organizations. Data sets that were previously unmanageable are now opening up worlds of possibility thanks to flexible and powerful platforms that manage the scale and complexity of crunching massive data. Now, with the capacity, reliability and productivity enhancements of Revolution R Enterprise for PureData System for Analytics, organizations will be able to perform a wide range of functions that others simply cannot. Advanced R computations are available for rapid analysis of hundreds of terabytes of data – and can deliver 10-100x performance improvements at a fraction of the cost compared to traditional analytics vendors.

Download the Revolution R Enterprise for IBM PureData System for Analytics Datasheet:

Download a 90-Day Evaluation copy of Revolution R Enterprise for IBM PureData System


Here’s how these solutions work together:

  • Revolution R Enterprise consolidates all analytics activity into a single appliance.  Many algorithms have been optimized to run in parallel on the PureData System for Analytics.
  • Revolution R Enterprise brings high-performance, enterprise-readiness and support to R along with the ability to integrate advanced analytics into BI front-ends or Microsoft® Excel™. These extensions to the already-powerful R allow Revolution R Enterprise to have a significant impact across all functions in an organization.
  • PureData System for Analytics architecturally integrates database, server and storage into a single, purpose-built, easy-to-manage system that minimizes data movement, thereby accelerating the processing of analytical data, analytics modeling and data scoring. It delivers exceptional performance on large-scale data, while leveraging the latest innovations in analytics.

R version 3 released

The R language marks a major milestone today with the release of R 3.0.0 (codename: "Masked Marvel"). The increment in the version number reflects not a fundamental change in the R language itself, but a recognition that the R codebase has matured to a point where closing out the 2.x series makes sense.
Nonetheless, this release does include some major behind-the-scenes updates, not least of which is the introduction of long vectors, which eliminates some big data restrictions in the core R engine by allowing R to better use the memory available on 64-bit systems. Tal Galili lists the new functionality available in R 3.0.0 and provides a guide to upgrading and re-installing packages.
From everyone here at Revolution Analytics, our thanks go to the members of the R Core development team, who have volunteered so much time and expertise to furthering the R Project. The world of statistical computing would be a much poorer place without their contributions.
If you build R yourself, R version 3.0.0 is available for download in source form from CRAN, and pre-built binaries (for Windows, Mac and Linux) will be available in the next couple of days. For Revolution R Enterprise users, the next release (version 6.2) will be based on the recently released final Rv2 engine (R 2.15.3). We're currently working on integration of Rv3 for inclusion in a major update to Revolution R Enterprise in late 2013.
R-announce mailing list: R 3.0.0 is released

Here below you find a replay of the Webinar on Revolution Analytics:
Analyzing Big Data on Netezza and Pure Analytics with Revolution-Analytics
Everyone involved in high-stakes analytics wants power, speed and flexibility regardless of the size of the data set and complexity of the analysis. Trailblazing organizations that have deployed IBM Netezza Analytics with their IBM Netezza data warehouse appliances (TwinFin) with Revolution R Enterprise are getting all three. Register for this webinar to find out how.
To set the stage, we’ll provide a brief overview of the “R” statistical analysis language, the Revolution R Enterprise framework (with R at its core) as well as in-database analytics on IBM Netezza Analytics Appliances. We’ll be talking about what Revolution Analytics brings to IBM Netezza, and vice versa.
Then, we’ll complete a model-building exercise from start to finish using the combined Revolution R Enterprise and IBM Netezza solution and demonstrate the performance and flexibility of the integrated offer. Join us as we:
  • Begin with data visualizations and summary statistics to gain an understanding of our data 
  • Split the data into training and test sets for model building
  • Build models and
  • Measure accuracy on both our training and test set and visualize the results.

And here is the demo:

And if you are pressed for time, here are the slides:


07 April 2013

SQLT is now available

SQLTXPLAIN (SQLT) was made available on April 2, 2012. You can find this tool under MOS note 215187.1. It contains 2 fixes and 37 enhancements:


Fixes:

1. Peeked and Captured Binds in the Execution Plan of MAIN was showing :B1 where the predicate was “COL=:B10” (false positives). Fix display of bind peeking and bind capture when SQL contains binds like :b1 and :b10.

2. Metadata script now includes creation of Statistics Extensions.

Enhancements:

1. New HC when derived stats are detected on a Table or Index (GLOBAL_STATS = ‘NO’ and

2. New HC when SQL Plan Baseline contains non-reproducible Plans.

3. New HC indicating sizes of SQL Plan History and SQL Plan Baseline (enabled and accepted

4. New HC when there is an enabled SQL Profile and there are VPD policies affecting your SQL. Plan may be unstable.

5. New HC when there is more than 1 CBO environment in memory or AWR for the given SQL.

6. New HC when Indexes or their Partitions/Subpartitions have UNUSABLE segments.

7. New HC when Indexes are INVISIBLE.

8. New HC when an Index is referenced in a Plan and the index or its partitions/subpartitions are now UNUSABLE.

9. New HC when an Index is referenced in a Plan and the index is now INVISIBLE.

10. New HC when a Table has locked statistics.

11. New HC when INTERNAL_FUNCTION is present in a Filter Predicate, since it may denote an undesired implicit data type conversion.

12. New HC when Plan Operations have a Cost of 0 and a Cardinality of 1. Possible incorrect

13. New HC when SREADTIM differs from the actual "db file sequential read" time by more than 10%.

14. New HC when MREADTIM differs from the actual "db file scattered read" time by more than 10%.

15. New HC when BLEVEL has changed for an Index, an Index Partition or an Index Subpartition according to statistics versions.

16. New HC when NUM_ROWS has changed by more than 10% for a Table, a Table Partition or a Table Subpartition according to statistics versions.

17. New HC when an Index is redundant because its leading columns are a subset of the leading columns of another Index on the same Table.

18. New HC when leaf blocks on a normal non-partitioned index are greater than the estimated leaf blocks at 70% efficiency.

19. Active Session History sections on the MAIN report now include up to 20 sessions and 20 snapshots (previously 10 and 10).

20. Parameter _optimizer_fkr_index_cost_bias has been added to SQLT XPLORE.

21. SQLTPROFILE and the script coe_xfr_sql_profile.sql now take SQL statements with SQL Text larger than 32767 characters.

22. Add metrics similar to what we have now on summary tables/indexes on SQLHC.

23. Tables and Indexes sections on MAIN now contain links showing object counts instead of a constant, similar to SQLHC.

24. Execution Plans on SQLT now show, on mouse-over, schema statistics both current and as per the plan timestamp.

25. Add new columns for all V$, GV$ and DBA views accessed by SQLT.

26. Include the reason WHY a cursor is not shared (out of the XML “reason” column on

27. MAIN report heading now includes a link to the MOS SQLT document.

Written by Carlos Sierra

April 2, 2012 at 2:40 pm

Tracing an Oracle Session

1) Check instance parameters to make sure the instance is able to produce trace files:

select name, value
from v$parameter
where name in ('timed_statistics','max_dump_file_size','user_dump_dest');

TIMED_STATISTICS should be TRUE. MAX_DUMP_FILE_SIZE should be UNLIMITED or something really large (> 10000). USER_DUMP_DEST should already be set.


2) Turn on SQL trace for the user:

2a) If you have direct access to the session:

This is good if you just need basic trace info, no waits or binds:

ALTER SESSION SET tracefile_identifier='MYTRACE';
ALTER SESSION SET sql_trace = TRUE;

- or, to include Wait and Bind info -

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- level 4=Binds, 8=Waits, 12=Binds+Waits

- or -

exec dbms_support.start_trace(waits=>true,binds=>false);

2b) Or use this method if no direct access to the session:






SELECT sid, serial#, username, osuser, status,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',true);'  AS sql_start,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',false);' AS sql_end
  FROM v$session
 WHERE UPPER(osuser) = UPPER('&osuser')
   AND status = 'ACTIVE';


SELECT sid, serial#, username, status,
       'exec DBMS_SUPPORT.START_TRACE_IN_SESSION('||sid||','||serial#||',true,false);' AS enable,
       'exec DBMS_SUPPORT.STOP_TRACE_IN_SESSION('||sid||','||serial#||');' AS disable
  FROM v$session
 WHERE username = UPPER('&username');


2c) If you are using connection pooling

If you are dealing with transient connections or connection pooling you may not know the SID because the user hasn't logged in yet.

You can create a logon trigger that will turn tracing on for all new connections made by a given user:

create or replace trigger temp_sql_trace
after logon on database
begin
  if user = UPPER('&USERNAME.') then
    execute immediate 'alter session set events ''10046 trace name context forever, level 8''';
  end if;
end;
/



To turn tracing on for that user's sessions that are already connected:

begin
  for i in (select sid, serial# from v$session
            where username = UPPER('&USERNAME.')) loop
    sys.dbms_system.set_sql_trace_in_session(i.sid, i.serial#, TRUE);
  end loop;
end;
/



2d) Using DBMS_MONITOR (Alternate Connection Pooling Method)

The following is a low-impact logon trigger that will set the v$session.client_identifier column after a user connects (the trigger name here is an example):

create or replace trigger logon_set_client_id
after logon on database
declare
  my_username          varchar2(30);
  my_osuser            varchar2(30);
  my_machine           varchar2(64);
  my_client_identifier varchar2(64);
begin
  -- Gather information used to build the Client Identifier string
  my_username := user;
  SELECT SYS_CONTEXT('USERENV','OS_USER'),
         SYS_CONTEXT('USERENV','HOST')
    INTO my_osuser, my_machine
    FROM dual;
  -- Build the Client Identifier string
  my_client_identifier := my_username;
  -- Alternate client_identifier string if you need more granularity
  -- my_client_identifier := SUBSTR(my_username || '|' || my_osuser || '|' || my_machine, 1, 64);
  -- Set the session's Client Identifier
  DBMS_SESSION.SET_IDENTIFIER(my_client_identifier);
end;
/

The client identifier string can be up to 64 bytes. Once v$session.client_identifier is set, you can toggle tracing on and off like this:

-- Turn tracing on
exec DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => '&USERNAME.', waits => TRUE, binds => FALSE);

-- Turn tracing off
exec DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => '&USERNAME.');
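If tracing by client identifier does not fit, DBMS_MONITOR can also enable tracing by service, module and action; a sketch, where the service and module names are hypothetical placeholders:

```sql
-- Enable level-8-style tracing (waits, no binds) for a service/module combination:
exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'MYSERVICE', module_name => 'MYMODULE', waits => TRUE, binds => FALSE);
-- And the matching disable call:
exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'MYSERVICE', module_name => 'MYMODULE');
```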

2e) SET_EV method

exec dbms_system.set_ev([sid],[serial#],10046,[Trace Level],'');

Where trace level is 4 (binds),8 (waits), or 12 (binds and waits).
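For example, to start a level 12 (binds and waits) trace on a hypothetical session with SID 123 and serial# 456:

```sql
exec dbms_system.set_ev(123, 456, 10046, 12, '');
```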


Check to make sure the trace is running (dba_enabled_traces shows traces enabled through DBMS_MONITOR):

select * from dba_enabled_traces;


3) Run the SQL Statement

Wait, and watch disk space in UDUMP. A trace file will appear and start growing.
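You can ask the database where that trace file will be written; on 11g and later, v$diag_info even names the current session's trace file:

```sql
-- Trace directory (pre-11g):
select value from v$parameter where name = 'user_dump_dest';
-- Exact trace file for the current session (11g and later):
select value from v$diag_info where name = 'Default Trace File';
```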


4) Turn off SQL Trace


4a) If you have direct access to the session:

ALTER SESSION SET SQL_TRACE = FALSE;

- or -

ALTER SESSION SET EVENTS '10046 trace name context off';

- or -

exec dbms_support.stop_trace;

4b) Or use this method if no direct access to the session:





SELECT sid, serial#, username, osuser, status,
       'EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION('||sid||','||serial#||',false);' AS sql_end
  FROM v$session
 WHERE UPPER(osuser) = UPPER('&osuser')
   AND status = 'ACTIVE';

4c) If you are using connection pooling

Drop the logon trigger so that new connections are no longer traced:

drop trigger temp_sql_trace;

Then turn tracing off for the sessions that are already connected:

begin
  for i in (select sid, serial# from v$session
            where username = UPPER('&USERNAME.')) loop
    sys.dbms_system.set_sql_trace_in_session(i.sid, i.serial#, FALSE);
  end loop;
end;
/


4d) Using DBMS_MONITOR package

-- Turn tracing off
exec DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => '&USERNAME.');

4e) SET_EV method

exec dbms_system.set_ev([sid],[serial#],10046,0,'');

Set Trace Level = 0 to stop tracing.


5) Generate TKProf output (OS)

Change to UDUMP directory

cd /u01/app/oracle/admin/qlatl/udump

You run tkprof manually:

tkprof [input file name] [output file name] EXPLAIN=[user/password]

Here is a Unix command to TKProf for the most recent trace file in the dump directory:

tkprof `ls -tr *.trc | tail -1` `ls -tr *.trc | tail -1`.tkprof EXPLAIN=[user]/[password]

TKProf has command line parameters to help narrow down results as well. Try adding this to the tkprof command to limit results to

the five SQL statements containing the most elapsed time:

sort=prsela,exeela,fchela print=5
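Putting the pieces together, a complete tkprof invocation over a named trace file might look like this (the file name and credentials are placeholders):

```shell
tkprof mydb_ora_12345.trc mydb_ora_12345.tkprof EXPLAIN=scott/tiger sort=prsela,exeela,fchela print=5
```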


Install DBMS_SUPPORT from $ORACLE_HOME/rdbms/admin/dbmssupp.sql:

-- waits=>true, binds=>false is equivalent to level 8 tracing; binds=>true would be equivalent to level 12.
execute sys.dbms_support.start_trace_in_session(&&SID, &&SERIAL, waits=>true, binds=>false);
execute sys.dbms_support.stop_trace_in_session(&&SID, &&SERIAL); -- end tracing.



TKProf is the old standby: supplied by Oracle and installed with each RDBMS by default (see the 11.2 documentation).

Trace Analyzer (Metalink Doc 224270.1) is an Oracle-supplied utility that creates an HTML report from a trace file.





Here are some video overviews of the latest Oracle tuning methodologies available to you. Note that Oracle Database 11gR2 has advanced automatic tuning methods, and these can be used as recommended by Oracle, although manual tuning is still possible and, in certain situations, the preferred way to go. Tuning is an iterative process and will probably never be finished during the life of a database.



Oracle Performance Tuning using Sql Trace and TKPROF



Oracle Performance Tuning using Sql Trace and TKPROF examples



Oracle Performance Tuning Plan Stability and Stored Outlines



Oracle Performance Tuning - Wrapping it all Up




Some useful links to consider:

Top 10 Enterprise Database Systems to Consider

The database market is highly competitive, and enterprise database systems come packed with features, from hot backups to high availability. These database systems range in price from free to tens of thousands of dollars. There's no single correct answer for every data problem, nor is there a perfect database system; each has its own set of features and shortcomings. Got data? Need a database server? Chances are you'll be considering at least one of these 10 to meet your needs.

Here is a shortcut to the research you need to determine which solution is best for you.

1. Oracle

Oracle began its journey in 1979 as the first commercially available relational database management system (RDBMS). Oracle's name is synonymous with enterprise database systems, unbreakable data delivery and fierce corporate competition from CEO Larry Ellison. Powerful but complex database solutions are the mainstay of this Fortune 500 company (currently 105th but 27th in terms of profitability).

2. SQL Server

Say what you will about Microsoft and its interesting collection of officers. Its profitability exceeds all other tech companies, and SQL Server helped put it there. Sure, Microsoft's desktop operating system is everywhere, but if you're running a Microsoft server, you're likely running SQL Server on it. SQL Server's ease of use, availability and tight Windows operating system integration make it an easy choice for firms that choose Microsoft products for their enterprises. Currently, Microsoft touts SQL Server 2008 as the platform for business intelligence solutions.

3. DB2

Big Blue puts the big into data centers with DB2. DB2 runs on Linux, UNIX, Windows and mainframes. IBM pits its DB2 9.7 system squarely in competition with Oracle's 11g, via the International Technology Group, and shows significant cost savings for those that migrate to DB2 from Oracle. How significant? How does 34 percent to 39 percent for comparative installations over a three-year period sound?

4. Sybase

Sybase is still a major force in the enterprise market after 25 years of success and improvements to its Adaptive Server Enterprise product. Although its market share dwindled for a few years, it's returning with powerful positioning in the next-generation transaction processing space. Sybase has also thrown a considerable amount of weight behind the mobile enterprise by delivering partnered solutions to the mobile device market.

5. MySQL

MySQL began as a niche database system for developers but grew into a major contender in the enterprise database market. Sold to Sun Microsystems in 2008, MySQL is currently part of the Oracle empire (January 2010). More than just a niche database now, MySQL powers commercial websites by the hundreds of thousands and a huge number of internal enterprise applications. Although MySQL's community and commercial adopters had reservations about Oracle's ownership of this popular open source product, Oracle has publicly declared its commitment to ongoing development and support.

6. PostgreSQL

PostgreSQL, the world's most advanced open source database, hides in such interesting places as online gaming applications, data center automation suites and domain registries. It also enjoys some high-profile duties at Skype, Yahoo! and MySpace. PostgreSQL is in so many strange and obscure places that it might deserve the moniker, "Best Kept Enterprise Database Secret." Version 9.0, currently in beta, will arrive for general consumption later this year.

7. Teradata

Have you ever heard of Teradata? If you've built a large data warehouse in your enterprise, you probably have. As early as the late 1970s, Teradata laid the groundwork for the first data warehouse -- before the term existed. It created the first terabyte database for Wal-Mart in 1992. Since that time, data warehousing experts almost always say Teradata in the same sentence as enterprise data warehouse.

8. Informix

Another IBM product in the list brings you to Informix. IBM offers several Informix versions -- from its limited Developer Edition, to its entry-level Express Edition, to a low-maintenance online transaction processing (OLTP) Workgroup Edition all the way up to its high-performance OLTP Enterprise Edition. Often associated with universities and colleges, Informix made the leap to the corporate world to take a No. 1 spot in customer satisfaction. Informix customers often speak of its low cost, low maintenance and high reliability.

9. Ingres

Ingres is the parent open source project of PostgreSQL and other database systems, and it is still around to brag about it. Ingres is all about choice, and choosing might mean lowering your total cost of ownership for an enterprise database system. Other than an attractive pricing structure, Ingres prides itself on its ability to ease your transition from costlier database systems. Ingres also incorporates security features required for HIPAA and Sarbanes-Oxley compliance.

10. Amazon's SimpleDB

Databases and Amazon.com seem worlds apart, but they aren't. Amazon's SimpleDB offers enterprises a simple, flexible and inexpensive alternative to traditional database systems. SimpleDB boasts low maintenance, scalability, speed and Amazon services integration. As part of Amazon's EC2 offering, you can get started with SimpleDB for free.


 For more information please contact me at:

Drs. Albert Spijkers
DBA Consulting
web:             http://www.dbaconsulting.nl
blog:            DBA Consulting blog
profile:         DBA Consulting profile
Facebook :   DBA Consulting on Facebook
email:           info@dbaconsulting.nl