• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions. And of course also for the great Red Hat products like Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Windows Server 2012 onwards, Windows 7 and higher on the client, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

29 July 2011

Why Choose Oracle RAC

RAC is mostly the preferred choice for enterprises with scalability problems: an installation with a single DB server running the Oracle database software and a SAN holding the tablespace files, configuration files, and redo logs is no longer capable of servicing all users, and connection pooling will not solve the problem either.
That is typically the scenario where you might consider Oracle RAC with the Oracle Clusterware software. This usually means that a few extra nodes (extra iron) are bought and a few extra Oracle instances are created on the extra servers (see figure).

But is this always the only reason to choose a RAC installation? I have been listening to a presentation by Riyaj Shamsudeen, who came up with a few more good reasons to choose RAC:

 Good reasons
·       Hardware fault tolerance
o   RAC protects from non-shared hardware failures.
o   For example, a CPU board failure in one node doesn't affect the availability of the other nodes.
o   But a failure in the interconnect hardware can still cause cluster-wide failures, unless the interconnect path is fault-tolerant.
o   The path to storage must also be fault-tolerant.

·       Workload segregation
o   If you are planning to (or able to) segregate the workload to different nodes, then RAC may be a good option.
o   Long-running reports generating a huge amount of I/O (with a higher percentage of single-block I/O) can pollute the buffer cache, causing performance issues in critical parts of the application.
o   For example, separation of OLTP and reporting into separate instances.
o   Of course, you should also consider Active Data Guard for offloading reporting activity.
·       Application affinity
o   Application node affinity is a great way to improve performance in RAC databases.
o   For example, if you have three applications, say PO, FIN, and SC, running in the same database, then consider node affinity.
o   Node affinity should also translate into segment-level affinity.
o   Say the application PO accesses mostly *PO* tables and the application SC mostly *SC* tables; then node affinity might be helpful.
·       To manage excessive redo generation
o   Each instance has its own redo thread and LGWR process.
o   If your application generates a huge amount of redo, and a single instance's LGWR cannot handle the load, RAC might be a solution to effectively scale up LGWR throughput.
o   For example, by converting to a 3-node cluster and balancing the application workload, you can increase LGWR throughput by approximately a factor of 3.
o   Still, this doesn't solve the problem completely if excessive redo generation goes together with excessive commits.
·       To avoid SMP bottlenecks
o   In an SMP architecture, access to memory and the system bus becomes a bottleneck.
o   Increasing the number of CPUs in an SMP architecture doesn't scale.
o   Big-iron machines now use a NUMA architecture to alleviate this.
o   If you can't afford big-iron machines to increase scalability, RAC is a good option to consider.
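The redo-scaling reason above can be sketched as a back-of-the-envelope calculation. The throughput figures below are invented for illustration; real LGWR capacity depends heavily on storage latency and commit patterns:

```python
import math

def nodes_for_redo(redo_mb_per_s: float, lgwr_mb_per_s_per_node: float) -> int:
    """Rough node-count estimate: each RAC instance has its own redo
    thread and LGWR process, so aggregate log-write capacity scales
    roughly with the number of nodes, assuming the workload balances."""
    return math.ceil(redo_mb_per_s / lgwr_mb_per_s_per_node)

# Hypothetical numbers: a workload generating 250 MB/s of redo against
# a single LGWR that sustains about 90 MB/s suggests a 3-node cluster.
print(nodes_for_redo(250, 90))  # 3
```

As the redo bullet notes, this only helps when commits are not themselves excessive; a commit-bound workload still serializes on log flush waits regardless of the node count.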

And the not-so-good reasons are:

 Not-So-Good reasons
·       General performance improvements
o   However, when you use it in combination with connection pooling, and add enough extra nodes each with an extra DB instance, it will help you achieve better performance.
·       To combat poor application design and coding practices
o   There is no hardware architecture that will compensate for bad coding and very bad programming practices. Learn how to code well.
·       RAC as a stand-alone Disaster Recovery solution
o   RAC + Data Guard is a good disaster recovery solution, but:
o   RAC alone is not a good DR solution.
·        To maximize use of hardware
o   A good inventory of what you will use your RAC implementation for will help you choose a good hardware architecture.
o   Fault-tolerant hardware is key to a successful RAC.
o   Use a multipathing solution so that losing one path to the disks does not reboot the server.
o   Multiple voting disks are a must. Use an odd number of voting disks; 3 is a good place to start.
o   Remember that a node must have visibility to more than half of the voting disks to survive.
o   LGWR performance is critical for global cache performance.
o   Global cache transmission requires a log flush sync for current blocks and “busy” CR blocks.
·        Stretch cluster to enhance hardware usage
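The voting-disk rule mentioned above boils down to a simple majority check. This is a minimal sketch of the arithmetic only, not of Clusterware's actual membership protocol:

```python
def node_survives(visible_voting_disks: int, total_voting_disks: int) -> bool:
    """A node stays in the cluster only while it can access a strict
    majority (more than half) of the configured voting disks."""
    return visible_voting_disks > total_voting_disks // 2

# With the recommended 3 voting disks, a node that loses access to one
# disk survives, but a node that can see only one disk is evicted.
print(node_survives(2, 3))  # True
print(node_survives(1, 3))  # False
```

This is also why an odd number is recommended: with 4 disks a node still needs to see 3, so the fourth disk adds cost without adding failure tolerance.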

But are these the only good reasons and not-so-good reasons?

These days Oracle tries to sell lots and lots of Exadata servers to customers, and there are good, valid reasons for that. With the Oracle Maximum Availability Architecture on Exadata one can achieve the best performance available and get rid of a lot of stress about how secure and safe your data is. Here is an example of Oracle MAA:

As said before, with Data Guard and the MAA architecture, RAC can be the preferred choice for you!

You can start with a quarter rack and extend that to half a rack, a full rack when needed, and up to 8 racks for a full Exadata installation with the best performance and scalability available today.

The Exadata MAA architecture consists of the following major building blocks:
1. A production Exadata system (primary). The production system may consist of one or more interconnected Exadata Database Machines as needed to address performance and scale-out requirements for data warehouse, OLTP, or consolidated application environments.
2. A standby Exadata system that is a replica of the primary. Oracle Data Guard is used to maintain synchronized standby databases that are exact, physical replicas of production databases hosted on the primary system. This provides optimal data protection and high availability if an unplanned outage makes the primary system unavailable. A standby Exadata system is most often located in a different data center or geography to provide disaster recovery (DR) by isolating the standby from primary site failures. Configuring the standby system with identical capacity as the primary also guarantees that performance service-level agreements can be met after a switchover or failover operation.

Note that Data Guard is able to support up to 30 standby databases in a single configuration. An increasing number of customers use this flexibility to deploy both a local Data Guard standby for HA and a remote Data Guard standby for DR. A local Data Guard standby database complements the internal HA features of Exadata Database Machine by providing an additional layer of HA should unexpected events or human error make the production database unavailable even though the primary site is still operational. Low network latency enables synchronous redo transport to a local standby, resulting in zero data loss if a failover is required. A local standby database is also useful for offloading backups from the primary database, for use as a test system, or for implementing planned maintenance in rolling fashion (e.g. database rolling upgrades). The close proximity of the local standby to the application tier also enables fast redirection of application clients to the new primary database at failover time.

Following a failover or switchover to a local standby database, the remote standby database in such a configuration will recognize that a role transition has occurred and automatically begin receiving redo from the new primary database, maintaining disaster protection at all times.
While the term ‘standby’ is used to describe a database where Data Guard maintains synchronization with a primary database, standby databases are not idle while they are in the standby role. High return on investment is achieved by utilizing the standby database for purposes in addition to high availability, data protection, and disaster recovery. These include:

·      Active Data Guard enables users to move read-only queries, reporting, and fast incremental backups from the primary database, and run them on a physical standby database instead. This improves performance for all workloads by bringing the standby online as a production system in its own right. Active Data Guard also improves availability by performing automatic repair should a corrupt data block be detected at either the primary or standby database, transparent to the user.

·      Data Guard Snapshot Standby enables standby databases on the secondary system to be used for final pre-production testing while they also provide disaster protection. Oracle Real Application Testing can be used in conjunction with Snapshot Standby to capture actual production workload on the primary and replay on the standby database. This creates the ideal test scenario, a replica of the production system that uses real production workload – enabling thorough testing at production scale.
·      Oracle Patch Assurance using Data Guard standby-first patching (My Oracle Support Note 1265700.1) or Data Guard Database Rolling Upgrades are two methods of reducing downtime and risk during periods of planned maintenance. This is a key element of Exadata MAA Operational Best Practices discussed later in this paper.
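The trade-off between a local zero-data-loss standby and a remote DR standby described above can be sketched as a toy decision rule. The 5 ms latency threshold is an illustrative assumption, not an Oracle recommendation; Data Guard's actual transport mode is configured per destination:

```python
def recommend_transport(rtt_ms: float, max_data_loss_s: float) -> str:
    """Toy heuristic: synchronous redo transport (SYNC) gives zero data
    loss, but every commit waits for the standby's acknowledgement, so
    it is only practical when round-trip time to the standby is small."""
    if max_data_loss_s == 0:
        return "SYNC"  # a zero-data-loss requirement forces SYNC
    return "SYNC" if rtt_ms <= 5.0 else "ASYNC"

print(recommend_transport(1, 0))    # SYNC  (low-latency local standby)
print(recommend_transport(40, 10))  # ASYNC (remote DR standby)
```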

3. A development/test Exadata system that is independent of the primary and standby Exadata systems. This system will host a number of development/test databases used to support production applications. The test system may even have its own standby system to create a test configuration that is a complete mirror of production. Ideally the test system is configured similarly to the production system to enable:

·      Use of a workload framework (e.g. Real Application Testing) that can mimic the production workload.
·      Validation of changes in the test environment - including evaluating the impact of the change and the fallback procedure - before introducing any change to the production environment.
·      Validation of operational and recovery best practices.

Some users will try to reduce cost by consolidating these activities on their standby Exadata Database Machine. This is a business decision that represents a trade-off between cost and operational simplicity/flexibility. In the case where the standby Exadata Database Machine is also used to host other development and test databases, additional measures may be required at failover time to conserve system resources for production needs. For example, non-critical test and development activities may have to be deferred until the failed system is repaired and back in production.
The Exadata MAA architecture provides the foundation needed to achieve high availability.

20 July 2011

Oracle BrainSurface Virtual Conference

Yesterday I watched a RAC presentation by Arup Nanda on RAC tuning, and he pointed to some of his Linux articles, which are quite useful for Oracle Linux administration. If you are a Linux enthusiast and want more than just Oracle Linux administration, this might be useful to you as well.

There is a Linux professional administration project that will help you to better manage Linux. The LPI Admin goals are:

The GNU/Linux Administration Manuals are designed to accompany practical courses preparing for the LPI examinations. While this material was generally structured to work with a course of 24-32 hours in consecutive 8-hour sessions, it is modularized to also work for shorter or longer sessions, consecutive or otherwise.
The LPIC-1 Manual's material assumes its users will already have:
Extensive experience (several years) using Intel x86 computers, including a strong knowledge of hardware components and their interaction with basic operating system (OS) components.
A general knowledge of computing and networking basics such as binary and hexadecimal maths, common units of measure (bytes, KB vs Kb, MHz, etc), file-system structures, Ethernet and Internet networking operations and hardware, etc.
More than three cumulative months of practical experience using a GNU/Linux, BSD or Unix OS, logged in and working at the command-line (in a text terminal or console) either locally or remotely.
Those with less experience, however, should not be discouraged from using this manual, if (and only if) they are willing to spend extra time catching up on the prerequisite background skills and knowledge; a challenging task, but not an impossible one. Further references and examples are provided for the various uses of commands, as well as exercises and accompanying answers demonstrating exam-like problem-solving. All are optional with those most recommended either discussed or referenced in the manual's body. Naturally, LPIC-2 builds upon the knowledge gained from successful completion of LPIC-1.

First release (version 0.0) October 2003. Reviewed by Adrian Thomasset.
Revised January 2004 after review by Andrew Meredith.
November 2004. Section on expansion cards added in 'Hardware Configuration' chapter by Adrian Thomasset
December 2004. Index and mapped objectives added by Adrian Thomasset.
January 2005. Glossary of terms, command and file review added at end of chapters by Adrian Thomasset
June 2005. Added new entries in line with recommendations from Sernet LATP process, by Andrew Meredith with additional text supplied by Andrew D Marshall and review by Adrian Thomasset. Section on Debian tools supplied by Duncan Thomson.
Project Goals
The GNU/Linux Administration Manuals primary aim is to provide explanations, examples and exercises for those preparing for the Linux Professional Institute (LPI) Certification. Three core sources of criteria guide the project to its primary goals:
LPI's Exam "Objectives".
LPI-Approved Training Materials (LATM) criteria.
The Linux Documentation Project (LDP or TLDP) Author Guide (AG).
Immediate goals for 2010-2011:
Update GNU Free Documentation License to GFDL v1.3.
Move source materials to Texinfo format.
Work with LinuxIT and LPI to identify current requirements of LPIC-1 and LPIC-2.
Perform gap analysis on the 2005 release of the GNU/Linux LPI-1 and LPI-2 Administration Manuals.
Start development of LPIC-3 materials.
Release updated GNU/Linux LPI-1 and LPI-2 Administration Manuals and a new LPIC-3 Administration Manual.

You can find the manuals at:    http://www.nongnu.org/lpi-manuals/manual/

09 July 2011

Sad Story!

Why doesn’t the economy grow with jobs and investments?
I studied cognitive psychology and majored as a psychophysiologist, with computer science and medical sociology as two minor subjects. I was called a cognitive psychologist and was allowed to teach psychology. It was a fun study, but we learned at that time that we were the so-called lost generation. Unemployment was high then and is again today. Prospects were low and are again today. We did fundamental research into how a human processes information and how the brain works. Most of it is still a mystery and will probably remain so for many more years. But since the first magnetic resonance scans and EMGs, a lot has been learned.
I, however, was not lucky enough to get a job as a research assistant, and went to work for a very large cigarette manufacturer that sponsored Formula 1 racing; these days they are known for the barcode. But instead of working in the psychology field, where we did fun stuff with EEG signals and evoked potentials such as P300, N150, P3a, and P3b, I went into statistics. Somewhat boring and somewhat fun.
We were using computers at the research department; the first ones were a PDP-8 computer from Digital and a PDP-11:

A PDP-8 computer.
A PDP-11 computer.

A DEC MicroVAX 40.

It was impressive at that time: a room with two large computers that also had tape drives. These computers were from the faculty; the university itself owned an IBM mainframe, and in a multivariate statistics class I managed to crash it with a few covariance analysis models. More models than the machine could handle. Fun.
After the PDP computers became obsolete, they bought a DEC VAX with OpenVMS. That was new and very state of the art for a university at that time. Nothing compared to liquid-cooled Cray computers, but nevertheless.
I managed to finish the study in the proper time, with double the points needed, in the old-style programme (the longer one; these days students take 4 years and become a Master, whereas I became a Drs.).

My Candidate Bull:
My Doctoral Bull:

I learned my first Oracle database skills on an OpenVMS DEC MicroVAX 40 at my second employer. From there I used all kinds of platforms and computers, but I learned that after a jet lag I cannot handle any platform properly.
These days I have a lot more Oracle skills, but I notice I am treated in a strange way by a lot of people, and jobs are still scarce. I guess it is the shortage of jobs and the slow economy that makes people respond in a weird and strange way these days. I hope the economy starts rolling again and people start acting normally again.

06 July 2011

Internet Explorer 10 Preview is here

Why Internet explorer will NEVER be Obsolete!

For a couple of years now, Windows 7 has offered a browser choice screen due to EU guidelines, which gives people the FREEDOM to choose their own browser.

Since the release of IE9, work has continued on the Internet Explorer browser, and the preview platform for IE10 is now available.

I am personally a fan of open source OS platforms and Microsoft Server 2008. Microsoft has nice new stuff like Silverlight and a browser with a lot of HTML5 support, and like IBM it also sponsors some open source projects, such as Silverlight on open source Linux (called Moonlight).

When you look at websites these days, a lot of Web 2.0 content can only run if your browser supports all the right techniques used in the website. When you use Adobe techniques like Flash and Flash video, there is no problem. When you use HTML5 techniques, there are all kinds of claims from browser vendors. It is safe to assume that all browsers support HTML5 to some degree; none is yet 100 percent compliant with HTML5, but IE9 and soon IE10 come closest, when you set all your filters and smart filters correctly.

But to give a short and quick answer on how to best browse the web: use all the major browsers (IE9, IE10 preview, Firefox, Safari, Opera, Chrome); they are all needed if you want to support all video formats and all HTML5 features! This is mainly because some video formats are only supported in browser A and not in browser B, and so on. It is about time that a browser with total, 100 percent HTML5 support arrives, and like it or not, IE10 comes closest to that goal at the moment, despite the fact that I like Firefox. Here is a small chart of browser market shares so far (source: CNNMoney).

Here is a nice new video of what IE10 preview platform improves for you.

The Internet Explorer 10 platform preview can use web workers to achieve new scenarios and make web applications feel more responsive by offloading complex JavaScript algorithms to run in the background.

03 July 2011

Why Novell is your preferred choice!

Novell is your choice of preference, for these reasons:

With the Novell Identity Manager 4 family, you can keep your documents and apps safe, accessible only to those who need to see them. Check out the three videos below to learn more.

It is also easy to manage as you can see in the video below:

And with ZENworks it is as easy as one, two, three to migrate to the Novell platform.

Check out this small intro to ZENworks Configuration Management: