IBM Consulting

DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

Oracle Consulting

For Oracle-related consulting, database work, support and migration, call DBA Consulting.

Novell and Red Hat Consulting

For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions. And of course also for the great Red Hat products like Red Hat Enterprise Linux, JBoss middleware and BI on Red Hat.

Microsoft Consulting

For all Microsoft related consulting services.

Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization and Citrix NetScaler security.

Welcome to DBA Consulting Blog. The specialist for IBM, Oracle, Novell, Red Hat, Citrix and Microsoft.

DBA Consulting is a consultancy services specialist that can help you with OS-related support, migration and installation, as well as BI implementations from IBM, like Cognos 10, and Microsoft, like SQL Server 2008 and 2012, plus the related CMS systems such as Microsoft SharePoint and Drupal, and Oracle OBIEE 11gR1. We focus on quality and service: customer wishes, quality of service and customer satisfaction are central. A focus on cost savings and the avoidance of vendor lock-in are core business values.



Tuesday, September 15, 2015

Oracle Solaris 11.3 Beta: Security. Speed. Simplicity.

Oracle Solaris 11.3 Beta

In case you missed it (and how could you?), the Oracle Solaris 11.3 Beta was made available on 7th July 2015.

You can find the details here.   

As always the Oracle Solaris Virtualization team has been very busy and there is a great list of things that they have delivered. Here are a few highlights:

    Secure Live Migration: Kernel Zones can now be moved around the datacenter without causing an outage. End users can be unaffected and moved onto other systems whilst administrators perform key system maintenance. The secure part of the migration means that data is protected in transit and man-in-the-middle attacks are prevented (see the sketch after this list).
    Zones on Shared Storage (NFS): Now, in addition to SAN and iSCSI, you can put your zoneroot on NFS shared storage. Administrators can continue to benefit from snapshots, cloning and zone boot environments while choosing the appropriate storage for their environment (right now this is for Kernel Zones only).
    Live Zone Reconfiguration: Originally introduced in Oracle Solaris 11.2 for native zones, Live Zone Reconfiguration comes to Kernel Zones in the latest beta release, with the ability to reconfigure the network and attached devices without the need for a reboot.
    Virtualized Clocks for Oracle Solaris Zones: We've just had a leap second; with virtualized clocks support in Oracle Solaris Native Zones you can now test how a system will behave in advance. Of course, you could always do this with Kernel Zones, but now this functionality is extended across the entire zones family.
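
A hedged sketch of these three Kernel Zone workflows from the global zone (zone names, host names and the NFS path are illustrative assumptions, not documented examples):

    # 1. Secure live migration of a running Kernel Zone to another host
    zoneadm -z kz1 migrate ssh://admin@dest-host

    # 2. Kernel Zone boot storage on NFS shared storage (the storage URI
    #    follows the NFS shared-storage scheme; adjust user/group/host/path)
    zonecfg -z kz2 "select device id=0; set storage=nfs://user:group@nfshost/export/kz2/disk0; end"

    # 3. Live Zone Reconfiguration: change the configuration, then apply it
    #    to the running zone without a reboot
    zonecfg -z kz1 "add anet; set lower-link=auto; end"
    zoneadm -z kz1 apply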

Here's hoping you enjoy running the new features in Oracle Solaris Zones - do let us know via the comments how you get on. 

The Oracle Solaris 11.3 Beta program is an opportunity for developers, administrators and architects to evaluate a pre-release version of the Oracle Solaris enterprise cloud platform. Get a head start on implementing technology that will transform your business and save you money. This is also your opportunity to provide feedback on the latest Oracle Solaris release.

Key Benefits:

  • Oracle Solaris’ advanced, easy-to-use, built-in security features help you prevent hacking and avoid malware for the lifetime of your applications, from installation to runtime, and let you prove it simply: through the bundled compliance tools, tailored reports can be generated easily and rapidly, saving money and time (see the example after this list).
  • Oracle Solaris virtualization technologies give you all the flexibility of a hypervisor with the performance and density of a container, enabling you to deploy your enterprise workloads safely and securely, in traditional or OpenStack based cloud environments.
  • Simplified and fast lifecycle management provides for large gains in productivity and lower cost of operations, enabling you to build new products and services and deliver on your business strategy faster.
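
For example, the bundled compliance tooling mentioned above can produce such a report in a couple of commands (a minimal sketch using the benchmark and profile names shipped with Solaris 11.2/11.3; verify them on your system):

    # Install the compliance framework if it is not already present
    pkg install security/compliance
    # Assess the system against the bundled Solaris benchmark's Baseline profile
    compliance assess -b solaris -p Baseline
    # Render the most recent assessment as an HTML "report card"
    compliance report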

How to Get Started Creating Oracle Solaris Kernel Zones in Oracle Solaris 11

Converting a native zone to a Kernel Zone – Solaris 11.2

Built-in Virtualization for OpenSolaris – Containers, Sun Logical Domains (LDOMs), and Xen

See What's New in Oracle Solaris 11.3

Oracle Solaris Overview and Roadmap 

Software-in-Silicon : Oracle SPARC Roadmap 

Oracle Solaris 11.3: Securing and Simplifying the Enterprise Cloud
Courtesy of Larry Wake (Oracle), Jul 07, 2015

Oracle opened up access to the beta release of Oracle Solaris 11.3.  If you’ve been following along (and if not, why not?), you know there have been some big advances in Oracle Solaris 11, including lightning-fast intelligent provisioning and maintenance, and some key additions to our already highly-regarded “defense in depth” security.

Oracle Database, Java, and Applications and Oracle Solaris 

Most notable for the latter is the work we’ve done to simplify compliance checking and mitigation, making it possible for administrators to quickly and easily check system security configurations against industry standards, and get a “report card” showing compliance, with guidance for any areas that may need addressing.

Oh, and did I mention fully-integrated OpenStack?  This is a win in two ways: it not only brings access to the fastest growing open cloud platform to Oracle Solaris users, it brings the incredible breadth of Oracle Solaris enterprise capabilities to the fingertips of OpenStack users.

…and that of course brings to mind some of the above-mentioned Oracle Solaris 11 features, such as built-in, zero-overhead virtualization, a new and super-powerful Unified Archive capability for rapid, safe, compliant deployment, and all the things we’ve done in terms of performance and administrative ease to make Oracle Solaris the best platform for deploying both Oracle’s own and 3rd party software.

So what’s left to do?  In this beta release, you'll find we're taking those capabilities and making them even better.

Solaris Linux Performance, Tools and Tuning 

OpenStack: Oracle moved forward to the OpenStack Juno release, and has also done some behind-the-scenes work to make it easy to continue to bring you new OpenStack goodness quickly and reliably.

    Learn more at the Oracle OpenStack blog

Virtualization: In Oracle Solaris 11.2, Oracle introduced “Kernel Zones”, making it possible to deliver hypervisor agility while still maintaining the low overhead and ease of administration you expect from Zones.  In 11.3, Oracle introduces secure live migration for Kernel Zones,  live zone reconfiguration, and verified boot.

And, Oracle extended the new Zones on Shared Storage (ZOSS) capabilities.  You can now place zones on FC-SAN, iSCSI, or NFS devices.

Oracle Solaris 11.3 is all about “more”, so now you have more flexibility and more security in your virtualization.

Database: If you’re an Oracle Database fan, you know we’ve been giving you “more” for years: more observability, more performance, more flexible administration.  Now we’re also giving you “less”: less downtime.  We’ve slashed database startup and shutdown times.  These are not only faster than they’ve ever been on Solaris; they’re faster, a lot faster, than on any other platform.

Security: We’ve extended the compliance capabilities mentioned above, so that you can more easily tailor compliance policy configurations to suit your site’s requirements.  Oracle Solaris and Oracle Solaris Studio are also ready for Software in Silicon application data integrity (ADI).  To learn more about this and start working with it today, visit the SWiS Cloud.

Data Management: You've already come to know and love what Oracle Solaris brings to the table with the first 21st century filesystem, ZFS.  In Oracle Solaris 11.3, we extend its built-in compression capabilities to include LZ4 support, we give you the ability to compare snapshots recursively, and have introduced a wealth of scalability and performance improvements to make it faster than ever. We’ve also enhanced its monitoring features, and upgraded its built-in SMB support.
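
For instance, enabling the new LZ4 support on an existing dataset is a one-line property change (pool and dataset names are illustrative):

    # Newly written blocks in rpool/data will be LZ4-compressed
    zfs set compression=lz4 rpool/data
    # Check the setting and the achieved compression ratio
    zfs get compression,compressratio rpool/data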

There’s more than this, but this should give you a taste of what we’ve got in store for you today.  You can download it now, and take a look at the “What’s New” document to see what we’re doing to make your data center cloud-ready, secure, fast, and simple.

7th July 2015 was a very exciting day for the Solaris team: Oracle released a beta for Solaris 11.3, less than a year after it released Solaris 11.2.

Introducing Oracle Solaris 11.2 

Solaris 11.2 What's New  

With Solaris 11.2, Oracle turned Solaris into a comprehensive cloud platform that includes virtualization, SDN and OpenStack. Since the release of Solaris 11.2, Oracle has seen a rapid uptake of these new capabilities: a lot of customers are using Unified Archives for deploying their images, are taking advantage of the immutable root file system, have started to deploy Kernel Zones and OpenStack, and take advantage of the automated compliance reporting. The latter has become a lot easier since the CVE metadata is tracked with IPS. Oracle is also getting a lot of good feedback on OpenStack; a senior architect at one of its OpenStack competitors recently paid Solaris a big compliment: he said that Solaris provides the best integration for OpenStack, since the OpenStack services are mapped to the Solaris Service Management Facility (SMF), which provides automated restart capabilities for all of the OpenStack services, and since it is also tightly integrated with Solaris role-based access control in order to limit the privileges required for administering OpenStack.
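
As a hedged illustration of that SMF mapping (the service name below is abbreviated; check the svcs output on your own system), each OpenStack service can be listed and managed like any other Solaris service:

    # List the Nova-related SMF services and their state on an OpenStack node
    svcs "*nova*"
    # Restart one of them; SMF also restarts failed OpenStack services automatically
    svcadm restart nova-compute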

Solaris 11.3 is taking things to the next level by making Solaris the most advanced enterprise cloud platform. We are introducing a number of critical enhancements in the following areas (courtesy of Markus Flierl, Oracle):

1. Security and Compliance:
- Verified boot for Kernel Zones
- BSD Packet Filter
- Tailoring of compliance policies

2. Virtualization:
- Secure (encrypted) live migration of Kernel Zones
- Zones on Shared Storage via NFS
- Live reconfiguration of I/O resources

3. OpenStack: There are a number of major enhancements:
- Automated upgrades to the newer versions of OpenStack
- Support for orchestration of services (Heat)
- Support for bare metal provisioning (Ironic): This is already in S11.3 but not yet in the beta

Partner Webcast – Platform as a Service with Oracle WebLogic and OpenStack 

Oracle Solaris OpenStack 

Oracle is also working on integrating DBaaS with Trove and Murano; Markus provided an overview of the latter at the Vancouver OpenStack Summit back in May.

4. Networking:
- Private VLANs
- Flows support for DSCP Marking

5. Deep integration with the Oracle stack, for instance:
- Up to 6x faster DB restart and shutdown by leveraging the latest Virtual Memory Management (VM2) capabilities

In addition to that, Oracle is providing early access to the Free and Open Source Software (FOSS) components shipped with Solaris.

Of course, these are just some of the highlights; there are a ton of other enhancements.

Have fun exploring Solaris 11.3!

Here's Your Oracle Solaris 11.3 List of Blog Posts, thanks to Larry Wake (Oracle)

Here are some videos on how to manage and maintain Oracle Solaris

Oracle Solaris Studio 12.4 

Oracle Solaris Hands-On Labs 01/2014 

Immutable Service Containers 

More Information:

What's New in Oracle® Solaris 11.2

Independent and Isolated Environments With Kernel Zones

What's New in Oracle® Solaris 11.3

Oracle Solaris 11.3 Beta

Live Migration for Kernel Zones

New Oracle Solaris Zones features

Monday, August 10, 2015

Microsoft Announced New Container Technologies for the Next Generation Cloud

Windows-based containers: Modern app development with enterprise-grade control 

On October 15th, 2014, Microsoft announced that it will deliver new container technologies in the upcoming wave of Windows Server releases. In addition, a new partnership between Microsoft Corp. and Docker Inc. will bring Windows Server support to Docker tools. MS Open Tech will contribute to this partnership, and will build upon our existing support for Linux-hosted containers on Microsoft Azure.

As part of this announcement, MS Open Tech is contributing code to the Docker client that supports the provisioning of multi-container Docker applications on Azure. This code removes the need for our cross-platform CLI to bootstrap the Docker host. In other words, we have taken a simple process and made it even simpler. A demonstration of this new capability will be a part of Docker’s Global Hack Day as well as the Microsoft TechEd Europe conference. For more information on other aspects of this partnership, see the Azure blog.

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Docker and Microsoft: How Azure is Bringing the World of Windows and Linux Together 

What are containers?

A container is an isolated, resource-controlled and portable operating environment.

Basically, a container is an isolated place where an application can run without affecting the rest of the system and without the system affecting the application. Containers are the next evolution in virtualization.

If you were inside a container, it would look very much like you were inside a physical computer or a virtual machine. And, to Docker, a Windows Server Container looks like any other container.

Containers for Developers

When you containerize an app, only the app and the components needed to run the app are combined into an "image". Containers are then created from this image as you need them. You can also use an image as a baseline to create another image, making image creation even faster. Multiple containers can share the same image, which means containers start very quickly and use fewer resources. For example, you can use containers to spin up light-weight and portable app components – or ‘micro-services’ – for distributed apps and quickly scale each service separately.
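
As a minimal illustration (image and container names are invented), one image can back many quickly started containers and serve as the baseline for new images:

    # Bake the app and its dependencies into an image
    docker build -t myapp:1.0 .
    # Start two lightweight containers from the same image
    docker run -d --name web1 myapp:1.0
    docker run -d --name web2 myapp:1.0
    # Build a derived image from the baseline (Dockerfile starts with "FROM myapp:1.0")
    docker build -t myapp-debug:1.0 .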

Windows Containers: What, Why and How 

Because containers have everything they need to run your application, they are very portable and can run on any machine that is running Windows Server 2016. You can create and test containers locally, then deploy that same container image to your company's private cloud, public cloud or service provider. The natural agility of containers supports modern app development patterns in large-scale, virtualized and cloud environments.

- With containers, developers can build an app in any language. These apps are completely portable and can run anywhere - laptop, desktop, server, private cloud, public cloud or service provider - without any code changes.

- Containers help developers build and ship higher-quality applications, faster.

Containers for IT Professionals

- IT Professionals can use containers to provide standardized environments for their development, QA, and production teams. They no longer have to worry about complex installation and configuration steps. By using containers, systems administrators abstract away differences in OS installations and underlying infrastructure.

- Containers help admins create an infrastructure that is simpler to update and maintain.

What else do I get?

- Containers and the container ecosystem provide agility, productivity, and freedom-of-choice in building, deploying, and managing modern apps.

- When combined with Docker, Visual Studio, and Azure, containers become an important part of a robust ecosystem. Read more about the Windows Server Container ecosystem.

Docker containers simplify the development of software applications that consist of micro-services. Each service then operates as an isolated execution unit on the host. Common use cases for Docker include:

    Automating the packaging and deployment of applications
    Creation of lightweight, private PaaS environments
    Automated testing and continuous integration/deployment
    Deploying and scaling web apps, databases and backend services

Docker’s container technology aims to drive developer productivity and agility. Because containers do not include a full operating system, rapid development and scaling of container-based applications is possible through very quick boot and restart operations. Furthermore, highly efficient creation of modified container images, by virtue of capturing only the differences between the original and new containers, enables improved management and distribution of containerized applications; the resulting images are both small and highly portable across almost any platform.

This partnership brings the .NET and Windows Server ecosystem together with Docker's expertise and open source community to deliver uniform container functionality across Linux and Windows Server containers.

In June MS Open Tech announced the availability of Docker Engine on Microsoft Azure, to coincide with the 1.0 release of the Docker tools. That work provided the ability to create Azure virtual machines with the Docker Engine already installed. The resulting virtual machines become hosts for Docker containers, the standard Docker tooling then provides management of containers on those hosts. Our goal with this project is to make it as simple as possible to get started with Docker on Azure. Since June, we have continued to work with the Docker community to make things even simpler.
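
A hedged sketch of that workflow using the cross-platform Azure CLI of the time (the DNS name, image name and credentials are placeholders):

    # Create an Azure VM with the Docker Engine already installed
    azure vm docker create my-docker-host <ubuntu-image-name> azureuser 'P@ssw0rd!'
    # Manage containers on that host with the standard Docker client
    docker --tls -H tcp://my-docker-host.cloudapp.net:4243 info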

Last October, Microsoft and Docker, Inc. jointly announced plans to bring containers to developers across the Docker and Windows ecosystems via Windows Server Containers, available in the next version of Windows Server. We will be unveiling the first live demonstration in a few weeks, starting at the BUILD conference. Today, we are taking containerization one step further by expanding the scenarios and workloads developers can address with containers:

• Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization
• Nano Server, a minimal footprint installation of Windows Server that is highly optimized for the cloud, and ideal for containers.

First-of-Their-Kind Hyper-V Containers

Leveraging our deep virtualization experience, Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines, while maintaining an agile and efficient experience with full Docker cross-platform integration. Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host.

While Hyper-V containers offer an additional deployment option between Windows Server Containers and the Hyper-V virtual machine, you will be able to deploy them using the same development, programming and management tools you would use for Windows Server Containers. In addition, applications developed for Windows Server Containers can be deployed as a Hyper-V Container without modification, providing greater flexibility for operators who need to choose degrees of density, agility, and isolation in a multi-platform, multi-application environment.

Microsoft Containers in the Docker Ecosystem

Windows Server Containers

Docker plays an important part in enabling the container ecosystem across Linux, Windows Server and the forthcoming Hyper-V Containers. We have been working closely with the Docker community to leverage and extend container innovations in Windows Server and Microsoft Azure, including submitting the development of the Docker engine for Windows Server Containers as an open contribution to the Docker repository on GitHub. In addition, we’ve made it easier to deploy the latest Docker engine using Azure extensions to set up a Docker host on Azure Linux VMs and to deploy a Docker-managed VM directly from the Azure Marketplace. Finally, we’ve added integration for Swarm, Machine and Compose into Azure and Hyper-V.

“Microsoft has been a great partner and contributor to the Docker project since our joint announcement in October of 2014,” said Nick Stinemates, Head of Business Development and Technical Alliances at Docker. “They have made a number of enhancements to improve the developer experience for Docker on Azure, while making contributions to all aspects of the Docker platform, including Docker orchestration tools and Docker Client on Windows. Microsoft has also demonstrated its leadership within the community by providing compelling new content like dockerized .NET for Linux. At the same time, they’ve been working to extend the benefits of Docker containers (application portability to any infrastructure and an accelerated development process) to its Windows developer community.”

Introducing Nano Server: The Nucleus of Modern Apps and Cloud

Nano Server: The Future of Windows Server Starts Now 

The operating system has evolved dramatically with the move to the cloud. Many customers today need their OS for the primary purpose of powering born-in-the-cloud applications. Leveraging our years of experience building and running hyper-scale datacenters, Microsoft is uniquely positioned to provide a purpose-built OS to power modern apps and containers.

The result is Nano Server, a minimal footprint installation option of Windows Server that is highly optimized for the cloud, including containers. Nano Server provides just the components you need – nothing else, meaning smaller server images, which reduces deployment times, decreases network bandwidth consumption, and improves uptime and security. This small footprint makes Nano Server an ideal complement for Windows Server Containers and Hyper-V Containers, as well as other cloud-optimized scenarios. A preview will be available in the coming weeks, and you can read more about the technology on the Windows Server blog.

Containers are bringing speed and scale to the next level in today’s cloud-first world. Microsoft is uniquely positioned to propel more organizations forward into the next era of containerization, by offering flexibility and choice through Windows Server containers, Linux containers, and Hyper-V containers both in the cloud and on-premises. Today’s announcements are just the beginning of what’s to come, as we continue to fuel both the growth of containers in the industry, and new levels of application innovation for all developers.


Monday, July 13, 2015

Immutable Service Containers

Immutable Service Containers and Oracle Solaris Studio 12.4

Economics of Oracle Solaris

Solaris Immutable Service Containers

While the need for security and integrity is well-recognized, it is less often well-implemented. Security assessments and industry reports regularly show how sporadic and inconsistent security configurations become for organizations both large and small. Published recommended security practices and settings remain unused in many environments and existing, once secured, deployments suffer from atrophy due to neglect.

Why is this? There is no one answer. Some organizations are simply unaware of the security recommendations, tools, and techniques available to them. Others lack the necessary skill and experience to implement the guidance and maintain secured configurations. It is not uncommon for these organizations to feel overwhelmed by the sheer number of recommendations, settings and options. Still others may feel that security is not an issue in their environment. The list goes on and on, yet the need for security and integrity has never been more important.

Interestingly, the evolution and convergence of technology is cultivating new ideas and solutions to help organizations better protect their services and data. One such idea is being demonstrated by the Immutable Service Container (ISC) project. Immutable Service Containers are an architectural deployment pattern used to describe a platform for highly secure service delivery. Building upon concepts and functionality enabled by operating systems, hypervisors, virtualization, and networking, ISCs provide a secured container into which a service or set of services is deployed. Each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework including: self-preservation, defense in depth, least privilege, compartmentalization and proportionality. Further, ISC design borrows from Cloud Computing principles such as service abstraction, micro-virtualization, automation, and "fail in place".

Designing service delivery platforms using the Immutable Service Container model yields a number of significant security benefits:

For application owners:

  • ISCs help to protect applications and services from tampering
  • ISCs provide a consistent set of security interfaces and resources for applications and services to use

For system administrators:

  • ISCs isolate services from one another to avoid contamination
  • ISCs separate service delivery from security enforcement/monitoring
  • ISCs can be (mostly) pre-configured by security experts

For IT managers:

  • ISC creation can be automated, pre-integrating security functionality and making them faster and easier to build and deploy
  • ISCs leverage industry accepted security practices making them easier to audit and support

It is expected that Immutable Service Containers will form the most basic architectural building block for more complex, highly dynamic and autonomic architectures. The goal of the ISC project is to more fully describe the architecture and attributes of ISCs, their inherent benefits, their construction as well as to document practical examples using various software applications.

While the notion of ISCs is not based upon any one product or technology, an instantiation has been recently developed using OpenSolaris 2009.06. This instantiation offers a pre-integrated configuration leveraging OpenSolaris security recommended practices and settings. With ISCs, you are not starting from a blank slate, but rather you can now build upon the security expertise of others. Let's look at the OpenSolaris-based ISC more closely.

In an ISC configuration, the global zone is treated as a system controller and exposed services are deployed (only) into their own non-global zones. From a networking perspective, however, the entire environment is viewed as a single entity (one IP address) where the global zone acts as a security monitoring and arbitration point for all of the services running in non-global zones.

As a foundation, this highly optimized environment is pre-configured with:

non-executable stack
encrypted swap space (w/ephemeral key)
encrypted scratch space (w/ephemeral key)
security hardened operating system (global and non-global zones)

Further, the default OpenSolaris ISC uses:

Non-Global Zone. Exposed services are deployed in a non-global zone. There they can take advantage of the core security benefits enabled by OpenSolaris non-global zones such as restricted access to the kernel, memory, devices, etc. For more information on non-global zone security capabilities, see the Sun BluePrint titled "Understanding the Security Capabilities of Solaris Zones Software". Using a fresh ISC, you can simply install your service into the provided non-global zone as you normally would.
Further, in the ISC model, each non-global zone has its own encrypted scratch space (with its own ephemeral key), its own persistent storage location, as well as a pre-configured auditing and networking configuration that matches that of the global zone. You do not need to use the encrypted scratch space or persistent storage, but it is there if you want to take advantage of it. Obviously, additional resource controls (CPU, memory, etc.) can be added as necessary. These are not pre-configured due to the variability of service payloads.

Solaris Auditing. A default audit policy is implemented in the global zone and all non-global zones that tracks login and logout events, administrative events as well as all commands (and command line arguments) executed on the system. The audit configuration and audit trail are kept in the global zone where they cannot be accessed by any of the non-global zones. The audit trail is also pre-configured to be delivered by SYSLOG (by default this information is captured in /var/log/auditlog).

Private Virtual Network. A private virtual network is configured by default for all of the non-global zones. This network isolates each non-global zone to its own virtual NIC. By default, the global and non-global zones can freely initiate external communications, although this can be restricted if needed. A non-global zone is not permitted to accept connections, by default. Non-global zone services can be exposed through the global zone IP address by adjusting the IP Filter and IP NAT policies (below).

Solaris IP NAT. Each non-global zone is pre-configured to have a private address assigned to its virtual NIC. To allow the non-global zone to communicate with external systems and networks, an IP NAT policy is implemented. Outgoing connections are masked using the IP address of the global zone. Incoming connections are redirected based upon the port used to communicate. Beyond simple hardening of the non-global zone (a state which can be altered from within the non-global zone itself), this mechanism ensures that the global zone can control which services are exposed by the non-global zone and on which ports.
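
A hedged sketch of such an IP NAT policy in /etc/ipf/ipnat.conf (interface name and addresses are illustrative):

    # Mask outgoing zone traffic behind the global zone's address
    map net0 192.168.1.0/24 -> 0/32 portmap tcp/udp auto
    map net0 192.168.1.0/24 -> 0/32
    # Redirect incoming HTTP to the web service's non-global zone
    rdr net0 0.0.0.0/0 port 80 -> 192.168.1.10 port 80 tcp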

Solaris IP Filter. A default packet filtering policy is implemented in the global zone allowing only DHCP (for the exposed network interface) and SSH (to the global zone). Additional rules are available (but disabled) to allow access to non-global zones on an as-needed basis. Further, rules are implemented to deny external access to any non-global zone that has changed its pre-assigned (private) IP address. Packet filtering is pre-configured to log packets to SYSLOG (by default this information is captured in /var/log/ipflog).
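
A correspondingly minimal /etc/ipf/ipf.conf policy matching the default described above (again, the interface name is illustrative):

    # Allow DHCP and SSH to the global zone; log and drop everything else
    pass in quick on net0 proto udp from any port = 67 to any port = 68 keep state
    pass in quick on net0 proto tcp from any to any port = 22 keep state
    block in log all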

So what does all of this really mean? Using the ISC model, you can deploy your services in a micro-virtualized environment that offers protection against kernel-based root kits (and some forms of user-land root kits), offers flexible file system immutability (based upon read-only file systems mounted into the non-global zone), can take advantage of process least privilege and resource controls, and is operated in a hardened environment where there is a packet filtering, NAT and auditing policy that is effectively out of the reach of the deployed service. This means that should a service be compromised in a non-global zone, it will not be able to impact the integrity or validity of the auditing, packet filtering, and NAT configuration or logs. While you may not be able to stop every form of attack, having reliable audit trails can significantly help to determine the extent of the breach and facilitate recovery.

The following diagram puts all of the pieces together:

Solaris 11 – Immutable Zones

Immutable zones are read-only zones, but still contain “whole root” file systems.  The immutable zone can be configured as a completely read-only zone or it can be partially read-only.  The immutable zone is controlled by a mandatory write access control (MWAC) kernel policy.  This MWAC policy enforces the zone’s root file system write privilege through a zonecfg file-mac-profile property. The policy is enabled at zone boot.

By default, a zone’s file-mac-profile property is not set in a non-global zone. The default policy for a non-global zone is to have a writable root file system. In a Solaris read-only zone, the file-mac-profile property is used to configure a read-only zone root. A read-only root restricts access to the run-time environment from inside the zone. Through the zonecfg utility, the file-mac-profile property can be set to one of the following values:

- none: a standard, fully writable zone root (equivalent to leaving the property unset)
- strict: a completely read-only zone root, with no exceptions to the MWAC policy
- fixed-configuration: permits updates to /var/* directories, excluding directories that contain system configuration components
- flexible-configuration: additionally permits modification of files in /etc/*, root's home directory, and /var/*

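A minimal sketch of making an existing zone immutable (the zone name is illustrative):

    # Apply the fixed-configuration MWAC profile to the zone
    zonecfg -z webzone set file-mac-profile=fixed-configuration
    # The policy is enabled at the next zone boot
    zoneadm -z webzone reboot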

Oracle Virtualization  

Oracle Software in Silicon Cloud

Speed and Simplify Your Business

Accelerate application performance and significantly improve security.

Oracle’s revolutionary Software in Silicon technology hardwires key software processes directly onto the processor. Because accelerated functions are run on special engines on the processor's silicon, yet kept separate from its cores, the technology speeds up performance of an application, and implements data security in hardware, while retaining the overall functionality of the processor.

At Oracle OpenWorld 2014 John Fowler announced Oracle Software in Silicon Cloud, which provides early access to revolutionary Software in Silicon technology that dramatically improves reliability and security, and accelerates application performance.

Introducing Oracle Solaris Studio 12.4

Oracle Solaris Studio 12.4 - Technical Mini-Casts

Oracle Solaris OpenStack

Network Virtualization Using Crossbow Technology


Tuesday, June 16, 2015

SQL Server 2016

SQL Server 2016 Evolution and Azure DataWarehouse

At this year’s inaugural Ignite conference, held in Chicago, Microsoft announced that the next release of SQL Server, previously referred to as SQL Server vNext, will officially be SQL Server 2016. There’s no doubt that SQL Server has been on a fast-track release program: the upcoming SQL Server 2016 release will come just two short years after the SQL Server 2014 release. For business-critical enterprise software this is a torrid release cycle that many businesses will have trouble keeping up with, but Microsoft fully intends to make the SQL Server 2016 release worth getting. You can find out more about the upcoming SQL Server 2016 features at the SQL Server 2016 Preview page, and the SQL Server Blog. You might also check out the Ignite session SQL Server Evolution on this blog.

Get an early look at the next Microsoft data platform

The first public preview of SQL Server 2016 is now available for download. It is the biggest leap forward in Microsoft's data platform history with real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technology, and new hybrid cloud scenarios.

SQL Server 2016 delivers breakthrough mission-critical capabilities with in-memory performance and operational analytics built-in. Comprehensive security features like new Always Encrypted technology help protect your data at rest and in motion, and a world-class high availability and disaster recovery solution adds new enhancements to AlwaysOn technology.

Organizations will gain deeper insights into all of their data with new capabilities that go beyond business intelligence to perform advanced analytics directly within their database and present rich visualizations for business insights on any device.

You can also gain the benefits of hyper-scale cloud with new hybrid scenarios enabled by new Stretch Database technology that lets you dynamically stretch your warm and cold transactional data to Microsoft Azure in a secured way so your data is always at hand for queries, no matter the size. In addition, SQL Server 2016 delivers a complete database platform for hybrid cloud, enabling you to easily build, deploy and manage solutions that span on-premises and cloud.


  •  Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases and real-time operational analytics
  • New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes
  • Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application changes
  • Built-in advanced analytics provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database
  • Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android
  • Simplify management of relational and non-relational data by querying both with T-SQL using PolyBase
  • Faster hybrid backups, high availability and disaster recovery scenarios to back up and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure

Here are eight great features to look for in SQL Server 2016.

1. Always Encrypted

Always Encrypted is designed to protect data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data, and the encryption key can reside with the application. Encryption and decryption of data happen transparently inside the application. This means the data stored in SQL Server will be encrypted, which can secure it from DBAs and administrators, but it also has implications for ad hoc queries, reporting and exporting data.
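
A hedged T-SQL sketch of an Always Encrypted column as it appeared in the SQL Server 2016 preview (table, column and key names are invented; the column encryption key MyCEK must already exist):

    sqlcmd -S myserver -Q "
    CREATE TABLE dbo.Patients (
        PatientId INT IDENTITY(1,1) PRIMARY KEY,
        SSN CHAR(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
                            ENCRYPTION_TYPE = DETERMINISTIC,
                            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
        Name NVARCHAR(60)
    );"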

2. Stretch Database

The idea behind this feature is certainly interesting. The upcoming Stretch Database feature will allow you to dynamically stretch your on-premises database to Azure. This would enable your frequently accessed (hot) data to stay on-premises and your infrequently accessed (cold) data to be moved to the cloud. This could let you take advantage of low-cost Azure storage and still have high-performance applications. However, this is one trick where Microsoft really needs to get the partitioning right to keep your queries from straying into the cloud and killing your performance.

3. Real-time Operational Analytics

This feature uses the dynamic duo of SQL Server’s in-memory technologies: it combines In-Memory OLTP with the in-memory columnstore for real-time operational analytics. Its purpose is to tune your system for optimal transactional performance as well as increase workload concurrency. This sounds like a great combination, and running analytics directly on operational data is something a lot of customers have long asked for, but you will certainly need the memory to take advantage of it.

4. PolyBase into SQL Server

Big Data continues to grow in strategic importance, but unless you had the SQL Server Parallel Data Warehouse (PDW), connecting SQL Server to Big Data, and Hadoop in particular, was limited and difficult. In previous releases, PDW was the only version of SQL Server that came with PolyBase – a technology that bridges SQL Server and Hadoop by enabling you to construct and run SQL queries over Hadoop data stores, eliminating the need to understand HDFS or MapReduce. SQL Server 2016 promises to bring the PolyBase technology mainstream into the primary SQL Server SKUs (probably the Enterprise edition).
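
A hedged sketch of the PolyBase T-SQL surface (names, locations and the Hadoop endpoint are placeholders):

    sqlcmd -S myserver -Q "
    CREATE EXTERNAL DATA SOURCE hdfs_src
        WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');
    CREATE EXTERNAL FILE FORMAT csv_fmt
        WITH (FORMAT_TYPE = DELIMITEDTEXT);
    CREATE EXTERNAL TABLE dbo.WebLogs (url NVARCHAR(400), hits INT)
        WITH (LOCATION = '/weblogs/', DATA_SOURCE = hdfs_src, FILE_FORMAT = csv_fmt);
    -- Ordinary T-SQL can now query the Hadoop-resident data
    SELECT TOP (10) url, hits FROM dbo.WebLogs ORDER BY hits DESC;"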

5. Native JSON Support

JSON (JavaScript Object Notation) is a standardized data exchange format that is currently not supported natively by SQL Server. To perform JSON imports and exports you need to hand-code complex T-SQL, SQLCLR or JavaScript. SQL Server 2016 promises to simplify this by incorporating JSON support directly into SQL Server, much like XML. SQL Server 2016 will natively parse and store JSON as relational data and will support exporting relational data to JSON.
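
For example (a sketch against a SQL Server 2016 preview; the JSON literal is invented):

    # Export relational rows as JSON, then shred a JSON string back into rows
    sqlcmd -S myserver -Q "
    SELECT TOP (2) name, object_id FROM sys.objects FOR JSON AUTO;
    SELECT * FROM OPENJSON(N'[{\"id\":1},{\"id\":2}]') WITH (id INT '$.id');"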

6. Enhancements to AlwaysOn

SQL Server 2016 will also continue to advance high availability and disaster recovery with several enhancements to AlwaysOn. The upcoming SQL Server 2016 release will enhance AlwaysOn with the ability to have up to three synchronous replicas. Additionally, it will include DTC (Distributed Transaction Coordinator) support as well as support for round-robin load balancing of the secondary replicas. There will also be support for automatic failover based on database health.

7. Enhanced In-Memory OLTP

First introduced with SQL Server 2014, In-Memory OLTP will continue to mature in SQL Server 2016. Microsoft will enhance In-Memory OLTP by extending the functionality to more applications while also enhancing concurrency. This means expanding the T-SQL surface area, increasing the total amount of memory supported into the terabyte range, and supporting a greater number of parallel CPUs.
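
For reference, this builds on the memory-optimized table syntax introduced in SQL Server 2014; a minimal sketch (the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

    sqlcmd -S myserver -d MyDb -Q "
    CREATE TABLE dbo.SessionState (
        SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload   VARBINARY(8000)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);"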

8. Revamped SQL Server Data Tools

Another welcome change in SQL Server 2016 is the reconsolidation of SQL Server Data Tools (SSDT). As Microsoft worked to supplant the popular and useful Business Intelligence Development Studio (BIDS) with SQL Server Data Tools, it wound up confusing almost everyone by creating not one but two versions of SQL Server Data Tools, both of which needed to be downloaded separately from SQL Server itself. With the SQL Server 2016 release Microsoft has indicated that it intends to reconsolidate SQL Server Data Tools.

SQL Server Evolution 2016 Part 1

SQL Server 2016 Evolution Part 2

Microsoft Azure SQL Data Warehouse Overview 

Azure SQL Data Warehouse: Deep Dive 


Saturday, May 9, 2015

IBM BLU Accelerators for IBM DB2 10.5 and 11

IBM BLU Accelerators


IBM DB2 with BLU : The In-memory database for Power Systems

BLU Acceleration is a new collection of technologies for analytic queries that are introduced in DB2 for Linux, UNIX, and Windows Version 10.5 (DB2 10.5). At its heart, BLU Acceleration is about providing faster answers to more questions and analyzing more data at a lower cost. DB2 with BLU Acceleration is about providing order-of-magnitude benefits in performance, storage savings, and time to value.
These goals are accomplished by using multiple complementary technologies, including:

The data is in a column store, meaning that I/O is performed only on those columns and values that satisfy a particular query.

The column data is compressed with actionable compression, which preserves order so that the data can be used without decompression, resulting in huge storage and CPU savings and a significantly higher density of useful data held in memory.

Parallel vector processing, with multi-core parallelism and single instruction, multiple data (SIMD) parallelism, provides improved performance and better utilization of available CPU resources.

Data skipping avoids the unnecessary processing of irrelevant data, thereby further reducing the I/O that is required to complete a query.

DB2 BLU Acceleration and more

These and other technologies combine to provide an in-memory, CPU-optimized, and I/O-optimized solution that is greater than the sum of its parts.

BLU Acceleration is fully integrated into DB2 10.5, so that much of how you leverage DB2 in your analytics environment today still applies when you adopt BLU Acceleration. The simplicity of BLU Acceleration changes how you implement and manage a BLU-accelerated environment. Gone are the days of having to define secondary indexes or aggregates, or having to make SQL or schema changes to achieve adequate performance.

What's new in IBM DB2 BLU?  

Four key capabilities make BLU Acceleration a next generation solution for in-memory computing:

1. BLU Acceleration does not require the entire dataset to fit in memory while still processing at lightning-fast speeds.
Instead, BLU Acceleration uses a series of patented algorithms that nimbly handle in-memory data processing. This includes the ability to anticipate and “prefetch” data just before it’s needed and to automatically adapt to keep necessary data in or close to the CPU. Add some additional CPU acceleration techniques, and you get highly efficient in-memory computing at lightning-speed.
2. BLU Acceleration works on compressed data, saving time and money.
Why waste time and CPU resources on decompressing data, analyzing it and recompressing it? Instead of all these extra steps, BLU Acceleration preserves the order of data and performs a broad range of operations—including joins and predicate evaluations—on compressed data without the need for decompression. This is another next-generation technique to speed processing, skip resource-intensive steps and add agility.
3. BLU Acceleration intelligently skips processing of data it doesn’t need to get the answers you want.
With a massive data set, chances are good that you don’t need all of the data to answer a particular query. BLU Acceleration employs a series of metadata management techniques to automatically determine which data would not qualify for analysis within a particular query, enabling large chunks of data to be skipped. This results in more agile computing, including storage savings and system hardware efficiency. What’s more, this metadata is kept updated on a real-time basis so that data changes are continually reflected in the analytics. Less data to analyze in the first place means faster, simpler and more agile in-memory computing. We call this data skipping.

4. BLU Acceleration is simple to use.
As your business users demand more analytics faster, you need in-memory computing that keeps pace. BLU Acceleration delivers optimal performance out of the box – no need for indexes, tuning, or time-consuming configuration efforts. You simply convert your row-based data to columns and run your queries. Because BLU Acceleration is seamlessly integrated with DB2, you can manage both row-based and column-based data from a single proven system, thus reducing complexity. This helps free the technical team to deliver value to the business – less routine maintenance and more innovation.
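
A minimal "load and go" sketch from the DB2 command line (database, table and file names are invented; with DB2_WORKLOAD=ANALYTICS, new tables default to column organization):

    db2set DB2_WORKLOAD=ANALYTICS   # set before creating the database
    db2 "CREATE DATABASE BLUDB"
    db2 "CONNECT TO BLUDB"
    db2 "CREATE TABLE sales (id INT, amount DECIMAL(10,2)) ORGANIZE BY COLUMN"
    db2 "LOAD FROM sales.csv OF DEL REPLACE INTO sales"
    # Existing row-organized tables can be converted with the db2convert utility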

Simplicity in DB2 10.5 with BLU Acceleration

Fast and simple in-memory computing

Fast answers

DB2 with BLU Acceleration includes six advances for fast in-memory computing:

• In-the-moment business answers from within the transaction environment, new with DB2 10.5 “Cancun Release”, utilize BLU Shadow Tables to automatically maintain a column-based version of the row-based operational data. Analytic queries are seamlessly routed to these column-organized BLU Shadow Tables, which are ideal for fast analytic processing (see the sketch after this list).

• Next-generation in-memory computing delivers the benefits of in-memory columnar processing without the limitations or cost of in-memory-only systems that require all data to be stored in system memory to achieve breakthrough performance. BLU Acceleration dynamically optimizes movement of data from storage to system memory to CPU memory (cache). This patented IBM innovation enables BLU Acceleration to maintain in-memory performance even when active data sets are larger than system memory.

• Actionable compression preserves the order of the data, enabling compressed data in BLU Acceleration tables to be used without decompression. A broad range of operations like predicates and joins are completed on compressed data. The most frequent values are encoded with fewer bits to optimize the compression.

• CPU acceleration is designed to process a huge volume of data simultaneously by multiplying the power of the CPU. Multi-core processing, SIMD processor support and parallel data processing are all used to deeply exploit the CPU and process data with less system latency and fewer bottlenecks.

• Data skipping eliminates processing of irrelevant and duplicate data. This is accomplished by examining small sections of data to determine if they contain information that is relevant to the analytics problem at hand. Deciding on these “hot” portions of data in more granular sections means that less irrelevant data is being processed in the first place.

• Oracle SQL compatibility streamlines and reduces risk in moving data from Oracle Database to DB2 with BLU Acceleration. This leverages existing skills and investments, while taking advantage of the speed and simplicity of BLU Acceleration to deliver fast business insights.
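
A hedged sketch of a BLU Shadow Table definition (table names are invented; keeping the shadow synchronized additionally requires the bundled replication/CDC setup):

    db2 "CREATE TABLE sales_shadow AS (SELECT * FROM sales)
         DATA INITIALLY DEFERRED REFRESH DEFERRED
         ENABLE QUERY OPTIMIZATION
         MAINTAINED BY REPLICATION
         ORGANIZE BY COLUMN"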

Simply delivered

IBM believes that in-memory computing should be easy on IT resources:

• Load-and-go set-up allows you to start deriving value from your data in a couple of simple steps. Simply create the table, load the data and go. It’s fast out of the gate – no tuning, no tweaking required. This means you can more quickly satisfy business needs even as they change and evolve.

• One administration environment for analytics or transactional data helps ease management. BLU Acceleration is built seamlessly into DB2 10.5 for Linux, UNIX and Windows, a proven enterprise-class database. A single set of enterprise-class administration functions for either row- or column-organized data reduces complexity, while a series of automation capabilities help free IT talent for higher-value projects.

IBM Accelerating Analytics with BLU

• Flexible multi-platform deployment for Linux on Intel, zLinux, AIX on Power and Windows makes the most of IT resources whether you are using existing hardware or the latest technology. This is the only in-memory computing technology to deploy on the cloud or on multiple platforms, offering greater flexibility in meeting the need for business answers.

IBM DB2 10.5 with BLU Acceleration vs Oracle Exadata

DB2 and BLU Acceleration on Cloud Tech Talk

BLU Acceleration: Delivering Speed of Thought Analytics 

Big data poses big challenges for accessing and analyzing information. BLU Acceleration from IBM delivers speed of thought analytics that help you make better decisions faster. See BLU Acceleration’s innovative dynamic in-memory processing, actionable compression, parallel vector processing and data skipping. Learn how to get started using your existing infrastructure and skills.

New in-memory capabilities help you capitalize on business answers even more easily

Technology never stands still and BLU Acceleration is no exception! This product has been enhanced in key areas so you can:

• Gain access to the fast answers BLU Acceleration delivers on Windows and zLinux to support a broader range of organizations, as well as data mart consolidation on these new platforms

• Protect data at rest while saving administration time with native application-transparent data encryption

• Deliver in-the-moment business answers from within the transaction environment

• Leverage Oracle skills with SQL compatibility to enable simple, low-risk migration from Oracle Database to DB2 with BLU Acceleration

• Reduce risk and improve performance of SAP environments with significant enhancements to SAP Business Warehouse support

IBM DB2 11 SQL Improvements

Take advantage of faster query processing and better data reliability by using BLU Acceleration on the POWER8 processor

Big Data Webcast on BLU Acceleration

Best practices for DB2 pureScale performance and monitoring

Let's Get Hands-on: 60 Minutes in the Cloud—Predictive Analytics Made Easy

IBM dashDB - Keeping data warehouse infrastructure out of your way

IBM DB2 with BLU Acceleration & Cognos BI - A great combo!

More Information:

DB2 Tech Talk: Deep Dive BLU Acceleration Super Easy In-memory Analytics

Join DB2 expert Sam Lightstone for an in-depth discussion of the all-new BLU Acceleration features in DB2 10.5 for Linux, UNIX and Windows. BLU Acceleration in-memory computing is designed to deliver results from data-intensive analytic workloads with speed and precision that is termed "speed of thought" analytics.

In this Tech Talk, Sam will explain the details of this ground-breaking technology such as:

• Dynamic in-memory analytics that do not require all of the data to fit in memory in order to perform analytics processing

• Parallel vector processing, driving spectacular CPU exploitation

DB2 Tech Talk: Introduction and Technical Tour of DB2 with BLU Acceleration

Join Distinguished Engineer and DB2 expert Berni Schiefer and host Rick Swagerman for a technical tour of the all new DB2 10.5 with BLU Acceleration in-memory technology. You will learn about new features such as:

• BLU Acceleration, for “Speed of Thought” analytics
Designed to handle data-intensive analytics workloads, BLU Acceleration extends the capabilities of traditional in-memory systems by providing in-memory performance even when the data set size exceeds the size of the memory. Learn about RAM data loading capabilities, plus “data skipping”; parallel data analysis; actionable compression for analysis without decompressing data and more.

• New DB2 pureScale capabilities that enable online rolling maintenance updates and capacity growth with no planned downtime, plus new integration HADR capabilities to help ensure always available transactions.

• SQL and Oracle Database compatibility refinements in DB2 10.5, helping to ensure fast, easy moves to DB2 as well as increased flexibility for DB2 applications.

• Enhancements to NoSQL technologies that are now business-ready in DB2 10.5. Although not part of the DB2 10.5 announcement, we will fill you in on other NoSQL technology introduction plans as well.

• New packaging editions of DB2 that handle either OLTP or data warehousing needs.

• DB2 tools advances that support these new functions.

Join us for this Tech Talk to find out about these exciting enhancements and how they can help you deliver the data analytics your organization needs while providing tools to keep your OLTP systems in top shape.

A deeper dive into dashDB - know more in a dash

dashDB is a newly announced data warehouse as a service deployed in the cloud that leverages technologies like BLU Acceleration, in-database analytics and Cloudant to allow you to focus more on the business and less on the business of IT. In this DB2 Tech Talk you will learn a little more about IBM’s cloud initiatives and the value proposition around dashDB as well as...
- dashDB’s architecture and use cases
- pricing and offerings as a service
- competitive differentiations and customer feedback

DB2 with BLU Acceleration on Power Systems

Best practices: Optimizing analytic workloads using DB2 10.5 with BLU Acceleration

IBM Videos