IBM Consulting

DBA Consulting can help you with IBM BI and Web-related work. IBM Linux is also part of our portfolio.

Oracle Consulting

For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

Novell and Red Hat Consulting

For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course for Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

Microsoft Consulting

For all Microsoft related consulting services.

Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

Welcome to the DBA Consulting Blog. The specialist for IBM, Oracle, Novell, Red Hat, Citrix, and Microsoft.

DBA Consulting is a consultancy services specialist that can help you with OS-related support, migration, and installation, as well as BI implementations from IBM (such as Cognos 10), Microsoft BI such as SQL Server 2008 and 2012, Oracle OBIEE 11gR1, and related CMS systems such as Microsoft SharePoint and Drupal. We focus on quality and service; customer wishes are central, as are quality of service and customer satisfaction. Cost savings and freedom from vendor lock-in are central to our business values.

Oracle-related videos


Monday, November 9, 2015

SPARC M7 Innovation - Leverages “Software in Silicon”

SPARC technology plans emphasize innovation leadership and Oracle's commitment to advanced silicon and Engineered Systems

M7 - Breakthrough Processor and System Design with SPARC M7

Innovation is the lifeblood of the technology industry, and technology innovation is a critical factor in business, government, and culture. Oracle is keenly aware of this innovation imperative, not only in theory but in practice, investing considerable time, effort, and resources in driving information technology and its effective implementation forward: first in software, then in storage, networking, and hardware.

A significant result of this effort came to light at the Hot Chips conference in Cupertino, Calif., where Oracle disclosed technology details of its upcoming SPARC processor, known as the SPARC M7. The venue is appropriate: This year is the 26th anniversary of the semiconductor industry's showcase for innovative technology, sponsored by the IEEE's technical committee on microprocessors and microcomputers and in cooperation with the ACM's SIGARCH (Special Interest Group on Computer Architecture). This is a milestone for Oracle. With the disclosure of the M7, Oracle will have introduced six new SPARC processors in the four years since it acquired Sun Microsystems. That aggressive timeline reinforces Oracle's commitment to the SPARC architecture and to maintaining its relevance in the technology environment.

Larry Ellison Introduces Breakthrough New SPARC M7 Systems

Software in Silicon

The innovations in the new SPARC processor are of a piece with the design philosophy at the heart of Oracle Engineered Systems. It's an approach to enterprise IT architecture that fits together servers, software, and storage into a single, finely-tuned integrated system that runs applications at their optimum performance capability.

That optimization strategy is reflected in the new processor. The M7's most significant innovations revolve around what is known as "software in silicon," a design approach that places software functions directly into the processor. Because specific functions are performed in hardware, a software application runs much faster. And because the cores of the processor are freed up to perform other functions, overall operations are speeded up as well.

Zoran Radovic: Software. Hardware. Complete. M7: Next Generation Oracle Processor

The SPARC M7 design features 32 CPU cores for faster performance.

Oracle’s new SPARC M7 systems feature:

Security in Silicon, with two key new enhancements in systems design.

Silicon Secured Memory – For the first time, Silicon Secured Memory adds real-time checking of access to data in memory, helping protect against malicious intrusion and flawed program code in production for greater security and reliability. Silicon Secured Memory protection is used by Oracle Database 12c by default and is easy to turn on for existing applications. Oracle is also making application programming interfaces available for advanced customization.
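The idea behind this kind of memory checking can be illustrated with a toy model (a hypothetical sketch, not Oracle's actual hardware interface): each allocation gets a version tag, each pointer carries the tag it was created with, and every access compares the two, so a stale or out-of-bounds pointer trips immediately.

```python
# Conceptual sketch of version-tagged memory checking. All class and
# method names here are invented for illustration.

class MemoryTagError(Exception):
    pass

class TaggedMemory:
    def __init__(self):
        self._next_version = 0
        self._tags = {}     # address -> current version tag
        self._data = {}     # address -> stored value

    def alloc(self, address):
        """Allocate a cell and return a 'pointer' (address, version)."""
        self._next_version += 1
        self._tags[address] = self._next_version
        self._data[address] = None
        return (address, self._next_version)

    def free(self, pointer):
        """Freeing bumps the version, so dangling pointers are caught."""
        address, _ = pointer
        self._next_version += 1
        self._tags[address] = self._next_version

    def load(self, pointer):
        address, version = pointer
        if self._tags.get(address) != version:
            raise MemoryTagError("tag mismatch at %#x" % address)
        return self._data[address]

    def store(self, pointer, value):
        address, version = pointer
        if self._tags.get(address) != version:
            raise MemoryTagError("tag mismatch at %#x" % address)
        self._data[address] = value

mem = TaggedMemory()
p = mem.alloc(0x1000)
mem.store(p, "secret")
assert mem.load(p) == "secret"
mem.free(p)                      # any later use of p is now detected
try:
    mem.load(p)
except MemoryTagError:
    print("intrusion or bug detected")
```

In hardware the tag comparison happens on every load and store at full speed, which is what makes it practical to leave enabled in production.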

Hardware-Assisted Encryption – New breakthrough performance with hardware-assisted encryption built into all 32 cores enables uncompromised use without performance penalty. This gives customers the ability to have secure runtime and data for all applications even when combined with wide key usage of AES, DES, SHA, and more. Existing applications that use encryption will be automatically accelerated by this new capability including Oracle, third party, and custom applications.
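From the application's point of view the acceleration is transparent: code keeps calling the standard crypto APIs, and the platform libraries dispatch to the on-chip units where available. A minimal sketch of such unchanged application code, using generic Python stdlib calls with nothing SPARC-specific in it:

```python
# Hashing and MAC computation through standard interfaces; on a platform
# with crypto units, the underlying library can accelerate these without
# any change to application code like this.
import hashlib
import hmac

payload = b"customer record " * 1024

digest = hashlib.sha256(payload).hexdigest()                 # SHA-2 family
mac = hmac.new(b"shared-key", payload, hashlib.sha256).hexdigest()

print("SHA-256:", digest[:16], "...")
print("HMAC   :", mac[:16], "...")
```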

SQL in Silicon: Adds co-processors to all 32 cores of the SPARC M7 that offload and accelerate important data functions, dramatically improving efficiency and performance of database applications.

Critical functions accelerated by these new co-processors include memory de-compression, memory scan, range scan, filtering, and join assist. Offloading these functions to co-processors greatly increases the efficiency of each CPU core, lowers memory utilization, and enables up to 10x better database query performance. Oracle Database 12c In-Memory option fully supports this new capability in the current release. In addition, this new functionality is slated to be available to advanced developers to build the next generation of big data analytics platforms.
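In pure software, the kind of work these co-processors take over looks like the sketch below (a toy stand-in; the function names `range_scan` and `join_assist` are illustrative, not Oracle's API):

```python
# Scan a column for values in a range and intersect row-id lists, the
# sort of primitives a database query engine can offload to hardware.

def range_scan(column, low, high):
    """Return the row ids whose value falls in [low, high]."""
    return [i for i, v in enumerate(column) if low <= v <= high]

def join_assist(left_ids, right_ids):
    """Intersect two row-id lists, as a join filter step would."""
    right = set(right_ids)
    return [i for i in left_ids if i in right]

prices = [10, 250, 42, 999, 7, 300]
stock  = [0, 12, 3, 0, 9, 1]

affordable = range_scan(prices, 0, 300)      # -> [0, 1, 2, 4, 5]
in_stock   = range_scan(stock, 1, 10**9)     # -> [1, 2, 4, 5]
print(join_assist(affordable, in_stock))     # -> [1, 2, 4, 5]
```

Done on the CPU, such scans burn cores and memory bandwidth; done in dedicated units, the cores stay free for other work, which is where the claimed efficiency gains come from.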

World Record Performance: Powered by the world’s fastest microprocessor, Oracle’s new SPARC M7-based systems deliver proven performance superiority with world record results in over 20 benchmarks. In addition to superior performance for database, middleware, Java, and enterprise applications from Oracle and third party ISV’s, the new SPARC M7-based systems achieve incredible performance compared to the competition for big data and cloud workloads.

“Until now, no computing platform has been able to tackle security without significantly impacting application performance and efficiency,” said John Fowler, executive vice president, Systems, Oracle. “Today Oracle is delivering breakthrough technology for memory intrusion protection and encryption, while accelerating in-memory analytics, databases and Java. Oracle’s SPARC T7 and M7 systems and Oracle SuperCluster M7 are starting a new era in delivering secure computing while increasing efficiency.”

“Oracle's core investments in SPARC M7 are delivering breakthrough capabilities for information security, database efficiency, and performance that go beyond enterprise workloads to big data and cloud. This is the most significant advancement in SPARC microprocessor and systems design in the last decade,” said Matthew Eastwood, senior vice president, Enterprise Infrastructure and Datacenter Group, IDC.

Balanced Design Principles: The new SPARC M7 processor is the design center of the new line of SPARC M7 systems that scale from 32 to 512 cores, 256 to 4,096 threads and up to 8 TB of memory. Oracle’s SPARC M7 chip is a 4.1 GHz 32-core/256-thread processor that addresses the most demanding workloads with a balanced high performance design across all factors of memory, IO, and scalability. In addition, Oracle has improved every other aspect of the design compared to previous generation designs resulting in increased single-thread performance and reduced latency.

Technology That Delivers: Oracle’s new SPARC M7 systems deliver outstanding security and performance as demonstrated by a new world record result for the SPECjEnterprise2010 benchmark for database and Java(1). Oracle has run this benchmark fully encrypted to demonstrate the levels of security, efficiency, and performance that SPARC M7 delivers. Two SPARC T7-1 servers, fully encrypted, are faster than the second best result from a pair of four-processor IBM Power8 systems, running the same workload unencrypted. Oracle’s SPARC M7 TeraSort benchmark results prove superiority over IBM for running Hadoop, while also utilizing SPARC M7 encryption acceleration with negligible performance impact. One SPARC T7-4 with 128 cores using an AES-256-GCM encrypted file system is 3.8x faster than an unsecure 8-node IBM S822L Power8 cluster with 192 cores(2). Customers can now run workloads fully encrypted with greater efficiency and without performance penalty.

Software-in-Silicon : Oracle SPARC Roadmap

Oracle Sparc Roadmap : Sept 2015 Update

Learn more about SPARC
For example, one of the most exciting innovations in the M7 processor is known as its in-memory query acceleration engines. These design-specific units take over certain data-search functions from a database query, and those functions then get processed at a very high rate of speed. This dedicated functionality makes database queries perform much faster.

SPARC Server Strategy and Roadmap

Oracle M7  (playlist)

Such query acceleration "is done in a different way than anyone has done it before," said David Lawler, Oracle senior vice president for system product management and strategy. The M7 incorporates up to eight in-memory query acceleration engines.

Another significant M7 innovation is a feature known as application data integrity. This software-in-silicon functionality ensures that an application is able to access only its own dedicated memory region. This lets software programmers identify issues with memory allocation, which is advantageous in several ways.

Oracle expects it to dramatically improve both the speed of Oracle's software development and the resulting product quality, and expects customers to benefit by running applications whose memory is always protected in production.

Also, it serves as a security feature. "If one particular piece of code is trying to read the data from another, the chip would stop it," said Renato Ribeiro, Oracle director of product management for SPARC Systems.

And because it is hardwired into the processor, the data integrity functionality does not affect the performance of the application. "It has next to no overhead."

Ideal for Exadata X5-2

Oracle has been shipping an Oracle Exadata configuration that runs Oracle’s T- and M-series (SPARC) microprocessors for more than two years. This database machine is called Oracle SuperCluster.

Technically, SuperCluster has always included every single Exadata feature of note. This is because every SuperCluster configuration is built around the same Exadata Storage Servers and InfiniBand switches that are used in every other Exadata system configuration.

Exadata X5-2: Extreme Flash and Elastic Configurations

Oracle Database Appliance X5-2

Oracle Linux: Maximize Your Value and Optimize Your Stack

Performance Boosts

Another innovation available on the new processor involves the ability to decompress data at very high speed (100 GB/sec). This is important especially in connection with Oracle's innovative in-memory database functionality.

Database performance is improved when the data being used can be loaded directly into server memory, which eliminates the latency in transferring data from external storage. However, to fit a large amount of data into server memory it must be compressed, and then decompressed on every database query. That decompression takes time and sucks up valuable processor resources—a classic bottleneck.

To address that constriction, Oracle engineers have incorporated a decompression acceleration engine onto the M7 processor. This hardwired unit runs data decompression at the full speed of the in-memory database: 100 GB/sec. That's equivalent to 16 decompression PCI cards, or 200 CPU cores.
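The trade-off the engine addresses can be seen in miniature with any general-purpose codec (a sketch using Python's stdlib zlib, not Oracle's in-memory format):

```python
# Keeping data compressed multiplies effective memory capacity, but every
# query then pays a decompression cost -- the step the M7 moves into a
# dedicated hardware unit running at in-memory database speed.
import zlib

rows = ("order-%06d,widget,19.99\n" % i for i in range(50_000))
table = "".join(rows).encode()

packed = zlib.compress(table, level=6)
ratio = len(table) / len(packed)
print("in-memory footprint: %d -> %d bytes (%.1fx smaller)"
      % (len(table), len(packed), ratio))

# every scan over compressed data starts with a decompress step:
assert zlib.decompress(packed) == table
```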

Another improvement in the M7 related to performance involves communication between two computers. Known as extreme low latency fabric for memory sharing, this hardware interconnection provides messaging with sub-microsecond latency, which translates to "memory access across two machines as if it were local." This helps the performance of computers in a cluster.

Finally, the M7 processor features 32 cores in its design, up from the 12 cores of its predecessor, the M6. Less an innovation than a process improvement, it nonetheless affirms Oracle's commitment to making SPARC the most powerful processor in the industry.

Creating a Maximum Availability Architecture with SPARC SuperCluster

Co-engineering Advantage

With its SPARC architecture, Oracle has an advantage over other enterprise vendors in that it can do engineering work at all levels of the computing stack: processor, operating system, middleware, database, applications, even software tools, specifically Java.

The SPARC M7 processor benefitted from that co-engineering, designed from the start with input from both Oracle's hardware engineers and its software developers. That approach is what enabled the innovative "software in silicon" strategy to come to fruition. "We looked at all of our software and identified the things that were the hardest" and then incorporated those into the processor.

The SPARC M7 is scheduled to be available sometime in calendar year 2015. Oracle intends the industry at large to benefit from its work. "We plan to make these functions available to other software vendors that would like to take advantage of them."

Highly Efficient Oracle Servers for the Modern Data Center

Why Oracle For Enterprise Big Data?

More Information:

Monday, October 26, 2015

Linux on IBM Power Systems

IBM Power Systems

Welcome to the waitless world

Imagine if we were living in a world without wait. That’s the waitless world. The world ushered in by IBM Power Systems™. Where terabytes of data can be handled in milliseconds—5 to 10 times faster than today’s supercomputers. It’s a world where everything is open. And cloud economics makes it all affordable.

What's New with POWER8  

April 23, 2014—more than a year ago—was a seismic day for IBM Power Systems*. That’s when IBM debuted new POWER* servers built from the ground up to harness big data with the new IBM POWER8* processor. And it was when the OpenPOWER Foundation—then composed of about 25 member companies (and now with more than 130 in 22 countries)—unveiled the first innovations from the initiative to make POWER hardware and software available for open development.

IBM Power8 announce HighLights

Fast forward to today and the muscle just keeps growing behind the effort to provide higher-value, open technologies to manage today’s unprecedented data demands with the speed and agility that proprietary business models can’t match.

IBM recently made announcements that showcase POWER technology’s capability to deliver waitless computing. At the core of POWER is an open server ecosystem that’s providing a superior alternative to x86 for big data workloads.

Power Systems With Power8 Enterprise

First, IBM unveiled the Power Systems E850. The Power E850 represents a leap forward in four-socket servers for big data insights and superior cloud economics. The Power E850 is the industry’s only four-socket system that offers flexible capacity, guaranteed utilization and superior scale.

Second, IBM doubled the top-end capacity of our largest server. The E880 now features up to 16 TB of memory, up to 192 cores and delivers ideal linear scaling as demonstrated with IBM’s DB2* with BLU Acceleration—big muscle that improves clients’ capabilities to respond to the peaks and valleys of workload demand and creates a more efficient and cost-effective path to business insights.

With IBM POWER8 and DB2 technologies together, clients gain a rich set of Power Systems technology-based analytics solutions with the IBM DB2 Solution for BLU Acceleration – Power Systems Edition, IBM’s next-generation in-memory computing platform for speeding analytics.

Third, IBM and SAP teamed to help organizations win the data challenge. IBM announced Power Systems Solution Editions for SAP HANA: integrated solutions that combine the latest-generation POWER8 systems with SAP HANA software. The new collaboration accompanied SAP’s announcement of full support of the SAP HANA Business Warehouse in-memory database management system on Power Systems as well as plans to bring SAP HANA Business Suite applications to the Power Systems portfolio by the end of 2015.

With this partnership, IBM and SAP are expanding the enterprise-class capabilities available to mutual clients for quickly extracting business insights from data. POWER8 enables SAP HANA to run more queries in parallel faster, across multiple cores with more threads per core than possible on commodity-based servers. Clients now have more choice in leveraging the POWER8 processor—built specifically to manage and gain rapid insights from data.

With these Power Systems announcements, businesses are afforded greater performance based on new, differentiated and open innovations, and a waitless experience in gaining insights from data.

Linux on Power for AIX/IBM i guys - Doing it the Easy Way

Why open innovation is the future of business

Information is more widely available than ever. Collaboration, co-creation, and knowledge sharing are quickening the pace of invention. Linux on IBM Power Systems offers competitive advantage and unique benefits to organizations - open technology, performance, portability and scalability.

Power Systems: Open Innovation to Put Data to Work

Leverage open technology

Benefit from Power Systems open technology and an open ecosystem delivering community created innovation.

Learn more about OpenPOWER

Redefine Linux performance

Power Systems run industry-standard Linux from Red Hat, SUSE or Canonical. Linux exploits the advanced hardware and software capabilities of POWER8 technology, which provides economic advantages that scale as your business grows.

Linux on Power Best Practices

Port applications with ease

Clients can confidently run highly scalable, highly reliable, and highly flexible Linux environments on POWER processor-based servers. The benefits of Power Systems are realized whether a client is deploying new applications or wants to improve the performance of existing applications. With POWER8, moving Linux applications to Power has never been easier.

Optimize emerging workloads

Watson was built on Linux on Power Systems for the big data and analytics advantages POWER provides. IBM is investing heavily in building solution catalogs for emerging areas where POWER offers unique advantages to clients.

Workloads on Linux

More benefits of Linux on IBM Power

Power up your Linux  

Consolidate to rein in sprawling x86 server farms

Linux-only Power Systems combined with PowerKVM or PowerVM offerings help you to consolidate workloads from hundreds of servers to a few servers.

Run Linux with Java™, IBM middleware and open source components

Power Systems offer double the performance of competitive platforms for Java-based solutions.

The Power to choose

From single-core 1U servers to 256-core systems, Power Systems offer choice and are ready for your business-critical Linux applications.

Industry proven and acclaimed

Supporting embedded systems to large supercomputers, POWER technology is pervasive today with a strong roadmap for the future.

RAS that grows with you

Gain reliability, availability, and serviceability uncommon in servers running a Linux OS. Toolkits and extensions are available to enhance your choice of Linux operating system to support the RAS characteristics of POWER processor-based servers at a higher level.

A range of computational power on which to standardize

Power supports processing loads ranging from deep computing to high-volume, transaction-oriented commercial systems. By offering multiple processor speeds, SMP systems with up to 256 cores, and up to 2 terabytes of memory, Power provides the architecture you need.

End to end support

IBM offers extensive assistance in the form of education, migration tools, services, and support to enable a low risk implementation or migration to Linux solutions.

Features and benefits:

Rapid Deployment
  • Complete, pre-assembled & tested infrastructure with big data and analytics software preloaded
  • On-site services for fast configuration & data center integration
  • Intelligent cluster management & automation for effective deployment
  • Easily set up & manage workloads for multiple tenants
  • Adjustable resource allocation to meet diverse line of business demands
  • Scalable & extendable as needs change and as the enterprise grows
  • Reliability without data duplication
  • Tailored big data and analytics optimizations
  • Lays the foundation for consolidating traditional data analytics with new workloads such as Hadoop and Spark

Overview - IBM Big Data Platform

Analytics solutions

Unlock the value of data with an IT infrastructure that provides speed and availability to deliver accelerated insights to the people and processes that need them.

IBM Data Engine for Analytics - Power Systems Edition

A customized infrastructure solution with integrated software optimized for both big data and analytics workloads.

IBM Data Engine for NoSQL – Power Systems Edition

Unique technology from IBM delivers dramatic reductions in the cost of large NoSQL databases.

Big Data in Real World by Chandra Kallur, IBM

SAP HANA benefits from the enterprise capabilities of Power Systems

SAP HANA runs on all POWER8 servers. Power Systems Solution Editions for SAP HANA BW are easy to order and tailored for quick deployment and rapid time to value, while offering flexibility to meet individual client demands.

DB2 with BLU Acceleration on Power Systems

Enable faster insights using analytics queries and reports from data stored in any data warehouse, with a dynamic in-memory columnar solution.

Big Data: SQL on Hadoop from IBM

IBM Solution for Analytics – Power Systems Edition

This flexible integrated solution for faster insights includes options for business intelligence and predictive analytics with in-memory data warehouse acceleration.

IBM Solution for Hadoop – Power Systems Edition

An integrated Hadoop platform optimized to simplify and accelerate unstructured big data analytics.

IBM Big Data - IBM Marriage of Hadoop and Data Warehousing

IBM PureData System for Operational Analytics

Easily deploy, optimize and manage data intensive workloads for operational analytics with an expert integrated system.

Big Data Solution with InfoSphere BigInsights and Streams

Analyze data at scale with Apache Hadoop, InfoSphere BigInsights and InfoSphere Streams on Power Systems.

Big Data & Analytics Architecture

IBM i for Business Intelligence

Accelerate time to value with an easy-to-implement packaged solution that turns information into actionable insights.

IBM DB2 Web Query for i

Help ensure every decision maker across the organization can easily find, analyze and share the information needed to make better, faster decisions.

More Information:

IBM LinuxONE™: Linux Without Limits   

Tuesday, September 15, 2015

Oracle Solaris 11.3 Beta Security. Speed. Simplicity.

Oracle Solaris 11.3 Beta

In case you missed it (and how could you?), the Oracle Solaris 11.3 Beta was made available on 7 July 2015.

You can find the details here.   

As always the Oracle Solaris Virtualization team has been very busy and there is a great list of things that they have delivered. Here are a few highlights:

    Secure Live Migration: Kernel Zones can now be moved around the datacenter without causing an outage. End users can remain unaffected and be moved onto other systems while administrators perform key system maintenance. The secure part of the migration means that data is protected and man-in-the-middle attacks are prevented.
    Zones on Shared Storage (NFS): Now, in addition to SAN and iSCSI, you can put your zone root on NFS shared storage. Administrators can continue to benefit from snapshots, cloning, and zone boot environments while choosing the appropriate storage for their environment (right now this is for Kernel Zones only).
    Live Zone Reconfiguration: Originally introduced in Oracle Solaris 11.2, Live Zone Reconfiguration comes to Kernel Zones with the latest beta release, with the ability to reconfigure the network and attached devices without the need for a reboot.
    Virtualized Clocks for Oracle Solaris Zones: We've just had a leap second; with virtualized clock support in Oracle Solaris native zones you can now test in advance how a system will behave. Of course, you could always do this with Kernel Zones, but this functionality is now extended across the entire zones family.

Here's hoping you enjoy running the new features in Oracle Solaris Zones - do let us know via the comments how you get on. 

The Oracle Solaris 11.3 Beta program is an opportunity for developers, administrators and architects to evaluate a pre-release version of the Oracle Solaris enterprise cloud platform. Get a head start on implementing technology that will transform your business and save you money. This is also your opportunity to provide feedback on the latest Oracle Solaris release.

Key Benefits:

  • Oracle Solaris’ advanced, easy-to-use, built-in security features help you prevent hacking and avoid malware for the lifetime of your applications, from installation to runtime, and prove it simply. Through the bundled compliance tools, tailored reports can be generated easily and rapidly, saving money and time.
  • Oracle Solaris virtualization technologies give you all the flexibility of a hypervisor with the performance and density of a container, enabling you to deploy your enterprise workloads safely and securely, in traditional or OpenStack based cloud environments.
  • Simplified and fast lifecycle management provides for large gains in productivity and lower cost of operations, enabling you to build new products and services and deliver on your business strategy faster.

How to Get Started Creating Oracle Solaris Kernel Zones in Oracle Solaris 11

Converting native zone as Kernel zone – Solaris 11.2

Built-in Virtualization for OpenSolaris- Containers, Sun Logical Domains (LDOMs), and xen

See What's New in Oracle Solaris 11.3

Oracle Solaris Overview and Roadmap 

Software-in-Silicon : Oracle SPARC Roadmap 

Oracle Solaris 11.3: Securing and Simplifying the Enterprise Cloud
Courtesy of Larry Wake (Oracle), Jul 07, 2015

Oracle opened up access to the beta release of Oracle Solaris 11.3.  If you’ve been following along (and if not, why not?), you know there have been some big advances in Oracle Solaris 11, including lightning-fast intelligent provisioning and maintenance, and some key additions to our already highly-regarded “defense in depth” security.

Oracle Database, Java, and Applications and Oracle Solaris 

Most notable for the latter is the work we’ve done to simplify compliance checking and mitigation, making it possible for administrators to quickly and easily check system security configurations against industry standards, and get a “report card” showing compliance, with guidance for any areas that may need addressing.
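The "report card" workflow can be sketched generically: collect system settings, assess them against a benchmark's rules, and report pass/fail with remediation guidance. (A toy stand-in; the rule names and settings below are invented, and the real Solaris compliance tooling assesses against full industry benchmarks.)

```python
# Toy compliance check: assess settings against a benchmark of rules and
# print a pass/fail report card with guidance for failures.

benchmark = {
    "ssh_root_login_disabled": ("PermitRootLogin", "no",
                                "set PermitRootLogin no in sshd_config"),
    "password_min_length_8":   ("PASSLENGTH", "8",
                                "set PASSLENGTH=8 in /etc/default/passwd"),
}

def assess(settings):
    """Return (passed, failed) rule lists; failed entries carry a fix."""
    passed, failed = [], []
    for rule, (key, want, fix) in benchmark.items():
        if settings.get(key) == want:
            passed.append(rule)
        else:
            failed.append((rule, fix))
    return passed, failed

system_settings = {"PermitRootLogin": "yes", "PASSLENGTH": "8"}
ok, bad = assess(system_settings)
print("PASS:", ok)
for rule, fix in bad:
    print("FAIL:", rule, "->", fix)
```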

Oh, and did I mention fully-integrated OpenStack?  This is a win in two ways: it not only brings access to the fastest growing open cloud platform to Oracle Solaris users, it brings the incredible breadth of Oracle Solaris enterprise capabilities to the fingertips of OpenStack users.

…and that of course brings to mind some of the above-mentioned Oracle Solaris 11 features, such as built-in, zero-overhead virtualization, a new and super-powerful Unified Archive capability for rapid, safe, compliant deployment, and all the things we’ve done in terms of performance and administrative ease to make Oracle Solaris the best platform for deploying both Oracle’s own and 3rd party software.

So what’s left to do?  In this beta release, you'll find we're taking those capabilities and making them even better.

Solaris Linux Performance, Tools and Tuning 

OpenStack: Oracle moved forward to the OpenStack Juno release, and has also done some behind-the-scenes work to make it easy to continue to bring you new OpenStack goodness quickly and reliably.

    Learn more at the Oracle OpenStack blog

Virtualization: In Oracle Solaris 11.2, Oracle introduced “Kernel Zones”, making it possible to deliver hypervisor agility while still maintaining the low overhead and ease of administration you expect from Zones.  In 11.3, Oracle introduces secure live migration for Kernel Zones,  live zone reconfiguration, and verified boot.

And, Oracle extended the new Zones on Shared Storage (ZOSS) capabilities.  You can now place zones on FC-SAN, iSCSI, or NFS devices.

Oracle Solaris 11.3 is all about “more”, so now you have more flexibility and more security in your virtualization.

Database: If you’re an Oracle Database fan, you know we’ve been giving you “more” for years: more observability, more performance, more flexible administration. Now we’re also giving you “less”: less downtime. We’ve slashed database startup and shutdown times, which are now not only faster than ever on Solaris but faster, a lot faster, than on any other platform.

Security: We’ve extended the compliance capabilities mentioned above, so that you can more easily tailor compliance policy configurations to suit your site’s requirements.  Oracle Solaris and Oracle Solaris Studio are also ready for Software in Silicon application data integrity (ADI).  To learn more about this and start working with it today, visit the SWiS Cloud.

Data Management: You've already come to know and love what Oracle Solaris brings to the table with ZFS, the first 21st-century filesystem. In Oracle Solaris 11.3, we extend its built-in compression capabilities to include LZ4 support, give you the ability to compare snapshots recursively, and introduce a wealth of scalability and performance improvements to make it faster than ever. We’ve also enhanced its monitoring features and upgraded its built-in SMB support.

There’s more than this, but this should give you a taste of what we’ve got in store for you today.  You can download it now, and take a look at the “What’s New” document to see what we’re doing to make your data center cloud-ready, secure, fast, and simple.

7 July 2015 was a very exciting day for the Solaris team: Oracle released a beta for Solaris 11.3, less than a year after it released Solaris 11.2.

Introducing Oracle Solaris 11.2 

Solaris 11.2 What's New  

With Solaris 11.2, Oracle turned Solaris into a comprehensive cloud platform that includes virtualization, SDN, and OpenStack. Since the release of Solaris 11.2, Oracle has seen rapid uptake of these new capabilities: many customers are using Unified Archives to deploy their images, taking advantage of the immutable root file system, deploying Kernel Zones and OpenStack, and taking advantage of automated compliance reporting. The latter has become much easier now that CVE metadata is tracked with IPS. Oracle is getting a lot of good feedback on OpenStack; a senior architect at one of its OpenStack competitors even paid a big compliment, saying that Solaris provides the best integration for OpenStack because the OpenStack services are mapped to the Solaris Service Management Facility (SMF), which provides automated restart for all of the OpenStack services, and because this is tightly integrated with Solaris role-based access control to limit the privileges required for administering OpenStack.
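What SMF-style supervision buys those OpenStack services can be sketched as a restart loop (illustrative only; the real mechanism is SMF manifests and restarters, not Python):

```python
# A minimal supervisor: run a service command, restart it on failure up
# to a limit, then flag it as faulted -- roughly what an SMF restarter
# does for each managed service.
import subprocess
import sys

def run_supervised(cmd, max_restarts=3):
    """Run cmd, restarting it on failure; return the restart count."""
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return restarts
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError("service kept failing; giving up")
        print("service exited %d, restarting (%d/%d)"
              % (result.returncode, restarts, max_restarts))

# a 'service' that always fails: restarted, then flagged as faulted
flaky = [sys.executable, "-c", "import sys; sys.exit(1)"]
try:
    run_supervised(flaky, max_restarts=2)
except RuntimeError:
    print("flagged as faulted, as SMF would")

# a healthy service exits cleanly with zero restarts
print("restarts needed:", run_supervised([sys.executable, "-c", "pass"]))
```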

Solaris 11.3 takes things to the next level by making Solaris the most advanced enterprise cloud platform. Oracle is introducing a number of critical enhancements in the following areas (courtesy of Markus Flierl, Oracle):

1. Security and Compliance:
- Verified boot for Kernel Zones
- BSD Packet Filter
- Tailoring of compliance policies

2. Virtualization:
- Secure (encrypted) live migration of Kernel Zones
- Zones on Shared Storage via NFS
- Live reconfiguration of I/O resources
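As a rough sketch of how the secure live migration of a Kernel Zone looks from the command line (zone and host names are placeholders, and the cipher option is an assumption, so verify against zoneadm(1M) on your release):

```shell
# Live-migrate a running kernel zone to another host over SSH;
# in 11.3 the migration traffic is encrypted in transit
zoneadm -z kzone1 migrate ssh://admin@target-host

# Optionally pick the cipher used for the encrypted stream
# (the -c option is an assumption -- check zoneadm(1M))
zoneadm -z kzone1 migrate -c aes-128-ccm ssh://admin@target-host
```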

3. OpenStack: There are a number of major enhancements:
- Automated upgrades to the newer versions of OpenStack
- Support for orchestration of services (Heat)
- Support for bare metal provisioning (Ironic): This is already in S11.3 but not yet in the beta

Partner Webcast – Platform as a Service with Oracle WebLogic and OpenStack 

Oracle Solaris OpenStack 

Oracle is also working on integrating DBaaS with Trove and Murano; I provided an overview of the latter at the Vancouver OpenStack Summit back in May.

4. Networking:
- Private VLANs
- Flows support for DSCP Marking

5. Deep integration with the Oracle stack, for instance:
- Up to 6x faster DB restart and shutdown by leveraging the latest Virtual Memory Management (VM2) capabilities

In addition, Oracle is providing early access to the Free and Open Source Software (FOSS) components that we ship with Solaris.

Of course these are just some of the highlights, there are a ton of other enhancements.

Have fun exploring Solaris 11.3!

Here's Your Oracle Solaris 11.3 List of Blog Posts, thanks to Larry Wake -Oracle 

Here are some videos on how to manage and maintain Oracle Solaris

Oracle Solaris Studio 12.4 

Oracle Solaris Hands-On Labs 01/2014 

Immutable Service Containers 

More Information:

What's New in Oracle® Solaris 11.2

Independent and Isolated Environments With Kernel Zones

What's New in Oracle® Solaris 11.3

Oracle Solaris 11.3 Beta

Live Migration for Kernel Zones

New Oracle Solaris Zones features

Monday, August 10, 2015

Microsoft Announced New Container Technologies for the Next Generation Cloud

Windows-based containers: Modern app development with enterprise-grade control 

On October 15th, 2014, Microsoft announced that it will deliver new container technologies in the upcoming wave of Windows Server releases. In addition, a new partnership between Microsoft Corp. and Docker Inc. will bring Windows Server support to Docker tools. MS Open Tech will contribute to this partnership and will build upon its existing support for Linux-hosted containers on Microsoft Azure.

As part of this announcement, MS Open Tech is contributing code to the Docker client that supports the provisioning of multi-container Docker applications on Azure. This code removes the need for our cross-platform CLI to bootstrap the Docker host. In other words, we have taken a simple process and made it even simpler. A demonstration of this new capability will be a part of Docker’s Global Hack Day as well as the Microsoft TechEd Europe conference. For more information on other aspects of this partnership, see the Azure blog.

Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Docker and Microsoft: How Azure is Bringing the World of Windows and Linux Together 

What are containers?

Containers are isolated, resource-controlled, and portable operating environments.

Basically, a container is an isolated place where an application can run without affecting the rest of the system and without the system affecting the application. Containers are the next evolution in virtualization.

If you were inside a container, it would look very much like you were inside a physical computer or a virtual machine. And, to Docker, a Windows Server Container looks like any other container.
Containers for Developers

When you containerize an app, only the app and the components needed to run the app are combined into an "image". Containers are then created from this image as you need them. You can also use an image as a baseline to create another image, making image creation even faster. Multiple containers can share the same image, which means containers start very quickly and use fewer resources. For example, you can use containers to spin up light-weight and portable app components – or ‘micro-services’ – for distributed apps and quickly scale each service separately.
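The image/container relationship described above can be illustrated with the standard Docker workflow (image, container, and file names below are placeholders):

```shell
# Build an image that contains only the app and its dependencies
docker build -t myapp:1.0 .

# Start several containers from the same image; they share the image's
# read-only layers, so they start quickly and use few extra resources
docker run -d --name web1 myapp:1.0
docker run -d --name web2 myapp:1.0

# Use the image as a baseline for a derived image
# (Dockerfile.debug starts with: FROM myapp:1.0)
docker build -t myapp-debug:1.0 -f Dockerfile.debug .
```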

Windows Containers: What, Why and How 

Because containers include everything needed to run your application, they are very portable and can run on any machine that is running Windows Server 2016. You can create and test containers locally, then deploy that same container image to your company's private cloud, public cloud, or service provider. The natural agility of containers supports modern app development patterns in large-scale, virtualized, and cloud environments.

- With containers, developers can build an app in any language. These apps are completely portable and can run anywhere - laptop, desktop, server, private cloud, public cloud or service provider - without any code changes.

- Containers help developers build and ship higher-quality applications, faster.

Containers for IT Professionals

- IT Professionals can use containers to provide standardized environments for their development, QA, and production teams. They no longer have to worry about complex installation and configuration steps. By using containers, systems administrators abstract away differences in OS installations and underlying infrastructure.

- Containers help admins create an infrastructure that is simpler to update and maintain.
What else do I get?

- Containers and the container ecosystem provide agility, productivity, and freedom-of-choice in building, deploying, and managing modern apps.

- When combined with Docker, Visual Studio, and Azure, containers become an important part of a robust ecosystem. Read more about the Windows Server Container ecosystem.

Docker containers simplify the development of software applications that consist of micro-services. Each service then operates as an isolated execution unit on the host. Common use cases for Docker include:

    Automating the packaging and deployment of applications
    Creation of lightweight, private PaaS environments
    Automated testing and continuous integration/deployment
    Deploying and scaling web apps, databases and backend services

Docker’s container technology aims to drive developer productivity and agility. Containers do not include a full operating system; consequently, rapid development and scaling of container-based applications is possible through very quick boot and restart operations. Furthermore, highly efficient creation of modified container images (by virtue of capturing only the differences between the original and new containers) enables improved management and distribution of containerized applications; the resulting images are both small and highly portable across almost any platform.

This partnership brings the .NET and Windows Server ecosystem together with Docker's expertise and open source community to deliver uniform container functionality across Linux and Windows Server containers.

In June MS Open Tech announced the availability of Docker Engine on Microsoft Azure, to coincide with the 1.0 release of the Docker tools. That work provided the ability to create Azure virtual machines with the Docker Engine already installed. The resulting virtual machines become hosts for Docker containers, the standard Docker tooling then provides management of containers on those hosts. Our goal with this project is to make it as simple as possible to get started with Docker on Azure. Since June, we have continued to work with the Docker community to make things even simpler.

Last October, Microsoft and Docker, Inc. jointly announced plans to bring containers to developers across the Docker and Windows ecosystems via Windows Server Containers, available in the next version of Windows Server. We will be unveiling the first live demonstration in a few weeks, starting at the BUILD conference. Today, we are taking containerization one step further by expanding the scenarios and workloads developers can address with containers:

• Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization
• Nano Server, a minimal footprint installation of Windows Server that is highly optimized for the cloud, and ideal for containers.

First-of-Their-Kind Hyper-V Containers

Leveraging our deep virtualization experience, Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines, while maintaining an agile and efficient experience with full Docker cross-platform integration. Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host.

While Hyper-V containers offer an additional deployment option between Windows Server Containers and the Hyper-V virtual machine, you will be able to deploy them using the same development, programming and management tools you would use for Windows Server Containers. In addition, applications developed for Windows Server Containers can be deployed as a Hyper-V Container without modification, providing greater flexibility for operators who need to choose degrees of density, agility, and isolation in a multi-platform, multi-application environment.

Microsoft Containers in the Docker Ecosystem

Windows Server Containers

Docker plays an important part in enabling the container ecosystem across Linux, Windows Server and the forthcoming Hyper-V Containers. We have been working closely with the Docker community to leverage and extend container innovations in Windows Server and Microsoft Azure, including submitting the development of the Docker engine for Windows Server Containers as an open contribution to the Docker repository on GitHub. In addition, we’ve made it easier to deploy the latest Docker engine using Azure extensions to setup a Docker host on Azure Linux VMs and to deploy a Docker-managed VM directly from the Azure Marketplace. Finally, we’ve added integration for Swarm, Machine and Compose into Azure and Hyper-V.
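As an illustration of that integration, a Docker host on Azure can be provisioned with Docker Machine's Azure driver; the subscription ID and machine name below are placeholders, and the exact flags depend on the docker-machine version, so treat this as a sketch:

```shell
# Provision an Azure VM that comes up as a ready-to-use Docker host
docker-machine create --driver azure \
    --azure-subscription-id "xxxx-xxxx-xxxx" \
    azure-docker-host

# Point the local Docker client at the new host and run a container there
eval "$(docker-machine env azure-docker-host)"
docker run -d -p 80:80 nginx
```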

“Microsoft has been a great partner and contributor to the Docker project since our joint announcement in October of 2014,” said Nick Stinemates, Head of Business Development and Technical Alliances. “They have made a number of enhancements to improve the developer experience for Docker on Azure, while making contributions to all aspects of the Docker platform including Docker orchestration tools and Docker Client on Windows. Microsoft has also demonstrated its leadership within the community by providing compelling new content like dockerized .NET for Linux. At the same time, they’ve been working to extend the benefits of Docker containers (application portability to any infrastructure and an accelerated development process) to its Windows developer community.”

Introducing Nano Server: The Nucleus of Modern Apps and Cloud

Nano Server: The Future of Windows Server Starts Now 

The operating system has evolved dramatically with the move to the cloud. Many customers today need their OS for the primary purpose of powering born-in-the-cloud applications. Leveraging our years of experience building and running hyper-scale datacenters, Microsoft is uniquely positioned to provide a purpose-built OS to power modern apps and containers.

The result is Nano Server, a minimal footprint installation option of Windows Server that is highly optimized for the cloud, including containers. Nano Server provides just the components you need – nothing else, meaning smaller server images, which reduces deployment times, decreases network bandwidth consumption, and improves uptime and security. This small footprint makes Nano Server an ideal complement for Windows Server Containers and Hyper-V Containers, as well as other cloud-optimized scenarios. A preview will be available in the coming weeks, and you can read more about the technology on the Windows Server blog.

Containers are bringing speed and scale to the next level in today’s cloud-first world. Microsoft is uniquely positioned to propel more organizations forward into the next era of containerization, by offering flexibility and choice through Windows Server containers, Linux containers, and Hyper-V containers both in the cloud and on-premises. Today’s announcements are just the beginning of what’s to come, as we continue to fuel both the growth of containers in the industry, and new levels of application innovation for all developers.


Monday, July 13, 2015

Immutable Service Containers

Immutable Service Containers and Oracle Solaris Studio 12.4

Economics of Oracle Solaris

Solaris Immutable Service Containers

While the need for security and integrity is well-recognized, it is less often well-implemented. Security assessments and industry reports regularly show how sporadic and inconsistent security configurations become for organizations both large and small. Published recommended security practices and settings remain unused in many environments and existing, once secured, deployments suffer from atrophy due to neglect.

Why is this? There is no one answer. Some organizations are simply unaware of the security recommendations, tools, and techniques available to them. Others lack the necessary skill and experience to implement the guidance and maintain secured configurations. It is not uncommon for these organizations to feel overwhelmed by the sheer number of recommendations, settings and options. Still others may feel that security is not an issue in their environment. The list goes on and on, yet the need for security and integrity has never been more important.

Interestingly, the evolution and convergence of technology is cultivating new ideas and solutions to help organizations better protect their services and data. One such idea is being demonstrated by the Immutable Service Container (ISC) project. Immutable Service Containers are an architectural deployment pattern used to describe a platform for highly secure service delivery. Building upon concepts and functionality enabled by operating systems, hypervisors, virtualization, and networking, ISCs provide a secured container into which a service or set of services is deployed. Each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework including: self-preservation, defense in depth, least privilege, compartmentalization and proportionality. Further, ISC design borrows from Cloud Computing principles such as service abstraction, micro-virtualization, automation, and "fail in place".

By designing service delivery platforms using the Immutable Service Containers model, organizations gain a number of significant security benefits:

For application owners:

  • ISCs help to protect applications and services from tampering
  • ISCs provide a consistent set of security interfaces and resources for applications and services to use

For system administrators:

  • ISCs isolate services from one another to avoid contamination
  • ISCs separate service delivery from security enforcement/monitoring
  • ISCs can be (mostly) pre-configured by security experts

For IT managers:

  • ISC creation can be automated, with security functionality pre-integrated, making them faster and easier to build and deploy
  • ISCs leverage industry accepted security practices making them easier to audit and support

It is expected that Immutable Service Containers will form the most basic architectural building block for more complex, highly dynamic and autonomic architectures. The goal of the ISC project is to more fully describe the architecture and attributes of ISCs, their inherent benefits, their construction as well as to document practical examples using various software applications.

While the notion of ISCs is not based upon any one product or technology, an instantiation has been recently developed using OpenSolaris 2009.06. This instantiation offers a pre-integrated configuration leveraging OpenSolaris security recommended practices and settings. With ISCs, you are not starting from a blank slate, but rather you can now build upon the security expertise of others. Let's look at the OpenSolaris-based ISC more closely.

In an ISC configuration, the global zone is treated as a system controller and exposed services are deployed (only) into their own non-global zones. From a networking perspective, however, the entire environment is viewed as a single entity (one IP address) where the global zone acts as a security monitoring and arbitration point for all of the services running in non-global zones.

As a foundation, this highly optimized environment is pre-configured with:

- non-executable stack
- encrypted swap space (with ephemeral key)
- encrypted scratch space (with ephemeral key)
- security-hardened operating system (global and non-global zones)

Further, the default OpenSolaris ISC uses:

Non-Global Zone. Exposed services are deployed in a non-global zone. There they can take advantage of the core security benefits enabled by OpenSolaris non-global zones, such as restricted access to the kernel, memory, devices, etc. For more information on non-global zone security capabilities, see the Sun BluePrint titled "Understanding the Security Capabilities of Solaris Zones Software". Using a fresh ISC, you can simply install your service into the provided non-global zone as you normally would.
Further, in the ISC model each non-global zone has its own encrypted scratch space (with its own ephemeral key), its own persistent storage location, as well as a pre-configured auditing and networking configuration that matches that of the global zone. You do not need to use the encrypted scratch space or persistent storage, but it is there if you want to take advantage of it. Obviously, additional resource controls (CPU, memory, etc.) can be added as necessary. These are not pre-configured due to the variability of service payloads.

Solaris Auditing. A default audit policy is implemented in the global zone and all non-global zones that tracks login and logout events, administrative events, as well as all commands (and command-line arguments) executed on the system. The audit configuration and audit trail are kept in the global zone where they cannot be accessed by any of the non-global zones. The audit trail is also pre-configured to be delivered via SYSLOG (by default this information is captured in /var/log/auditlog).

Private Virtual Network. A private virtual network is configured by default for all of the non-global zones. This network isolates each non-global zone to its own virtual NIC. By default, the global and non-global zones can freely initiate external communications, although this can be restricted if needed. A non-global zone is not permitted to accept connections by default. Non-global zone services can be exposed through the global zone IP address by adjusting the IP Filter and IP NAT policies (below).
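On OpenSolaris and later Solaris releases this kind of private virtual network is built with Crossbow; here is a minimal sketch (link and VNIC names are placeholders):

```shell
# Create an etherstub: an internal virtual switch with no physical NIC
dladm create-etherstub stub0

# Give each non-global zone its own VNIC attached to the private switch
dladm create-vnic -l stub0 zone1_vnic0
dladm create-vnic -l stub0 zone2_vnic0

# Review the virtual network layout
dladm show-vnic
```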

Solaris IP NAT. Each non-global zone is pre-configured to have a private address assigned to its virtual NIC. To allow the non-global zone to communicate with external systems and networks, an IP NAT policy is implemented. Outgoing connections are masked using the IP address of the global zone. Incoming connections are redirected based upon the port used to communicate. Beyond simple hardening of the non-global zone (a state which can be altered from within the non-global zone itself), this mechanism ensures that the global zone can control which services are exposed by the non-global zone and on which ports.

Solaris IP Filter. A default packet filtering policy is implemented in the global zone allowing only DHCP (for the exposed network interface) and SSH (to the global zone). Additional rules are available (but disabled) to allow access to non-global zones on an as-needed basis. Further, rules are implemented to deny external access to any non-global zone that has changed its pre-assigned (private) IP address. Packet filtering is pre-configured to log packets to SYSLOG (by default this information is captured in /var/log/ipflog).
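The NAT and packet-filtering policies described above might look like the following fragments; interface names, addresses, and ports are placeholders, so adapt them to your topology (see ipnat(4) and ipf(4)):

```shell
# /etc/ipf/ipnat.conf -- mask outgoing zone traffic behind the global
# zone's address, and redirect inbound port 80 to one zone's private IP
map net0 192.168.10.0/24 -> 0/32 portmap tcp/udp auto
rdr net0 0.0.0.0/0 port 80 -> 192.168.10.2 port 80

# /etc/ipf/ipf.conf -- allow only SSH to the global zone, log the rest
pass in quick on net0 proto tcp from any to any port = 22 keep state
block in log all
```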

So what does all of this really mean? Using the ISC model, you can deploy your services in a micro-virtualized environment that offers protection against kernel-based root kits (and some forms of user-land root kits), offers flexible file system immutability (based upon read-only file systems mounted into the non-global zone), can take advantage of process least privilege and resource controls, and is operated in a hardened environment where there is a packet filtering, NAT and auditing policy that is effectively out of the reach of the deployed service. This means that should a service be compromised in a non-global zone, it will not be able to impact the integrity or validity of the auditing, packet filtering, and NAT configuration or logs. While you may not be able to stop every form of attack, having reliable audit trails can significantly help to determine the extent of the breach and facilitate recovery.

The following diagram puts all of the pieces together:

Solaris 11 – Immutable Zones

Immutable zones are read-only zones, but still contain “whole root” file systems.  The immutable zone can be configured as a completely read-only zone or it can be partially read-only.  The immutable zone is controlled by a mandatory write access control (MWAC) kernel policy.  This MWAC policy enforces the zone’s root file system write privilege through a zonecfg file-mac-profile property. The policy is enabled at zone boot.

By default, a zone’s file-mac-profile property is not set in a non-global zone. The default policy for a non-global zone is to have a writable root file system. In a Solaris read-only zone, the file-mac-profile property is used to configure a read-only zone root. A read-only root restricts access to the run-time environment from inside the zone. Through the zonecfg utility, file-mac-profile can be set to one of several values.
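Setting the policy is a one-line zonecfg operation; the zone name below is a placeholder, and the documented profile values include none, strict, fixed-configuration, and flexible-configuration (verify against zonecfg(1M) for your release):

```shell
# Make the zone's root file system read-only under the MWAC policy
zonecfg -z webzone 'set file-mac-profile=fixed-configuration'

# The policy is enforced starting at the next zone boot
zoneadm -z webzone reboot
```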


Oracle Virtualization  

Oracle Software in Silicon Cloud

Speed and Simplify Your Business

Accelerate application performance and significantly improve security.

Oracle’s revolutionary Software in Silicon technology hardwires key software processes directly onto the processor. Because accelerated functions run on special engines on the processor's silicon, yet are kept separate from its cores, the technology speeds up application performance and implements data security in hardware, while retaining the overall functionality of the processor.

At Oracle OpenWorld 2014 John Fowler announced Oracle Software in Silicon Cloud, which provides early access to revolutionary Software in Silicon technology that dramatically improves reliability and security, and accelerates application performance.

Introducing Oracle Solaris Studio 12.4

Oracle Solaris Studio 12.4 - Technical Mini-Casts

Oracle Solaris OpenStack

Network Virtualization Using Crossbow Technology
