IBM Consulting

DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

Oracle Consulting

For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

Novell and Red Hat Consulting

For all questions about Novell SUSE Linux and SAP on SUSE Linux related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux Server, JBoss middleware, and BI on Red Hat.

Microsoft Consulting

For all Microsoft related consulting services.

Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

Welcome to the DBA Consulting Blog. The specialist for IBM, Oracle, Novell, Red Hat, Citrix, and Microsoft.

DBA Consulting is a consultancy services specialist that can help you with OS-related support, migration, and installation. We also handle BI implementations from IBM, such as Cognos 10, and Microsoft BI, such as SQL Server 2008 and 2012, plus the related CMS systems such as Microsoft SharePoint and Drupal, and Oracle OBIEE 11gR1. We focus on quality and service; customer wishes are central, as are quality of service and customer satisfaction. A focus on cost savings and no vendor lock-in is central to our business values.

Oracle-related videos


Tuesday, February 9, 2016

Red Hat Enterprise Linux 7.2 deployment of container-based applications

Red Hat Drives Networking, Linux Container Innovation in Latest Version of Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7.2 boosts network performance and delivers additional enhancements to support the development and deployment of container-based applications

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 7.2, the latest release of Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 7.2 continues Red Hat's goal of redefining the enterprise operating system by providing a trusted path towards the future of information technology without compromising the needs of the modern enterprise.

New features and capabilities focus on security, networking, and system administration, along with a continued emphasis on enterprise-ready tooling for the development and deployment of Linux container-based applications. In addition, Red Hat Enterprise Linux 7.2 includes compatibility with the new Red Hat Insights, an add-on operational analytics offering designed to increase IT efficiency and reduce downtime through the proactive identification of known risks and technical issues.

Red Hat Enterprise Linux 7 Atomic Host & Containers

Retaining Red Hat's commitment to security, including meeting the needs of financial, government, and military customers, Red Hat Enterprise Linux 7.2 continues to provide new security capabilities and features. True security requires both a secure foundation and secure configuration of systems. OpenSCAP is an implementation of the Security Content Automation Protocol (SCAP) that analyzes a system for security compliance. The new OpenSCAP Anaconda plug-in allows the use of SCAP-based security and configuration analysis during the installation process, ensuring a secure starting point for system deployment.

Container security: Do containers actually contain? Should you care? - 2015 Red Hat Summit

A critical part of secure distributed systems is being able to trust the address resolution performed by DNS servers. DNSSEC extends DNS to provide a secure chain of trust for address resolution. The Red Hat Identity Management system (IdM) now supports DNSSEC for DNS zones.

Beyond Containers: Agility and Security in Docker Delivery

Networking performance in Red Hat Enterprise Linux 7.2 has been significantly improved -- with throughput doubled in many network function virtualization (NFV) and software-defined networking (SDN) use cases. Other enhancements to the kernel networking subsystem include:

Tuning of the network kernel stack to dramatically improve packet processing time, enabling Red Hat Enterprise Linux 7.2 to perform at physical line rates in advanced (virtual and containerized) workloads.
Inclusion of the Data Plane Development Kit (DPDK), which makes it possible to rapidly develop low-latency and high-throughput custom applications capable of direct packet processing in user space for NFV and other use cases. Prior to this enhancement, systems were limited to running only one type of application (DPDK-enabled or traditional-network-enabled). Enhancements in Red Hat Enterprise Linux 7.2, specifically the introduction of a new bifurcated driver, now allow both types of applications to be hosted on the same system, thus consolidating physical hardware.
The addition of Data Center TCP (DCTCP), a feature for solving TCP congestion problems in data centers that works smoothly across Windows- and Red Hat Enterprise Linux-based hosts to maximize throughput and efficiency.

Linux Containers
Red Hat Enterprise Linux 7.2 features many improvements to the underlying container support infrastructure. Updates are included for the docker engine, Kubernetes, Cockpit and the Atomic command. In addition, Red Hat Enterprise Linux Atomic Host 7.2, the latest version of Red Hat's container workload-optimized host platform, is available with most Red Hat Enterprise Linux 7.2 subscriptions.

Super privileged containers - 2015 Red Hat Summit

Also available today is the beta of the Red Hat Container Development Kit 2, a collection of images, tools, and documentation to help application developers simplify the creation of container-based applications that are certified for deployment on Red Hat container hosts, including Red Hat Enterprise Linux 7.2, Red Hat Enterprise Linux Atomic Host 7.2 and OpenShift Enterprise 3.

System Administration
As managing the modern datacenter at scale becomes increasingly complex, Red Hat Enterprise Linux 7.2 includes new and improved tools to deliver a more streamlined system administration experience. Highlighting these updates is the inclusion of Relax-and-Recover, a system archiving tool that enables administrators to create local backups in ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations.

Red Hat Insights
Red Hat Enterprise Linux 7.2 is compatible with Red Hat Insights, an operational analytics service designed for the proactive management of Red Hat Enterprise Linux environments. Available for up to 10 Red Hat Enterprise Linux 7 systems at no additional cost, the offering is designed to help customers detect technical issues before they impact business operations by analyzing infrastructure assets and identifying key risks and vulnerabilities through continuous monitoring and analysis. Red Hat Insights provides resolution steps to help IT managers and administrators respond to these issues and potentially prevent future problems.

Red Hat Enterprise Linux Server for ARM 7.2 Development Preview
Red Hat is also making available Red Hat Enterprise Linux Server for ARM 7.2 Development Preview, which was first made available to partners and their customers in June 2015. This Development Preview enables new partner hardware and additional features for the ARM architecture.

Process-driven application development using Red Hat JBoss BPM Suite - 2015 Red Hat Summit

Supporting Quote
Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat
“With the launch of Red Hat Enterprise Linux 7 in June 2014, Red Hat redefined the enterprise open source operating system. Red Hat Enterprise Linux 7.2 continues this effort, delivering new capabilities for containerized application deployments and significant networking enhancements while retaining our focus on delivering a stable, reliable and more secure platform for the most critical of business applications.”

More Information:

Tuesday, January 19, 2016

SQL Server 2016 for real time operational analytics

SQL Server, an industry leader, now packs an even bigger punch

With the upcoming release of SQL Server 2016, our best SQL Server release in history, and the recent availability of the Cortana Analytics Suite, Microsoft is offering unmatched innovation across on-premises and the cloud to help you turn data into intelligent action.

What's New in SQL Server 2016

In the recent Gartner Magic Quadrant for Operational Database Management Systems Microsoft is positioned as a leader, highest in execution and furthest in vision. SQL Server 2016 builds on this leadership, and will come packed with powerful built-in features. As the least vulnerable database for six years in a row, SQL Server 2016 offers security that no other database can match. It also has the data warehouse with the highest price-performance, and offers end-to-end mobile BI solutions on any device at a fraction of the cost of other vendors. It provides tools to go beyond BI with in-database Advanced Analytics, integrating the R language and scalable analytics functions from our recent acquisition of Revolution Analytics.

Microsoft’s cloud-first product development model means that new features get hardened at scale in the cloud, delivering a proven on-premises experience. In addition, we offer a consistent experience across on-premises and cloud with common development and management tools and common T-SQL.

Security with Always Encrypted

The Always Encrypted feature in SQL Server 2016 CTP 3.0, an industry first, is based on technology from Microsoft Research and helps protect data at rest and in motion. Using Always Encrypted, SQL Server can perform operations on encrypted data and, best of all, the encryption key resides with the application in the customer’s trusted environment. It offers unparalleled security.
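As a sketch of how a column is declared under Always Encrypted (the table, column, and key names here are hypothetical, not from the announcement):

```sql
-- Hypothetical schema: CEK_Auto1 must already exist as a column encryption key
-- whose column master key lives outside SQL Server (e.g., in the app's cert store).
CREATE TABLE dbo.Patients (
    PatientId INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,   -- deterministic permits equality lookups
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ),
    LastName NVARCHAR(50) NULL
);
```

Encryption and decryption happen in the client driver (with Column Encryption Setting=Enabled in the connection string), so the server only ever sees ciphertext.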

One example of a customer that’s already benefitting from this new feature is Financial Fabric, an ISV that offers a service called DataHub to hedge funds. The service enables a hedge fund to collect data ranging from transactions to accounting and portfolio positions from multiple parties such as prime brokers and fund administrators, store it all in one central location, and make it available via reports and dashboards.

“Data protection is fundamental to the financial services industry and our stakeholders, but it can cause challenges with data driven business models,” said Subhra Bose, CEO, Financial Fabric. “Always Encrypted enables the storage and processing of sensitive data within and outside of business boundaries, without compromising data privacy in both on-premises and cloud databases. At Financial Fabric we are providing DataHub services with “Privacy by Design” for our client’s data, thanks to Always Encrypted in SQL Server 2016. We see this as a huge competitive advantage because this technology enables data science in Financial Services and gives us the tools to ensure we are compliant with jurisdictional regulations.”

Always Encrypted updates in CTP3 include the following; please see the SSMS team blog for additional detail.

  • Encrypting columns and key management made easy with new UI in SSMS
  • Encrypt Columns Wizard
  • Key management/rotation workflows
  • Azure Key Vault support
  • Integration with hardware security modules (.NET 4.6.1) and Azure Key Vault

Mission Critical Performance

With an expanded surface area, you can use the high-performance In-Memory OLTP technology in SQL Server with a significantly greater number of applications. We are excited to introduce the unique capability of combining in-memory analytics (columnstore) with In-Memory OLTP and the traditional relational store in the same database to achieve real-time operational analytics. We have also made significant performance and scale improvements across all components in the SQL Server core engine.
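A minimal sketch of that combination, assuming a database already configured with a memory-optimized filegroup (table and index names are illustrative):

```sql
-- Memory-optimized table for OLTP writes, carrying a clustered columnstore
-- index so analytic scans run against the same hot data in real time.
CREATE TABLE dbo.Trades (
    TradeId  INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Symbol   NVARCHAR(10) NOT NULL,
    Quantity INT NOT NULL,
    Price    MONEY NOT NULL,
    INDEX ccsi CLUSTERED COLUMNSTORE
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Because both structures live on the same table, reporting queries read current transactional data without an ETL hop to a separate warehouse.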

Mission Critical features in SQL Server 2016 

Insights on All Your Data

You’ll find significant improvements in both SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) that help deliver business insights faster and improve productivity for BI developers and analysts. The enhanced DirectQuery enables high-performing access to external data sources like SQL Server Columnstore. This capability enhances the use of SSAS as a semantic model over your data for consistency across reporting and analysis without storing the data in Analysis Services.

SQL Server 2016

SQL Server Reporting Services 2016 offers a modernized experience for paginated reports and updated tools as well as new capabilities to more easily design stunning documents. To get more from your investments in SSRS and to provide easy access to on-premises reports to everyone in your organization, you can now pin paginated reports items to the Power BI dashboard. In coming months, we will add new Mobile BI capabilities to Reporting Services, allowing you to create responsive, interactive BI reports optimized for mobile devices.

PolyBase, available today with the Analytics Platform System, is now built into SQL Server, expanding the power to extract value from unstructured and structured data using your existing T-SQL skills. PolyBase CTP 3.0 improvements include better performance and the ability to scale out PolyBase nodes to use other SQL Server instances. See also: PolyBase in APS - Yet another SQL over Hadoop solution.

PolyBase in CTP3 includes the following new capabilities:

  • Improved PolyBase query performance with scale-out computation on external data (PolyBase scale-out groups)
  • Improved PolyBase query performance with faster data movement from HDFS to SQL Server and between PolyBase Engine and SQL Server
  • Support for exporting data to external data source via INSERT INTO EXTERNAL TABLE SELECT FROM TABLE
  • Support for push-down computation to Hadoop for string operations (compare, LIKE)
  • Support for ALTER EXTERNAL DATA SOURCE statement
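The capabilities above can be sketched in T-SQL roughly as follows (the Hadoop address, paths, and table names are hypothetical, and PolyBase must already be installed and configured):

```sql
-- Point PolyBase at an external Hadoop cluster.
CREATE EXTERNAL DATA SOURCE HadoopCluster WITH (
    TYPE = HADOOP,
    LOCATION = 'hdfs://10.0.0.1:8020'
);

CREATE EXTERNAL FILE FORMAT PipeDelimited WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- Expose HDFS files as a table queryable with ordinary T-SQL.
CREATE EXTERNAL TABLE dbo.WebLogs (
    LogDate DATETIME2,
    Url     NVARCHAR(400)
) WITH (LOCATION = '/logs/', DATA_SOURCE = HadoopCluster, FILE_FORMAT = PipeDelimited);

-- String predicates such as LIKE can be pushed down to Hadoop for computation.
SELECT Url, COUNT(*) AS Hits
FROM dbo.WebLogs
WHERE Url LIKE '%/checkout%'
GROUP BY Url;

-- CTP3 export support: write local rows out to the external data source.
INSERT INTO dbo.WebLogs SELECT LogDate, Url FROM dbo.ArchivedLogs;
```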

SQL Server 2016 CTP Technical Deep Dive

PolyBase with Freedom of Choice


One key differentiator of PolyBase compared with existing competitive approaches is openness. We do not force users to commit to a single solution, as some Hadoop providers do. With PolyBase, you have the freedom to use an HDInsight region as part of your APS appliance, to query an external Hadoop cluster connected to APS, or to leverage Azure services from your APS appliance (such as HDInsight on Azure).

To achieve this openness, PolyBase offers these three building blocks.

Advanced Analytics

Advanced Analytics (RRE integration)

With this release, we are very excited to announce the public availability of SQL Server R Services in SQL Server 2016, an Advanced Analytics capability that supports enterprise-scale data science, significantly reducing the friction of adopting machine learning in your business. SQL Server R Services is all about helping customers embrace the highly popular open source R language in their business. R is the most popular programming language for Advanced Analytics.

SQL Server 2016 Business Intelligence

You can use it to analyze data, uncover patterns and trends and build predictive models. It offers an incredibly rich set of packages and a vibrant and fast-growing developer community. At the same time, embracing R in an enterprise setting presents certain challenges, especially as the volume of data rises and with the switch from modeling to production environments. Microsoft SQL Server R Services with in-database analytics helps customers embrace this technology by supporting several scenarios. Two of the key scenarios are:

One: Data Exploration and Predictive Modeling with R over SQL Server data

The data scientist can choose to analyze the data in-database or to pull data from SQL Server and analyze it on the client machine (or a separate server). Analyzing data in-database has the advantage of performance and speed, removing the need to move data around and leveraging the strong compute resources of the SQL Server. The RevoScaleR package and its APIs contain a set of common functions and algorithms designed for performance and scale, overcoming R's limitations of single-threaded execution and memory-bound datasets.

Two: Operationalizing your R code using T-SQL

For SQL Server 2016 CTP3, Microsoft supports ad-hoc execution of R scripts via a new system stored procedure. This stored procedure will support pushing data from a single SELECT statement and multiple input parameters to the R side and return a single data frame as output from the R side.
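A hedged sketch of that stored procedure, sp_execute_external_script (this assumes R Services is installed and the 'external scripts enabled' configuration option is on; the table name is hypothetical):

```sql
-- One SELECT goes in as InputDataSet; one data frame comes back as OutputDataSet.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_price = mean(InputDataSet$price))',
    @input_data_1 = N'SELECT price FROM dbo.Sales'
WITH RESULT SETS ((avg_price FLOAT));
```

The WITH RESULT SETS clause declares the shape of the single data frame returned from the R side so it can be consumed like any other rowset.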

SQL Server 2016 R Services

PASS 2015 Keynote: Accelerating your Business with a Modern Data Strategy

Transactional replication from SQL Server to Azure SQL DB is new in CTP3.

Now you can set up Azure SQL DB as a subscriber of transactional replication, allowing you to migrate data from a SQL Server instance on-premises or in IaaS to an Azure SQL database without downtime. Replication is one-way in this release, and works with SQL Server 2016, SQL Server 2014, and SQL Server 2012. This is the same transactional replication technology you have been using for many years on-premises.

When you configure a subscriber (from SSMS or by script), instead of entering an instance name, you enter the name of your Azure SQL DB subscription along with the associated login and password. A snapshot (as in a Replication Snapshot) will be used to initialize the subscription, and subsequent data changes will be replicated to your Azure SQL DB in the same transactionally consistent way you are used to. A transactional publication can deliver changes to subscribers in Azure SQL DB and/or on-premises/Azure VMs. There is no replication service hosted in Azure for this; everything is driven from on-premises distribution agents. To use this feature, you just need to set it up the way you do to replicate on-premises: install the replication components, configure the Distributor and the Publisher, and create the Publication, the Articles, and the Subscriptions. In this case, one of the subscriptions will be your Azure SQL DB.

In-Memory improvements in this release:

In-Memory OLTP

  • FOREIGN KEY constraints
  • CHECK constraints
  • UNIQUE constraints
  • DML triggers (AFTER only)
  • Inline table-valued functions
  • Security built-ins and increased math function support

Real-time Operational Analytics

  • Support for in-memory tables
  • Existing nonclustered columnstore indexes (NCCI) are updateable without requiring an index rebuild
  • Parallel index build of nonclustered columnstore indexes (NCCI)
  • Performance improvements (INSERT, string pushdown, bypassing the delete buffer when processing deleted rows)

In-Memory Analytics

  • Databases with a nonclustered columnstore index can be upgraded and remain updateable without requiring a rebuild of the index
  • General performance improvements for analytics queries with columnstore indexes, especially those involving aggregates and string predicates
  • Improved supportability with DMVs and XEvents

SQL Unplugged Episode 12

New Hybrid Scenario using Stretch Database

Stretch Database enables stretching a single database between on-premises and Azure. This will enable our customers to take advantage of the cloud economics of lower cost compute and storage without being forced into an all-or-nothing database move. Stretch Database is transparent to your application, and the trickle of data to Azure can be paused and restarted without downtime. You can use Always Encrypted with Stretch Database to extend data in a more secure manner for greater peace of mind.
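In T-SQL terms, enabling Stretch looks roughly like the sketch below (the server and table names are hypothetical, and the exact syntax shown is the RTM-era form, which may differ slightly from early CTP builds):

```sql
-- Link the database to its Azure target...
ALTER DATABASE CURRENT
    SET REMOTE_DATA_ARCHIVE = ON (SERVER = N'mystretchserver.database.windows.net');

-- ...then start trickling a table's rows to Azure; pausing the migration
-- is just another MIGRATION_STATE change, with no application downtime.
ALTER TABLE dbo.OrderHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```

Queries against dbo.OrderHistory continue to work unchanged; the engine transparently spans local and remote rows.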

StretchDB - Stretch tables to Azure SQL DB with SQL Server 2016

Stretch Database updates in this release:

Engine Update

  • Create/Drop index support
  • Always Encrypted support
  • Improved credential management for remote Stretch database stores
  • Improved performance for joins between stretched tables
  • New external data source integration

Expert summit SQL Server 2016

SSMS Wizard updates

  • Database- and table-level fly-out menu options were updated to reflect new Stretch functionality
  • Stretch monitor functionality added to let users track current migration status, including the ability to pause the migration at the table level
  • XEvent support for diagnostic sessions in the monitor
  • Updated and simplified Stretch wizard flow to reduce the number of steps required to enable or reconfigure Stretch
  • Help icon links updated to point to new MSDN content focusing specifically on wizard topics
  • Added functionality that allows users to pause or disable migration at the table level
  • Added the ability to Stretch individual tables
  • Added database-scoped credential support for AlwaysOn
  • Ability to enable Stretch on the server using the wizard
  • Updated table-level validation error/warning messaging
  • The ability to Stretch to a new or an existing Azure SQL server
  • Updated SSMS Object Explorer Stretch Databases icons
  • SMO model for Stretch status queries and updates

SQL Server 2016 Reporting Services

Built-in JSON support improvements in this release include:

OPENJSON - Table-valued function that parses JSON text and returns a rowset view of the JSON. By default, OPENJSON returns the properties of the object or the elements of the array being parsed. An advanced form of OPENJSON with an explicit schema lets the user define the schema of the resulting rowset, along with mapping rules that specify where in the parsed JSON text the values returned in the rowset can be found. It enables developers to easily parse JSON text and import it into relational tables.

JSON_VALUE - Scalar function that returns a value from JSON text at the specified path. It can be used in any query, view, or computed column. It can also be used to define indexes on properties of JSON text stored in table columns.

ISJSON - Function that validates whether JSON text is properly formatted. It can be used to define check constraints on columns that contain JSON text. It is not supported in check constraints defined on in-memory tables.

JSON_QUERY - Scalar function that returns a fragment of the JSON text. Unlike JSON_VALUE, which returns scalar values, JSON_QUERY returns complex objects (i.e., JSON arrays and objects).
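The four functions can be seen together in a small, self-contained example:

```sql
DECLARE @json NVARCHAR(MAX) =
    N'{"name":"Anna","orders":[{"id":1,"total":9.5},{"id":2,"total":20.0}]}';

SELECT ISJSON(@json);                    -- 1: the text is well-formed JSON
SELECT JSON_VALUE(@json, '$.name');      -- scalar value: Anna
SELECT JSON_QUERY(@json, '$.orders');    -- fragment: the whole orders array

-- OPENJSON with an explicit schema maps JSON paths onto rowset columns,
-- ready for INSERT ... SELECT into a relational table.
SELECT *
FROM OPENJSON(@json, '$.orders')
     WITH (id INT '$.id', total FLOAT '$.total');
```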

Azure Data Lake Store and Analytics Service available in preview today

Last month we announced a new and expanded Azure Data Lake that makes big data processing and analytics simpler and more accessible. Azure Data Lake includes the Azure Data Lake Store, a single repository where you can easily capture data of any size, type and speed, Azure Data Lake Analytics, a new service built on Apache YARN that dynamically scales so you can focus on your business goals, not on distributed infrastructure, and Azure HDInsight, our fully managed Apache Hadoop cluster service. Azure Data Lake is an important part of the Cortana Analytics Suite and a key component of Microsoft’s big data and advanced analytics portfolio.

The Azure Data Lake service includes U-SQL, a language that unifies the benefits of SQL with the expressive power of user code. U-SQL’s scalable distributed query capability enables you to efficiently analyze data in the store and across SQL Servers in Azure, Azure SQL Database, and Azure SQL Data Warehouse. Customers can use Azure Data Lake Tools for Visual Studio, which simplify authoring, debugging, and optimization and provide an integrated development environment for analytics.

ASOS, the UK's largest independent online fashion and beauty retailer, has been using Azure Data Lake to improve the customer experience on its website. “At ASOS we are committed to putting the customer first. As a global fashion destination for 20-somethings we need to stay abreast of customer behaviour on our site, enabling us to optimize their shopping experience across all platforms and wherever they are in the world. Microsoft Azure Data Lake Analytics assists in processing large amounts of unstructured clickstream data to track and optimize their experience. We have been able to get productive immediately using U-SQL because it was easy to use, extend and view and monitor the jobs all within Visual Studio,” said Rob Henwood, Enterprise Architect at ASOS.

Azure SQL Database In-Memory OLTP and Operational Analytics

Today, we are releasing our next generation in-memory technologies to Azure with the public preview of In-Memory OLTP and real-time Operational Analytics in Azure SQL Database. In-Memory OLTP in the Azure SQL Database preview includes the expanded surface area available in SQL Server 2016, enabling more applications to benefit from higher performance. By bringing this technology to the cloud, customers will be able to take advantage of in-memory OLTP and Operational Analytics in a fully managed database-as-a-service with 99.99% SLA.

Foundation Session: Microsoft Business Intelligence

Temporal support improvements in this release include:

  • Support for using temporal system-versioning with In-Memory OLTP
  • Combining disk-based tables for cost-effective storage of history data with memory-optimized tables for storing the latest (actual) data
  • Super-fast DML and current data querying supported from natively compiled code
  • Temporal querying supported from interop mode
  • Internal in-memory table created to minimally impact performance of DML operations
  • Background process that flushes the data from internal in-memory to permanent disk-based history table
  • Direct ALTER for system-versioned temporal tables enables modifying the table schema without introducing a maintenance window
  • Support for adding/altering/dropping columns while SYSTEM_VERSIONING is ON
  • Support for ADD/DROP HIDDEN for period columns while SYSTEM_VERSIONING is ON
  • Support for temporal querying clause FOR SYSTEM_TIME ALL that enables users to query entire data history easily without specifying period boundaries
  • Optimized CONTAINED IN implementation with minimized locking on current table. If your main case is analysis on historical data only, use CONTAINED IN.
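As a sketch, a system-versioned table with hidden period columns and a FOR SYSTEM_TIME ALL query might look like this (names are illustrative):

```sql
CREATE TABLE dbo.Employee (
    EmployeeId INT PRIMARY KEY,
    Salary     MONEY NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- ALL returns current plus historical rows without specifying period boundaries.
SELECT EmployeeId, Salary, ValidFrom, ValidTo
FROM dbo.Employee FOR SYSTEM_TIME ALL;
```

HIDDEN keeps the period columns out of SELECT * results, so existing applications are unaffected by the added versioning.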

Combined with the releases earlier this month of Always Encrypted, Transparent Data Encryption, support for Azure Active Directory, Row-Level security, Dynamic Data Masking and Threat Detection, Azure SQL Database provides unparalleled data security in the cloud with fast performance. As part of our intelligent capabilities, SQL Database also has built-in advisors to help customers get started quickly with in-memory OLTP to optimize performance.

It’s never been easier to capture, transform, mash-up, analyze and visualize any data, of any size, at any scale, in its native format using familiar tools, languages and frameworks in a trusted environment, both on-premises and in the cloud.

In Summary:

SQL Server 2016 has many new features. Some are enhancements to existing features, while others are entirely new. In this article I explored only some of the new functionality in SQL Server 2016. When moving to SQL Server 2016, you should exploit the new features that provide value to your SQL Server environment.


- Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases and real-time operational analytics

- New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes

- Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application changes

- Built-in advanced analytics provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database

- Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android

- Simplify management of relational and non-relational data by querying both with T-SQL using PolyBase

- Faster hybrid backups, high availability and disaster recovery scenarios to back up and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure

Where You Can Get Additional Information

Below are some additional resources that you can use to find out more information about SQL Server 2016.

SQL Server 2016 Early Access Web Site:

SQL Server 2016 data sheet:

SQL Server 2016 release notes:

What’s new in SQL Server, September Update:

Thursday, December 24, 2015

Hybrid Infrastructure Automation with Azure Resource Manager Templates

Back in May of this year Microsoft announced Azure Resource Manager Templates.

On today’s Microsoft Mechanics show, Corey Sanders, who heads up the team for Microsoft Azure compute, addresses the tech advantages of Azure Resource Manager Templates for IT. If you've ever had to set up a test or production environment for something like SharePoint, you know there are several pieces to set up, like Active Directory, SQL for your back-end data, and then your SharePoint servers.

Early look at containers in Windows Server, Hyper-V and Azure – with Mark Russinovich

While it's possible to automate this with advanced scripting or level 400 task sequencer skills, imagine just choosing a template or manifest file and clicking go to spin up a dozen or so VMs all wired together and talking to each other. That is effectively what Azure Resource Manager templates do and what Corey demonstrates on the show while taking a step back to illustrate how they actually work to automate resources in the Cloud, on-premises or both.

Hybrid Infrastructure Automation with Azure Resource Manager Templates

Unified model for infrastructure automation in the Cloud and on-premises

As we gear up to deliver you the upcoming Azure Stack which brings the operational and resource management model of Microsoft Azure right inside your own private data center, Azure Resource Manager Templates serve as a unified model for automating how you build the infrastructure for running your apps and services whether in the Cloud or on-premises.

Simply put, they give you a tried and tested reference architecture of the infrastructure components required to run your apps.

Business Intelligence (BI) solutions need to move at the speed of business. Unfortunately, roadblocks related to the availability of resources and deployment often present an issue. What if you could accelerate the deployment of an entire BI infrastructure to just a couple of hours and start loading data into it by the end of the day? In this session, we'll demonstrate how to leverage Microsoft tools and the Azure cloud environment to build out a BI solution and begin providing analytics to your team with tools such as Power BI. By the end of the session, you'll gain an understanding of the capabilities of Azure and how you can start building an end-to-end BI proof-of-concept today.

The templates themselves offer a pre-configured and declarative representation of your solution, grouping together the resources that you need. This, in turn, allows you to manage required resources as a logical unit and deploy your apps in a repeatable manner in your testing, staging, and production environments, whether on-premises or in the Cloud. Further, as Corey demos on the show, you can also easily replicate the same security constructs today via Azure Active Directory synchronization to govern access to resources, and even tag resources to track usage costs.

Azure's next-generation compute platform

Beyond PowerShell

While we still bring you the cmdlets for PowerShell, this approach now gives you an alternative beyond the scripting and task sequencing that you might be used to. Deploying a template typically requires just three lines of PowerShell, saving you time because you don't have to work out the logic and cmdlets to automate and deploy the same infrastructure yourself.

Infrastructure Automation Quickstart Templates

With our open model, we have an increasing number of partners building templates, including DataStax, Cloudera, Mesosphere and many more. There are now hundreds of templates supplied by Microsoft or curated from the IT community covering infrastructure automation scenarios.

An overview of Microsoft Azure's new networking capabilities

More Information:

Exam preparation: 70-534

Monday, December 14, 2015

IBM LinuxONE Systems

IBM Unveils Linux-Only Mainframe; Builds on Linux Success

IBM announced a significant expansion of the mainframe’s strategy of embracing open source-based technologies and open-source communities to provide clients with the most secure, highest performance capabilities for an era where mainframes increasingly anchor corporate analytics and hybrid clouds.

IBM LinuxONE™: Linux Without Limits

Linux may not have limits, but what you run it on does. That's why you need LinuxONE™. LinuxONE is a powerful suite of hardware, software, services and solutions from IBM that unleashes the full potential of Linux for business.

For more information, visit IBM on the web at

For real-time updates on LinuxONE, follow the hashtag #LinuxONE on Twitter.

Unveiling the most secure Linux servers in the industry – The company is introducing two Linux mainframe servers – called LinuxONE – that are the industry’s most powerful[2] and secure enterprise servers designed for the new application economy and hybrid cloud era.

Deepening open source software enablement – IBM will enable open source and industry tools and software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker on z Systems to provide clients with choice and flexibility for hybrid cloud deployments. SUSE, which provides a Linux distribution for the mainframe, will now support KVM, giving clients a new hypervisor option. Canonical and IBM also announced plans to create an Ubuntu distribution for LinuxONE and z Systems. The collaboration with Canonical brings Ubuntu's scale-out and cloud expertise to the IBM z Systems platform, further expanding its reach and support.

Contributing the single largest amount of mainframe code to open source community – The code, designed to fuel digital transformation, includes technology from IBM’s mainframe to help enterprises identify issues and help prevent failures before they happen, help improve performance across platforms and enable better integration with the broader network and cloud.

"Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux,” said Tom Rosamilia, senior vice president, IBM Systems. “We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale."


"Open source is about choice for customers, in software, hardware infrastructure, support and services. SUSE continues to innovate with IBM on new solutions and initiatives including LinuxONE, KVM for IBM z Systems and the Open Mainframe Project to expand enterprise horizons and multiply opportunities. Extending and adapting the open source ecosystem can only help clients succeed." – Michael Miller, SUSE vice president of global alliances and marketing.

LinuxONE – Industry’s Most Advanced Enterprise-Grade Linux Platform

IBM is launching LinuxONE, a new portfolio of hardware, software and services solutions, providing two distinct Linux systems for large enterprises and mid-size businesses. LinuxONE Emperor, based on the IBM z13, is the world’s most advanced Linux system with the fastest processor in the industry.

The system is capable of analyzing transactions in “real time” and can be used to help prevent fraud as it is occurring. The system can scale up to 8,000 virtual machines or hundreds of thousands of containers – currently the most of any single Linux system. LinuxONE Rockhopper, an entry into the portfolio, is designed for clients and emerging markets seeking the speed, security and availability of the mainframe but in a smaller package.

IBM's LinuxONE systems, available starting today, are the most secure Linux systems, with advanced encryption features built into both the hardware and software to help keep customer data and transactions confidential and secure. Protected-key cryptography, available on LinuxONE, provides significantly enhanced security over clear-key technology and offers up to 28x better performance than standard secure-key technology.

IBM Enables New Open Software and Industry Tools for Mainframe

Significantly broadening options for enterprises, IBM has enabled key open source and industry software for LinuxONE and IBM z Systems, including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker. These technologies work on the mainframe just as seamlessly as they do on other platforms, with compelling performance advantages and no special skills required.

IBM helped pioneer virtualization on the mainframe and is now offering more choices for virtualization by enabling the new LinuxONE systems to be provisioned as a virtual machine through the open standards-based KVM hypervisor, just like any Linux server. SUSE, a leading distributor of Linux, will provide initial support for KVM for the mainframe.

Canonical and IBM also announced an initiative to encourage the growth of Ubuntu Linux on z Systems. Canonical plans to distribute Ubuntu for LinuxONE and z Systems, adding a third Linux distribution; SUSE and Red Hat already provide distributions. Canonical also plans to support KVM for the mainframe.

Introducing IBM LinuxONE™: Linux without Limits Announcement

It’s time to unleash the full potential of Linux. With IBM LinuxONE, you’ll have one portfolio of hardware, software and service solutions for business critical Linux applications.

For more information on LinuxONE, visit IBM on the web at

For real-time updates on LinuxONE, follow the hashtag #LinuxONE on Twitter.

Join us for the complimentary webinar, as we celebrate the launch of this groundbreaking new system, bringing you Linux without limits.

Learn about:
- Moving forward in the digital economy
- Consolidating virtual servers and containers on one system
- Eliminating risk with Linux
- Linux your way, with ultimate choice and flexibility
- Expansion of the open community

Featured speakers:
- Ross Mauri, General Manager, IBM z Systems
- Deon Newman, Vice President, Marketing, IBM z Systems
- Mark Shuttleworth, Founder, Canonical
- Kevin Barber, CIO, SinfoniaRx

IBM Joins New Linux Foundation Project as Demand Grows for Mainframe in Open Source Community

Enabling greater access for the developer community, this is the largest single contribution of mainframe code from IBM to the open source community. A key part of the contribution is IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. The code can be used by developers to build similar sense-and-respond resiliency capabilities on other systems.
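The sense-and-respond idea (watching for unusual system behavior before it becomes a failure) can be sketched in a few lines. The metric, threshold, and z-score technique below are a generic illustration chosen for the example, not IBM's actual predictive-analytics code.

```python
# Illustrative sketch of "sense and respond" monitoring: flag metric
# samples that deviate sharply from the recent baseline. A generic
# z-score check, not IBM's actual predictive-analytics algorithm.
from statistics import mean, stdev

def unusual(history, sample, threshold=3.0):
    """Return True if `sample` lies more than `threshold` standard
    deviations from the mean of `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > threshold

# Example: steady I/O latencies (ms), then a spike worth alerting on.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
print(unusual(baseline, 10.4))  # normal fluctuation -> False
print(unusual(baseline, 25.0))  # sharp deviation    -> True
```

Real predictive analytics would track many metrics and trends over time, but the core loop is the same: model the baseline, score each new observation, and alert before a deviation becomes an outage.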

The contributions will help fuel the new “Open Mainframe Project,” formed by the Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux and collaborative development.  In collaboration with the Linux Foundation, IBM will support the Open Mainframe Project, a collaboration of nearly a dozen organizations across academia, government and corporate sectors to advance development and adoption of Linux on the mainframe.

“Linux on the mainframe has reached a critical mass such that vendors, users and academia need a neutral forum where they can work together to advance Linux tools and technologies and increase enterprise innovation,” said Jim Zemlin, the Linux Foundation executive director.  “The Open Mainframe Project is a direct response to the demands of Linux users and the supporting open source ecosystem to address unique features and requirements built into mainframes for security, availability and performance.”

IBM Provides Access to LinuxONE Developer Cloud at No Cost

With today’s announcement, IBM is also providing unprecedented access to the mainframe to foster innovations by developers in the open source community. IBM is creating the LinuxONE Developer Cloud to provide open access to the development community. The cloud acts as a virtual R&D engine for the creation, testing and piloting of emerging applications including testing linkages to engagement systems, mobile applications and hybrid cloud applications.

Marist College and Syracuse University's School of Information Studies plan to host clouds that give developers access to a virtual IBM LinuxONE at no cost. As part of the program, IBM will also create a special cloud for independent software vendors (ISVs), hosted at IBM sites in Dallas, Beijing and Boeblingen, Germany, that provides application vendors with access to, and a free trial of, LinuxONE resources to port, test and benchmark new applications for the LinuxONE and z Systems platform.

New financing models for the LinuxONE portfolio provide flexibility in pricing and resources, allowing enterprises to pay for what they use and scale up quickly as their business grows. The new LinuxONE systems are available today.

For more information on the IBM LinuxONE Systems Portfolio, visit: and follow the conversation at #IBMz, #LinuxONE and #LinuxCon, or contact DBA Consulting.

More Information:

Monday, November 9, 2015

Oracle SPARC M7 Innovation - Leverages "Software in Silicon"

SPARC technology plans emphasize innovation leadership, Oracle's commitment to advanced silicon and Engineered Systems

M7 - Breakthrough Processor and System Design with SPARC M7

Innovation is the lifeblood of the technology industry, and technology innovation is a critical factor in business, government, and culture. Oracle is keenly aware of this innovation imperative, not only in theory but in practice, investing considerable time, effort, and resources in driving information technology and its effective implementation forward, first in software, then in storage, networking, and hardware.

A significant result of this effort came to light at the Hot Chips conference in Cupertino, Calif., where Oracle disclosed technology details of its upcoming SPARC processor, known as the SPARC M7. The venue is appropriate: this year marks the 26th anniversary of the semiconductor industry's showcase for innovative technology, sponsored by the IEEE's technical committee on microprocessors and microcomputers in cooperation with the ACM's SIGARCH (Special Interest Group on Computer Architecture). The disclosure is a milestone for Oracle: with the M7, Oracle will have introduced six new SPARC processors in the four years since it acquired Sun Microsystems. That aggressive timeline reinforces Oracle's commitment to the SPARC architecture and to maintaining its relevance in the technology environment.

Larry Ellison Introduces Breakthrough New SPARC M7 Systems

Software in Silicon

The innovations in the new SPARC processor are of a piece with the design philosophy at the heart of Oracle Engineered Systems. It's an approach to enterprise IT architecture that fits together servers, software, and storage into a single, finely-tuned integrated system that runs applications at their optimum performance capability.

That optimization strategy is reflected in the new processor. The M7's most significant innovations revolve around what is known as "software in silicon," a design approach that places software functions directly into the processor. Because specific functions are performed in hardware, a software application runs much faster. And because the cores of the processor are freed up to perform other work, overall operations are sped up as well.

Zoran Radovic: Software. Hardware. Complete. M7: Next Generation Oracle Processor

The SPARC M7 design features 32 CPU cores for faster performance.

Oracle’s new SPARC M7 systems feature:

Security in Silicon, with two key new enhancements in systems design.

Silicon Secured Memory – For the first time, Silicon Secured Memory adds real-time checking of access to data in memory to help protect against malicious intrusion and flawed program code in production for greater security and reliability. Silicon Secured Memory protection is utilized by Oracle Database 12c by default and is simple and easy to turn on for existing applications. Oracle is also making application programming interfaces available for advanced customization.

Hardware-Assisted Encryption – Breakthrough performance from hardware-assisted encryption built into all 32 cores enables encryption without performance penalty. This gives customers a secure runtime and secure data for all applications, even with wide use of keys across AES, DES, SHA, and more. Existing applications that use encryption are automatically accelerated by this new capability, including Oracle, third-party, and custom applications.
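The reason existing applications accelerate automatically is that they call standard crypto APIs and the platform decides, beneath the library, whether hardware does the work. As a generic illustration (not Oracle-specific code), a Python program hashing data with the standard library is identical on every platform; only the speed differs where the CPU accelerates SHA in hardware:

```python
# Generic illustration: applications call a standard crypto API and
# the platform transparently uses hardware acceleration underneath.
import hashlib

def digest(payload: bytes) -> str:
    """SHA-256 of payload; the same code on any platform, faster
    where the processor accelerates SHA in hardware."""
    return hashlib.sha256(payload).hexdigest()

print(digest(b"transaction-record-001"))  # 64 hex characters
```

The same transparency applies to AES and the other algorithms listed above: no application change is needed to benefit from the in-core acceleration.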

SQL in Silicon: Adds co-processors to all 32 cores of the SPARC M7 that offload and accelerate important data functions, dramatically improving efficiency and performance of database applications.

Critical functions accelerated by these new co-processors include memory decompression, memory scan, range scan, filtering, and join assist. Offloading these functions to co-processors greatly increases the efficiency of each CPU core, lowers memory utilization, and enables up to 10x better database query performance. The Oracle Database 12c In-Memory option fully supports this new capability in the current release, and the functionality is slated to become available to advanced developers building the next generation of big data analytics platforms.
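Conceptually, the offloaded operations are the tight scan-and-filter loops at the heart of analytic queries. A plain-Python version of such a range scan (work the M7 would hand to a co-processor rather than burn a core on) might look like this sketch; the data and function are illustrative only:

```python
# Conceptual sketch of the kind of work SQL in Silicon offloads:
# scanning an in-memory column and filtering by a range predicate.
# On SPARC M7 this loop would run on a co-processor, not a CPU core.

def range_scan(column, low, high):
    """Return the row indexes whose value falls in [low, high]."""
    return [i for i, v in enumerate(column) if low <= v <= high]

prices = [120, 87, 254, 99, 301, 150]
print(range_scan(prices, 100, 200))  # -> [0, 5]
```

In a real columnar database the same scan runs over compressed vectors of millions of values, which is why moving it off the general-purpose cores pays off so dramatically.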

World Record Performance: Powered by the world's fastest microprocessor, Oracle's new SPARC M7-based systems deliver proven performance superiority, with world-record results in over 20 benchmarks. In addition to superior performance for database, middleware, Java, and enterprise applications from Oracle and third-party ISVs, the new SPARC M7-based systems achieve incredible performance compared to the competition for big data and cloud workloads.

“Until now, no computing platform has been able to tackle security without significantly impacting application performance and efficiency,” said John Fowler, executive vice president, Systems, Oracle. “Today Oracle is delivering breakthrough technology for memory intrusion protection and encryption, while accelerating in-memory analytics, databases and Java. Oracle’s SPARC T7 and M7 systems and Oracle SuperCluster M7 are starting a new era in delivering secure computing while increasing efficiency.”

“Oracle's core investments in SPARC M7 are delivering breakthrough capabilities for information security, database efficiency, and performance that go beyond enterprise workloads to big data and cloud. This is the most significant advancement in SPARC microprocessor and systems design in the last decade,” said Matthew Eastwood, senior vice president, Enterprise Infrastructure and Datacenter Group, IDC.

Balanced Design Principles: The new SPARC M7 processor is the design center of the new line of SPARC M7 systems, which scale from 32 to 512 cores, 256 to 4,096 threads and up to 8 TB of memory. Oracle's SPARC M7 chip is a 4.1 GHz, 32-core/256-thread processor that addresses the most demanding workloads with a balanced, high-performance design across memory, I/O, and scalability. In addition, Oracle has improved every other aspect of the design compared with previous generations, resulting in increased single-thread performance and reduced latency.

Technology That Delivers: Oracle's new SPARC M7 systems deliver outstanding security and performance, as demonstrated by a new world-record result on the SPECjEnterprise2010 benchmark for database and Java(1). Oracle ran this benchmark fully encrypted to demonstrate the levels of security, efficiency, and performance that SPARC M7 delivers. Two SPARC T7-1 servers, fully encrypted, are faster than the second-best result, from a pair of four-processor IBM Power8 systems running the same workload unencrypted. Oracle's SPARC M7 TeraSort benchmark results prove superiority over IBM for running Hadoop, while also utilizing SPARC M7 encryption acceleration with negligible performance impact. One SPARC T7-4 with 128 cores using an AES-256-GCM encrypted file system is 3.8x faster than an unencrypted 8-node IBM S822L Power8 cluster with 192 cores(2). Customers can now run workloads fully encrypted with greater efficiency and without performance penalty.

Software-in-Silicon : Oracle SPARC Roadmap

Oracle SPARC Roadmap: September 2015 Update

Learn more about SPARC

For example, one of the most exciting innovations in the M7 processor is its set of in-memory query acceleration engines. These design-specific units take over certain data-search functions from a database query and process them at very high speed. This dedicated functionality makes database queries perform much faster.

SPARC Server Strategy and Roadmap

Oracle M7  (playlist)

Such query acceleration "is done in a different way than anyone has done it before," said David Lawler, Oracle senior vice president for system product management and strategy. The M7 incorporates up to eight in-memory query acceleration engines.

Another significant M7 innovation is a feature known as application data integrity. This software-in-silicon functionality ensures that an application is able to access only its own dedicated memory region. This lets software programmers identify issues with memory allocation, which is advantageous in several ways.

Oracle expects this to dramatically improve the speed of its software development and the quality of the resulting products, while customers benefit by running applications whose memory is always protected in production.

Also, it serves as a security feature. "If one particular piece of code is trying to read the data from another, the chip would stop it," said Renato Ribeiro, Oracle director of product management for SPARC Systems.

And because it is hardwired into the processor, the data-integrity functionality does not affect application performance: "It has next to no overhead."

Ideal for Exadata X5-2

Oracle has been shipping an Oracle Exadata configuration that runs Oracle's T- and M-series (SPARC) microprocessors for more than two years. This database machine is called Oracle SuperCluster.

Technically, SuperCluster has always included every single Exadata feature of note. This is because every SuperCluster configuration is built around the same Exadata Storage Servers and InfiniBand switches that are used in every other Exadata system configuration.

Exadata X5-2: Extreme Flash and Elastic Configurations

Oracle Database Appliance X5-2

Oracle Linux: Maximize Your Value and Optimize Your Stack

Performance Boosts

Another innovation available on the new processor involves the ability to decompress data at very high speed (100 GB/sec). This is important especially in connection with Oracle's innovative in-memory database functionality.

Database performance improves when the data being used can be loaded directly into server memory, which eliminates the latency of transferring data from external storage. However, to fit a large amount of data into server memory it must be compressed, and then decompressed on every database query. That decompression takes time and consumes valuable processor resources, a classic bottleneck.

To remove that bottleneck, Oracle engineers have incorporated a decompression acceleration engine into the M7 processor. This hardwired unit runs data decompression at the full speed of the in-memory database: 100 GB/sec. That's equivalent to 16 decompression PCI cards, or 200 CPU cores.
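Working backwards from the figures quoted above, the equivalence claims imply roughly 0.5 GB/sec of decompression per conventional core and about 6.25 GB/sec per PCI card:

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
engine_throughput_gb_s = 100   # decompression engine, per the text
equivalent_cores = 200         # CPU-core equivalence, per the text
equivalent_pci_cards = 16      # PCI-card equivalence, per the text

per_core = engine_throughput_gb_s / equivalent_cores      # GB/s per core
per_card = engine_throughput_gb_s / equivalent_pci_cards  # GB/s per card

print(per_core)  # 0.5 GB/s per conventional core
print(per_card)  # 6.25 GB/s per decompression PCI card
```

In other words, one hardwired engine replaces a substantial fraction of the system's general-purpose compute that would otherwise be spent on decompression alone.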

Another performance-related improvement in the M7 involves communication between two computers. Known as extreme low-latency fabric for memory sharing, this hardware interconnect provides messaging with sub-microsecond latency, which translates to "memory access across two machines as if it were local". This improves the performance of computers in a cluster.

Finally, the M7 processor features 32 cores in its design, which ups the processing horsepower from its predecessor, the M6, which has 12 cores. Less an innovation than a process improvement, it nonetheless affirms Oracle's commitment to making SPARC the most powerful processor in the industry.

Creating a Maximum Availability Architecture with SPARC SuperCluster

Co-engineering Advantage

With its SPARC architecture, Oracle has an advantage over other enterprise vendors in that it can do engineering work at all levels of the computing stack: processor, operating system, middleware, database, applications, even software tools, specifically Java.

The SPARC M7 processor benefitted from that co-engineering, designed from the start with input from both Oracle's hardware engineers and its software developers. That approach is what enabled the innovative "software in silicon" strategy to come to fruition. "We looked at all of our software and identified the things that were the hardest" and then incorporated those into the processor.

The SPARC M7 is scheduled to be available sometime in calendar year 2015. Oracle intends the industry at large to benefit from its work. "We plan to make these functions available to other software vendors that would like to take advantage of them".

Highly Efficient Oracle Servers for the Modern Data Center

Why Oracle For Enterprise Big Data?

More Information:
