• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux offerings are also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions. And of course also for the great Red Hat products, such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

19 December 2018

IBM Cloud and OpenShift

IBM's Cloud approach and how they will change the Cloud Game

The cloud game

The “cloud” marketplace is diverse and nuanced. In the public cloud, while IaaS is a competitive game, each of the big players has different strengths and strategies. AWS is the clear leader in the space, Google has strong data services driving AI/ML offerings, and Microsoft has the most advanced hybrid strategy (Azure public and private cloud options) plus strength in business productivity applications. IBM was fighting to be a top-five public cloud contender, and while it has a strong portfolio of applications and a solid IaaS offering through the SoftLayer acquisition, it was failing to compete. Red Hat’s strength is playing across all ecosystems: both Linux (Red Hat Enterprise Linux, or RHEL, is the foundation of its business) and OpenShift (a container and application platform built on Kubernetes) are available across a wide spectrum of public clouds and infrastructure platforms.

Buying Red Hat does not mean the end of IBM Cloud, rather it is a signal that IBM’s position in a multi-cloud world is one of partnership. I first heard the term “co-opetition” in relation to IBM many years ago; Big Blue has decades of managing the inherent tension of working with partners where there are also competitive angles. Cloud computing is only gaining in relevance, and open source is increasingly important to end-users. IBM’s relevance in multi-cloud is greatly enhanced with the Red Hat acquisition, and IBM’s positioning with the C-suite should help Red Hat into more strategic relationships. This is not a seismic shift in the cloud landscape.

Changing perceptions

It is very difficult for a 100-year-old company to change how people think of it. It is not fair to think of IBM as a navy suit selling mainframes – while of course some people still wear suits, and IBM zSeries has been doing well, the company has gone through enormous changes. IBM is one of the few companies that has avoided being destroyed by disruptions in the technology industry.

IBM Cloud Innovation Day 

While many (including myself) are sad to see the “leader in open source” acquired, there are few companies outside of Red Hat that have as good a record with open source communities as IBM. Microsoft rejuvenated its image with a new CEO who focuses not only on cloud and AI, but also on diversity and openness (Microsoft was on stage this year at Red Hat Summit). IBM says that it will keep the Red Hat brand and products; will it be able to leverage Red Hat’s culture and community strength to bolster both IBM sales and relevance in the marketplace? There is an opportunity for IBM and Red Hat to both rebrand and reposition how they are perceived by customers and partners in the new digital world.

The Wikibon and SiliconANGLE teams will continue to examine all of the angles of this acquisition. Here’s a 20-minute video with Dave Vellante and me sharing our initial thoughts:

IBM $34B Red Hat Acquisition: Pivot To Growth But Questions Remain

Most companies are just getting started on their cloud journey.

They’ve maybe completed 10 to 20 percent of the trek, with a focus on cost and productivity efficiency, as well as scaling compute power. There’s a lot more to unlock in that remaining 80 percent, though: shifting business applications to the cloud and optimizing supply chains and sales, which will require moving and managing data across multiple clouds.
To accomplish those things easily and securely, businesses need an open, hybrid multicloud approach. While most companies acknowledge they are embracing hybrid multicloud environments, well over half attest to not having the right tools, processes or strategy in place to gain control of them.
Here are four of the recent steps IBM has taken to help our clients embrace just that type of approach.

The Future of Data Warehousing, Data Science and Machine Learning

1. IBM to acquire Red Hat.
The IBM and Red Hat partnership has spanned 20 years. IBM was an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux and more recently to bring enterprise Kubernetes and hybrid multicloud solutions to customers. By joining together, we will be positioned to help companies create cloud-native business applications faster and drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management. Read the Q&A with Arvind Krishna, Senior Vice President, IBM Hybrid Cloud.

2. The launch of IBM Multicloud Manager.
When applications and data are distributed across multiple environments, it can be a challenge for enterprises to keep tabs on all their workloads and make sure they’re all in the right place. The new IBM Multicloud Manager solution helps organizations tackle that challenge by helping them improve visibility across all their Kubernetes environments using a single dashboard to maintain security and governance and automate capabilities. Learn why multicloud management is becoming critical for enterprises.

3. AI OpenScale improves business AI.
AI OpenScale will be available on IBM Cloud and IBM Cloud Private with the goal of helping businesses operate and automate artificial intelligence (AI) at scale, no matter where the AI was built or how it runs. AI OpenScale heightens visibility, detects bias and makes AI recommendations and decisions fully traceable. Neural Network Synthesis (NeuNetS), a beta feature of the solution, configures to business data, helping organizations scale AI across their workflows more quickly.

4. New IBM Security Connect community platform.
IBM Security Connect is a new cloud-based community platform for cyber security applications. With the support of more than a dozen other technology companies, it is the first cloud security platform built on open federated technologies. Built using open standards, IBM Security Connect can help companies develop microservices, create new security applications, integrate existing security solutions, and make use of data from open, shared services. It also enables organizations to apply machine learning and AI, including Watson for Cyber Security, to analyze and identify threats or risks.

Successful enterprises innovate. They listen, learn and experiment, and after doing so, they either lead or adapt. Those that do neither risk failure.

Building a Kubernetes cluster in the IBM Cloud

Possibly nowhere is this more evident than in cloud computing, an environment driven by user demand and innovation. Today the innovation focuses squarely on accelerating the creation and deployment of integrated, hybrid clouds. Whether on public, private or on-premises systems, more companies are demanding interoperability to enable scalability, agility, choice, performance – and no vendor lock-in. Simultaneously, they want to integrate all of it with their existing massive technology investments.

Paul Cormier, Executive Vice President, and President Products and Technologies, Red Hat, and Arvind Krishna, Senior Vice President, IBM Hybrid Cloud, at Red Hat Summit 2018 in San Francisco on May 7, 2018.

One of the fundamental building blocks of this burgeoning integrated, hybrid cloud environment is the software container – a package of software that includes everything needed to run it. Through containers, which are lightweight, easily portable and OS-independent, organizations can create and manage applications that can run across clouds with incredible speed.

This leads to a truly flexible environment that is optimized for automation, security, and easy scalability. IBM recognized the value of containers several years ago, and has been aggressively building out easy-to-use cloud services and capabilities with Docker, Kubernetes, and our own technologies.
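To make the portability point concrete, here is a minimal sketch of a Kubernetes deployment manifest. The names and the image `example/hello-web:1.0` are placeholders for illustration, not an IBM offering; the point is that the same declarative description runs on any conforming Kubernetes cluster:

```yaml
# Minimal Kubernetes Deployment sketch (illustrative names): runs two
# replicas of a containerized web app. Because the container image
# packages everything the app needs, this same manifest works unchanged
# on IBM Cloud, OpenShift, or any other conforming Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: example/hello-web:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f`, Kubernetes then handles scheduling, restarts, and scaling of the containers, which is what makes the environment "optimized for automation" in practice.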

In addition, IBM has moved to containerize all of its portfolio middleware, including everything from WebSphere to Db2 to API Connect to Netcool.

This strategy fits with our view that the core of the cloud is flexibility. There is no “one size fits all.” This idea is what led to the creation of the IBM Cloud Private platform, which delivers containers and services that help accelerate clients’ journey to the cloud.

And that’s why today IBM and Red Hat are coming together to enable the easy integration of IBM’s extensive middleware and data management with Red Hat’s well-entrenched Red Hat OpenShift open source container application platform. This is a major extension of our long-standing relationship with Red Hat and provides enterprises access to a complete and integrated hybrid cloud environment – from the operating system, to open source components, to Red Hat Enterprise Linux containers, to IBM middleware that has been re-engineered for the cloud era.

IBM Cloud App Management - Manage Kubernetes environments with speed and precision

Both of our companies believe in a hybrid cloud future and, with this partnership, we are ensuring seamless connectivity between private and public clouds so that clients get all the benefits of the cloud in a controlled environment.

With this news, we will be certifying our private cloud platform – IBM Cloud Private – as well as IBM middleware including WebSphere, MQ and Db2, and other key IBM software, to run on Red Hat Enterprise Linux via Red Hat OpenShift Container Platform. Red Hat OpenShift will also be available on IBM public cloud, as well as IBM Power Systems. This allows enterprises to get the best hybrid experience, including on the IBM Cloud, with Red Hat OpenShift and IBM middleware that they have trusted for years.

IBM Cloud Series - 002 - Multi Node Kubernetes Cluster on IBM Cloud

It’s about assisting our clients in their digital transformation journey – a journey that, for many, includes the execution of multiple approaches at the same time: a shift to a cloud-only strategy, aggressive utilization of public cloud, the addition of more workloads, and the simplification of hybrid data.
Armed with these capabilities, clients can make smarter decisions more quickly, engage with customers more intimately and manage their businesses more profitably.

IBM Cloud Private: The Next-Generation Application Server


IBM and Google announced the launch of Istio, an open technology that provides a way for developers to seamlessly connect, manage and secure networks of different microservices—regardless of platform, source or vendor.

Istio is the result of a joint collaboration between IBM, Google and Lyft as a means to support traffic flow management, access policy enforcement and telemetry data aggregation between microservices. Building on earlier work from the three companies, it does all this without requiring developers to make changes to application code.
developerWorksTV report by Scott Laningham.

Istio currently runs on Kubernetes platforms, such as the IBM Bluemix Container Service. Its design, however, is not platform specific. The Istio open source project plan includes support for additional platforms, including Cloud Foundry and virtual machines.

IBM, Google, and Lyft launch Istio

Why IBM built Istio

We continue to see an increasing number of developers turning to microservices when building their applications. This strategy allows developers to decompose a large application into smaller, more manageable pieces. Although decomposing big applications into smaller pieces is a practice we’ve seen in the field for as long as software has been written, the microservices approach is particularly well suited to developing large scale, continuously available software in the cloud.

We have personally witnessed this trend with our large enterprise clients as they move to the cloud. As microservices scale dynamically, problems such as service discovery, load balancing and failure recovery become increasingly important to solve uniformly. The individual development teams manage and make changes to their microservices independently, making it difficult to keep all of the pieces working together as a single unified application. Often, we see customers build custom solutions to these challenges that are unable to scale even outside of their own teams.

Before combining forces, IBM, Google, and Lyft had been addressing separate, but complementary, pieces of the problem.

IBM’s Amalgam8 project, a unified service mesh that was created and open sourced last year, provided a traffic routing fabric with a programmable control plane to help its internal and enterprise customers with A/B testing, canary releases, and to systematically test the resilience of their services against failures.

Google’s Service Control provided a service mesh with a control plane that focused on enforcing policies such as ACLs, rate limits and authentication, in addition to gathering telemetry data from various services and proxies.

Lyft developed the Envoy proxy to aid their microservices journey, which brought them from a monolithic app to a production system spanning 10,000+ VMs handling 100+ microservices. IBM and Google were impressed by Envoy’s capabilities, performance, and the willingness of Envoy’s developers to work with the community.

It became clear to all of us that it would be extremely beneficial to combine our efforts by creating a first-class abstraction for routing and policy management in Envoy, and expose management plane APIs to control Envoys in a manner that can be easily integrated with CI/CD pipelines. In addition to developing the Istio control plane, IBM also contributed several features to Envoy such as traffic splitting across service versions, distributed request tracing with Zipkin and fault injection. Google hardened Envoy on several aspects related to security, performance, and scalability.

How does Istio work?
Improved visibility into the data flowing in and out of apps, without requiring extensive configuration and reprogramming.

Istio converts disparate microservices into an integrated service mesh by introducing programmable routing and a shared management layer. By injecting Envoy proxy servers into the network path between services, Istio provides sophisticated traffic management controls such as load-balancing and fine-grained routing. This routing mesh also enables the extraction of a wealth of metrics about traffic behavior, which can be used to enforce policy decisions such as fine-grained access control and rate limits that operators can configure. Those same metrics are also sent to monitoring systems. This way, it offers improved visibility into the data flowing in and out of apps, without requiring extensive configuration and reprogramming to ensure all parts of an app work together smoothly and securely.
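As an illustration of the fine-grained routing described above, traffic management in Istio is expressed declaratively. The sketch below uses Istio's v1alpha3 routing API; the service name "reviews" and its subsets are hypothetical examples, not part of any IBM product:

```yaml
# Hypothetical canary release: send 90% of traffic to v1 of a
# "reviews" service and 10% to v2. The Envoy sidecars enforce the
# split; no application code changes are required.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
---
# The subsets referenced above are defined by a DestinationRule
# keyed on pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Because the rule lives in the mesh rather than in the application, operators can shift the weights gradually while watching the metrics the mesh extracts.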

Once we have control of the communication between services, we can enforce authentication and authorization between any pair of communicating services. Today, the communication is automatically secured via mutual TLS authentication with automatic certificate management. We are working on adding support for common authorization mechanisms as well.
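As a sketch of what that looks like in practice, using the Istio 1.0-era authentication policy API (the namespace here is illustrative), mutual TLS can be required for all services in a namespace with a single policy:

```yaml
# Illustrative Istio 1.0-era authentication policy: require mutual TLS
# for every service in the "default" namespace. Certificates are
# issued and rotated automatically by Istio's certificate authority.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  peers:
    - mtls: {}
```

The services themselves are unaware of the encryption; the sidecar proxies terminate and originate the TLS connections on their behalf.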

Key partnerships driving open collaboration
We have been working with Tigera, the Kubernetes networking folks who maintain projects like CNI, Calico and flannel, for several months now to integrate advanced networking policies into the IBM Bluemix offerings. As we now look to integrate Istio and Envoy, we are extending that collaboration to include these projects and how we can enable a common policy language for layers 3 through 7.

“It takes more than just open sourcing technology to drive innovation,” said Andy Randall, Tigera co-founder and CEO. “There has to be an open, active multi-vendor community, and as a true believer in the power of open collaboration, IBM is playing an essential role in fostering that community around Kubernetes and related projects including Calico and Istio. We have been thrilled with our partnership and look forward to ongoing collaboration for the benefit of all users of these technologies.”

Key Istio features
  • Automatic zone-aware load balancing and failover for HTTP/1.1, HTTP/2, gRPC, and TCP traffic.
  • Fine-grained control of traffic behavior with rich routing rules, fault tolerance, and fault injection.
  • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Automatic metrics, logs and traces for all traffic within a cluster, including cluster ingress and egress.
  • Secure service-to-service authentication with strong identity assertions between services in a cluster.
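The fault-injection feature listed above is also expressed as a routing rule. A hedged sketch, again with a hypothetical service name, that delays a fraction of requests so a caller's timeout and retry behavior can be tested:

```yaml
# Hypothetical fault-injection rule: add a 5-second delay to 10% of
# requests to a "ratings" service, exercising callers' resilience
# without touching application code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
    - ratings
  http:
    - fault:
        delay:
          percent: 10
          fixedDelay: 5s
      route:
        - destination:
            host: ratings
```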

How to use it today
You can get started with Istio here. We also have a sample application composed of four separate microservices that can be easily deployed and used to demonstrate various features of the Istio service mesh.

Project and collaboration
Istio is an open source project developed by IBM, Google and Lyft. The current version works with Kubernetes clusters, but we will have major releases every few months as we add support for more platforms. If you have any questions or feedback, feel free to contact us on the istio-users@googlegroups.com mailing list.

We are excited to see early commitment and support for the project from many companies in the community: Red Hat with Red Hat OpenShift and OpenShift Application Runtimes, Pivotal with Pivotal Cloud Foundry, Weaveworks with Weave Cloud and Weave Net 2.0, and Tigera with the Project Calico network policy engine. If you are also interested in participating in further development of this open source project, please join us on GitHub. If you are an IBM partner or vendor, we encourage you to build solutions on top of Istio to serve your clients’ unique needs. As your clients move from monolithic applications to microservices, they can easily manage complex enterprise-level microservices running on Bluemix infrastructure using Istio.

Please feel free to reach out to us at istio-users@googlegroups.com if you have any questions.

Data is the new natural resource — abundant, often untamed, and fueling Artificial Intelligence (AI). It has the potential not only to transform business, but to enable the creation of new business models.

But where and how are you expected to begin your data journey? These are two of the most common questions we get asked. For us, it has everything to do with making capabilities like data science and machine learning accessible and easy to use across platforms – providing solutions that handle the analytics where the data resides, rather than bringing the data to the analytics.

The Real World with OpenShift - Red Hat DevOps & Microservices

By taking this approach, IBM has been a leader in helping clients around the globe more easily collect, organize, and analyze their growing data volumes, all with the end goal of ascending the AI Ladder. To be clear, that’s not as easy as it sounds. For example, according to a report from MIT Sloan, Reshaping Business with Artificial Intelligence, an estimated 85% of the 3,000 business leaders surveyed believed artificial intelligence (AI) would enable competitive advantage; however, only about 20% had done anything about it. For many organizations, the task of understanding, organizing, and managing their data at the enterprise level was too complex.

So earlier this year we set out to change all that and make it easier for enterprises to gain control of their data, to make their data simple, and then to put that data to work to unearth insights into their organizations. We launched IBM Cloud Private for Data, the first true data platform of its kind that integrates data science, data engineering, and app building under one containerized roof that can be run on premises or across clouds.

IBM has been busy adding to the platform ever since. Since launch we’ve added support for MongoDB Enterprise and EDB Postgres; we’ve integrated IBM Data Risk Manager; and we’ve announced support for Red Hat OpenShift, to name a few. This week we’re keeping the momentum going, announcing a variety of new updates, from premium add-on services and modular install options to the availability of first-of-a-kind data virtualization technology.

With these updates, the design criterion was to help organizations modernize their environments even further to take advantage of cloud benefits — flexibility, agility, scalability, cost-efficiency — while keeping their data where it is. Leveraging multi-cloud elasticity and the portability of a microservices-based containerized architecture lets enterprises place their data and process where it most benefits the business.

Here’s how the new capabilities line up:

Premium Add-On Services
We continue to enrich IBM Cloud Private for Data’s service catalog with premium add-on services:

  • An advanced data science add-on featuring IBM SPSS Modeler, IBM Watson Explorer, and IBM Decision Optimization to help organizations turn data into game-changing insights and actions with powerful ML and data science technologies
  • Key databases: MongoDB and Db2 on Cloud
  • IBM AI OpenScale, which will soon be available as a single convenient package with IBM Cloud Private for Data, helping businesses operate and automate AI at scale, with trust and transparency capabilities to eliminate bias and explain outcomes

DB2 family and v11.1.4.4

Data Virtualization

IBM Cloud Private for Data’s data virtualization (announced in September) can help organizations leverage distributed data at the source, eliminating the need to move or centralize their data. Some of the key highlights are:

  • Query anything, anywhere — across data silos (heterogeneous data assets)
  • Help reduce operational costs with distributed parallel processing (vs centralized processing) — free of data movement, ETL, duplication, etc.
  • Auto-discovery of source and metadata, for ease of viewing information across your organization
  • Self-discovering, self-organizing cluster
  • Unify disparate data assets with simple automation, providing seamless access to data as one virtualized source
  • Governance, security, and scalability built in

DB2 12 overview

In essence the service appears as a single Db2 database to all applications.

IBM Cloud Private for Data update

Other Capabilities in this Release

  • FISMA Compliance — FIPS Level 1
  • Modular Installation — Reduced footprint for base installer by almost 50%. Customers can deploy add-ons as optional features.
  • Support for Microsoft Azure by end of year — Adding to existing IBM Cloud Private, Red Hat OpenShift, and OpenStack and Amazon Web Services support

CNCF Reference Architecture

Next steps
As organizations take this journey with us, these new capabilities of IBM Cloud Private for Data can help further modernize and simplify their data estates for multicloud, leverage the best of the open source ecosystem, and infuse their applications and business processes with data science and AI capabilities. We remain committed to helping our clients unlock the value of their data in innovative smarter ways for better, more timely business outcomes. IBM Cloud Private for Data can be that place to start.

IBM® and Red Hat have partnered to provide a joint solution that uses IBM Cloud Private and OpenShift. You can now deploy IBM certified software containers running on IBM Cloud Private onto Red Hat OpenShift.
Similar to IBM Cloud Private, OpenShift is a container platform built on top of Kubernetes. You can install IBM Cloud Private on OpenShift by using the IBM Cloud Private installer for OpenShift.

Integration capabilities

  • Supports the Linux® 64-bit platform in offline-only installation mode
  • Single-master configuration
  • Integrated IBM Cloud Private cluster management console and Catalog
  • Integrated core Platform services, such as monitoring, metering, and logging
  • IBM Cloud Private uses the OpenShift image registry

This integration defaults to using the Open Service Broker in OpenShift. Brokers that are registered in OpenShift are still recognized and can contribute to the IBM Cloud Private Catalog. IBM Cloud Private is also configured to use the OpenShift Kube API Server.

  • IBM Cloud Private platform images are not Red Hat OpenShift certified
  • IBM Cloud Private Vulnerability Advisor (VA) and audit logging are not available on OpenShift
  • Not all CLI command options are supported (for example, the cloudctl cm commands)


Authentication and authorization administration happens only from IBM Cloud Private to OpenShift. If a user is created in OpenShift, the user is not available in IBM Cloud Private. Authorization is handled by IBM Cloud Private IAM services that integrate with OpenShift RBAC.
The IBM Cloud Private cluster administrator is created in OpenShift during installation. All other users and user-groups from IBM Cloud Private LDAP are dynamically created in OpenShift when the users invoke any Kube API for the first time. The roles for all IBM Cloud Private users and user-groups are mapped to equivalent OpenShift roles. The tokens that are generated by IBM Cloud Private are accepted by the OpenShift Kube API server, OpenShift UI and OpenShift CLI.

What is IBM Cloud Private and OpenShift?

Before getting into the details of the partnership, a little refresher on IBM Cloud Private and Red Hat OpenShift.

OpenShift 4.0 - Features, Functions, Future at OpenShift Commons Gathering Seattle 2018

Cloud Private is IBM’s private cloud platform that enables enterprise IT to employ a hybrid cloud environment on both x86 and POWER platforms. IBM sees Cloud Private as addressing three enterprise IT needs.

IBM’s value proposition is essentially to save money on legacy app support, securely integrate with third parties for implementations such as blockchain, and simply develop twelve-factor cloud-native applications in a microservices architecture. It is important to note that Cloud Private can run in both POWER and x86 environments.

OpenShift is Red Hat’s enterprise container platform. OpenShift is based on Docker and Kubernetes and manages the hosting, deployment, scaling, and security of containers in the enterprise cloud.

What this partnership enables

As previously mentioned, the partnership extends IBM Cloud Private to Red Hat OpenShift. So, enterprise IT organizations familiar with the Red Hat tools can more simply deploy a cloud environment that brings all its data and apps together in a single console. Legacy line of business (LoB) applications can be deployed and managed alongside native cloud applications. IBM middleware can be deployed in any OpenShift environment.

This partnership also allows a simpler, more secure interface to the power of IBM Cloud Services. The seamless integration from IBM Cloud Private should allow IT organizations to quickly enable services that would normally take months to deploy such as Artificial Intelligence (AI), Blockchain and other platforms.

OpenShift on OpenStack and Bare Metal

PowerAI and RHEL bring deep learning to the enterprise
Somewhat hidden in the IBM–Red Hat announcement is what may be the most interesting bit of news: the availability of PowerAI for RHEL 7.5 on the recently updated Power System AC922 server platform.

PowerAI is IBM’s packaging and delivery of performance-tuned frameworks for deep learning such as TensorFlow, Keras, and Caffe. This should lead to simplified deployment of frameworks, quicker development time and shorter training times. This is the beginning of democratizing deep learning for the enterprise. You can find more on PowerAI here by Patrick Moorhead.

OpenShift Commons Briefing: New Marketplace and Catalog UX for OpenShift 4 Serena Nichols, Red Hat

The IBM Power System AC922 is the building block of PowerAI. As previously mentioned, it is based on the IBM POWER9 architecture. Why does this matter? In an acronym, I/O. POWER9 has native support for PCIe Gen4, NVLink 2.0 and CAPI 2.0, all of which allow for greater I/O capacity and bandwidth. For a workload like deep learning, that means the ability to move more data between (more) storage and compute much faster, which leads to a big decrease in learning time. To an enterprise IT organization, that means faster customer insights, greater efficiencies in manufacturing and a lot of other benefits that drive differentiation from competitors.

What this means for Enterprise IT

There are a few ways this partnership benefits the enterprise IT organization. One of the more obvious benefits is the tighter integration of applications and data, both legacy and cloud-native. Enterprise IT organizations that have gone through the pains of trying to integrate legacy data with newer applications can more easily take advantage of IBM’s (and open source) middleware to achieve greater efficiencies.

This partnership also allows enterprise IT to more quickly enable a greater catalog of services to business units looking to gain competitive advantages in the marketplace through IBM Cloud Services.

Perhaps the biggest benefit to enterprise IT is the availability of PowerAI on RHEL. I believe this begins the democratization of AI for the enterprise. This partnership attempts to remove the biggest barriers to adoption by simplifying the deployment and tuning of deep learning frameworks.

OpenShift roadmap: You won't believe what's next

How this benefits IBM and Red Hat

IBM can extend the reach of its cloud services to enterprise IT organizations running Red Hat OpenShift. I believe those organizations will quickly be able to understand the real benefits associated with Cloud Private and Cloud Services.

The benefit to Red Hat is maybe a little less obvious, but equally significant. Red Hat’s support for IBM Cloud Private and (by extension) Cloud Services opens the addressable market for OpenShift and enables a new set of differentiated capabilities. In an ever-increasing competitive hybrid cloud management space, this sets Red Hat apart.

On the AI front, I believe this partnership further sets IBM apart as the leader and introduces Red Hat into the discussion for good measure.  This could be a partnership that many try to catch for some time.

Transform the Enterprise with IBM Cloud Private on OpenShift

Closing thoughts

The partnership between IBM and Red Hat has always been strong, and in many ways these solution offerings simply make sense. Red Hat has a strong offering in developing, deploying and managing cloud-native applications with OpenShift. IBM has a best-of-breed solution in Cloud Private and PowerAI. Marrying the two can empower the enterprise IT organization and extend the datacenter footprint of both Red Hat and IBM.

However, many great technical partnerships never reach their potential because the partnerships end at technical enablement. Red Hat and IBM would be wise to develop a comprehensive go-to-market campaign that focuses on education and awareness. Cross-selling and account seeding is the first step in enabling this partnership, followed by a series of focused campaigns in targeted vertical industries and market segments.

Cloud Native, Event-Driven, Serverless, Microservices Framework - OpenWhisk - Daniel Krook, IBM

Finally, the joint consulting services between the IBM Garage and Red Hat Consulting organizations will need to work closely together to ensure early customer success with PowerAI. Real enterprises realizing real benefits is the difference between a science project and a solution. Moreover, these organizations are going to be critical in helping enterprise IT stand up these deep learning frameworks.

I will be following this partnership closely and look forward to watching how Red Hat and IBM jointly attack the market. Look for a follow-up on this as the partnership evolves.

IBM Cloud SQL Query Introduction






Final Christmas Thought:

Bach - Aria mit 30 Veränderungen Goldberg Variations BWV 988 - Rondeau | Netherlands Bach Society

23 November 2018

Powering IT’s future while preserving the present: Introducing Red Hat Enterprise Linux 8


Red Hat Enterprise Linux multi-year roadmap

Red Hat Enterprise Linux 8 (RHEL 8) has not yet been released, but the beta arrived on November 14, letting you get your hands dirty with the new version of the world’s leading enterprise operating system. The release follows IBM’s announcement on October 28, 2018 that it would acquire Red Hat for $34 billion.  https://www.itzgeek.com/how-tos/linux/centos-how-tos/red-hat-enterprise-linux-8-release-date-and-new-features.html

Meet Red Hat Enterprise Linux 8

Linux containers, Kubernetes, artificial intelligence, blockchain and too many other technical breakthroughs to list all share a common component - Linux, the same workhorse that has driven mission-critical, production systems for nearly two decades. Today, we’re offering a vision of a Linux foundation to power the innovations that can extend and transform business IT well into the future: Meet Red Hat Enterprise Linux 8.

Microservices with Docker, Kubernetes, and Jenkins

Enterprise IT is evolving at a pace faster today than at any other point in history. This reality necessitates a common foundation that can span every footprint, from the datacenter to multiple public clouds, enabling organizations to meet every workload requirement and deliver any app, everywhere.

With Red Hat Enterprise Linux 8, we worked to deliver a shared foundation for both the emerging and current worlds of enterprise IT. The next generation of the world’s leading enterprise Linux platform helps fuel digital transformation strategies across the hybrid cloud, where organizations use innovations like Linux containers and Kubernetes to deliver differentiated products and services. At the same time, Red Hat Enterprise Linux 8 Beta enables IT teams to optimize and extract added value from existing technology investments, helping to bridge demands for innovation with stability and productivity.

Sidecars and a Microservices Mesh

In the four years since Red Hat Enterprise Linux 7 redefined the operating system, the IT world has changed dramatically and Red Hat Enterprise Linux has evolved with it. Red Hat Enterprise Linux 8 Beta once again sets a bar for how the operating system can enable IT innovation. While Red Hat Enterprise Linux 8 Beta features hundreds of improvements and dozens of new features, several key capabilities are designed to help the platform drive digital transformation and fuel hybrid cloud adoption without disrupting existing production systems.

Your journey into the serverless world

Red Hat Enterprise Linux 8 introduces the concept of Application Streams to deliver userspace packages more simply and with greater flexibility. Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system. Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream. This helps to deliver greater agility and user-customized versions of Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.
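As a sketch of how Application Streams work in practice (module and stream names vary by release, so treat the versions below as examples), selecting a specific stream looks like this:

```shell
# List the application streams (modules) available for PostgreSQL
yum module list postgresql

# Enable a specific stream, e.g. version 10, and install it
yum module enable postgresql:10
yum module install postgresql:10

# Later, move the same system to a newer stream
# without waiting for a major OS release
yum module reset postgresql
yum module enable postgresql:12
```

The core operating system packages are untouched throughout; only the userspace component changes stream.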

Red Hat Enterprise Linux roadmap 2018

Beyond a refined core architecture, Red Hat Enterprise Linux 8 also brings enhancements in networking, security and cryptography:


Red Hat Enterprise Linux 8 Beta supports more efficient Linux networking in containers through IPVLAN, connecting containers nested in virtual machines (VMs) to networking hosts with a minimal impact on throughput and latency. It also includes a new TCP/IP stack with Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control, which enables higher performance, minimized latency and decreased packet loss for Internet-connected services like streaming video or hosted storage.


As with all versions of Red Hat Enterprise Linux before it, Red Hat Enterprise Linux 8 Beta brings hardened code and security fixes to enterprise users, along with the backing of Red Hat’s overall software security expertise. With Red Hat Enterprise Linux 8 Beta, our aim is to deliver a more secure by default operating system foundation across the hybrid cloud.

Serverless and Servicefull Applications - Where Microservices complements Serverless

OpenSSL 1.1.1 and TLS 1.3 are both supported in Red Hat Enterprise Linux 8, enabling server applications on the platform to use the latest standards for cryptographic protection of customer data. System-wide Cryptographic Policies are also included, making it easier to manage cryptographic compliance with a single command, without the need to modify and tune specific applications.
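To illustrate, the system-wide policy is managed with the `update-crypto-policies` tool; something like the following (run as root) switches the whole system between policy levels:

```shell
# Show the active system-wide cryptographic policy
# (levels include DEFAULT, LEGACY, FUTURE and FIPS)
update-crypto-policies --show

# Tighten the entire system to stronger algorithms in one step;
# applications that follow the system policy pick this up
# without per-application tuning
update-crypto-policies --set FUTURE

# Revert to the default policy
update-crypto-policies --set DEFAULT
```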

Linux containers

Red Hat set a standard when we introduced enterprise support for Linux containers in Red Hat Enterprise Linux 7. Now, Linux containers have become a critical component of digital transformation, offering a roadmap for more portable and flexible enterprise applications, and Red Hat remains at the forefront of this shift with Red Hat Enterprise Linux 8.

Red Hat’s lightweight, open standards-based container toolkit is now fully supported and included with Red Hat Enterprise Linux 8. Built with enterprise IT security needs in mind, Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers) help developers find, run, build and share containerized applications more quickly and efficiently, thanks to the distributed and daemonless nature of the tools.
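A minimal sketch of how the three tools divide the work (image and registry names are examples):

```shell
# Build an image from an existing Dockerfile with Buildah -- no daemon needed
buildah bud -t myapp:latest .

# Run the resulting container with Podman
podman run --rm myapp:latest

# Inspect a remote image's metadata with Skopeo without pulling it
skopeo inspect docker://registry.access.redhat.com/ubi8/ubi
```

Because each tool is a standalone binary rather than a client to a long-running daemon, they can run as ordinary processes inside build pipelines.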

FaaS and Furious - 0 to Serverless in 60 Seconds, Anywhere - Alex Ellis, ADP

Systems management

The growth of Linux in corporate datacenters requires management and, frequently, new systems administrators are faced with managing complex system footprints or performing difficult tasks that are outside of their comfort zones. Red Hat Enterprise Linux 8 aims to make things easier for systems administrators of all experience levels with several quality-of-life improvements, starting with a single and consistent user control panel through the Red Hat Enterprise Linux Web Console. This provides a simplified interface to more easily manage Red Hat Enterprise Linux servers locally and remotely, including virtual machines.
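The Web Console is built on the Cockpit project, and getting it running is a two-command affair (assuming a RHEL 8 host with root access):

```shell
# Install the Web Console and activate its socket-activated service
yum install cockpit
systemctl enable --now cockpit.socket

# Then browse to https://<server>:9090 and log in with a system account
```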

Camel Riders in the Cloud

Red Hat Enterprise Linux roadmap

Composer makes it easier for both new and experienced Red Hat Enterprise Linux users to build and deploy custom images across the hybrid cloud - from physical and virtualized environments to private and public cloud instances. Using a straightforward graphical interface, Composer simplifies access to packages as well as the process for assembling deployable images. This means that users can more readily create Red Hat Enterprise Linux-based images, from minimal footprint to specifically optimized, for a variety of deployment models, including virtual machines and cloud environments.
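Alongside the graphical interface, Composer exposes a command-line client. A sketch of the flow (the blueprint name "webserver" is hypothetical):

```shell
# List the image blueprints known to Composer
composer-cli blueprints list

# Start building a qcow2 disk image from the "webserver" blueprint
composer-cli compose start webserver qcow2

# Check build status, then download the finished image by its UUID
composer-cli compose status
composer-cli compose image <build-uuid>
```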

Istio canaries and kubernetes

Yum 4, the next generation of the Yum package manager in Red Hat Enterprise Linux, delivers faster performance, fewer installed dependencies and more choices of package versions to meet specific workload requirements.

Lightning Talk: The State Of FaaS on Kubernetes - Michael Hausenblas, Red Hat

File systems and storage

New to Red Hat Enterprise Linux 8 Beta is Stratis, a volume-managing file system for more sophisticated data management. Stratis abstracts away the complexities inherent to data management via an API, enabling these capabilities without requiring systems administrators to understand the underlying nuances, delivering a faster and more efficient file system.
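A rough sketch of the Stratis workflow (device names are examples; run as root on a host with the Stratis tooling installed):

```shell
# Create a Stratis pool from one or more block devices
stratis pool create mypool /dev/sdb

# Create a filesystem in the pool; Stratis provisions
# and grows the underlying storage on demand
stratis filesystem create mypool data

# Mount it like any other filesystem
mount /dev/stratis/mypool/data /mnt/data
```

The administrator never touches the volume-management layers underneath; the pool/filesystem abstraction is the whole interface.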

File System Snapshots provide for a faster way of conducting file-level tasks, like cloning virtual machines, while saving space by consuming new storage only when data changes. Support for LUKSv2 to encrypt on-disk data is combined with Network-Bound Disk Encryption (NBDE) for more robust data security and more simplified access to encrypted data.
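As a hedged example of how the two pieces fit together (device name and Tang server URL are placeholders), a volume is formatted as LUKS2 and then bound to a Tang server with Clevis so it can unlock automatically on a trusted network:

```shell
# Format a device with LUKS version 2
cryptsetup luksFormat --type luks2 /dev/sdc

# Bind the LUKS volume to a Tang server for Network-Bound
# Disk Encryption, so the disk unlocks automatically
# whenever the trusted network is reachable
clevis luks bind -d /dev/sdc tang '{"url": "http://tang.example.com"}'
```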

IBM Acquires Red Hat: Creating the World's Leading Hybrid Cloud Provider

Test the future

We don’t just want to tell you what makes Red Hat Enterprise Linux 8 Beta a foundation for the future of IT. We want you to experience it. Existing customers and subscribers are invited and encouraged to test Red Hat Enterprise Linux 8 Beta for themselves to see how they can deploy applications with more flexibility, more confidence and more control. Developers can also see the future of the world’s leading enterprise Linux platform through the Red Hat Developer Program. If you are new to Red Hat Enterprise Linux, please visit the Red Hat Enterprise Linux 8 Public Beta download site and view the README file for instructions on how to download and install the software.


Gartner predicts that, by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today.* This means to us that developers need to be able to more quickly and easily create containerized applications. It’s this challenge that the Buildah project, with the release of version 1.0, aims to solve by bringing new innovation to the world of container development.

IBM + REDHAT "Creating the World's Leading Hybrid Cloud Provider..."

While Linux containers themselves present a path to digital transformation, the actual building of these containers isn’t quite so clear. Typically, building a Linux container image requires the use of an extensive set of tools and daemons (a container engine, so to speak). The existing tools are bulky by container standards and I believe there has been a distinct lack of innovation. IT teams may want their build systems running the bare minimum of processes and tools, otherwise, additional complexity can be introduced that could lead to loss of system stability and even security risks. Complexity is a serious architectural and security challenge.

This is where Buildah comes in. A command line utility, Buildah provides only the basic requirements needed to create or modify Linux container images, making it easier to integrate into existing application build pipelines.

The resulting container images are not snowflakes, either; they are OCI-compliant and can even be built using Dockerfiles. Buildah is a distillation of container development to the bare necessities, designed to help IT teams to limit complexity on critical systems and streamline ownership and security workflows.

OpenShift Commons Briefing #122: State of FaaS on Kubernetes - Michael Hausenblas (Red Hat)

When we say “bare necessities,” we mean it. Buildah allows for the on-the-fly creation of containers from scratch—think of it as an empty box. For example, Buildah can assemble containers that omit components, such as package managers (DNF/YUM), that are not required by the final image. So not only can Buildah provide the capability to build these containers in a less complex and more secure fashion, it can cut bloat (and therefore image size) and extend customization to what you need in your cloud-native applications.
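A sketch of that from-scratch workflow (package choice is illustrative; run as root on a host with Buildah installed): the package manager runs on the host and installs into the container's root, so DNF/YUM never enters the image itself.

```shell
# Start from an empty container -- no base image at all
ctr=$(buildah from scratch)
mnt=$(buildah mount "$ctr")

# Install only what the application needs into the container's
# root filesystem, using the host's package manager
yum install --installroot "$mnt" --releasever 8 \
    --setopt install_weak_deps=false -y coreutils

# Configure and commit the minimal image
buildah umount "$ctr"
buildah config --cmd /usr/bin/date "$ctr"
buildah commit "$ctr" minimal-app:latest
```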

Since Buildah is daemonless, it is easier to run it in a container without setting up special infrastructure on the host or “leaking” host sockets into the container. You can run Buildah inside of your Kubernetes (or enterprise Kubernetes, like Red Hat OpenShift) cluster.

On-premises FaaS on Kubernetes

What’s special about Buildah 1.0

We’ve talked about Buildah before, most notably launching full, product-level support for it in Red Hat Enterprise Linux 7.5. Now that 1.0 has hit the community, here are a few of the notable features in Buildah that make it interesting:

Buildah has added external read/write volumes during builds, which enables users to build container images that reference external volumes while being built, but without having to ship those external volumes in the completed image. This helps to simplify image creation without bloating those images with unnecessary and unwanted artifacts in production.

To enhance security, Buildah can help the resulting images better comply with Federal Information Processing Standards (FIPS), computer systems standards required by the U.S. Federal Government for non-military, governmental operations, with support for FIPS mode. When a host is running in FIPS mode, Buildah can build and run containers in FIPS mode as well, making it easier for containers on hosts running in FIPS mode to comply with the standards.

Buildah now also offers multi-stage builds, multiple container transport methods for pulling and pushing images, and more. By focusing solely on building and manipulating container images, Buildah is a useful tool for anyone working with Linux containers. Whether you’re a developer testing images locally or looking for an independent image builder for a production toolchain, Buildah is a worthy addition to your container toolbelt.

Want to start building with Buildah yourself?

Try `yum -y install buildah` or learn more and contribute at the project site: https://github.com/projectatomic/buildah.

You can also see a more detailed example at https://www.projectatomic.io/blog/2018/03/building-buildah-container-image-for-kubernetes/.

*Smarter with Gartner, 6 Best Practices for Creating a Container Platform Strategy, October 31, 2017, https://www.gartner.com/smarterwithgartner/6-best-practices-for-creating-a-container-platform-strategy/

6 Best Practices for Creating a Container Platform Strategy

Gartner has identified six key elements that should be part of a container platform strategy to help I&O leaders mitigate the challenges of deploying containers in production environments:

  1. Security and governance - Security is a particularly challenging issue for production container deployments. The integrity of the shared host OS kernel is critical to the integrity and isolation of the containers that run on top of it. A hardened, patched, minimalist OS should be used as the host OS, and containers should be monitored on an ongoing basis for vulnerabilities and malware to ensure a trusted service delivery.
  2. Monitoring - The deployment of cloud-native applications shifts the focus to container-specific and service-oriented monitoring (from host-based) to ensure compliance with resiliency and performance service-level agreements. “It’s therefore important to deploy packaged tools that can provide container and service-level monitoring, as well as linking container monitoring tools to the container orchestrators to pull in metrics on other components for better visualization and analytics,” says Chandrasekaran.
  3. Storage - Since containers are transient, the data should be disassociated from the container so that the data persists and is protected even after the container is spun down. Scale-out software-defined storage products can solve the problem of data mobility, the need for agility and simultaneous access to data from multiple application containers.
  4. Networking - The portability and short-lived life cycle of containers will overwhelm the traditional networking stack. The native container networking stack doesn’t have robust-enough access and policy management capabilities. “I&O teams must therefore eliminate manual network provisioning within containerized environments, enable agility through network automation and provide developers with proper tools and sufficient flexibility,” Chandrasekaran says.
  5. Container life cycle management - Containers present the potential for sprawl even more severe than many virtual machine deployments caused. This complexity is often intensified by many layers of services and tooling. Container life cycle management can be automated through a close tie-in with continuous integration/continuous delivery processes together with continuous configuration automation tools to automate infrastructure deployment and operational tasks.
  6. Container orchestration - Container management tools are the “brains” of a distributed system, making decisions on discovery of infrastructure components making up a service, balancing workloads with infrastructure resources, and provisioning and deprovisioning infrastructures, among other things. “The key decision here is whether hybrid orchestration for container workloads is required or if it is sufficient to provision based on use case and manage multiple infrastructure silos individually,” Chandrasekaran says.

Jaeger Project Intro - Juraci Kröhling, Red Hat (Any Skill Level)

More Information:



Red Hat Enterprise Linux 8 – Release Date and New Features @itzgeek https://www.itzgeek.com/how-tos/linux/centos-how-tos/red-hat-enterprise-linux-8-release-date-and-new-features.html










23 October 2018

Windows Server 2019 (Version 10.0.17763) and SQL Server 2019


The Latest from Ignite 2018

From Ops to DevOps with Windows Server containers and Windows Server 2019

Windows Server 2019 will be generally available in October, and we have updated Windows Admin Center, version 1809, to support Windows Server 2019 and Azure hybrid scenarios. Windows Server 2019 builds on the foundation of Windows Server 2016, the fastest-adopted version of Windows Server, with tens of millions of instances deployed worldwide. Customers like Alaska Airlines, Tyco, and Tieto have adopted Windows Server 2016 to modernize their datacenters.

What's new in Remote Desktop Services on Windows Server 2019

Through various listening channels such as the Insider program, product telemetry analysis, and industry trends, we heard loud and clear that hybrid, security, agility, and TCO are top of mind for our customers. Datacenter modernization is critical to support your business and deliver innovation, especially given the competitive landscape today. Windows Server 2019 is designed and engineered to help modernize your datacenter, delivering on four key areas:

Hybrid: The move to the cloud is a journey. A hybrid approach, one that combines on-premises and cloud environments working together, is a core element of our customers’ modernization strategy. This is why hybrid is built in to Windows Server 2019 and Windows Admin Center. To make it easier to connect existing Windows Server deployments to Azure services, we built interfaces for hybrid capabilities into the Windows Admin Center. With Windows Admin Center and Windows Server 2019, customers can use hybrid features like Azure Backup, Azure File Sync, and disaster recovery to extend their datacenters to Azure. We also added the Storage Migration Service to help migrate file servers and their data to Azure without the need to reconfigure applications or users.

Windows Server 2019 deep dive | Best of Microsoft Ignite 2018

Security: Security continues to be a top priority for our customers. With security threats growing in number and becoming more and more sophisticated, we continue to keep a persistent focus on security. Our approach to security is three-fold: Protect, Detect, and Respond. We bring security features in all three areas to Windows Server 2019. On the Protect front, we had previously introduced Shielded VMs to protect sensitive virtualized workloads such as Domain Controllers, PCI data, and sensitive healthcare and financial data, among others. In Windows Server 2019, we extended support of Shielded VMs to Linux VMs. On the Detect and Respond front, we enabled Windows Defender Advanced Threat Protection (ATP), which detects attacks and zero-day exploits among other capabilities. Windows Server 2019 also includes Defender Exploit Guard to help you elevate the security posture of your IT environment and combat ransomware attacks.

Windows Server 2019 deep dive

Application Platform: A key guiding principle for us on the Windows Server team is a relentless focus on the developer experience. We learned from your feedback that a smaller container image size will significantly improve the experience of developers and IT pros who are modernizing their existing applications using containers. In Windows Server 2019, we reduced the Server Core base container image to a third of its size. We also provide improved app compatibility, support for Service Fabric and Kubernetes, and support for Linux containers on Windows to help modernize your apps. One piece of feedback we constantly hear from developers is the complexity of navigating environments with Linux and Windows deployments. To address that, we previously extended Windows Subsystem for Linux (WSL) into Insider builds for Windows Server, so that customers can run Linux containers side-by-side with Windows containers on a Windows Server. In Windows Server 2019, we are continuing on this journey to improve WSL, helping Linux users bring their scripts to Windows while using industry standards like OpenSSH, curl and tar.
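As a rough sketch (the download URL is a placeholder), WSL is enabled as an optional Windows feature, after which the familiar Linux tools run side by side with Windows tooling via the `wsl` launcher:

```shell
# Enable the Windows Subsystem for Linux feature
# (run from an elevated command prompt; a reboot may be required)
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

# Once a Linux distribution is installed, standard tools
# like curl and tar are available from the Windows side
wsl curl -sO https://example.com/archive.tar.gz
wsl tar -xzf archive.tar.gz
```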

Windows 2019

Hyper-converged Infrastructure (HCI): HCI is one of the latest trends in the server industry today, primarily because customers understand the value of using servers with high-performance local disks to run their compute and storage needs at the same time. In Windows Server 2019, we democratize HCI with cost-effective, high-performance software-defined storage and networking that allows deployments to scale from small two-node clusters all the way up to hundreds of servers with Cluster Sets technology, making it affordable regardless of the deployment scale. Through our Windows Server Software-Defined program, we partner with industry-leading hardware vendors to provide an affordable and yet extremely robust HCI solution with validated designs.

In October, customers will have access to Windows Server 2019 through all the channels! We will publish a blog post to mark the availability of Windows Server 2019 soon.

Are you wondering how Windows Server 2016 vs. 2019 compare? Let’s find out!

Microsoft, the Redmond giant, recently announced the new version of Windows Server. Aptly named Windows Server 2019, it is already available to users of Insider builds, and a general rollout should follow quite soon. How does it improve the user experience compared with Windows Server 2016? Let us find out through an introduction to the new features in Windows Server 2019.

Windows server 2016 vs. 2019 – What’s the difference?
Windows Server 2019 was officially announced on March 20, 2018, through a post on the official Windows Server Blog. The new server edition will be available to the general public in the second half of calendar year 2018. If you want to try it before everyone else, you can check it out by registering for the Windows Insider Program.

Differentiating Windows Server 2019 from its predecessor, Windows Server 2016, is not an easy task. The latest version of Windows Server is based on Windows Server 2016, so you will find virtually all the same features, apart from the new improvements and optimizations. We will therefore compare the two based on the new features.

Windows Server 2016 has been one of the fastest-adopted server versions ever from the Redmond giant. Windows Server 2019 continues from where the 2016 version left off. The primary areas selected for changes and improvements were hybrid, security, application platform, and hyper-converged infrastructure.

Hybrid Cloud Scenario

Windows Server 2019 uses a hybrid approach for the move to the cloud. Unlike the option available in Windows Server 2016, both on-premises and cloud solutions work together, offering an enhanced environment for users.

Server 2016 uses Active Directory, file server synchronization and cloud backup of data. The difference lies in the way Windows Server 2019 lets on-premises deployments make use of more advanced systems like IoT and artificial intelligence. The hybrid approach helps ensure that your environment is future-proof and a sound long-term option.

Integration with Project Honolulu (now Windows Admin Center) offers you a seamless, lightweight and flexible platform for all your needs. If you are using cloud services from Microsoft, such as Microsoft Azure, this is something you will indeed love.

New Security Services

Security is yet another feature that has received an impetus since the days of Windows Server 2016. Server 2016 relied on Shielded VMs; what has changed in the new version is the additional support for Linux VMs.

Windows Server 2019 introduces new security features with an emphasis on three particular areas that need attention: Protect, Detect and Respond. Windows Server 2019 also brings new functionality in the form of extended VMConnect support for troubleshooting Shielded VMs on both Windows Server and Linux.

Another functionality added since the days of Windows Server 2016 is the embedded Windows Defender Advanced Threat Protection, which can perform efficient preventive actions for the detection of attacks.

Application Platform

Microsoft has been focusing on enhanced developer experiences. Windows Server 2019 brings new developments in the form of improved Windows Server Containers and the Windows Subsystem for Linux.

Windows Server 2016 performed well with Windows Server Containers, and the concept has seen strong adoption: thousands of container images have been downloaded since the launch of the 2016 edition of Windows Server. The Windows Server 2019 edition aims to reduce the size of the Server Core base container image, which is bound to enhance development and performance remarkably.

Windows Server 2016 introduced support for a robust set of Hyper-Converged Infrastructure (HCI) options, with backing from the industry’s leading hardware vendors. Windows Server 2019 takes this further.

Windows Server 2019: What’s new and what's next

Yes, the 2019 version brings in a few extra features: extra scale, performance, reliability and better support for HCI deployments. The Project Honolulu interface we mentioned above brings a high-performance interface for Storage Spaces Direct. However, small businesses may not be able to afford it as of now.

Enterprise-grade hyperconverged infrastructure (HCI)

With the release of Windows Server 2019, Microsoft rolls up three years of updates for its HCI platform. That’s because the gradual upgrade schedule Microsoft now uses includes what it calls Semi-Annual Channel releases – incremental upgrades as they become available. Then every couple of years it creates a major release called the Long-Term Servicing Channel (LTSC) version that includes the upgrades from the preceding Semi-Annual Channel releases.

The LTSC Windows Server 2019 is due out this fall, and is now available to members of Microsoft’s Insider program.

While the fundamental components of HCI (compute, storage and networking) have been improved with the Semi-Annual Channel releases, for organizations building datacenters and high-scale software defined platforms, Windows Server 2019 is a significant release for the software-defined datacenter.

What's new in Active Directory Federation Services (AD FS) in Windows Server 2019

With the latest release, HCI is provided on top of a set of components that are bundled with the server license. This means a backbone of servers running Hyper-V to enable dynamic increase or decrease of capacity for workloads without downtime.

Improvements in security

Microsoft has continued to include built-in security functionality to help organizations address an “expect breach” model of security management.  Rather than assuming firewalls along the perimeter of an enterprise will prevent any and all security compromises, Windows Server 2019 assumes servers and applications within the core of a datacenter have already been compromised.

Windows Server 2019 includes Windows Defender Advanced Threat Protection (ATP), which assesses common vectors for security breaches and automatically blocks and alerts about potentially malicious attacks. Users of Windows 10 have received many of the Windows Defender ATP features over the past few months. Including Windows Defender ATP in Windows Server 2019 lets customers take advantage of data storage, network transport and security-integrity components to prevent compromises on Windows Server 2019 systems.

Smaller, more efficient containers

Organizations are rapidly minimizing the footprint and overhead of their IT operations and eliminating more bloated servers with thinner and more efficient containers. Windows Insiders have benefited by achieving higher density of compute to improve overall application operations with no additional expenditure in hardware server systems or expansion of hardware capacity.

Getting started with Windows Server containers in Windows Server 2019

Windows Server 2019 has a smaller, leaner ServerCore image that cuts virtual machine overhead by 50-80 percent.  When an organization can get the same (or more) functionality in a significantly smaller image, the organization is able to lower costs and improve efficiencies in IT investments.
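To see the difference for yourself, the Server Core base image can be pulled from Microsoft's container registry and its on-disk size compared against the 2016-era image (assuming a Windows container host with Docker installed):

```shell
# Pull the Windows Server 2019 (1809) Server Core base container image --
# substantially smaller than its Windows Server 2016 counterpart
docker pull mcr.microsoft.com/windows/servercore:1809

# Compare on-disk image sizes
docker images mcr.microsoft.com/windows/servercore
```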

Windows subsystem on Linux

A decade ago, one would rarely say Microsoft and Linux in the same breath as complementary platform services, but that has changed. Windows Server 2016 has open support for Linux instances as virtual machines, and the new Windows Server 2019 release makes huge headway by including an entire subsystem optimized for the operation of Linux systems on Windows Server.

The Windows Subsystem for Linux extends basic virtual machine operation of Linux systems on Windows Server, and provides a deeper layer of integration for networking, native filesystem storage and security controls. It can enable encrypted Linux virtual instances. That’s exactly how Microsoft provided Shielded VMs for Windows in Windows Server 2016; Windows Server 2019 now offers native Shielded VMs for Linux as well.

Be an IT hero with Storage Spaces Direct in Windows Server 2019

Enterprises have found the optimization of containers along with the ability to natively support Linux on Windows Server hosts can decrease costs by eliminating the need for two or three infrastructure platforms, and instead running them on Windows Server 2019.

Because most of the “new features” in Windows Server 2019 have been included in updates over the past couple years, these features are not earth-shattering surprises.  However, it also means that the features in Windows Server 2019 that were part of Windows Server 2016 Semi-Annual Channel releases have been tried, tested, updated and proven already, so that when Windows Server 2019 ships, organizations don’t have to wait six to 12 months for a service pack of bug fixes.

This is a significant change that is helping organizations plan their adoption of Windows Server 2019 sooner than they might have adopted a major release in the past. It also brings significant improvements for enterprise datacenters, helping them meet the security, scalability, and optimization requirements so badly needed in today’s fast-paced environments.

Windows Server 2019 has the following new features:

  • Windows Subsystem for Linux (WSL)
  • Support for Kubernetes (Beta)
  • Other GUI features from Windows 10 version 1809
  • Storage Spaces Direct
  • Storage Migration Service
  • Storage Replica
  • System Insights
  • Improved Windows Defender

What is New in Windows Server 2019

Windows Server 2019 has four main areas of investment, and below is a glimpse of each area.

Hybrid: Windows Server 2019 and Windows Admin Center will make it easier for customers to connect existing on-premises environments to Azure. Windows Admin Center also makes it easier for customers on Windows Server 2019 to use Azure services such as Azure Backup and Azure Site Recovery, and more services will be added over time.
Security: Security continues to be a top priority for our customers and we are committed to helping our customers elevate their security posture. Windows Server 2016 started on this journey and Windows Server 2019 builds on that strong foundation, along with some shared security features with Windows 10, such as Defender ATP for server and Defender Exploit Guard.
Application Platform: Containers are becoming popular as developers and operations teams realize the benefits of running in this new model. In addition to the work we did in Windows Server 2016, we have been busy with the Semi-Annual Channel releases and all that work culminates in Windows Server 2019. Examples of these include Linux containers on Windows, the work on the Windows Subsystem for Linux (WSL), and the smaller container images.
Hyper-converged Infrastructure (HCI): If you are thinking about evolving your physical or host server infrastructure, you should consider HCI. This new deployment model allows you to consolidate compute, storage, and networking into the same nodes allowing you to reduce the infrastructure cost while still getting better performance, scalability, and reliability.

Microsoft SQL Server 2019 

SQL Server 2019 Vision

What’s New in Microsoft SQL Server 2019 

• Big Data Clusters

  • Deploy a Big Data cluster with SQL and Spark Linux containers on Kubernetes
  • Access your big data from HDFS
  • Run advanced analytics and machine learning with Spark
  • Use Spark streaming to stream data to SQL data pools
  • Use Azure Data Studio to run Query books that provide a notebook experience

• Database engine

  • UTF-8 support
  • Resumable online index create allows index create to resume after interruption
  • Clustered columnstore online index build and rebuild
  • Always Encrypted with secure enclaves
  • Intelligent query processing
  • Java language programmability extension
  • SQL Graph features
  • Database scoped configuration setting for online and resumable DDL operations
  • Always On Availability Groups – secondary replica connection redirection
  • Data discovery and classification – natively built into SQL Server
  • Expanded support for persistent memory devices
  • Support for columnstore statistics in DBCC CLONEDATABASE
  • New options added to sp_estimate_data_compression_savings
  • SQL Server Machine Learning Services failover clusters
  • Lightweight query profiling infrastructure enabled by default
  • New Polybase connectors
  • New sys.dm_db_page_info system function returns page information

• SQL Server on Linux

  • Replication support
  • Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
  • Always On Availability Group on Docker containers with Kubernetes
  • OpenLDAP support for third-party AD providers
  • Machine Learning on Linux
  • New container registry
  • New RHEL-based container images
  • Memory pressure notification

• Master Data Services

  • Silverlight controls replaced

• Security

  • Certificate management in SQL Server Configuration Manager

• Tools

  • SQL Server Management Studio (SSMS) 18.0 (preview)
  • Azure Data Studio

Introducing Microsoft SQL Server 2019 Big Data Clusters

SQL Server 2019 big data clusters make it easier for big data sets to be joined to the dimensional data typically stored in the enterprise relational database, enabling people and apps that use SQL Server to query big data more easily. The value of the big data greatly increases when it is not just in the hands of the data scientists and big data engineers but is also included in reports, dashboards, and applications. At the same time, the data scientists can continue to use big data ecosystem tools while also utilizing easy, real-time access to the high-value data in SQL Server because it is all part of one integrated, complete system.
Read the complete blog post from Travis Wright about SQL Server 2019 big data clusters here: https://cloudblogs.microsoft.com/sqlserver/2018/09/25/introducing-microsoft-sql-server-2019-big-data-clusters/
Starting in SQL Server 2017 with support for Linux and containers, Microsoft has been on a journey of platform and operating system choice. With SQL Server 2019 preview, we are making it easier to adopt SQL Server in containers by enabling new HA scenarios and adding supported Red Hat Enterprise Linux container images. Today we are happy to announce the availability of SQL Server 2019 preview Linux-based container images on Microsoft Container Registry, Red Hat-Certified Container Images, and the SQL Server operator for Kubernetes, which makes it easy to deploy an Availability Group.

SQL Server 2019: Celebrating 25 years of SQL Server Database Engine and the path forward
Awesome work, Microsoft SQL team, and congrats on your 25th anniversary!
Microsoft announced the preview of SQL Server 2019. For 25 years, SQL Server has helped enterprises manage all facets of their relational data. In recent releases, SQL Server has gone beyond querying relational data by unifying graph and relational data and bringing machine learning to where the data is with R and Python model training and scoring. As the volume and variety of data increases, customers need to easily integrate and analyze data across all types of data.

SQL Server 2019 big data clusters - intro session

Now, for the first time ever, SQL Server 2019 creates a unified data platform with Apache Spark™ and Hadoop Distributed File System (HDFS) packaged together with SQL Server as a single, integrated solution. Through the ability to create big data clusters, SQL Server 2019 delivers an incredible expansion of database management capabilities, further redefining SQL Server beyond a traditional relational database. And as with every release, SQL Server 2019 continues to push the boundaries of security, availability, and performance for every workload with Intelligent Query Processing, data compliance tools and support for persistent memory. With SQL Server 2019, you can take on any data project, from traditional SQL Server workloads like OLTP, Data Warehousing and BI, to AI and advanced analytics over big data.

SQL Server 2017 Deep Dive

SQL Server provides a true hybrid platform, with a consistent SQL Server surface area from your data center to public cloud—making it easy to run in the location of your choice. Because SQL Server 2019 big data clusters are deployed as containers on Kubernetes with a built-in management service, customers can get a consistent management and deployment experience on a variety of supported platforms on-premises and in the cloud: OpenShift or Kubernetes on premises, Azure Kubernetes Service (AKS), Azure Stack (on AKS) and OpenShift on Azure. With Azure Hybrid Benefit license portability, you can choose to run SQL Server workloads on-premises or in Azure, at a fraction of the cost of any other cloud provider.

SQL Server – Insights over all your data

SQL Server continues to embrace open source, from SQL Server 2017 support for Linux and containers to SQL Server 2019 now embracing Spark and HDFS to bring you a unified data platform. With SQL Server 2019, all the components needed to perform analytics over your data are built into a managed cluster, which is easy to deploy and can scale with your business needs. HDFS, Spark, Knox, Ranger, and Livy all come packaged together with SQL Server and are quickly and easily deployed as Linux containers on Kubernetes. SQL Server simplifies the management of all your enterprise data by removing the barriers that currently exist between structured and unstructured data.

SQL server 2019 big data clusters - deep dive session

Here’s how we make it easy for you to break down barriers to realizing insights across all your data, providing one view of your data across the organization:

Simplify big data analytics for SQL Server users. SQL Server 2019 makes it easier to manage big data environments. It comes with everything you need to create a data lake, including HDFS and Spark provided by Microsoft and analytics tools, all deeply integrated with SQL Server and fully supported by Microsoft. Now, you can run apps, analytics, and AI over structured and unstructured data in the same integrated cluster – using familiar T-SQL queries, or, for those familiar with Spark, using Python, R, Scala, or Java to run Spark jobs for data preparation or analytics.
Give developers, data analysts, and data engineers a single source for all your data – structured and unstructured – using their favorite tools. With SQL Server 2019, data scientists can easily analyze data in SQL Server and HDFS through Spark jobs. Analysts can run advanced analytics over big data using SQL Server Machine Learning Services: train over large datasets in Hadoop and operationalize in SQL Server. Data scientists can use a brand new notebook experience running on the Jupyter notebooks engine in a new extension of Azure Data Studio to interactively perform advanced analysis of data and easily share the analysis with their colleagues.
Break down data silos and deliver one view across all of your data using data virtualization. Starting in SQL Server 2016, PolyBase has enabled you to run a T-SQL query inside SQL Server to pull data from your data lake and return it in a structured format—all without moving or copying the data. Now in SQL Server 2019, we’re expanding that concept of data virtualization to additional data sources, including Oracle, Teradata, MongoDB, PostgreSQL, and others. Using the new PolyBase, you can break down data silos and easily combine data from many sources using virtualization to avoid the time, effort, security risks and duplicate data created by data movement and replication. New elastically scalable “data pools” and “compute pools” make querying virtualized data lightning fast by caching data and distributing query execution across many instances of SQL Server.
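The data-virtualization idea PolyBase implements – one T-SQL query that joins local tables to an external source in place, with no data copied – can be sketched in miniature with SQLite's ATTACH, which likewise exposes a second data source to a single SQL statement. This is only an analogy (SQLite standing in for SQL Server plus an external system; the table and column names are invented for illustration):

```python
import sqlite3

# Local database with a sales table (standing in for SQL Server).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (cust_id INTEGER, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 99.5), (2, 10.0), (1, 25.0)])

# Attach a second source, standing in for an external system (e.g. Oracle).
con.execute("ATTACH DATABASE ':memory:' AS crm")
con.execute("CREATE TABLE crm.customers (cust_id INTEGER, name TEXT)")
con.executemany("INSERT INTO crm.customers VALUES (?, ?)",
                [(1, "Contoso"), (2, "Fabrikam")])

# One query joins across both sources; neither side is copied into the other.
rows = con.execute("""
    SELECT c.name, SUM(s.amount)
    FROM sales AS s
    JOIN crm.customers AS c ON c.cust_id = s.cust_id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Contoso', 124.5), ('Fabrikam', 10.0)]
```

In PolyBase the remote source would instead be declared with CREATE EXTERNAL TABLE, and the query optimizer decides which parts of the query to push down to the external system.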

“From its inception, the Sloan Digital Sky Survey database has run on SQL Server, and SQL Server also stores object catalogs from large cosmological simulations. We are delighted with the promise of SQL Server 2019 big data clusters, which will allow us to enhance our databases to include all our big data sets. The distributed nature of SQL Server 2019 allows us to expand our efforts to new types of simulations and to the next generation of astronomical surveys with datasets up to 10PB or more, well beyond the limits of our current database solutions.”- Dr. Gerard Lemson, Institute for Data Intensive Engineering and Science, Johns Hopkins University.

Enhanced performance, security, and availability

The SQL Server 2019 relational engine will deliver new and enhanced features in the areas of mission-critical performance, security and compliance, and database availability, as well as additional features for developers, SQL Server on Linux and containers, and general engine enhancements.

Industry-leading performance – The Intelligent Database

The Intelligent Query Processing family of features builds on the hands-free performance tuning of Adaptive Query Processing in SQL Server 2017, adding row mode memory grant feedback, approximate COUNT DISTINCT, batch mode on rowstore, and table variable deferred compilation.
Persistent memory support is improved in this release with a new, optimized I/O path available for interacting with persistent memory storage.
The lightweight query profiling infrastructure is now enabled by default to provide per-query operator statistics anytime and anywhere you need them.
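For a feel of what approximate COUNT DISTINCT buys, here is a small KMV ("k minimum values") sketch in Python – a simpler relative of the HyperLogLog family of estimators, shown purely as an illustration of bounded-memory distinct counting, not as SQL Server's actual algorithm:

```python
import hashlib

def approx_count_distinct(values, k=64):
    """Estimate the number of distinct values from the k smallest hashes (KMV)."""
    smallest = set()  # the k smallest distinct normalized hash values seen so far
    for v in values:
        # Hash each value to a pseudo-uniform float in [0, 1).
        h = int(hashlib.md5(str(v).encode()).hexdigest(), 16) / 2.0**128
        smallest.add(h)
        if len(smallest) > k:
            smallest.discard(max(smallest))
    if len(smallest) < k:
        return len(smallest)  # saw fewer than k distinct values: count is exact
    # With n distinct uniform hashes in [0, 1), the k-th smallest is about k/n,
    # so n is about (k - 1) / kth_smallest (the standard KMV estimator).
    return int((k - 1) / max(smallest))

data = [i % 10_000 for i in range(100_000)]  # 10,000 distinct values, repeated
print(approx_count_distinct(data))           # an estimate near 10,000
```

The sketch keeps only k hash values regardless of input size, which is the trade-off such features make: a small estimation error in exchange for a small, fixed memory budget.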

Advanced security – Confidential Computing

Always Encrypted with secure enclaves extends the client-side encryption technology introduced in SQL Server 2016. Secure enclaves protect sensitive data in a hardware or software-created enclave inside the database, securing it from malware and privileged users while enabling advanced operations on encrypted data.
SQL Data Discovery and Classification is now built into the SQL Server engine with new metadata and auditing support to help with GDPR and other compliance needs.
Certificate management is now easier using SQL Server Configuration Manager.

Mission-critical availability – High uptime

Always On Availability Groups have been enhanced to include automatic redirection of connections to the primary based on read/write intent.
High availability configurations for SQL Server running in containers can be enabled with Always On Availability Groups using Kubernetes.
Resumable online indexes now support create operations and include database scoped defaults.

Developer experience

Enhancements to SQL Graph include match support with T-SQL MERGE and edge constraints.
New UTF-8 support gives customers the ability to reduce SQL Server’s storage footprint for character data.
The new Java language extension will allow you to call a pre-compiled Java program and securely execute Java code on the same server with SQL Server. This reduces the need to move data and improves application performance by bringing your workloads closer to your data.
Machine Learning Services has several enhancements including Windows Failover cluster support, partitioned models, and support for SQL Server on Linux.
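The UTF-8 storage saving is easy to see outside SQL Server: NVARCHAR stores characters as UTF-16 (two bytes for every common character), while UTF-8 encodes ASCII characters in a single byte. A quick illustration in Python (the sample strings are invented):

```python
samples = [
    "Order 1024: net total 99.95",  # pure ASCII text
    "Café München, reçu nº 42",     # mostly Latin, a few accented characters
]
for text in samples:
    utf8 = len(text.encode("utf-8"))        # 1 byte per ASCII character
    utf16 = len(text.encode("utf-16-le"))   # 2 bytes per common character
    print(f"{text!r}: {utf8} bytes as UTF-8 vs {utf16} bytes as UTF-16")
```

For mostly-Latin data the UTF-8 byte count is close to half the UTF-16 count, which is where the storage-footprint reduction comes from; text dominated by CJK characters (three bytes each in UTF-8) can actually grow, so the collation choice is per-workload.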

Platform of choice

Additional capabilities for SQL Server on Linux include distributed transactions, replication, PolyBase, Machine Learning Services, memory notifications, and OpenLDAP support.
Containers have new enhancements, including use of the new Microsoft Container Registry, with support for Red Hat Enterprise Linux images and Always On Availability Groups for Kubernetes.
You can read more about what’s new in SQL Server 2019 in our documentation.

SQL Server 2019 support in Azure Data Studio

Expanded support for more data workloads in SQL Server requires expanded tooling. As Microsoft has worked with users of its data platform, we have seen the coming together of previously disparate personas: database administrators, data scientists, data developers, data analysts, and new roles still being defined. These users increasingly want to use the same tools to work together, seamlessly, across on-premises and cloud environments, using relational and unstructured data, and working with OLTP, ETL, analytics, and streaming workloads.

Azure Data Studio offers a modern editor experience with lightning fast IntelliSense, code snippets, source control integration, and an integrated terminal. It is engineered with the data platform user in mind, with built-in charting of query result sets, an integrated notebook, and customizable dashboards. Azure Data Studio currently offers built-in support for SQL Server on-premises and Azure SQL Database, along with preview support for Azure SQL Managed Instance and Azure SQL Data Warehouse.

Azure Data Studio is today shipping a new SQL Server 2019 Preview Extension to add support for select SQL Server 2019 features. The extension offers connectivity and tooling for SQL Server big data clusters, including a preview of the first ever notebook experience in the SQL Server toolset, and a new PolyBase Create External Table wizard that makes accessing data from remote SQL Server and Oracle instances easy and fast.

More information: