• IBM Consulting

DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Server, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

For consulting services related to Microsoft Server 2012 onwards, Microsoft client Windows 7 and higher, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

23 November 2018

Powering IT’s future while preserving the present: Introducing Red Hat Enterprise Linux 8


Red Hat Enterprise Linux multi-year roadmap

Red Hat Enterprise Linux 8 (RHEL 8) has not been released yet, but the beta was released on November 14 for you to get your hands dirty with the new version of the world’s best-known enterprise operating system. The beta arrived shortly after IBM announced its $34 billion acquisition of Red Hat on October 28, 2018. https://www.itzgeek.com/how-tos/linux/centos-how-tos/red-hat-enterprise-linux-8-release-date-and-new-features.html

Meet Red Hat Enterprise Linux 8

Linux containers, Kubernetes, artificial intelligence, blockchain and too many other technical breakthroughs to list all share a common component - Linux, the same workhorse that has driven mission-critical, production systems for nearly two decades. Today, we’re offering a vision of a Linux foundation to power the innovations that can extend and transform business IT well into the future: Meet Red Hat Enterprise Linux 8.

Microservices with Docker, Kubernetes, and Jenkins

Enterprise IT is evolving at a pace faster today than at any other point in history. This reality necessitates a common foundation that can span every footprint, from the datacenter to multiple public clouds, enabling organizations to meet every workload requirement and deliver any app, everywhere.

With Red Hat Enterprise Linux 8, we worked to deliver a shared foundation for both the emerging and current worlds of enterprise IT. The next generation of the world’s leading enterprise Linux platform helps fuel digital transformation strategies across the hybrid cloud, where organizations use innovations like Linux containers and Kubernetes to deliver differentiated products and services. At the same time, Red Hat Enterprise Linux 8 Beta enables IT teams to optimize and extract added value from existing technology investments, helping to bridge demands for innovation with stability and productivity.

Sidecars and a Microservices Mesh

In the four years since Red Hat Enterprise Linux 7 redefined the operating system, the IT world has changed dramatically and Red Hat Enterprise Linux has evolved with it. Red Hat Enterprise Linux 8 Beta once again sets a bar for how the operating system can enable IT innovation. While Red Hat Enterprise Linux 8 Beta features hundreds of improvements and dozens of new features, several key capabilities are designed to help the platform drive digital transformation and fuel hybrid cloud adoption without disrupting existing production systems.

Your journey into the serverless world

Red Hat Enterprise Linux 8 introduces the concept of Application Streams to deliver userspace packages more simply and with greater flexibility. Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system. Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream. This helps to deliver greater agility and user-customized versions of Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.
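A minimal sketch of how Application Streams look in practice (the PostgreSQL streams shown are illustrative; check `yum module list` on your own RHEL 8 system for what is actually available):

```shell
# List the streams (versions) available for a package set
yum module list postgresql

# Install a specific stream rather than the operating system default
yum module install postgresql:10

# Later, switch streams without waiting for a new major OS release
yum module reset postgresql
yum module install postgresql:9.6
```

Because the stream selection is per-system, two deployments on the same RHEL 8 base can run different database or language versions without diverging at the OS level.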

Red Hat Enterprise Linux roadmap 2018

Beyond a refined core architecture, Red Hat Enterprise Linux 8 also enhances:

Networking

Red Hat Enterprise Linux 8 Beta supports more efficient Linux networking in containers through IPVLAN, connecting containers nested in virtual machines (VMs) to networking hosts with a minimal impact on throughput and latency. It also includes a new TCP/IP stack with Bandwidth and Round-trip propagation time (BBR) congestion control, which enables higher performance, minimized latency and decreased packet loss for Internet-connected services like streaming video or hosted storage.


Security

As with all versions of Red Hat Enterprise Linux before it, Red Hat Enterprise Linux 8 Beta brings hardened code and security fixes to enterprise users, along with the backing of Red Hat’s overall software security expertise. With Red Hat Enterprise Linux 8 Beta, our aim is to deliver a more secure-by-default operating system foundation across the hybrid cloud.

Serverless and Servicefull Applications - Where Microservices complements Serverless

OpenSSL 1.1.1 and TLS 1.3 are both supported in Red Hat Enterprise Linux 8, enabling server applications on the platform to use the latest standards for cryptographic protection of customer data. System-wide Cryptographic Policies are also included, making it easier to manage cryptographic compliance from a single prompt without the need to modify and tune specific applications.
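For example, the system-wide policy can be inspected and switched from a single prompt (run as root; the policy names follow the RHEL 8 crypto-policies documentation):

```shell
# Show the active system-wide cryptographic policy
update-crypto-policies --show

# Tighten the whole system to a stricter profile in one step;
# compliant applications pick this up without per-app tuning
update-crypto-policies --set FUTURE

# Revert to the default profile
update-crypto-policies --set DEFAULT
```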

Linux containers

Red Hat set a standard when we introduced enterprise support for Linux containers in Red Hat Enterprise Linux 7. Now, Linux containers have become a critical component of digital transformation, offering a roadmap for more portable and flexible enterprise applications, and Red Hat remains at the forefront of this shift with Red Hat Enterprise Linux 8.

Red Hat’s lightweight, open standards-based container toolkit is now fully supported and included with Red Hat Enterprise Linux 8. Built with enterprise IT security needs in mind, Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers) help developers find, run, build and share containerized applications more quickly and efficiently, thanks to the distributed and daemonless nature of the tools.
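A rough sketch of how the three tools divide the work; the image names below are illustrative:

```shell
# Buildah: build an OCI image from a Dockerfile in the current
# directory, no daemon required
buildah bud -t myapp:latest .

# Podman: run the resulting container, again without a daemon
podman run --rm myapp:latest

# Skopeo: inspect image metadata in a remote registry without pulling it
skopeo inspect docker://registry.access.redhat.com/ubi8/ubi
```

Because none of these tools talks to a long-running daemon, they can run as ordinary build steps in a pipeline or inside a container themselves.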

FaaS and Furious - 0 to Serverless in 60 Seconds, Anywhere - Alex Ellis, ADP

Systems management

The growth of Linux in corporate datacenters requires management and, frequently, new systems administrators are faced with managing complex system footprints or performing difficult tasks that are outside of their comfort zones. Red Hat Enterprise Linux 8 aims to make it easier on systems administrators of all experience levels with several quality of life improvements, starting with a single and consistent user control panel through the Red Hat Enterprise Linux Web Console. This provides a simplified interface to more easily manage Red Hat Enterprise Linux servers locally and remotely, including virtual machines.

Camel Riders in the Cloud

Red Hat Enterprise Linux roadmap

Composer makes it easier for both new and experienced Red Hat Enterprise Linux users to build and deploy custom images across the hybrid cloud - from physical and virtualized environments to private and public cloud instances. Using a straightforward graphical interface, Composer simplifies access to packages as well as the process for assembling deployable images. This means that users can more readily create Red Hat Enterprise Linux-based images, from minimal footprint to specifically optimized, for a variety of deployment models, including virtual machines and cloud environments.

Istio canaries and kubernetes

Yum 4, the next generation of the Yum package manager in Red Hat Enterprise Linux, delivers faster performance, fewer installed dependencies and more choices of package versions to meet specific workload requirements.

Lightning Talk: The State Of FaaS on Kubernetes - Michael Hausenblas, Red Hat

File systems and storage

New to Red Hat Enterprise Linux 8 Beta is Stratis, a volume-managing file system for more sophisticated data management. Stratis abstracts away the complexities inherent to data management via an API, enabling these capabilities without requiring systems administrators to understand the underlying nuances, delivering a faster and more efficient file system.

File System Snapshots provide a faster way of conducting file-level tasks, like cloning virtual machines, while saving space by consuming new storage only when data changes. Support for LUKSv2 to encrypt on-disk data is combined with Network-Bound Disk Encryption (NBDE) for more robust data security and simpler access to encrypted data.
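As an illustrative sketch of the Stratis workflow (the device name `/dev/sdb` and the pool and filesystem names are hypothetical; the commands assume the stratisd service is running):

```shell
# Create a pool: Stratis hides the volume-management layering behind one command
stratis pool create mypool /dev/sdb

# Carve a filesystem out of the pool; sizing is handled dynamically
stratis filesystem create mypool data

# Snapshot it; new storage is consumed only as data changes
stratis filesystem snapshot mypool data data-snap
```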

IBM Acquires Red Hat: Creating the World's Leading Hybrid Cloud Provider

Test the future

We don’t just want to tell you what makes Red Hat Enterprise Linux 8 Beta a foundation for the future of IT. We want you to experience it. Existing customers and subscribers are invited and encouraged to test Red Hat Enterprise Linux 8 Beta for themselves to see how they can deploy applications with more flexibility, more confidence and more control. Developers can also see the future of the world’s leading enterprise Linux platform through the Red Hat Developer Program. If you are new to Red Hat Enterprise Linux, please visit the Red Hat Enterprise Linux 8 Public Beta download site and view the README file for instructions on how to download and install the software.


Gartner predicts that, by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today.* This means to us that developers need to be able to more quickly and easily create containerized applications. It’s this challenge that the Buildah project, with the release of version 1.0, aims to solve by bringing new innovation to the world of container development.

IBM + REDHAT "Creating the World's Leading Hybrid Cloud Provider..."

While Linux containers themselves present a path to digital transformation, the actual building of these containers isn’t quite so clear. Typically, building a Linux container image requires the use of an extensive set of tools and daemons (a container engine, so to speak). The existing tools are bulky by container standards and I believe there has been a distinct lack of innovation. IT teams may want their build systems running the bare minimum of processes and tools, otherwise, additional complexity can be introduced that could lead to loss of system stability and even security risks. Complexity is a serious architectural and security challenge.

This is where Buildah comes in. A command line utility, Buildah provides only the basic requirements needed to create or modify Linux container images, making it easier to integrate into existing application build pipelines.

The resulting container images are not snowflakes, either; they are OCI-compliant and can even be built using Dockerfiles. Buildah is a distillation of container development to the bare necessities, designed to help IT teams to limit complexity on critical systems and streamline ownership and security workflows.

OpenShift Commons Briefing #122: State of FaaS on Kubernetes - Michael Hausenblas (Red Hat)

When we say “bare necessities,” we mean it. Buildah allows for the on-the-fly creation of containers from scratch—think of it as an empty box. For example, Buildah can assemble containers that omit things like package managers (DNF/YUM) that are not required by the final image. So not only can Buildah provide the capability to build these containers in a less complex and more secure fashion, it can cut bloat (and therefore image size) and extend customization to what you need in your cloud-native applications.
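To make the “empty box” idea concrete, here is a hedged sketch of a from-scratch build; the package choice is illustrative and the commands assume a RHEL 8 host with Buildah installed:

```shell
# Start from an empty container: no base image, no package manager inside
container=$(buildah from scratch)
mnt=$(buildah mount "$container")

# Install only what the final image needs, using the *host's* package
# manager so DNF/YUM never ships inside the image itself
yum install -y --installroot "$mnt" --releasever 8 \
    --setopt install_weak_deps=false coreutils

buildah unmount "$container"
buildah commit "$container" minimal-image
```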

Since Buildah is daemonless, it is easier to run it in a container without setting up special infrastructure on the host or “leaking” host sockets into the container. You can run Buildah inside of your Kubernetes (or enterprise Kubernetes, like Red Hat OpenShift) cluster.

On-premises FaaS on Kubernetes

What’s special about Buildah 1.0

We’ve talked about Buildah before, most notably launching full, product-level support for it in Red Hat Enterprise Linux 7.5. Now that 1.0 has hit the community, here are a few of the notable features in Buildah that make it interesting:

Buildah has added external read/write volumes during builds, which enables users to build container images that reference external volumes while being built, but without having to ship those external volumes in the completed image. This helps to simplify image creation without bloating those images with unnecessary and unwanted artifacts in production.
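For illustration (the host path and image tag are hypothetical), a build-time volume might look like this:

```shell
# Mount a host directory read-only for the duration of the build only;
# the completed image does not contain /cache
buildah bud --volume /srv/build-cache:/cache:ro -t myapp:latest .
```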

To enhance security, Buildah can help the resulting images better comply with Federal Information Processing Standards (FIPS), computer systems standards required by the U.S. Federal Government for non-military, governmental operations, with support for FIPS mode. When a host is running in FIPS mode, Buildah can build and run containers in FIPS mode as well, making it easier for containers on hosts running in FIPS mode to comply with the standards.

Buildah now also offers multi-stage builds, multiple container transport methods for pulling and pushing images, and more. By focusing solely on building and manipulating container images, Buildah is a useful tool for anyone working with Linux containers. Whether you’re a developer testing images locally or looking for an independent image builder for a production toolchain, Buildah is a worthy addition to your container toolbelt.

Want to start building with Buildah yourself?

Try `yum -y install buildah` or learn more and contribute at the project site: https://github.com/projectatomic/buildah.

You can also see a more detailed example at https://www.projectatomic.io/blog/2018/03/building-buildah-container-image-for-kubernetes/.

*Smarter with Gartner, 6 Best Practices for Creating a Container Platform Strategy, October 31, 2017, https://www.gartner.com/smarterwithgartner/6-best-practices-for-creating-a-container-platform-strategy/

6 Best Practices for Creating a Container Platform Strategy

Gartner has identified six key elements that should be part of a container platform strategy to help I&O leaders mitigate the challenges of deploying containers in production environments:

  1. Security and governance - Security is a particularly challenging issue for production container deployments. The integrity of the shared host OS kernel is critical to the integrity and isolation of the containers that run on top of it. A hardened, patched, minimalist OS should be used as the host OS, and containers should be monitored on an ongoing basis for vulnerabilities and malware to ensure a trusted service delivery.
  2. Monitoring - The deployment of cloud-native applications shifts the focus to container-specific and service-oriented monitoring (from host-based) to ensure compliance with resiliency and performance service-level agreements. “It’s therefore important to deploy packaged tools that can provide container and service-level monitoring, as well as linking container monitoring tools to the container orchestrators to pull in metrics on other components for better visualization and analytics,” says Chandrasekaran.
  3. Storage - Since containers are transient, the data should be disassociated from the container so that the data persists and is protected even after the container is spun down. Scale-out software-defined storage products can solve the problem of data mobility, the need for agility and simultaneous access to data from multiple application containers.
  4. Networking - The portability and short-lived life cycle of containers will overwhelm the traditional networking stack. The native container networking stack doesn’t have robust-enough access and policy management capabilities. “I&O teams must therefore eliminate manual network provisioning within containerized environments, enable agility through network automation and provide developers with proper tools and sufficient flexibility,” Chandrasekaran says.
  5. Container life cycle management - Containers present the potential for sprawl even more severe than many virtual machine deployments caused. This complexity is often intensified by many layers of services and tooling. Container life cycle management can be automated through a close tie-in with continuous integration/continuous delivery processes together with continuous configuration automation tools to automate infrastructure deployment and operational tasks.
  6. Container orchestration - Container management tools are the “brains” of a distributed system, making decisions on discovery of infrastructure components making up a service, balancing workloads with infrastructure resources, and provisioning and deprovisioning infrastructures, among other things. “The key decision here is whether hybrid orchestration for container workloads is required or if it is sufficient to provision based on use case and manage multiple infrastructure silos individually,” Chandrasekaran says.

Jaeger Project Intro - Juraci Kröhling, Red Hat (Any Skill Level)

More Information:



Red Hat Enterprise Linux 8 – Release Date and New Features @itzgeek https://www.itzgeek.com/how-tos/linux/centos-how-tos/red-hat-enterprise-linux-8-release-date-and-new-features.html










23 October 2018

Windows Server 2019 (Version: 10.0.17763) and SQL Server 2019


The Latest from Ignite 2018

From Ops to DevOps with Windows Server containers and Windows Server 2019

Windows Server 2019 will be generally available in October, and we have updated Windows Admin Center, version 1809, to support Windows Server 2019 and Azure hybrid scenarios. Windows Server 2019 builds on the foundation of Windows Server 2016, the fastest-adopted version of Windows Server, with tens of millions of instances deployed worldwide. Customers like Alaska Airlines, Tyco, and Tieto have adopted Windows Server 2016 to modernize their datacenters.

What's new in Remote Desktop Services on Windows Server 2019

Through various listening channels such as the Insider program, product telemetry analysis, and industry trends, we heard loud and clear that hybrid, security, agility, and TCO are top of mind for our customers. Datacenter modernization is critical to support your business and deliver innovation, especially given the competitive landscape today. Windows Server 2019 is designed and engineered to help modernize your datacenter, delivering on four key areas:

Hybrid: The move to the cloud is a journey. A hybrid approach, one that combines on-premises and cloud environments working together, is a core element of our customers’ modernization strategy. This is why hybrid is built into Windows Server 2019 and Windows Admin Center. To make it easier to connect existing Windows Server deployments to Azure services, we built interfaces for hybrid capabilities into the Windows Admin Center. With Windows Admin Center and Windows Server 2019, customers can use hybrid features like Azure Backup, Azure File Sync, and disaster recovery to extend their datacenters to Azure. We also added the Storage Migration Service to help migrate file servers and their data to Azure without the need to reconfigure applications or users.

Windows Server 2019 deep dive | Best of Microsoft Ignite 2018

Security: Security continues to be a top priority for our customers. With security threats growing in number and becoming more and more sophisticated, we keep a persistent focus on security. Our approach to security is three-fold: Protect, Detect, and Respond. We bring security features in all three areas to Windows Server 2019. On the Protect front, we had previously introduced Shielded VMs to protect sensitive virtualized workloads such as domain controllers, PCI data, and sensitive healthcare and financial data, among others. In Windows Server 2019, we extended support for Shielded VMs to Linux VMs. On the Detect and Respond front, we enabled Windows Defender Advanced Threat Protection (ATP), which detects attacks and zero-day exploits, among other capabilities. Windows Server 2019 also includes Defender Exploit Guard to help you elevate the security posture of your IT environment and combat ransomware attacks.

Windows Server 2019 deep dive

Application Platform: A key guiding principle for us on the Windows Server team is a relentless focus on the developer experience. We learned from your feedback that a smaller container image size will significantly improve the experience of developers and IT pros who are modernizing their existing applications using containers. In Windows Server 2019, we reduced the Server Core base container image to a third of its size. We also provide improved app compatibility, support for Service Fabric and Kubernetes, and support for Linux containers on Windows to help modernize your apps. One piece of feedback we constantly hear from developers is the complexity of navigating environments with both Linux and Windows deployments. To address that, we previously extended the Windows Subsystem for Linux (WSL) into Insider builds for Windows Server, so that customers can run Linux containers side-by-side with Windows containers on a Windows Server. In Windows Server 2019, we are continuing on this journey to improve WSL, helping Linux users bring their scripts to Windows while using industry standards like OpenSSH, curl, and tar.
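As a trivial illustration of that last point, a script like the following runs unchanged in WSL on Windows Server 2019 and on any Linux box, because it relies only on standard tools such as tar:

```shell
# Package and unpack a directory with plain POSIX tools
mkdir -p demo && echo "hello from WSL" > demo/readme.txt
tar -czf demo.tar.gz demo          # create a compressed archive
rm -rf demo
tar -xzf demo.tar.gz               # unpack it again
cat demo/readme.txt                # prints "hello from WSL"
```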

Windows 2019

Hyper-converged Infrastructure (HCI): HCI is one of the latest trends in the server industry today, primarily because customers understand the value of using servers with high-performance local disks to serve their compute and storage needs at the same time. In Windows Server 2019, we democratize HCI with cost-effective, high-performance software-defined storage and networking that allows deployments to scale from a small two-node cluster all the way up to hundreds of servers with Cluster Sets technology, making it affordable regardless of deployment scale. Through our Windows Server Software-Defined program, we partner with industry-leading hardware vendors to provide an affordable and yet extremely robust HCI solution with validated designs.

In October, customers will have access to Windows Server 2019 through all the channels! We will publish a blog post to mark the availability of Windows Server 2019 soon.

Are you wondering how Windows Server 2016 vs. 2019 compare? Let’s find out!

Microsoft, the Redmond giant, has recently announced the new version of Windows Server. Aptly named Windows Server 2019, it is already available for download to users of Insider builds and should be generally available quite soon. How does it improve the user experience over Windows Server 2016? Let us find out through an introduction to the new features of Windows Server 2019.

Windows Server 2016 vs. 2019 – What’s the difference?
Windows Server 2019 was officially announced on March 20, 2018, through a post on the official Windows Server Blog. The new server edition will be available to the general public in the second half of calendar year 2018. If you want to try it before everyone else, you can check it out by registering for the Windows Insider Program.

Differentiating Windows Server 2019 from its predecessor, Windows Server 2016, is not an easy task. The latest version of Windows Server is based on Windows Server 2016, so you will find almost all the features along similar lines, apart from the new improvements and optimizations. We will attempt to differentiate between the two based on the new features.

Windows Server 2016 has been one of the fastest-adopted server versions ever from the Redmond giant. Windows Server 2019 continues from where the 2016 version left off. The primary areas selected for changes and improvements were hybrid, security, application platform, and hyper-converged infrastructure.

Hybrid Cloud Scenario

Windows Server 2019 uses a hybrid approach for the move to the cloud. Unlike the option available on Windows Server 2016, on-premises and cloud solutions work together, offering an enhanced environment for users.

Server 2016 uses Active Directory, file server synchronization, and cloud backup. The difference lies in the way Windows Server 2019 lets on-premises systems make use of more advanced services such as IoT and artificial intelligence. The hybrid approach helps ensure that you have a future-proof, long-term option.

Integration with Project Honolulu offers a seamless, lightweight, and flexible platform for all your needs. If you are using Microsoft's cloud service, Azure, this is something you will indeed love.

New Security Services

Security is yet another area that has received an impetus since the days of Windows Server 2016. Server 2016 relied on Shielded VMs; what has changed with the new version of the server edition is the additional support for Linux VMs.

Windows Server 2019 introduces new security features with an emphasis on three particular areas that need attention: Protect, Detect, and Respond. It also brings extended VMConnect support for troubleshooting Shielded VMs on both Windows Server and Linux.

Another capability added since the days of Windows Server 2016 is the embedded Windows Defender Advanced Threat Protection, which can perform efficient preventive actions and comprehensive detection of attacks.

Application Platform

Microsoft has been focusing on enhanced developer experiences. Windows Server 2019 brings new developments in the form of improved Windows Server containers and the Windows Subsystem for Linux.

Windows Server 2016 performed well with Windows Server containers; in fact, the concept has seen strong adoption, with thousands of container images downloaded since the launch of the 2016 edition. The Windows Server 2019 edition aims to reduce the size of the Server Core base container image, which is bound to improve development and performance remarkably.

Windows Server 2016 introduced robust support for Hyper-Converged Infrastructure (HCI) options, with backing from the industry's leading hardware vendors. Windows Server 2019 takes this further.

Windows Server 2019: What’s new and what's next

Yes, the 2019 version brings in a few extra features: extra scale, performance, reliability, and better support for HCI deployment. The Project Honolulu we mentioned above brings a high-performance interface for Storage Spaces Direct. However, if you belong to the small-business segment, you may not be able to afford it as of now.

Enterprise-grade hyperconverged infrastructure (HCI)

With the release of Windows Server 2019, Microsoft rolls up three years of updates for its HCI platform. That’s because the gradual upgrade schedule Microsoft now uses includes what it calls Semi-Annual Channel releases – incremental upgrades as they become available. Then every couple of years it creates a major release called the Long-Term Servicing Channel (LTSC) version that includes the upgrades from the preceding Semi-Annual Channel releases.

The LTSC Windows Server 2019 is due out this fall, and is now available to members of Microsoft’s Insider program.

While the fundamental components of HCI (compute, storage and networking) have been improved with the Semi-Annual Channel releases, for organizations building datacenters and high-scale software defined platforms, Windows Server 2019 is a significant release for the software-defined datacenter.

What's new in Active Directory Federation Services (AD FS) in Windows Server 2019

With the latest release, HCI is provided on top of a set of components that are bundled with the server license. This means a backbone of servers running Hyper-V to enable dynamic increase or decrease of capacity for workloads without downtime.

Improvements in security

Microsoft has continued to include built-in security functionality to help organizations address an “expect breach” model of security management.  Rather than assuming firewalls along the perimeter of an enterprise will prevent any and all security compromises, Windows Server 2019 assumes servers and applications within the core of a datacenter have already been compromised.

Windows Server 2019 includes Windows Defender Advanced Threat Protection (ATP), which assesses common vectors for security breaches and automatically blocks and alerts about potential malicious attacks. Users of Windows 10 have received many of the Windows Defender ATP features over the past few months. Including Windows Defender ATP in Windows Server 2019 lets organizations take advantage of data storage, network transport, and security-integrity components to prevent compromises on Windows Server 2019 systems.

Smaller, more efficient containers

Organizations are rapidly minimizing the footprint and overhead of their IT operations and eliminating more bloated servers with thinner and more efficient containers. Windows Insiders have benefited by achieving higher density of compute to improve overall application operations with no additional expenditure in hardware server systems or expansion of hardware capacity.

Getting started with Windows Server containers in Windows Server 2019

Windows Server 2019 has a smaller, leaner Server Core image that cuts virtual machine overhead by 50-80 percent. When an organization can get the same (or more) functionality in a significantly smaller image, it is able to lower costs and improve efficiencies in its IT investments.

Windows subsystem on Linux

A decade ago, one would rarely say Microsoft and Linux in the same breath as complementary platform services, but that has changed. Windows Server 2016 added open support for Linux instances as virtual machines, and the new Windows Server 2019 release makes huge headway by including an entire subsystem optimized for the operation of Linux systems on Windows Server.

The Windows Subsystem for Linux extends basic virtual machine operation of Linux systems on Windows Server, and provides a deeper layer of integration for networking, native filesystem storage, and security controls. It can enable encrypted Linux virtual instances. That is exactly how Microsoft provided Shielded VMs for Windows in Windows Server 2016, and Windows Server 2019 now brings native Shielded VMs for Linux as well.

Be an IT hero with Storage Spaces Direct in Windows Server 2019

Enterprises have found that the optimization of containers, along with the ability to natively support Linux on Windows Server hosts, can decrease costs by eliminating the need for two or three separate infrastructure platforms and instead running everything on Windows Server 2019.

Because most of the “new features” in Windows Server 2019 have been included in updates over the past couple years, these features are not earth-shattering surprises.  However, it also means that the features in Windows Server 2019 that were part of Windows Server 2016 Semi-Annual Channel releases have been tried, tested, updated and proven already, so that when Windows Server 2019 ships, organizations don’t have to wait six to 12 months for a service pack of bug fixes.

This is a significant change that is helping organizations plan their adoption of Windows Server 2019 sooner than they might have adopted a major release platform in the past, with significant improvements for enterprise datacenters in meeting the security, scalability, and optimized data center requirements so badly needed in today’s fast-paced environments.

Windows Server 2019 has the following new features:

  • Windows Subsystem for Linux (WSL)
  • Support for Kubernetes (beta)
  • Other GUI features from Windows 10 version 1809
  • Storage Spaces Direct
  • Storage Migration Service
  • Storage Replica
  • System Insights
  • Improved Windows Defender

What is New in Windows Server 2019

Windows Server 2019 has four main areas of investment, and below is a glimpse of each area.

Hybrid: Windows Server 2019 and Windows Admin Center will make it easier for our customers to connect existing on-premises environments to Azure. Windows Admin Center also makes it easier for customers on Windows Server 2019 to use Azure services such as Azure Backup and Azure Site Recovery, and more services will be added over time.
Security: Security continues to be a top priority for our customers and we are committed to helping our customers elevate their security posture. Windows Server 2016 started on this journey and Windows Server 2019 builds on that strong foundation, along with some shared security features with Windows 10, such as Defender ATP for server and Defender Exploit Guard.
Application Platform: Containers are becoming popular as developers and operations teams realize the benefits of running in this new model. In addition to the work we did in Windows Server 2016, we have been busy with the Semi-Annual Channel releases and all that work culminates in Windows Server 2019. Examples of these include Linux containers on Windows, the work on the Windows Subsystem for Linux (WSL), and the smaller container images.
Hyper-converged Infrastructure (HCI): If you are thinking about evolving your physical or host server infrastructure, you should consider HCI. This new deployment model allows you to consolidate compute, storage, and networking into the same nodes allowing you to reduce the infrastructure cost while still getting better performance, scalability, and reliability.

Microsoft SQL Server 2019 

SQL Server 2019 Vision

What’s New in Microsoft SQL Server 2019 

• Big Data Clusters

  • Deploy a Big Data cluster with SQL and Spark Linux containers on Kubernetes
  • Access your big data from HDFS
  • Run Advanced analytics and machine learning with Spark
  • Use Spark Streaming to stream data to SQL data pools
  • Use Azure Data Studio to run Query books that provide a notebook experience
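
The workflow in the bullets above boils down to joining high-volume event data with the dimensional tables already in the relational engine. A toy sketch of that join pattern in plain Python (illustrative data only; no SQL Server or Spark required):

```python
# Toy sketch: join "big data" events (as they might arrive from HDFS/Spark)
# with a small dimensional table on the relational side, then aggregate.
# All names and values here are illustrative.
dim_products = {1: "Widget", 2: "Gadget"}   # dimensional (relational) data

events = [                                   # high-volume event data
    {"product_id": 1, "qty": 3},
    {"product_id": 2, "qty": 5},
    {"product_id": 1, "qty": 2},
]

# Join each event to its dimension row and total quantity per product name.
totals = {}
for e in events:
    name = dim_products[e["product_id"]]
    totals[name] = totals.get(name, 0) + e["qty"]

print(totals)  # {'Widget': 5, 'Gadget': 5}
```

In a big data cluster the same join is expressed in T-SQL or as a Spark job, with the engine distributing the work across the cluster instead of a single Python loop.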

• Database engine

  • UTF-8 support
  • Resumable online index create allows an index creation to resume after interruption
  • Clustered columnstore online index build and rebuild
  • Always Encrypted with secure enclaves
  • Intelligent query processing
  • Java language programmability extension
  • SQL Graph features
  • Database scoped configuration setting for online and resumable DDL operations
  • Always On Availability Groups – secondary replica connection redirection
  • Data discovery and classification – natively built into SQL Server
  • Expanded support for persistent memory devices
  • Support for columnstore statistics in DBCC CLONEDATABASE
  • New options added to sp_estimate_data_compression_savings
  • SQL Server Machine Learning Services failover clusters
  • Lightweight query profiling infrastructure enabled by default
  • New Polybase connectors
  • New sys.dm_db_page_info system function returns page information

• SQL Server on Linux

  • Replication support
  • Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
  • Always On Availability Group on Docker containers with Kubernetes
  • OpenLDAP support for third-party AD providers
  • Machine Learning on Linux
  • New container registry
  • New RHEL-based container images
  • Memory pressure notification

• Master Data Services

  • Silverlight controls replaced

• Security

  • Certificate management in SQL Server Configuration Manager

• Tools

  • SQL Server Management Studio (SSMS) 18.0 (preview)
  • Azure Data Studio

Introducing Microsoft SQL Server 2019 Big Data Clusters

SQL Server 2019 big data clusters make it easier for big data sets to be joined to the dimensional data typically stored in the enterprise relational database, enabling people and apps that use SQL Server to query big data more easily. The value of the big data greatly increases when it is not just in the hands of the data scientists and big data engineers but is also included in reports, dashboards, and applications. At the same time, the data scientists can continue to use big data ecosystem tools while also utilizing easy, real-time access to the high-value data in SQL Server because it is all part of one integrated, complete system.
Read Travis Wright’s complete, awesome blog post about SQL Server 2019 Big Data Clusters here: https://cloudblogs.microsoft.com/sqlserver/2018/09/25/introducing-microsoft-sql-server-2019-big-data-clusters/
Starting in SQL Server 2017 with support for Linux and containers, Microsoft has been on a journey of platform and operating system choice. With SQL Server 2019 preview, we are making it easier to adopt SQL Server in containers by enabling new HA scenarios and adding supported Red Hat Enterprise Linux container images. Today we are happy to announce the availability of SQL Server 2019 preview Linux-based container images on Microsoft Container Registry, Red Hat-Certified Container Images, and the SQL Server operator for Kubernetes, which makes it easy to deploy an Availability Group.

SQL Server 2019: Celebrating 25 years of SQL Server Database Engine and the path forward
Awesome work, Microsoft SQL team, and congrats on your 25th anniversary!
Microsoft announced the preview of SQL Server 2019. For 25 years, SQL Server has helped enterprises manage all facets of their relational data. In recent releases, SQL Server has gone beyond querying relational data by unifying graph and relational data and bringing machine learning to where the data is with R and Python model training and scoring. As the volume and variety of data increases, customers need to easily integrate and analyze data across all types of data.

SQL Server 2019 big data clusters - intro session

Now, for the first time ever, SQL Server 2019 creates a unified data platform with Apache Spark™ and Hadoop Distributed File System (HDFS) packaged together with SQL Server as a single, integrated solution. Through the ability to create big data clusters, SQL Server 2019 delivers an incredible expansion of database management capabilities, further redefining SQL Server beyond a traditional relational database. And as with every release, SQL Server 2019 continues to push the boundaries of security, availability, and performance for every workload with Intelligent Query Processing, data compliance tools and support for persistent memory. With SQL Server 2019, you can take on any data project, from traditional SQL Server workloads like OLTP, Data Warehousing and BI, to AI and advanced analytics over big data.

SQL Server 2017 Deep Dive

SQL Server provides a true hybrid platform, with a consistent SQL Server surface area from your data center to public cloud—making it easy to run in the location of your choice. Because SQL Server 2019 big data clusters are deployed as containers on Kubernetes with a built-in management service, customers can get a consistent management and deployment experience on a variety of supported platforms on-premises and in the cloud: OpenShift or Kubernetes on premises, Azure Kubernetes Service (AKS), Azure Stack (on AKS) and OpenShift on Azure. With Azure Hybrid Benefit license portability, you can choose to run SQL Server workloads on-premises or in Azure, at a fraction of the cost of any other cloud provider.

SQL Server – Insights over all your data

SQL Server continues to embrace open source, from SQL Server 2017 support for Linux and containers to SQL Server 2019 now embracing Spark and HDFS to bring you a unified data platform. With SQL Server 2019, all the components needed to perform analytics over your data are built into a managed cluster, which is easy to deploy and can scale to meet your business needs. HDFS, Spark, Knox, Ranger, and Livy all come packaged together with SQL Server and are quickly and easily deployed as Linux containers on Kubernetes. SQL Server simplifies the management of all your enterprise data by removing the barriers that currently exist between structured and unstructured data.

SQL server 2019 big data clusters - deep dive session

Here’s how we make it easy for you to break down barriers to realized insights across all your data, providing one view of your data across the organization:

Simplify big data analytics for SQL Server users. SQL Server 2019 makes it easier to manage big data environments. It comes with everything you need to create a data lake, including HDFS and Spark provided by Microsoft, and analytics tools, all deeply integrated with SQL Server and fully supported by Microsoft. Now you can run apps, analytics, and AI over structured and unstructured data using familiar T-SQL queries, while people familiar with Spark can use Python, R, Scala, or Java to run Spark jobs for data preparation or analytics – all in the same, integrated cluster.
Give developers, data analysts, and data engineers a single source for all your data – structured and unstructured – using their favorite tools. With SQL Server 2019, data scientists can easily analyze data in SQL Server and HDFS through Spark jobs. Analysts can run advanced analytics over big data using SQL Server Machine Learning Services: train over large datasets in Hadoop and operationalize in SQL Server. Data scientists can use a brand new notebook experience running on the Jupyter notebooks engine in a new extension of Azure Data Studio to interactively perform advanced analysis of data and easily share the analysis with their colleagues.
Break down data silos and deliver one view across all of your data using data virtualization. Starting in SQL Server 2016, PolyBase has enabled you to run a T-SQL query inside SQL Server to pull data from your data lake and return it in a structured format—all without moving or copying the data. Now in SQL Server 2019, we’re expanding that concept of data virtualization to additional data sources, including Oracle, Teradata, MongoDB, PostgreSQL, and others. Using the new PolyBase, you can break down data silos and easily combine data from many sources using virtualization to avoid the time, effort, security risks and duplicate data created by data movement and replication. New elastically scalable “data pools” and “compute pools” make querying virtualized data lightning fast by caching data and distributing query execution across many instances of SQL Server.

“From its inception, the Sloan Digital Sky Survey database has run on SQL Server, and SQL Server also stores object catalogs from large cosmological simulations. We are delighted with the promise of SQL Server 2019 big data clusters, which will allow us to enhance our databases to include all our big data sets. The distributed nature of SQL Server 2019 allows us to expand our efforts to new types of simulations and to the next generation of astronomical surveys with datasets up to 10PB or more, well beyond the limits of our current database solutions.”- Dr. Gerard Lemson, Institute for Data Intensive Engineering and Science, Johns Hopkins University.

Enhanced performance, security, and availability

The SQL Server 2019 relational engine will deliver new and enhanced features in the areas of mission-critical performance, security and compliance, and database availability, as well as additional features for developers, SQL Server on Linux and containers, and general engine enhancements.

Industry-leading performance – The Intelligent Database

The Intelligent Query Processing family of features builds on the hands-free performance tuning features of Adaptive Query Processing in SQL Server 2017, including row mode memory grant feedback, approximate COUNT DISTINCT, batch mode on rowstore, and table variable deferred compilation.
Persistent memory support is improved in this release with a new, optimized I/O path available for interacting with persistent memory storage.
The Lightweight query profiling infrastructure is now enabled by default to provide per query operator statistics anytime and anywhere you need it.
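
Approximate COUNT DISTINCT trades a little accuracy for far less memory by estimating cardinality from a small sketch rather than tracking every value. A minimal Python illustration of the general idea using a KMV (k minimum values) sketch (an illustrative technique, not SQL Server's actual implementation):

```python
import hashlib

def approx_count_distinct(values, k=256):
    """KMV (k minimum values) sketch: hash every value to a fraction in
    [0, 1); if the k-th smallest distinct hash is h, the number of
    distinct values is roughly (k - 1) / h."""
    hashes = {
        int(hashlib.sha256(str(v).encode()).hexdigest(), 16) / 2**256
        for v in values
    }
    if len(hashes) <= k:
        return len(hashes)          # small inputs: answer is exact
    kth = sorted(hashes)[k - 1]     # k-th smallest hash fraction
    return int((k - 1) / kth)

data = [i % 10_000 for i in range(100_000)]  # 100k rows, 10k distinct values
print(approx_count_distinct(data))           # close to 10,000, within a few percent
```

Only the k smallest hashes ever need to be kept, so memory stays constant no matter how many rows stream past; the relative error shrinks roughly as 1/sqrt(k).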

Advanced security – Confidential Computing

Always Encrypted with secure enclaves extends the client-side encryption technology introduced in SQL Server 2016. Secure enclaves protect sensitive data in a hardware or software-created enclave inside the database, securing it from malware and privileged users while enabling advanced operations on encrypted data.
SQL Data Discovery and Classification is now built into the SQL Server engine with new metadata and auditing support to help with GDPR and other compliance needs.
Certificate management is now easier using SQL Server Configuration Manager.
Mission-critical availability – High uptime

Always On Availability Groups have been enhanced to include automatic redirection of connections to the primary based on read/write intent.
High availability configurations for SQL Server running in containers can be enabled with Always On Availability Groups using Kubernetes.
Resumable online indexes now support create operations and include database scoped defaults.
Developer experience

Enhancements to SQL Graph include match support with T-SQL MERGE and edge constraints.
New UTF-8 support gives customers the ability to reduce SQL Server’s storage footprint for character data.
The new Java language extension will allow you to call a pre-compiled Java program and securely execute Java code on the same server with SQL Server. This reduces the need to move data and improves application performance by bringing your workloads closer to your data.
Machine Learning Services has several enhancements including Windows Failover cluster support, partitioned models, and support for SQL Server on Linux.
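
The storage saving from UTF-8 is easy to see: ASCII characters need one byte in UTF-8 but two in the UTF-16 encoding used by NCHAR/NVARCHAR. A quick Python illustration:

```python
# ASCII-heavy text: one byte per character in UTF-8 versus two in the
# UTF-16 encoding used by NCHAR/NVARCHAR. (Non-ASCII characters can take
# 2-4 bytes in UTF-8, so the saving applies mainly to ASCII-heavy data.)
text = "Customer order 12345: 3 widgets"     # 31 ASCII characters

utf16_bytes = len(text.encode("utf-16-le"))  # 62 bytes
utf8_bytes = len(text.encode("utf-8"))       # 31 bytes

print(utf16_bytes, utf8_bytes)  # 62 31
```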

Platform of choice
Additional capabilities for SQL Server on Linux include distributed transactions, replication, Polybase, Machine Learning Services, memory notifications, and OpenLDAP support.
Containers have new enhancements including use of the new Microsoft Container Registry with support for RedHat Enterprise Linux images and Always On Availability Groups for Kubernetes.
You can read more about what’s new in SQL Server 2019 in our documentation.

SQL Server 2019 support in Azure Data Studio

Expanded support for more data workloads in SQL Server requires expanded tooling. As Microsoft has worked with users of its data platform we have seen the coming together of previously disparate personas: database administrators, data scientists, data developers, data analysts, and new roles still being defined. These users increasingly want to use the same tools to work together, seamlessly, across on-premises and cloud, using relational and unstructured data, working with OLTP, ETL, analytics, and streaming workloads.

Azure Data Studio offers a modern editor experience with lightning fast IntelliSense, code snippets, source control integration, and an integrated terminal. It is engineered with the data platform user in mind, with built-in charting of query result sets, an integrated notebook, and customizable dashboards. Azure Data Studio currently offers built-in support for SQL Server on-premises and Azure SQL Database, along with preview support for Azure SQL Managed Instance and Azure SQL Data Warehouse.

Azure Data Studio is today shipping a new SQL Server 2019 Preview Extension to add support for select SQL Server 2019 features. The extension offers connectivity and tooling for SQL Server big data clusters, including a preview of the first ever notebook experience in the SQL Server toolset, and a new PolyBase Create External Table wizard that makes accessing data from remote SQL Server and Oracle instances easy and fast.


24 September 2018

Kubernetes for the Enterprise!

Announcing SUSE CaaS Platform 3

Containers for Big Data: How MapR Expands Containers Use to Access Data Directly

Every enterprise needs Kubernetes today, including yours.  But with the platform evolving so rapidly, it can be difficult to keep up.  Not to worry, SUSE can take care of that for you: SUSE CaaS Platform delivers Kubernetes advancements to you in an enterprise-grade solution.

SUSE and Big Data

SUSE today announced SUSE CaaS Platform 3, introducing support for a raft of new features and a special focus on the Kubernetes platform operator.  You can read all about it in the press release, but let’s hit on a few of the highlights here.  With SUSE CaaS Platform 3 you can:

Optimize your cluster configuration with expanded datacenter integration and cluster re-configuration options
Setting up your Kubernetes environment is easier than ever with improved integration of private (OpenStack) and public (Amazon Web Services, Microsoft Azure, and Google Cloud Platform) cloud storage, and automatic deployment of the Kubernetes software load balancer.

Persistent Storage for Docker Containers | Whiteboard Walkthrough

A new SUSE toolchain module also allows you to tune the MicroOS container operating system to support your custom hardware configuration needs. Now you can, for example, install additional packages required to run your own monitoring agent or other custom software.
Transform your start-up cluster into a highly available environment. With new cluster reconfiguration capabilities, you can switch from a single-master to a multi-master environment, or vice-versa, to accommodate your changing needs.

Manage container images more efficiently and securely with a local container registry
Download a container image from an external registry once, then save a copy in your own local registry for sharing among all nodes in your cluster. By connecting to an internal proxy rather than an external registry, and by downloading from a local cache rather than a remote server, you’ll improve security and increase performance every time a cluster node pulls an image from the local registry.
For still greater security, disconnect from external registries altogether and use only trusted images you’ve loaded into your local registry.

Try out the new, lightweight CRI-O container runtime, designed specifically for Kubernetes, and introduced in CaaSP 3 as a tech preview feature. Stable and secure, CRI-O is also smaller and architecturally simpler than traditional container runtimes.

Simplify deployment and management of long running workloads through the Apps Workloads API. Promoted to ‘stable’ in upstream Kubernetes 1.9 code, the Apps Workloads API is now supported by SUSE.  This API generally facilitates orchestration (self-healing, scaling, updates, termination) of common types of workloads.

Modern Big Data Pipelines over Kubernetes [I] - Eliran Bivas, Iguazio

With Kubernetes now a must-have for every enterprise, you’ll want to give SUSE CaaS Platform a serious look.  Focused on providing an exceptional platform operator experience, it delivers Kubernetes innovations in a complete, enterprise-grade solution that enables IT to deliver the power of Kubernetes to users more quickly, consistently, and efficiently.

SUSE CaaS Platform also serves as the Kubernetes foundation for SUSE Cloud Application Platform, which addresses modern application developers’ needs by bringing the industry’s most respected cloud-native developer experience (Cloud Foundry) into a Kubernetes environment.

SUSE CaaS Platform

SUSE CaaS Platform is an enterprise class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. It includes Kubernetes to automate lifecycle management of modern applications, and surrounding technologies that enrich Kubernetes and make the platform itself easy to operate. As a result, enterprises that use SUSE CaaS Platform can reduce application delivery cycle times and improve business agility.

SUSE, Hadoop and Big Data Update. Stephen Mogg, SUSE UK

SUSE is focused on delivering an exceptional operator experience with SUSE CaaS Platform.

HDFS on Kubernetes—Lessons Learned - Kimoon Kim

With deep competencies in infrastructure, systems, process integration, platform security, lifecycle management and enterprise-grade support, SUSE aims to ensure IT operations teams can deliver the power of Kubernetes to their users quickly, securely and efficiently. With SUSE CaaS Platform you can:

Achieve faster time to value with an enterprise-ready container management platform, built from industry leading technologies, and delivered as a complete package, with everything you need to quickly offer container services.

Simplify management and control of your container platform with efficient installation, easy scaling, and update automation.

Maximize return on your investment, with a flexible container services solution for today and tomorrow.

Episode 3: Kubernetes and Big Data Services

Key Features

A Cloud Native Computing Foundation (CNCF) certified Kubernetes distribution, SUSE CaaS Platform automates the orchestration and management of your containerized applications and services with powerful Kubernetes capabilities, including:

  • Workload scheduling places containers according to their needs while improving resource utilization
  • Service discovery and load balancing provides an IP address for your service, and distributes load behind the scenes
  • Application scaling up and down accommodates changing load
  • Non-disruptive rollout/rollback of new applications and updates enables frequent change without downtime
  • Health monitoring and management supports application self-healing and ensures application availability

In addition, SUSE CaaS Platform simplifies the platform operator’s experience, with everything you need to get up and running quickly, and to manage the environment effectively in production. It provides:

  • Application ecosystem support with SUSE Linux container base images, and access to tools and services offered by SUSE Ready for CaaS Platform partners and the Kubernetes community
  • Enhanced datacenter integration features that enable you to plug Kubernetes into new or existing infrastructure, systems, and processes
  • A complete container execution environment, including a purpose-built container host operating system, container runtime, and container image registries
  • End-to-End security, implemented holistically across the full stack
  • Advanced platform management that simplifies platform installation, configuration, re-configuration, monitoring, maintenance, updates, and recovery
  • Enterprise hardening including comprehensive interoperability testing, support for thousands of platforms, and world-class platform maintenance and technical support

Cisco and SUSE have collaborated for years on solutions that improve efficiencies and lower costs in the data center by leveraging the flexibility and value of the UCS platform and the performance and reliability of the SUSE Linux Enterprise Server.

With focus and advancement in the areas of compute, storage and networking, Cisco and SUSE are now looking to help organizations tackle the challenges associated with the ‘5 Vs’ of Big Data:

1. Volume
2. Variety
3. Velocity
4. Veracity (of data)
5. Value

Ian Chard of Cisco recently published a great read untangling these challenges, and pointing to areas that help harness the power of data analytics.
Article content Below:

The harnessing of data through analytics is key to staying competitive and relevant in the age of connected computing and the data economy.

Analytics now combines statistics, artificial intelligence, machine learning, deep learning and data processing in order to extract valuable information and insights from the data flowing through your business.

Unlock the Power of Kubernetes for Big Data by Joey Zwicker, Pachyderm

Your ability to harness analytics defines how well you know your business, your customers, and your partners – and how quickly you understand them.

But it’s still hard to gain valuable insights from data. Collectively the challenges are known as the ‘5 Vs of big data’:

The Volume of data has grown so much that traditional relational database management software running on monolithic servers is incapable of processing it.

The Variety of data has also increased. There are many more sources of data and many more different types.

Velocity describes how fast the data is coming in. It has to be processed, often in real time, and stored in huge volume.

Veracity of data refers to how much you can trust it. Traditional structured data (i.e. in fixed fields or formats) goes through a validation process. This approach does not work with unstructured (i.e. raw) data.

Deriving Value from the data is hard due to the above.
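
The veracity point can be made concrete: structured rows can be checked against a declared schema, while raw text has no fields to validate. A toy Python sketch (the schema and records are illustrative):

```python
# Structured rows can be validated against a declared schema; raw
# unstructured text cannot. Schema and records are illustrative.
schema = {"id": int, "amount": float}

def is_valid(row):
    return (isinstance(row, dict)
            and set(row) == set(schema)
            and all(isinstance(row[field], typ) for field, typ in schema.items()))

structured_row = {"id": 7, "amount": 19.99}
raw_text = "order #7, roughly twenty dollars"

print(is_valid(structured_row), is_valid(raw_text))  # True False
```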

If you’re wrestling with the 5 Vs, chances are you’ll be heading to ExCeL London for the annual Strata Data Conference on 22-24 May 2018.

We’ll be there on Booth 316, together with our partners including SUSE, where we’ll be showcasing how progress in compute, storage, and networking, as well as in distributed data processing frameworks, can help to address these challenges.

1) The Infrastructure evolution
Compute demands are growing in direct response to data growth. More powerful servers, or more servers working in parallel (aka scale-out), are needed.

Deep learning techniques, for example, can absorb an insatiable amount of data, making a robust HDFS cluster a great way to achieve scale-out storage for the collection and preparation of the data. Machine learning algorithms can run on traditional x86 CPUs, but GPUs can accelerate these algorithms by up to a factor of 100.

New approaches to data analytics applications and storage are also needed because the majority of the data available is unstructured. Email, text documents, images, audio, and video are data types that are a poor fit for relational databases and traditional storage methods.

Google Cloud Dataproc - Easier, faster, more cost-effective Spark and Hadoop

Storing data in the public cloud can ease the load. But as your data grows and you need to access it more frequently, cloud services can become expensive, while the sovereignty of that data can be a concern.

Software-defined storage is a server virtualisation technology that allows you to shift large amounts of unstructured data to cost-effective, flexible solutions located on-premises. This assures performance and data sovereignty while reducing storage costs over time.

You can use platforms such as Hadoop to create shared repositories of unstructured data known as data lakes. Running on a cluster of servers, data lakes can be accessed by all users. However, they must be managed in a way that’s compliant, using enterprise-class data management platforms that allow you to store, protect and access data quickly and easily.

2) Need for speed
The pace of data analytics innovation continues to increase. Previously, you would define your data structures and build an application to operate on the data. The lifetime of such applications was measured in years.

Today, raw data is collected and explored for meaningful patterns using applications that are rebuilt when new patterns emerge. The lifetime of these applications is measured in months – and even days.

The value of data can also be short-lived. There’s a need to analyse it at source, as it arrives, in real time. Data solutions that employ in-memory processing for example, give your users immediate, drill-down access to all the data across your enterprise applications, data warehouses and data lakes.
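
In-memory processing can be illustrated with Python's built-in sqlite3 module: a ":memory:" database keeps all data in RAM, so the analytic query below never touches disk (a minimal sketch, not a production analytics engine):

```python
import sqlite3

# The whole database lives in RAM (":memory:"), so the aggregate query
# below runs without any disk I/O. Table and values are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("EMEA", 100.0), ("EMEA", 50.0), ("APAC", 75.0)])

rows = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 150.0)]
```

Enterprise in-memory platforms apply the same principle at terabyte scale, which is what makes the immediate, drill-down access described above possible.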

3) Come see us at Strata Data Conference
Ultimately, your ability to innovate at speed with security and governance assured comes down to your IT infrastructure.

Cisco UCS is a trusted computing platform proven to deliver lower TCO and optimum performance and capacity for data-intensive workloads.

85% of Fortune 500 companies and more than 60,000 organisations globally rely on our validated solutions. These combine our servers with software from a broad ecosystem of partners to simplify the task of pooling IT resources and storing data across systems.

Modern big data and machine learning in the era of cloud, docker and kubernetes

Crucially, they come with role- and policy-based management, which means you can configure hundreds of storage servers as easily as you can configure one, making scale-out a breeze as your data analytics projects mature.

If you’re looking to transform your business and turn your data into insights faster, there are plenty of reasons to come visit us on Booth 316:

4) Accelerated Analytics
If your data lake is deep and your data scientists are struggling to make sense of what lies beneath, then our MapD demo, powered by data from mobile masts, will show you how to cut through the depths and quickly find the enlightenment you seek.

5) Deep learning with Cloudera Data Science Workbench
For those with a Hadoop cluster to manage their data lakes and deep learning framework, we’ll be demonstrating how to accelerate the training of deep learning modules with Cisco UCS C240 and C480 servers equipped with 2 and 6 GPUs respectively. We’ll also show you how to support growing cluster sizes using cloud-managed service profiles rather than more manpower.

6) Get with the Cisco Gateway
If you’re already a customer and fancy winning some shiny new tech, why not step through the Gateway to grow your reputation as a thought leader and showcase the success you’ve had?

7) Find your digital twin
To effectively create a digital twin of the enterprise, data scientists have to incorporate data sources inside and outside of the data centre for a holistic 360-degree view. Come join our resident expert Han Yang for his session on how we’re benefiting from big data and analytics, as well as helping our customers incorporate data sources from the Internet of Things and deploy machine learning at the edge and in the enterprise.

8) Get the scoop with SUSE
We’re set to unveil a new integration of SUSE Linux Enterprise Server and Cisco UCS. There’ll be SUSE specialists on our booth, so you can be the first to find out more about what’s in the pipeline.

What is Kubernetes?

Kubernetes is an open source system for automatically orchestrating and managing containerized applications.

AI meets Big Data

Designing applications using open source Linux containers is an ideal approach for building cloud-native applications for hosting in private, public or hybrid clouds. Kubernetes automates the deployment, management and scaling of these containerized applications, making the whole process easier, faster and more efficient.

Businesses of all types are looking for a new paradigm to drive faster innovation and agility. This is changing forever how applications are architected, deployed, scaled and managed to deliver new levels of innovation and agility. Kubernetes has become widely embraced by almost everyone interested in dramatically accelerating application delivery with containerized and cloud-native workloads.

Kubernetes is now seen as the outright market leader by software developers, operations teams, DevOps professionals and IT business decision makers.

Manage Microservices & Fast Data Systems on One Platform w/ DC/OS

Kubernetes Heritage

Kubernetes was originally the brainchild of Google. Google has been building and managing container-based applications and cloud-native workloads in production and at scale for well over a decade. Kubernetes emerged from the knowledge and experience gained with earlier Google container management systems called Borg and Omega.

Extending DevOps to Big Data Applications with Kubernetes

Now an open source project, Kubernetes is under the stewardship of the Cloud Native Computing Foundation (CNCF) and The Linux Foundation. This ensures the project benefits from the best ideas and practices of a huge open source community and guards against the danger of vendor lock-in.

Key Features:

  • Deploy applications rapidly and predictably to private, public or hybrid clouds
  • Scale applications non-disruptively
  • Roll out new features seamlessly
  • Make lean and efficient use of computing resources
  • Keep production applications up and running with self-healing capabilities
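The self-healing capability in the last bullet comes from Kubernetes’ reconciliation model: a controller continually compares the desired state against the observed state and corrects any drift. A minimal, purely illustrative sketch of that control loop in Python (the class and names here are invented for illustration; real controllers talk to the Kubernetes API server):

```python
# Toy reconciliation loop illustrating Kubernetes-style self-healing.
# All names are illustrative; this is not the Kubernetes API.

class ReplicaController:
    def __init__(self, desired_replicas):
        self.desired = desired_replicas
        self.running = []  # ids of "pods" currently running

    def observe(self):
        """Return the observed state (here, just the replica count)."""
        return len(self.running)

    def reconcile(self):
        """Drive the observed state toward the desired state."""
        actual = self.observe()
        if actual < self.desired:            # pods crashed: replace them
            for i in range(actual, self.desired):
                self.running.append(f"pod-{i}")
        elif actual > self.desired:          # scaled down: remove extras
            del self.running[self.desired:]
        return self.observe()

ctrl = ReplicaController(desired_replicas=3)
ctrl.reconcile()          # starts 3 pods
ctrl.running.pop()        # simulate a pod failure
print(ctrl.reconcile())   # the loop heals the cluster back to 3
```

The point of the pattern is that operators declare *what* they want (three replicas), and the control loop repeatedly works out *how* to get there, which is what makes non-disruptive scaling and seamless rollouts possible.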

SUSE and Kubernetes

SUSE believes Kubernetes will be a key element of the application delivery solutions needed to drive the enterprise business of the future.

Big data and Kubernetes

Here is a selection of SUSE products built using Kubernetes:

SUSE Cloud Application Platform brings advanced Cloud Foundry productivity to modern Kubernetes infrastructure, helping software development and operations teams to streamline lifecycle management of traditional and new cloud-native applications. Building on SUSE CaaS Platform, SUSE Cloud Application Platform adds a unique Kubernetes-based implementation of Cloud Foundry, introducing a powerful DevOps workflow into a Kubernetes environment. Built on enterprise-grade Linux and with full Cloud Foundry and Kubernetes certification, it is an outstanding platform to support the entire development lifecycle for traditional and new cloud-native applications.

SUSE OpenStack Cloud makes it easy to spin up Kubernetes clusters in a fully multi-tenant environment, allowing different users to have their own Kubernetes cluster. Customers can either use the built-in support for OpenStack Magnum or leverage SUSE CaaS Platform, which adds the benefits of ready-to-run images, templates and Heat automation. With these Kubernetes-as-a-Service capabilities, it’s no wonder OpenStack users are reported to be adopting containers three times faster than the rest of the enterprise market.

SUSE CaaS Platform is a certified Kubernetes software distribution. It provides an enterprise-class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. Using SUSE CaaS Platform, enterprises can reduce application delivery cycle times and improve business agility.

What's the Hadoop-la about Kubernetes?


Big data, long an industry buzzword, is now commonplace in most businesses. A 2014 Gartner survey found that 73 percent of organizations had already invested, or planned to invest, in big data by 2016. For many companies, the question is no longer how to manage and harness data, but how to do so even more effectively. The next frontier for big data is speed: if you can’t analyze big data in real time, you lose much of the value of the information passing through your databases.

What is fast data?

Fast Data with Apache Ignite and Apache Spark - Christos Erotocritou

While big data refers to the massive fire hose of information generated each hour, fast data refers to data that provides real-time insights. In many industries, especially payments, analyzing information quickly is crucial to the bottom line. For example, fast data could prevent a massive breach that would expose sensitive customer information. In such cases, analyzing data in real time matters far more than storing it in massive quantities: when it comes to ecommerce fraud, the insights gained in the moment matter most.

Kubernetes vs Docker Swarm | Container Orchestration War | Kubernetes Training | Edureka

As a Wired article put it, where in the past, gaining insights from big data was like finding a needle in a haystack, fast data is like finding the needle as soon as it’s dropped.

Fast data for payments

“For payment systems, decisions must be made in the sub-second range,” Richard Harris, head of international operations at Feedzai, recently told Payment Cards and Mobile. “Our clients typically require 20-50 millisecond response times. So we’ve overcome this by using technology founded in the Big Data era, such as Hadoop and Cassandra.”
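Latency budgets like these shape the architecture: each transaction must be scored synchronously, inside the authorization path, rather than queued for later batch analysis. A hypothetical sketch of such a check in Python (the features, weights and threshold are invented for illustration, not Feedzai’s model):

```python
import time

# Hypothetical real-time fraud check: score each transaction as it
# arrives, within a tight latency budget, instead of batching it.

BUDGET_MS = 50  # illustrative upper bound from the quoted 20-50 ms range

def score(txn):
    """Toy risk score from weighted features; higher means riskier."""
    return 0.6 * txn["amount_zscore"] + 0.4 * txn["new_device"]

def authorize(txn, threshold=0.8):
    """Decide approve/decline and report whether we met the budget."""
    start = time.perf_counter()
    decision = "decline" if score(txn) > threshold else "approve"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return decision, elapsed_ms <= BUDGET_MS

decision, in_budget = authorize({"amount_zscore": 2.0, "new_device": 1})
print(decision, in_budget)
```

A production system would replace the toy `score` with a trained model, but the structural point stands: the decision happens in-line with the payment, so the scoring path itself must fit inside the sub-second window.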

Apache Spark on Kubernetes - Anirudh Ramanathan & Tim Chen

Payment processor First Data and Feedzai have teamed up to use machine learning to fight fraud. Feedzai monitors First Data’s STAR Network, which enables debit payments for the company’s clients.

Todd Clark, Senior Vice President and Head of STAR Network and Debit Processing at First Data, explained: “The combination of Feedzai’s machine learning software and First Data’s experience has made the STAR Network capable of scoring over 3,000 transactions per second.”

“This big speed and accuracy advantage means the STAR network is less of an attractive target for fraud,” Harris said.

Infrastructure challenges

Not all systems are set up to handle fast data. Without the right tools to manage the data flow quickly, valuable insights are lost or arrive too late to be of use. While many existing platforms can handle and store large quantities of data, most fall behind when it comes to analyzing the information in real time. To begin with, organizations need to move beyond systems that only allow batch processing, according to Wired. With batch processing, companies tell computers to analyze large batches of information, which are processed one at a time – similar to the way credit card bills are processed at the end of each month.

With most companies now set up to gain insights from big data, the next step is to enable real-time insights. In the payment world, this means catching potential fraud as it’s happening, not waiting until it has already happened.

Beyond Hadoop: The Rise of Fast Data

Over the past two to three years, companies have started transitioning from big data, where analytics are processed after the fact in batch mode, to fast data, where data analysis is done in real time to provide immediate insights. For example, in the past, retail stores such as Macy’s analyzed historical purchases by store to determine which products to add to stores in the next year. In comparison, Amazon drives personalized recommendations based on hundreds of individual characteristics about you, including which products you viewed in the last five minutes.

Containerized Hadoop beyond Kubernetes

Big data is collected from many sources in real time, but is processed after collection, in batches, to provide information about the past. Much of that value is lost if real-time streaming data is simply dumped into a database, because the opportunity to act on the data as it is collected is gone.

Super Fast Real-time Data Processing on Cloud-Native Architecture [I] - Yaron Haviv, iguazio

Modern applications need to respond to events happening now, to provide insights in real time. To do this they use fast data, which is processed as it is collected to provide real-time insights. Whereas big data provided insights into user segmentation and seasonal trending using descriptive (what happened) and predictive analytics (what will likely happen), fast data allows for real-time recommendations and alerting using prescriptive analytics (what should you do about it).
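The contrast between the two modes can be made concrete: batch processing summarizes a completed window after the fact, while stream processing acts on each event as it arrives. A minimal illustration in Python (the event shape and the alert rule are invented for this sketch):

```python
# Batch (big data): analyze the whole window after collection.
def batch_report(events):
    """Descriptive analytics: what happened over the window."""
    total = sum(e["value"] for e in events)
    return {"count": len(events), "total": total}

# Stream (fast data): act on each event as it is collected.
def stream_alerts(events, limit=100):
    """Prescriptive analytics: decide what to do, per event, right now."""
    for e in events:
        if e["value"] > limit:        # react immediately, not end-of-day
            yield f"alert:{e['id']}"

events = [{"id": 1, "value": 40},
          {"id": 2, "value": 150},
          {"id": 3, "value": 70}]
print(batch_report(events))           # summary, available only afterwards
print(list(stream_alerts(events)))    # alerts fired as events arrive
```

Both functions see the same data; the difference is *when* the insight is available. The batch report can only describe the window once it has closed, while the streaming path raises the alert the moment the offending event passes through.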

Big help for your first big data project

It’s clear. Today, big data is changing the way companies work. What hasn’t been clear is how companies should go about implementing big data projects.

Until now.

Our highly practical workbook is full of advice about big data that’ll help you keep your project on track. From setting clear goals to strategic resourcing and ideal big data architectures, we’ve covered everything you need to know about big data.

Streaming Big Data with Heron on Kubernetes Cluster

Read “The Big, Big Data Workbook” to gain insights into:

  • How to choose the right project and set up the right goals
  • How to build the right team and maximize productivity
  • What your data governance framework should look like
  • The architecture and processes you should aim to build

“The Big, Big Data Workbook” is a comprehensive guide to the practical aspects of big data and an absolute must-read if you’re attempting to bring greater insights to your enterprise.

More Information: