28 June 2017

Red Hat OpenShift and Orchestrating Containers With KUBERNETES!



OVERVIEW

Kubernetes is a tool for orchestrating and managing Docker containers. Red Hat provides several ways you can use Kubernetes that include:

  • OpenShift Container Platform: Kubernetes is built into OpenShift, allowing you to configure Kubernetes, assign host computers as Kubernetes nodes, deploy containers to those nodes in pods, and manage containers across multiple systems. The OpenShift Container Platform web console provides a browser-based interface to using Kubernetes.
  • Container Development Kit (CDK): The CDK provides Vagrantfiles to launch the CDK with either OpenShift (which includes Kubernetes) or a bare-bones Kubernetes configuration. This gives you the choice of using the OpenShift tools or Kubernetes commands (such as kubectl) to manage Kubernetes.
  • Kubernetes in Red Hat Enterprise Linux: To try out Kubernetes on a standard Red Hat Enterprise Linux server system, you can install a combination of RPM packages and container images to manually set up your own Kubernetes configuration.

Resilient microservices with Kubernetes - Mete Atamel


Kubernetes, or k8s (k, 8 characters, s...get it?), or “kube” if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, or hybrid clouds.

The Illustrated Children's Guide to Kubernetes


Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.) Google generates more than 2 billion container deployments a week—all powered by an internal platform: Borg. Borg was the predecessor to Kubernetes and the lessons learned from developing Borg over the years became the primary influence behind much of the Kubernetes technology.

Fun fact: The seven spokes in the Kubernetes logo refer to the project’s original name, “Project Seven of Nine.”

Kubernetes & Container Engine


Red Hat was one of the first companies to work with Google on Kubernetes, even prior to launch, and has become the second-leading contributor to the upstream Kubernetes project. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation in 2015.

An Introduction to Kubernetes


Why do you need Kubernetes?

Real production apps span multiple containers. Those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.


Kubernetes also needs to integrate with networking, storage, security, telemetry and other services to provide a comprehensive container infrastructure.

Of course, this depends on how you’re using containers in your environment. A rudimentary application of Linux containers treats them as efficient, fast virtual machines. Once you scale this to a production environment and multiple applications, it's clear that you need multiple, colocated containers working together to deliver the individual services. This significantly multiplies the number of containers in your environment and as those containers accumulate, the complexity also grows.

Hands on Kubernetes 


Kubernetes fixes a lot of common problems with container proliferation by sorting containers together into “pods.” Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services—like networking and storage—to those containers. Other parts of Kubernetes help you load balance across these pods and ensure you have the right number of containers running to support your workloads.
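As an illustrative sketch of the pod idea, a single manifest can group an application container with a sidecar; both share one IP address and can mount the same volumes. All names and images below are placeholders, not part of any particular product:

```yaml
# Hypothetical pod grouping an app container with a logging sidecar.
# Both containers share the pod's network identity and storage.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.13          # placeholder image
    ports:
    - containerPort: 80
  - name: log-forwarder
    image: fluentd:latest      # placeholder sidecar image
```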

With the right implementation of Kubernetes—and with the help of other open source projects like Atomic Registry, Open vSwitch, heapster, OAuth, and SELinux— you can orchestrate all parts of your container infrastructure.

What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines. More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things that other application platforms or management systems let you do, but for your containers.

Red Hat is Driving Kubernetes/Container Security Forward - Clayton Coleman


Kubernetes’ features provide everything you need to deploy containerized applications. Here are the highlights:


  • Container Deployments & Rollout Control. Describe your containers and how many you want with a “Deployment.” Kubernetes will keep those containers running and handle deploying changes (such as updating the image or changing environment variables) with a “rollout.” You can pause, resume, and rollback changes as you like.
  • Resource Bin Packing. You can declare minimum and maximum compute resources (CPU & Memory) for your containers. Kubernetes will slot your containers into wherever they fit. This increases your compute efficiency and ultimately lowers costs.
  • Built-in Service Discovery & Autoscaling. Kubernetes can automatically expose your containers to the internet or other containers in the cluster. It automatically load-balances traffic across matching containers. Kubernetes supports service discovery via environment variables and DNS, out of the box. You can also configure CPU-based autoscaling for containers for increased resource utilization.
  • Heterogeneous Clusters. Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-premises, or on bare metal in your datacenter. Simply choose the composition according to your requirements.
  • Persistent Storage. Kubernetes includes support for persistent storage connected to stateless application containers. There is support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and many, many more.
  • High Availability Features. Kubernetes is planet scale. This requires special attention to high availability features such as multi-master or cluster federation. Cluster federation allows linking clusters together so that if one cluster goes down containers can automatically move to another cluster.
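As a hedged sketch of the deployment and resource-declaration features above, a manifest might look like the following. Names, images, and resource values are placeholders, and the exact `apiVersion` varies by Kubernetes release; changing the image in such a Deployment triggers a managed rollout:

```yaml
# Sketch of a Deployment with declared resource requests and limits.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myregistry/web:1.0   # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```

You could then pause, resume, or roll back a change with the `kubectl rollout` subcommands, for example `kubectl rollout undo deployment/web`.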

These key features make Kubernetes well suited for running different application architectures from monolithic web applications, to highly distributed microservice applications, and even batch driven applications.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running how you deployed them.
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.

Kubernetes, however, relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):


  • Registry, through projects like Atomic Registry or Docker Registry.
  • Networking, through projects like OpenvSwitch and intelligent edge routing.
  • Telemetry, through projects such as heapster, kibana, hawkular, and elastic.
  • Security, through projects like LDAP, SELinux, RBAC, and OAUTH with multi-tenancy layers.
  • Automation, with the addition of Ansible playbooks for installation and cluster life-cycle management.
  • Services, through a rich catalog of precreated content of popular app patterns.
You can get all of this, prebuilt and ready to deploy, with Red Hat OpenShift.

Container Management with OpenShift Red Hat - Open Cloud Day 2016



Learn to speak Kubernetes

Like any technology, there are a lot of words specific to the technology that can be a barrier to entry. Let's break down some of the more common terms to help you understand Kubernetes.

Master: The machine that controls Kubernetes nodes. This is where all task assignments originate.

Node: These machines perform the requested, assigned tasks. The Kubernetes master controls them.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod—no matter where it moves to in the cluster or even if it’s been replaced.

Kubelet: This service runs on nodes and reads the container manifests and ensures the defined containers are started and running.

kubectl: This is the command line configuration tool for Kubernetes.
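To tie several of these terms together, here is a hedged sketch of a replication controller and a service in one manifest (all names and images are placeholders): the replication controller keeps three identical pod copies running, and the service gives them one stable address no matter which node the pods land on.

```yaml
# Sketch: a replication controller plus a service selecting its pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

You would create both with `kubectl create -f` and inspect them with `kubectl get rc,svc`.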

Check out the Kubernetes Reference: https://kubernetes.io/docs/reference/

Using Kubernetes in production

Kubernetes is open source. And, as such, there’s not a formalized support structure around the technology—at least not one you’d trust your business to. If you have an issue with your implementation of Kubernetes while running in production, you’re not going to be very happy. And your customers probably won’t be, either.

Performance and Scalability Tuning Kubernetes for OpenShift and Docker by Jeremy Eder, Red Hat


That’s where Red Hat OpenShift comes in. OpenShift is Kubernetes for the enterprise—and a lot more. OpenShift includes all of the extra pieces of technology that make Kubernetes powerful and viable for the enterprise, including registry, networking, telemetry, security, automation, and services. With OpenShift, your developers can make new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration that can turn a good idea into new business quickly and easily.

Best of all, OpenShift is supported and developed by the #1 leader in open source, Red Hat.


Kubernetes runs on top of an operating system (Red Hat Enterprise Linux Atomic Host, for example) and interacts with pods of containers running on the nodes. The Kubernetes master takes the commands from an administrator (or DevOps team) and relays those instructions to the subservient nodes. This handoff works with a multitude of services to automatically decide which node is best suited for the task. It then allocates resources and assigns the pods in that node to fulfill the requested work.

So, from an infrastructure point of view, there is little change to how you’ve been managing containers. Your control over those containers happens at a higher level, giving you better control without the need to micromanage each separate container or node. Some work is necessary, but it’s mostly a question of assigning a Kubernetes master, defining nodes, and defining pods.


What about docker?
The docker technology still does what it's meant to do. When kubernetes schedules a pod to a node, the kubelet on that node will instruct docker to launch the specified containers. The kubelet then continuously collects the status of those containers from docker and aggregates that information in the master. Docker pulls containers onto that node and starts and stops those containers as normal. The difference is that an automated system asks docker to do those things instead of the admin doing so by hand on all nodes for all containers.


OpenStack Compute for Containers
While many customers are already running containers on Red Hat Enterprise Linux 7 as an OpenStack guest operating system, we are also seeing greater interest in Red Hat Enterprise Linux Atomic Host as a container-optimized guest OS option. And while most customers run their containers in guest VMs driven by Nova, we are also seeing growing interest in customers who want to integrate with OpenStack Ironic to run containers on bare metal hosts. With OpenStack, customers can manage both virtual and physical compute infrastructure to serve as the foundation for their container application workloads.

Earlier this year we also demonstrated how OpenStack administrators could use Heat to deploy a cluster of Nova instances running Kubernetes. The Heat templates contributed by Red Hat simplify the provisioning of new container host clusters, which are ready to run container workloads orchestrated by Kubernetes. Heat templates also serve as the foundation for the OpenStack Magnum API to make container orchestration engines like Kubernetes available as first-class resources in OpenStack. We also recently created Heat templates to deploy OpenShift 3 and added them to the OpenStack Community App Catalog. Our next step is to make elastic provisioning and deprovisioning of Kubernetes nodes based on resource demand a reality.

Building Clustered Applications with Kubernetes and Docker - Stephen Watt, Red Hat


Linux
Linux is at the foundation of OpenStack and modern container infrastructures. While we are excited to see Microsoft invest in Docker to bring containers to Windows, they are still Linux containers after all. Red Hat’s first major contribution was bringing containers to enterprise Linux and RPM-based distributions like Fedora, Red Hat Enterprise Linux and CentOS. Since then we launched Project Atomic and made available Red Hat Enterprise Linux Atomic Host as a lightweight, container-optimized, immutable Linux platform for enterprise customers. With the recent surge in new container-optimized Linux distributions being announced, we see this as more than just a short-term trend. This year we plan to release Red Hat Enterprise Linux Atomic Host 7.2 and talk about how customers are using it as the foundation for containerized application workloads.

Red Hat Container Strategy


Docker
Docker has defined the packaging format and runtime for containers, which has now become the de facto standard for the industry, as embodied in OCI and the runC reference implementation. Red Hat continues to contribute extensively to the Docker project and is now helping to drive governance of OCI and the implementation of runC. We are committed to making Docker more secure, both in the container runtime and in container content, and to working with our partners to enable customers to safely containerize their most mission-critical applications.

Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1


Kubernetes
Kubernetes is Red Hat’s choice for container orchestration and management and it is also seeing significant growth with more than 500 contributors and nearly 20,000 commits to the Kubernetes project in just over a year. While there is a lot of innovation in the container orchestration space, we see Kubernetes as another emerging standard given the combination of Google’s experience running container workloads at massive scale, Red Hat’s contributions and experience making open source work in enterprise environments, and the growing community surrounding it.

Microservices with Docker, Kubernetes, and Jenkins


This “LDK” stack is the foundation of Red Hat OpenShift 3 and Atomic Enterprise Platform announced recently at Red Hat Summit. It’s also the foundation of the Google Container Engine which is now generally available and other vendor and customer solutions that were featured recently at LinuxCon during the Kubernetes 1.0 launch.

Red Hat has helped drive innovation in this new Container stack while also driving integration with OpenStack. We have focused our efforts on integrating in the three core pillars of OpenStack – compute, networking and storage. Here’s how:

OpenStack Networking for Containers
Red Hat leverages the Kubernetes networking model to enable networking across multiple containers, running across multiple hosts. In Kubernetes, each container (or “pod”) has its own IP address and can communicate with other containers/pods, regardless of which host they run on. Red Hat integrated RHEL Atomic Host with Flannel for container networking and also developed a new OVS-based SDN solution that is included in OpenShift 3 and Atomic Enterprise Platform. But in OpenStack environments, users may want to leverage Neutron and its rich ecosystem of networking plugins to handle networking for containers. We’ve been working in both the OpenStack and Kubernetes communities to integrate Neutron with Kubernetes networking to enable this.

OpenShift Enterprise 3.1 vs kubernetes


OpenStack Storage for Containers
Red Hat also leverages Kubernetes storage volumes to enable users to run stateful services in containers like databases, message queues and other stateful apps. Users map their containers to persistent storage clusters, leveraging Kubernetes storage plugins like NFS, iSCSI, Gluster, Ceph, and more. The OpenStack Cinder storage plugin currently under development will enable users to map to storage volumes managed by OpenStack Cinder.
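A hedged sketch of how a stateful service maps to persistent storage follows. All names, sizes, and images are placeholders; the volume that satisfies the claim could be backed by NFS, Gluster, Ceph, or (once available) OpenStack Cinder:

```yaml
# Sketch: a claim for persistent storage and a pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: postgres
    image: postgres:9.5        # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data
```

The pod references the claim by name, so the backing storage technology can change without touching the pod definition.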

Linux, Docker, and Kubernetes form the core of Red Hat’s enterprise container infrastructure. This LDK stack integrates with OpenStack’s compute, storage and networking services to provide an infrastructure platform for running containers. In addition to these areas, there are others that we consider critical for enterprises who are building a container-based infrastructure. A few of these include:

  • Container Security – Red Hat is working with Docker and the Open Containers community on container security. Security is commonly cited as one of the leading concerns limiting container adoption, and Red Hat is tackling this on multiple levels. The first is multi-tenant isolation to help prevent containers from exploiting other containers or the underlying container host. Red Hat contributed SELinux integration to Docker to provide a layered security model for container isolation, and is also contributing to the development of features like privileged containers and user namespaces. The second area is securing container images to verify trusted content, which is another key concern. Red Hat has driven innovation in areas like image signing, scanning, and certification, and we recently announced our work with Black Duck to help make application containers free from known vulnerabilities.
  • Enterprise Registry – Red Hat provides a standard Docker registry as a fully integrated component of both OpenShift and Atomic. This enables customers to more securely store and manage their own Docker images for enterprise deployments. Administrators can manage who has access to images, determine which images can be deployed and manage image updates.
  • Logging & Metrics – Red Hat has already integrated the ELK stack with Red Hat Enterprise Linux OpenStack Platform. It is doing the same in OpenShift and Atomic to provide users with aggregate logging for containers. This will enable administrators to get aggregated logs across the platform and also simplify log access for application developers. This work extends into integrated metrics for containerized applications and infrastructure.
  • Container Management – Red Hat CloudForms enables infrastructure and operations teams to manage application workloads across many different deployment fabrics – physical, virtual, public cloud and also private clouds based on OpenStack. CloudForms is being extended to manage container-based workloads in its next release. This will provide a single pane of glass to manage container-based workloads on OpenStack infrastructure.

Ultimately the goal of containers is to provide a better way to package and deploy your applications and enable application developers. Containers provide many benefits to developers like portability, fast deployment times and a broad ecosystem of packaged container images for a wide array of software stacks. As applications become more componentized and highly distributed with the advent of microservices architectures, containers provide an efficient way to deploy these microservices without the overhead of traditional VMs.

Red Hat OpenShift Container Platform Overview


But to provide a robust application platform and enable DevOps and Continuous Delivery, we also need to solve other challenges. Red Hat is tackling many of these in OpenShift, which is a containerized application platform that natively integrates Docker and is built on Red Hat’s enterprise container stack. These challenges include:

Build Automation – Developers moving to containerize their applications will likely need to update their build tools and processes to build container images. Red Hat is working on automating the Docker image build process at scale and has developed innovations like OpenShift source-to-image which enables users to push code changes and patches to their application containers, without being concerned with the details of Dockerfiles or Docker images.
Deployment Automation and CI/CD – Developers will also need to determine how containers will impact their deployment workflows and integrate with their CI/CD systems. Red Hat is working on automating common application deployment patterns with containers, like rolling, canary and A/B deployments. We are also working to enable CI/CD with containers, with work underway in OpenShift upstream projects like Origin and Fabric8.
Containerized Middleware and Data Services – Administrators will need to provide their developers with trusted images to build their applications. Red Hat provides multiple language runtime images in OpenShift including Java, Node.js, Python, Ruby and more. We are also providing containerized middleware images like JBoss EAP, A-MQ and Fuse as well as database images from Red Hat’s Software Collections including MongoDB, Postgres and MySQL.
Developer Self Service – Ultimately, developers want to access all of these capabilities without having to call on IT. With OpenShift, developers can access self-service web, CLI and IDE interfaces to build and deploy containerized applications. OpenShift’s developer- and application-centric view provides a great complement to OpenStack.

Containers Anywhere with OpenShift by Red Hat


This is just a sampling of the work we are doing in Containers and complements all the great work Red Hat contributes to in the OpenStack community. OpenStack and Containers are two examples of the tremendous innovation happening in open source and this week we are showcasing how they are great together.

More Information:

http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_orchestrating_containers_with_kubernetes

https://www.redhat.com/en/containers/what-is-kubernetes

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/getting_started_with_kubernetes/

http://rhelblog.redhat.com/tag/kubernetes/

http://redhatstackblog.redhat.com/tag/kubernetes/

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/

https://www.redhat.com/en/services/training/do180-introduction-containers-kubernetes-red-hat-openshift

https://blog.openshift.com/red-hat-chose-kubernetes-openshift/

https://www.openshift.com/container-platform/kubernetes.html

https://www.openshift.com

https://cloudacademy.com/blog/what-is-kubernetes/

https://keithtenzer.com/2015/04/15/containers-at-scale-with-kubernetes-on-openstack/





26 May 2017

KVM (Kernel Virtual Machine) or Xen? Choosing a Virtualization Platform

KVM versus Xen which should you choose?

KVM (Kernel Virtual Machine)

KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
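Since KVM depends on those hardware extensions, a quick check before loading the modules is to look for the corresponding CPU flags. This is an illustrative sketch that assumes a Linux host with /proc mounted:

```shell
# Sketch: detect whether the CPU advertises the hardware virtualization
# extensions KVM requires (vmx = Intel VT, svm = AMD-V).
check_virt() {
  if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "supported"
  else
    echo "not detected"
  fi
}

check_virt
# If supported, the matching module can then be loaded, e.g.
#   modprobe kvm-intel    (or kvm-amd)
```

On a host where this prints "supported", `lsmod | grep kvm` should show kvm plus kvm_intel or kvm_amd once the modules are loaded.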

Virtualization Architecture & KVM




Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

Virtualization Platform Smackdown: VMware vs. Microsoft vs. Red Hat vs. Citrix



KVM is open source software. The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.

Blogs from people active in KVM-related virtualization development are syndicated at http://planet.virt-tools.org/


KVM-Features

This is a possibly incomplete list of KVM features, together with their status.

Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM




Notable KVM features include:

  • QMP - Qemu Monitor Protocol
  • KSM - Kernel Samepage Merging
  • Kvm Paravirtual Clock - A Paravirtual timesource for KVM
  • CPU Hotplug support - Adding cpus on the fly
  • PCI Hotplug support - Adding pci devices on the fly
  • vmchannel - Communication channel between the host and guests
  • migration - Migrating Virtual Machines
  • vhost - In-kernel virtio backend that accelerates guest network I/O
  • SCSI disk emulation - Emulated SCSI disks for guests
  • Virtio Devices - Paravirtualized network, block, and serial devices
  • CPU clustering
  • hpet - Emulated High Precision Event Timer
  • Device assignment - Passing host PCI devices directly to a guest
  • pxe boot - Network (PXE) booting of guests
  • iscsi boot - Booting guests from iSCSI targets
  • x2apic - x2APIC support to reduce interrupt-handling overhead in guests
  • Floppy - Emulated floppy drive
  • CDROM - Emulated CD-ROM drive
  • USB - Emulated USB controller and devices
  • USB host device passthrough - Passing host USB devices to a guest
  • Sound - Emulated sound devices
  • Userspace irqchip emulation
  • Userspace pit emulation
  • Balloon memory driver - Growing and shrinking guest memory on demand
  • Large pages support - Backing guest memory with huge pages
  • Stable Guest ABI - Keeping guest-visible hardware stable across QEMU upgrades

Xen Hypervisor



The Xen hypervisor was first created by Keir Fraser and Ian Pratt as part of the Xenoserver research project at Cambridge University in the late 1990s. A hypervisor "forms the core of each Xenoserver node, providing the resource management, accounting and auditing that we require." The earliest web page dedicated to the Xen hypervisor is still available on Cambridge web servers.  The early Xen history can easily be traced through a variety of academic papers from Cambridge University. Controlling the XenoServer Open Platform is an excellent place to begin in understanding the origins of the Xen hypervisor and the XenoServer project. Other relevant research papers can be found at:



  • Xen and the Art of Virtualization - Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, Andrew Warfield. Published at SOSP 2003
  • Xen and the Art of Repeated Research - Bryan Clark, Todd Deshane, Eli Dow, Stephen Evanchik, Matthew Finlayson, Jason Herne, Jenna Neefe Matthews. Clarkson University. Presented at FREENIX 2004





  • Safe Hardware Access with the Xen Virtual Machine Monitor - Keir Fraser, Steven Hand, Rolf Neugebauer, Ian Pratt, Andrew Warfield, Mark Williamson. Published at OASIS ASPLOS 2004 Workshop
  • Live Migration of Virtual Machines - Christopher Clark, Keir Fraser, Steven Hand, Jacob Gorm Hansen, Eric Jul, Christian Limpach, Ian Pratt, Andrew Warfield. Published at NSDI 2005
  • Ottawa Linux Symposium 2004 Presentation
  • Linux World 2005 Virtualization BOF Presentation - Overview of Xen 2.0, Live Migration, and Xen 3.0 Roadmap
  • Xen Summit 3.0 Status Report - Cambridge 2005
  • Introduction to the Xen Virtual Machine - Rami Rosen, Linux Journal. Sept 1, 2005
  • Virtualization in Xen 3.0 - Rami Rosen, Linux Journal. March 2, 2006
  • Xen and the new processors - Rami Rosen, Lwn.net. May 2, 2006

Over the years, the Xen community has hosted several Xen Summit events where the global development community meets to discuss all things Xen. Many presentations and videos of those events are available here.

Why Xen Project?

The Xen Project team is a global open source community that develops the Xen Project Hypervisor and its associated subprojects. Xen (pronounced /ˈzɛn/) takes its name from the ancient Greek term xenos (ξένος), which can refer to guest-friends whose relationship is constructed under the ritual of xenia ("guest-friendship"); the name is in turn a wordplay on guest operating systems as well as on a community of developers and users. The original website was created in 2003 to allow a global community of developers to contribute to and improve the hypervisor. Click on the link to find out more about the project's interesting history.

Virtualization and Hypervisors




The community supporting the project follows a number of principles: Openness, Transparency, Meritocracy and Consensus Decision Making. Find out more about how the community governs itself.

What Differentiates the Xen Project Software?

Xen and the art of embedded virtualization (ELC 2017)



There are several virtualization technologies available in the world today. Our Xen Project virtualization and cloud software includes many powerful features which make it an excellent choice for many organizations:

Supports multiple guest operating systems: Linux, Windows, NetBSD, FreeBSD. A virtualization technology that supports only a few guest operating systems essentially locks the organization into those choices for years to come. With our hypervisor, you have the flexibility to use what you need and add other operating system platforms as your needs dictate. You are in control.

VMware Alternative: Using Xen Server for Virtualization


Supports multiple cloud platforms: CloudStack, OpenStack. A virtualization technology which only supports one cloud technology locks you into that technology. With the world of the cloud moving so quickly, it could be a mistake to commit to one cloud platform too soon. Our software keeps your choices open as cloud solutions continue to improve and mature.
Reliable technology with a solid track record. The hypervisor has been in production for many years and is the #1 open source hypervisor according to analysts such as Gartner. Conservative estimates show that Xen has an active user base of more than 10 million: these are users, not merely hypervisor installations, which are an order of magnitude higher. Amazon Web Services alone runs half a million virtualized Xen Project instances according to a recent study, and other cloud providers such as Rackspace and hosting companies use the hypervisor at extremely large scale. Companies such as Google and Yahoo use the hypervisor at scale for their internal infrastructure. Our software is the basis of successful commercial products such as Citrix XenServer and Oracle VM, which support an ecosystem of more than 2,000 commercially certified partners today. It is clear that many major industry players regard our software as a safe virtualization platform for even the largest clouds.

Scalability: The hypervisor can scale up to 4,095 host CPUs with 16 TB of RAM. Using paravirtualization (PV), the hypervisor supports a maximum of 512 vCPUs with 512 GB of RAM per guest. Using hardware virtualization (HVM), it supports a maximum of 128 vCPUs with 1 TB of RAM per guest.

Performance: Xen tends to outperform other open source virtualization solutions in most configurations. See Ubuntu 15.10: KVM vs. Xen vs. VirtualBox Virtualization Performance (Phoronix, October 2015) for recent benchmarks of Xen 4.6.

High-Performance Virtualization for HPC Cloud on Xen - Jun Nakajima & Tianyu Lan, Intel Corp.



Security: Security is one of the major concerns when moving critical services to virtualization or cloud computing environments. The hypervisor provides a high level of security due to its modular architecture, which separates the hypervisor from the control and guest operating systems. The hypervisor itself is thin and thus presents a minimal attack surface. The software also contains the Xen Security Modules (XSM), developed and contributed to the project by the NSA for ultra-secure use cases. XSM introduces a control policy that provides fine-grained control over domains and their interactions with each other and with the outside world. And, of course, it is also possible to use the hypervisor with SELinux. In addition, Xen’s Virtual Machine Introspection (VMI) subsystems make it the best hypervisor for security applications. For more information, see Virtual Machine Introspection with Xen and VM Introspection: Practical Applications.

Live Patching the Xen Project Hypervisor




The Xen Project also has a dedicated security team, which handles security vulnerabilities in accordance with our Security Policy. Unlike almost all corporations and even most open source projects, the Xen Project properly discloses, via an advisory, every vulnerability discovered in supported configurations. We also often publish advisories about vulnerabilities in other relevant projects, such as Linux and QEMU.

Flexibility: Our hypervisor is the most flexible hypervisor on the market, enabling you to tailor your installation to your needs. There are many choices and trade-offs that you can make. For example, the hypervisor works on older hardware using paravirtualization, and on newer hardware using HVM or PV on HVM. Users can choose from three tool stacks (XL, XAPI and LIBVIRT) and from an ecosystem of software complementing the project, and can pick the most suitable flavour of Linux or Unix operating system for their needs. Further, the project's flexible architecture enables vendors to create Xen-based products and services for server, cloud, and desktop use, in particular for ultra-secure environments.

Modularity: Our architecture is uniquely modular, enabling a degree of scalability, robustness, and security suitable even for large, critical, and extremely secure environments. The control functionality in our control domain can be divided into small modular domains, each running a minimal kernel plus a driver, control logic, or other functionality: we call this approach Domain Disaggregation. Disaggregated domains are conceptually similar to processes in an operating system. They can be started and stopped on demand, without affecting the rest of the system. Disaggregated domains reduce the attack surface and distribute bottlenecks. For example, you can restart an unresponsive device driver without affecting your VMs.

Analysis of the Xen code review process: An example of software development analytics



VM Migration: The software supports virtual machine migration. This allows you to react to changing loads on your servers, protecting your workloads.
Open Source: Open source means that you have influence over the direction of the code. You are not at the mercy of an immovable external organization whose priorities may not align with your organization's. You can participate and help ensure that your needs are heard in the process. And you never have to worry that some entity has decided to terminate the product for business reasons. An open source project will live as long as there are parties interested in advancing the software.

Multi-vendor support: The project enjoys support from a number of major software and service vendors. This gives end users numerous places to find support, as well as numerous service providers to work with. With such a rich commercial ecosystem around the project, there is plenty of interest in keeping it moving forward to ever greater heights.

KVM or Xen? Choosing a Virtualization Platform

When Xen was first released in 2003, the GPL'd hypervisor looked likely to take the crown as the virtualization platform for Linux. Fast forward to 2010, and the new kid in town has displaced Xen as the virtualization platform of choice for Red Hat and lives in the mainline Linux kernel. Which one should you choose? Read on for our look at the state of Xen vs. KVM.

Things in virtualization land move pretty fast. If you don't have time to keep up with the developments in KVM or Xen development, it's a bit confusing to decide which one (if either) you ought to choose. This is a quick look at the state of the market between Xen and KVM.

KVM and Xen

Xen is a hypervisor that supports x86, x86_64, Itanium, and ARM architectures, and can run Linux, Windows, Solaris, and some of the BSDs as guests on their supported CPU architectures. It's supported by a number of companies, primarily by Citrix, but also used by Oracle for Oracle VM, and by others. Xen can do full virtualization on systems that support virtualization extensions, but can also work as a hypervisor on machines that don't have the virtualization extensions.

KVM is a hypervisor that is in the mainline Linux kernel. Your host OS has to be Linux, obviously, but it supports Linux, Windows, Solaris, and BSD guests. It runs on x86 and x86-64 systems with hardware supporting virtualization extensions. This means that KVM isn't an option on older CPUs made before the virtualization extensions were developed, and it rules out newer CPUs (like Intel's Atom CPUs) that don't include virtualization extensions. For the most part, that isn't a problem for data centers that tend to replace hardware every few years anyway — but it means that KVM isn't an option on some of the niche systems like the SM10000 that are trying to utilize Atom CPUs in the data center.
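On Linux, the usual way to check for these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The sketch below shows the check as a small parsing function; it runs against a sample cpuinfo excerpt rather than the live file, so the sample string is illustrative, not output from any particular machine:

```python
# Check a /proc/cpuinfo-style "flags" line for hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V; KVM requires one of these, Xen PV does not.

def has_virt_extensions(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# Sample excerpt of what a VT-x capable Intel CPU reports:
sample = "flags\t\t: fpu vme de pse tsc msr pae vmx sse2 ht"
print(has_virt_extensions(sample))  # True
```

On a real system you would pass the contents of /proc/cpuinfo (for example, `open("/proc/cpuinfo").read()`) instead of the sample string.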

If you want to run a Xen host, you need to have a supported kernel. Linux doesn't come with Xen host support out of the box, though Linux has been shipping with support to run natively as a guest since the 2.6.23 kernel. What this means is that you don't just use a stock Linux distro to run Xen guests. Instead, you need to choose a Linux distro that ships with Xen support, or build a custom kernel. Or go with one of the commercial solutions based on Xen, like Citrix XenServer. The problem is that those solutions are not entirely open source.

And many do build custom kernels, or look to their vendors to do so. Xen is running on quite a lot of servers, from low-cost Virtual Private Server (VPS) providers like Linode to big boys like Amazon with EC2. A TechTarget article demonstrates how providers that have invested heavily in Xen are not likely to switch lightly. Even if KVM surpasses Xen technically, they're not likely to rip and replace the existing solutions in order to take advantage of a slight technical advantage.

And KVM doesn't yet have the technical advantage anyway. Because Xen has been around a bit longer, it has had more time to mature than KVM. You'll find some features in Xen that haven't yet appeared in KVM, though the KVM project has a lengthy TODO list that it is concentrating on. (The list isn't a direct match for parity with Xen, just a good idea of what the KVM folks are planning to work on.) KVM does have a slight advantage in the Linux camp of being the anointed mainline hypervisor. If you're running a recent Linux kernel, you've already got KVM built in. Red Hat Enterprise Linux 5.4 included KVM support, and the company is dropping Xen in favor of KVM in RHEL 6.

This is, in part, an endorsement of how far KVM has come technically. Not only does Red Hat have the benefit of employing much of the talent behind KVM, there's the benefit of introducing friction to companies that have cloned Red Hat Enterprise Linux and invested heavily in Xen. By dropping Xen from the roadmap, they're forcing other companies to drop Xen or pick up maintenance of Xen and diverging from RHEL. This means extra engineering costs, requiring more effort for ISV certifications, etc.

KVM isn't entirely on par with Xen, though it's catching up quickly. It has matured enough that many organizations feel comfortable deploying it in production. So does that mean Xen is on the way out? Not so fast.

There Can Be Only One?

The choice of KVM vs. Xen is as likely to be dictated by your vendors as anything else. If you're going with RHEL over the long haul, bank on KVM. If you're running on Amazon's EC2, you're already using Xen, and so on. The major Linux vendors seem to be standardizing on KVM, but there's plenty of commercial support out there for Xen. Citrix probably isn't going away anytime soon.

It's tempting in the IT industry to look at technology as a zero sum game where one solution wins and another loses. The truth is that Xen and KVM are going to co-exist for years to come. The market is big enough to support multiple solutions, and there's enough backing behind both technologies to ensure that they do well for years to come.

Containers vs. Virtualization: The new Cold War?


More Information:

https://sites.google.com/site/virtualizationtestingframework/

http://www.serverwatch.com/server-trends/slideshows/top-10-virtualization-technology-companies-for-2016.html

https://xenproject.org/users/why-the-xen-project.html

http://planet.virt-tools.org/

https://www.linux-kvm.org/page/KVM_Features

https://lwn.net/Articles/705160/

https://xenproject.org/about/history.html

https://www.linux-kvm.org/page/Guest_Support_Status

https://www.linux-kvm.org/page/Management_Tools

https://www.linux-kvm.org/page/HOWTO

https://xenserver.org/overview-xenserver-open-source-virtualization/open-source-virtualization-features.html

https://blog.xenproject.org/category/releases/

https://wiki.xenproject.org/wiki/Xen_Project_Release_Features

https://wiki.xen.org/wiki/Xen_Project_4.4_Feature_List

http://www.brendangregg.com/blog/2014-05-09/xen-feature-detection.html

https://onapp.com/2016/09/06/hypervisor-choice-xen-or-kvm/

https://www.suse.com/documentation/sles-12/singlehtml/book_virt/book_virt.html

https://www.linux.com/blogs/linux-foundation





27 April 2017

Microsoft touts SQL Server 2017 as 'first RDBMS with built-in AI'


The 2017 Microsoft Product Roadmap

Many key Microsoft products reached significant milestones in 2016, with next-gen versions of SharePoint Server, SQL Server and Windows Server all being rolled out alongside major updates to the Dynamics portfolio and, of course, Windows. This year's product roadmap looks to be a bit less crowded, though major changes are on tap for Microsoft's productivity solutions, while Windows 10 is poised for another landmark update. Here's what to watch for in the coming months.


With a constantly changing and increasingly diversifying IT landscape, particularly in terms of heterogeneous operating systems (Linux, Windows, etc.), IT organizations must contend with multiple data types, different development languages, and a mix of on-premises/cloud/hybrid environments, and must somehow simultaneously reduce operational costs. To enable you to choose the best platform for your data and applications, SQL Server is bringing its world-class RDBMS to Linux and Windows with SQL Server v.Next.



You will learn more about the SQL Server on Linux offering and how it provides a broader range of choice for all organizations, not just those who want to run SQL on Windows. It enables SQL Server to run in more private, public, and hybrid cloud ecosystems, to be used by developers regardless of programming languages, frameworks or tools, and further empowers ‘every person and every organization on the planet to achieve more.’

Bootcamp 2017 - SQL Server on Linux


Learn More about:

  • What’s next for SQL Server on Linux
  • The Evolution and Power of SQL Server 2016
  • Enabling DevOps practices such as Dev/Test and CI/CD with containers
  • What is new with SQL Server 2016 SP1: Enterprise class features in every edition
  • How to determine which SQL Server edition to deploy based on operation need, not feature set

SQL Server on Linux: High Availability and security on Linux


Why Microsoft for your operational database management system?

When it comes to the systems you choose for managing your data, you want performance and security that won't get in the way of running your business. As an industry leader in operational database management systems (ODBMS), Microsoft continuously improves its offerings to help you get the most out of your ever-expanding data world.

Read Gartner’s assessment of the ODBMS landscape and learn about the Microsoft "cloud first" strategy. In its latest Magic Quadrant report for ODBMS, Gartner positioned the Microsoft DBMS furthest in completeness of vision and highest in ability to execute; see the Gartner reprint covering SQL Server 2017.

Top Features Coming to SQL Server 2017
From Python to adaptive query optimization to the many cloud-focused changes (not to mention Linux!), Joey D'Antoni takes you through the major changes coming to SQL Server 2017.

Top three capabilities to get excited about in the next version of SQL Server

Microsoft announced the first public preview of SQL Server v.Next in November 2016, and since then we’ve had lots of customer interest, but a few key scenarios are generating the most discussion.

If you’d like to learn more about SQL Server v.Next on Linux and Windows, please join us for the upcoming Microsoft Data Amp online event on April 19 at 8 AM Pacific. It will showcase how data is the nexus between application innovation and intelligence—how data and analytics powered by the most trusted and intelligent cloud can help companies differentiate and out-innovate their competition.

In this blog, we discuss three top things that customers are excited to do with the next version of SQL Server.

Scenario 1: Give applications the power of SQL Server on the platform of your choice

With the upcoming availability of SQL Server v.Next on Linux, Windows, and Docker, customers will have the added flexibility to build and deploy more of their applications on SQL Server. In addition to Windows Server and Windows 10, SQL Server v.Next supports Red Hat Enterprise Linux (RHEL), Ubuntu, and SUSE Linux Enterprise Server (SLES). SQL Server v.Next also runs in Linux and Windows Docker containers, opening up even more possibilities to run on public and private cloud application platforms like Kubernetes, OpenShift, Docker Swarm, Mesosphere DC/OS, Azure Stack, and OpenStack. Customers will be able to continue to leverage existing tools, talent, and resources for more of their applications.


Some of the things customers are planning for SQL Server v.Next on Windows, Linux, and Docker include migrating existing applications from other databases on Linux to SQL Server; implementing new DevOps processes using Docker containers; developing locally on the dev machine of choice, including Windows, Linux, and macOS; and building new applications on SQL Server that can run anywhere—on Windows, Linux, or Docker containers, on-premises, and in the cloud.

SQL Server on Linux - March 2017


Scenario 2: Faster performance with minimal effort

SQL Server v.Next further expands the use cases supported by SQL Server’s in-memory capabilities, In-Memory OLTP and In-Memory ColumnStore. These capabilities can be combined on a single table, delivering the best Hybrid Transactional and Analytical Processing (HTAP) performance available in any database system. Both in-memory capabilities can yield performance improvements of more than 30x, making it possible to perform analytics in real time on operational data.

In v.Next, natively compiled stored procedures (In-Memory OLTP) now support JSON data as well as new query capabilities. For the columnstore, both building and rebuilding a nonclustered columnstore index can now be done online. Another critical addition to the columnstore is support for LOBs (Large Objects).

SQL Server on Linux 2017


With these additions, the parts of an application that can benefit from the extreme performance of SQL Server’s in-memory capabilities have been greatly expanded! We also introduced a new set of features that learn and adapt from an application’s query patterns over time without requiring actions from your DBA.


Scenario 3: Scale out your analytics

In preparation for the release of SQL Server v.Next, we are enabling the same High Availability (HA) and Disaster Recovery (DR) solutions on all platforms supported by SQL Server, including Windows and Linux. Always On Availability Groups is SQL Server’s flagship solution for HA and DR. Microsoft has released a preview of Always On Availability Groups for Linux in SQL Server v.Next Community Technology Preview (CTP) 1.3.

SQL Server Always On availability groups can have up to eight readable secondary replicas. Each of these secondary replicas can have replicas of its own. When daisy-chained together, these readable replicas can create massive scale-out for analytics workloads. This scale-out scenario enables you to replicate around the globe, keeping read replicas close to your business analytics users. It is of particular interest to users with large data warehouse implementations, and it is also easy to set up.
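To see why daisy-chaining multiplies read capacity, here is some illustrative arithmetic (not a product limit table): assuming a uniform fan-out of eight readable secondaries at each level of the chain, the replica count grows geometrically with the number of levels.

```python
# Illustrative arithmetic only: total readable replicas when each replica
# in a daisy chain fans out to up to 8 readable secondaries of its own.

def total_read_replicas(fanout: int, levels: int) -> int:
    # fanout + fanout^2 + ... + fanout^levels
    return sum(fanout ** level for level in range(1, levels + 1))

print(total_read_replicas(8, 1))  # 8  (a single availability group)
print(total_read_replicas(8, 2))  # 72 (each secondary re-published once)
```

In practice the fan-out you configure at each level depends on hardware and network budget; the point is simply that one extra level of chaining turns 8 readable copies into dozens.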

In fact, you can now create availability groups that span Windows and Linux nodes, and scale out your analytics workloads across multiple operating systems.

In addition, a cross-platform availability group can be used to migrate a database from SQL Server on Windows to Linux, or vice versa, with minimal downtime. You can learn more about SQL Server HA and DR on Linux by reading the blog post SQL Server on Linux: Mission-critical HADR with Always On Availability Groups.

To find out more, you can watch our SQL Server on Linux webcast. Find instructions for acquiring and installing SQL Server v.Next on the operating system of your choice at www.microsoft.com/sqlserveronlinux. To get your SQL Server app on Linux faster, you can nominate your app for the SQL Server on Linux Early Adopter Program (EAP). Sign up now to see if your application qualifies for technical support, workload validation, and help moving your application to production on Linux before general availability.

To find out more about SQL Server v.Next and get all the latest announcements, register now to attend Microsoft Data Amp, where data gets to work.

Microsoft announced the name and many of the new features of the next release of SQL Server at its Data Amp Virtual Event on Wednesday. While SQL Server 2017 may not have as comprehensive a feature set as SQL Server 2016, there is still some big news and some very interesting new features. The reason for this is simple: the development cycle for SQL Server 2017 is much shorter than the SQL Server 2016 development cycle. The big news from Wednesday's event is the release of SQL Server 2017 later this year on both Windows and Linux operating systems.

Microsoft Data Platform Airlift 2017 Rui Quintino Machine Learning with SQL Server 2016 and R Services


I was able to quickly download the latest Linux release on Docker and have it up and running on my Mac during today's briefing. (I have previously written about the Linux release here.) That speed of development is one of the major benefits of Docker that Microsoft hopes developers will leverage when building new applications. Docker is just one of many open source trends Microsoft has adopted in recent years with SQL Server. Wednesday's soft launch not only introduced SQL Server on Linux, but also included Python support, a new graph engine, and a myriad of other features.

First R, Now Python
One of the major features of SQL Server 2016 was the integration of R, an open source statistical analysis language, into the SQL Server database engine. Users can call the sp_execute_external_script stored procedure to run R code that takes advantage of parallelism in the database engine. Savvy users of this procedure might notice that its first parameter is @language. Microsoft designed the stored procedure to be open-ended, and Python is now the second language it supports. Python combines powerful scripting with eminent readability and is broadly used by IT admins, developers, data scientists, and data analysts. Additionally, Python can leverage external statistical packages to perform data manipulation and statistical analysis. When you combine this capability with Transact-SQL (T-SQL), the result is powerful.
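To give a feel for what such an in-database script looks like: with sp_execute_external_script, the query results arrive in the script as a pandas DataFrame named InputDataSet, and the rows you assign to OutputDataSet are returned to T-SQL. The standalone sketch below mimics that contract with plain dicts standing in for the DataFrames (the T-SQL wrapper is omitted, and the region/sales data is made up for illustration):

```python
# Standalone sketch of the kind of script body you would pass to
# sp_execute_external_script with @language = N'Python'.
# In SQL Server, InputDataSet/OutputDataSet are pandas DataFrames;
# plain dicts stand in here so the sketch runs anywhere.

InputDataSet = [
    {"region": "EMEA", "sales": 120.0},
    {"region": "EMEA", "sales": 80.0},
    {"region": "APAC", "sales": 50.0},
]

# Aggregate sales per region: the sort of manipulation you can now
# push into the database engine instead of pulling data out.
totals = {}
for row in InputDataSet:
    totals[row["region"]] = totals.get(row["region"], 0.0) + row["sales"]

OutputDataSet = [{"region": r, "total_sales": t} for r, t in sorted(totals.items())]
print(OutputDataSet)
```

The benefit of this pattern is that the aggregation runs next to the data, under the database engine's parallelism, rather than in a client application.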

SQL Server 2017: Advanced Analytics with Python
In this session you will learn how SQL Server 2017 takes in-database analytics to the next level with support for both Python and R; delivering unparalleled scalability and speed with new deep learning algorithms built in. Download SQL Server 2017: https://aka.ms/sqlserver17linuxyt




Big Changes to the Cloud
It is rare for a Microsoft launch event to omit news about cloud services, and Wednesday's event was no exception. Microsoft Azure SQL Database (formerly known as SQL Azure), the company's Database-as-a-Service offering, has always lacked complete compatibility with the on-premises (or in an Azure VM) version of SQL Server. Over time, compatibility has gotten much better, but there are still gaps, such as unsupported features like SQL CLR and cross-database queries.

SQL Server 2017: Security on Linux


The new solution to this problem is a hybrid Platform as a Service (PaaS)/Infrastructure as a Service (IaaS) offering that is currently called Azure Managed Instances. Just as with Azure SQL Database, the Managed Instances administrator is not responsible for OS and patching operations. However, the Managed Instances solution supports many features and functions that are not currently supported in SQL Database. One such feature is the cross-database query capability. In an on-premises environment, multiple databases commonly exist on the same instance, and a single query can reference separate databases by using database.schema.table notation. In SQL Database, it is not possible to reference multiple databases in one query, which has limited many migrations to the platform due to the amount of code that must be rewritten. Support for cross-database queries in Managed Instances simplifies the process of migrating applications to Azure PaaS offerings, and should thereby increase the number of independent software vendor (ISV) applications that can run in PaaS.
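The real capability is simply T-SQL three-part naming across databases on one instance; as a language-neutral toy illustration of what that resolution means (all database, schema, and table names below are invented), consider:

```python
# Toy illustration of three-part (database.schema.table) name resolution,
# the capability Managed Instances restores relative to Azure SQL Database.
# One "instance" hosts several databases, each with schemas and tables.

catalogs = {
    "Sales":   {"dbo": {"Orders":   [("O1", 100)]}},
    "Finance": {"dbo": {"Invoices": [("I1", 100)]}},
}

def resolve(three_part_name: str):
    """Look up rows for a database.schema.table reference."""
    database, schema, table = three_part_name.split(".")
    return catalogs[database][schema][table]

# A single "query" touching two databases on the same instance,
# which a pure Azure SQL Database query cannot express:
rows = resolve("Sales.dbo.Orders") + resolve("Finance.dbo.Invoices")
print(rows)
```

When each database lives in its own isolated service, every such reference has to be rewritten as a client-side join or data copy, which is exactly the migration cost the article describes.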

SQL Server 2017: HA and DR on Linux


SQL Server 2017: Adaptive Query Processing


Microsoft also showcased some of the data protection features in Azure SQL Database that are now generally available. Azure SQL Database Threat Detection detects SQL Injection, potential SQL Injection vulnerabilities, and anomalous login monitoring. This can simply be turned on at the SQL Database level by enabling auditing and configuring notifications. The administrator is then notified when the threat detection engine detects any anomalous behavior.

Graph Database
One of the things I was happiest to see in SQL Server 2017 was the introduction of a graph database within the core database engine. Despite the name, relational databases struggle to manage relationships between data objects. The simplest example of this struggle is hierarchy management. In a classic relational structure, an organizational chart can be a challenge to model: who does the CEO report to? With graph database support in SQL Server, the concepts of nodes and edges are introduced. Nodes represent entities, edges represent relationships between any two given nodes, and both nodes and edges can be associated with data properties. SQL Server 2017 also adds extensions to the T-SQL language to support join-less queries that use matching to return related values.
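SQL Server's actual syntax uses node and edge tables with a T-SQL MATCH clause; as a language-neutral sketch of the node/edge model itself, the org-chart example above can be modeled like this (all names are invented for illustration):

```python
# Toy illustration of the node/edge model behind a graph database.
# Nodes are entities; edges are directed, labeled relationships.

nodes = {1: "CEO", 2: "CTO", 3: "Engineer"}
edges = [
    (2, "reports_to", 1),   # CTO reports to CEO
    (3, "reports_to", 2),   # Engineer reports to CTO
]

def match(edge_label: str, target_name: str):
    """Return names of nodes related to target_name via edge_label,
    the spirit of a join-less MATCH query."""
    target_ids = {i for i, name in nodes.items() if name == target_name}
    return [nodes[src] for src, label, dst in edges
            if label == edge_label and dst in target_ids]

print(match("reports_to", "CEO"))  # ['CTO']
print(match("reports_to", "CTO"))  # ['Engineer']
```

The query walks relationships directly instead of expressing the hierarchy through self-joins on a foreign key, which is what makes deep or variable-depth traversals awkward in a classic relational design.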

SQL Server 2017: Building applications using graph data
Graph extensions in SQL Server 2017 will facilitate users in linking different pieces of connected data to help gather powerful insights and increase operational agility. Graphs are well suited for applications where relationships are important, such as fraud detection, risk management, social networks, recommendation engines, predictive analysis, dependence analysis, and IoT applications. In this session we will demonstrate how you can use SQL Graph extensions to build your application using graph data. Download SQL Server 2017: Now on Windows, Linux, and Docker https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux



Graph databases are especially useful in Internet of Things (IoT), social network, recommendation engine, and predictive analytics applications. It should be noted that many vendors have been investing in graph solutions in recent years. Besides Microsoft, IBM and SAP have also released graph database features in recent years.

Adaptive Query Plans
One of the biggest challenges for a DBA is managing system performance over time. As data changes, the query optimizer generates new execution plans, which at times might be less than optimal. With Adaptive Query Optimization in SQL Server 2017, SQL Server can evaluate the runtime of a query and compare the current execution to the query's history, building on some of the technology introduced in the Query Store feature in SQL Server 2016. For the next run of the same query, Adaptive Query Optimization can then improve the execution plan.

Because a change to an execution plan that is based on one slow execution can have a dramatically damaging effect on system performance, the changes made by Adaptive Query Optimization are incremental and conservative. Over time, this feature handles the tuning a busy DBA may not have time to perform. This feature also benefits from Microsoft's management of Azure SQL Database, because the development team monitors the execution data and the improvements that adaptive execution plans make in the cloud. The team can then optimize the process and flow for adaptive execution plans in future versions of the on-premises product.
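The "incremental and conservative" idea can be sketched in a few lines. This is emphatically not Microsoft's algorithm, just a conceptual model: track the best runtime seen for the current plan, and only switch to an alternative plan after several consecutive regressions, never after a single slow run.

```python
# Conceptual sketch (not SQL Server's actual algorithm): choose between
# candidate execution plans by comparing each new runtime against history,
# switching only after repeated evidence of a regression.

class AdaptivePlanChooser:
    def __init__(self, plans, threshold=3):
        self.plans = plans            # candidate plan names (illustrative)
        self.current = plans[0]
        self.best_runtime = None      # best runtime seen for current plan
        self.regressions = 0
        self.threshold = threshold    # consecutive slow runs before switching

    def record(self, runtime_ms):
        if self.best_runtime is None or runtime_ms <= self.best_runtime * 1.5:
            # Within tolerance: refresh history, reset the regression count.
            self.best_runtime = min(runtime_ms, self.best_runtime or runtime_ms)
            self.regressions = 0
        else:
            self.regressions += 1
            if self.regressions >= self.threshold:  # conservative, not reactive
                self.current = self.plans[1]
                self.regressions = 0
        return self.current

chooser = AdaptivePlanChooser(["plan_a", "plan_b"])
for t in [10, 11, 10, 40, 45, 50]:   # data drift makes plan_a slow
    plan = chooser.record(t)
print(plan)  # 'plan_b'
```

Note that the single slow run at 40 ms does not trigger a switch; only the sustained pattern does, which mirrors why one bad execution should never rewrite a plan.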

Are You a Business Intelligence Pro?
SQL Server includes much more than the database engine. Tools like Reporting Services (SSRS) and Analysis Services (SSAS) have long been a core part of the value proposition of SQL Server. Reporting Services received a big overhaul in SQL Server 2016, and more improvements are coming in SQL Server 2017 with on-premises support for storing Power BI reports in an SSRS instance. This capability is big news for organizations that are cloud-averse for various reasons. In addition, SQL Server 2017 adds support for Power Query data sources in SSAS tabular models. This means tabular models can store data from a broader range of data sources than previously supported, such as Azure Blob Storage and Web page data.

2017 OWASP SanFran March Meetup - Hacking SQL Server on Scale with PowerShell


And More...
Although it is only an incremental release, Microsoft has packed a lot of functionality into SQL Server 2017. I barely mentioned Linux in this article for a reason: from a database perspective, SQL Server on Linux is simply SQL Server. Certainly, there are some changes in infrastructure, but your development experience in SQL Server, whether on Linux, Windows, or Docker, is exactly the same.

Keep Your Environment Always On with SQL Server 2016 - SQLBits 2017


From my perspective, the exciting news is not just the new features that are in this version, but also the groundwork for feature enhancements down the road. Adaptive query optimization will get better over time, as will the graph database feature which you can query by using standard SQL syntax. Furthermore, the enhancements to Azure SQL Database with managed instances should allow more organizations to consider adoption of the database as a service option. In general, I am impressed with Microsoft's ability to push the envelope on database technology so shortly after releasing SQL Server 2016.

Nordic infrastructure Conference 2017 - SQL Server on Linux Overview



You can get started with the CTP by downloading the package for Docker (https://hub.docker.com/r/microsoft/mssql-server-windows/) or Linux (https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-red-hat), or you can download the Windows release at https://www.microsoft.com/evalcenter/evaluate-sql-server-vnext-ctp.

More Information:

https://www.microsoft.com/en-us/sql-server/sql-server-2017

https://rcpmag.com/articles/2011/02/01/the-2011-microsoft-product-roadmap.aspx

https://adtmag.com/articles/2017/04/19/sql-server-2017.aspx

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/sql-server-2017-community-technology-preview-2-0-now-available/

http://info.microsoft.com/rs/157-GQE-382/images/EN-CNTNT-SQL_Server_on_Linux_Public_Preview_Technical_Whitepaper-en-us.pdf

https://info.microsoft.com/SQL-Server-on-Linux-Open-source-enterprise-environment.html


https://info.microsoft.com/CO-SQL-CNTNT-FY16-09Sep-14-MQOperational-Register.html?ls=website

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/20/graph-data-processing-with-sql-server-2017/

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/20/resumable-online-index-rebuild-is-in-public-preview-for-sql-server-2017-ctp-2-0/

https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/python-in-sql-server-2017-enhanced-in-database-machine-learning/