• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    Consulting services for Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

22 November 2016

Service Fabric as a microservices platform

What is a microservice?

Introduction to Microservices

There are different definitions of microservices. If you search the Internet, you'll find many useful resources that provide their own viewpoints and definitions. However, most of the following characteristics of microservices are widely agreed upon:

  • Encapsulate a customer or business scenario. What is the problem you are solving?
  • Developed by a small engineering team.
  • Written in any programming language and use any framework.
  • Consist of code and (optionally) state, both of which are independently versioned, deployed, and scaled.
  • Interact with other microservices over well-defined interfaces and protocols.
  • Have unique names (URLs) used to resolve their location.
  • Remain consistent and available in the presence of failures.

You can summarize these characteristics as follows:

  • Microservice applications are composed of small, independently versioned, and scalable customer-focused services that communicate with each other over standard protocols with well-defined interfaces.
  • Code and state are independently versioned, deployed, and scaled.

However you choose to write your microservices, the code and optionally the state should independently deploy, upgrade, and scale. This is actually one of the harder problems to solve, because it comes down to your choice of technologies. For scaling, understanding how to partition (or shard) both the code and state is challenging. When the code and state use separate technologies, which is common today, the deployment scripts for your microservice need to be able to cope with scaling them both. This is also about agility and flexibility, so you can upgrade some of the microservices without having to upgrade all of them at once.
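The partitioning (sharding) problem described above can be sketched with a deterministic hash-based shard function. This is a minimal illustration of the idea, not Service Fabric's partitioning API; the key names are hypothetical:

```python
import hashlib

def shard_for(key: str, shard_count: int) -> int:
    """Map a key to a shard deterministically via a stable hash.

    A stable hash (not Python's built-in hash(), which is randomized
    per process) keeps the mapping consistent across service
    instances and restarts, so every instance routes the same key to
    the same shard.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % shard_count

# Each service instance owns one shard of the state; routing a
# request means computing the shard for its key.
assert shard_for("customer-42", 8) == shard_for("customer-42", 8)
```

Note that simple modulo hashing reshuffles most keys whenever the shard count changes; schemes such as consistent hashing reduce that movement, which is part of why scaling state is the hard problem this paragraph describes.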

Azure Service Fabric

Monolithic vs. microservice design approach

All applications evolve over time. Successful applications evolve by being useful to people. Unsuccessful applications do not evolve and eventually are deprecated. The question becomes: How much do you know about your requirements today, and what will they be in the future? For example, let's say that you are building a reporting application for a department. You are sure that the application will remain within the scope of your company and that the reports will be short-lived. Your choice of approach is different from, say, building a service that delivers video content to tens of millions of customers.

Azure Service Fabric 101 - Introduction

Sometimes, getting something out the door as proof of concept is the driving factor, while you know that the application can be redesigned later. There is little point in over-engineering something that never gets used. It’s the usual engineering trade-off. On the other hand, when companies talk about building for the cloud, the expectation is growth and usage. The issue is that growth and scale are unpredictable. We would like to be able to prototype quickly while also knowing that we are on a path to deal with future success. This is the lean startup approach: build, measure, learn, and iterate.

During the client-server era, we tended to focus on building tiered applications by using specific technologies in each tier. The term monolithic application has emerged for these approaches. The interfaces tended to be between the tiers, and a more tightly coupled design was used between components within each tier. Developers designed and factored classes that were compiled into libraries and linked together into a few executables and DLLs.

There are benefits to such a monolithic design approach. It's often simpler to design, and it has faster calls between components, because these calls are often over interprocess communication (IPC). Also, everyone tests a single product, which tends to be more people-resource efficient. The downside is that there's a tight coupling between tiered layers, and you cannot scale individual components. If you need to perform fixes or upgrades, you have to wait for others to finish their testing. It is more difficult to be agile.

Microservices address these downsides and more closely align with the preceding business requirements, but they also have both benefits and liabilities. The benefits of microservices are that each one typically encapsulates simpler business functionality, which you scale up or down, test, deploy, and manage independently. One important benefit of a microservice approach is that teams are driven more by business scenarios than by technology, which the tiered approach encourages. In practice, smaller teams develop a microservice based on a customer scenario and use any technologies they choose.

Exploring Microservices in Docker and Microsoft Azure

In other words, the organization doesn’t need to standardize on a single technology stack, as it would to maintain a monolithic application. Individual teams that own services can do what makes sense for them based on team expertise or what’s most appropriate to solve the problem. In practice, a set of recommended technologies, such as a particular NoSQL store or web application framework, is still preferable.

The downside of microservices comes in managing the increased number of separate entities and dealing with more complex deployments and versioning. Network traffic between the microservices increases as well as the corresponding network latencies. Lots of chatty, granular services are a recipe for a performance nightmare. Without tools to help view these dependencies, it is hard to “see” the whole system.

Standards make the microservice approach work by agreeing on how to communicate and being tolerant of only the things you need from a service, rather than rigid contracts. It is important to define these contracts up front in the design, because services update independently of each other. Another description coined for designing with a microservices approach is “fine-grained service-oriented architecture (SOA).”
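Being "tolerant of only the things you need from a service" is often called the tolerant-reader pattern. A minimal sketch in Python, with illustrative field names (not from any real service contract): the consumer requires only the fields it uses and ignores everything else, so the producing service can evolve independently.

```python
import json

def read_order_event(payload: str) -> dict:
    """Tolerant reader: require only the fields this service needs and
    ignore the rest, so the producer can add fields without breaking
    us. Field names here are hypothetical."""
    data = json.loads(payload)
    missing = [f for f in ("order_id", "amount") if f not in data]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    # Keep only what we use; unknown fields are silently tolerated.
    return {"order_id": data["order_id"], "amount": data["amount"]}

# A newer producer version added "currency"; this consumer still works.
event = read_order_event('{"order_id": "A1", "amount": 10, "currency": "EUR"}')
```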

At its simplest, the microservices design approach is about a decoupled federation of services, with independent changes to each, and agreed-upon standards for communication.

As more cloud apps are produced, people discover that this decomposition of the overall app into independent, scenario-focused services is a better long-term approach.

Returning to the monolithic versus microservice approach for a moment, the following diagram shows the differences in the approach to storing state.

State storage between application styles

  • A monolithic app contains domain-specific functionality and is normally divided by functional layers, such as web, business, and data.
  • You scale a monolithic app by cloning it on multiple servers/virtual machines/containers.
  • A microservice application separates functionality into separate smaller services.
  • The microservices approach scales out by deploying each service independently, creating instances of these services across servers/virtual machines/containers.

Designing with a microservice approach is not a panacea for all projects, but it does align more closely with the business objectives described earlier. Starting with a monolithic approach might be acceptable if you know that you will have the opportunity to rework the code later into a microservice design. More commonly, you begin with a monolithic app and slowly break it up in stages, starting with the functional areas that need to be more scalable or agile.

To summarize, the microservice approach is to compose your application of many small services. The services run in containers that are deployed across a cluster of machines. Smaller teams develop a service that focuses on a scenario and independently test, version, deploy, and scale each service so that the entire application can evolve.

The objective of Service Fabric is to reduce the complexities of building applications with a microservice approach, so that you do not have to go through as many costly redesigns. The approach is to start small, scale when needed, deprecate services, add new ones, and evolve with customer usage. We also know that there are many other problems yet to be solved to make microservices more approachable for most developers. Containers and the actor programming model are examples of small steps in that direction, and we are sure that more innovations will emerge to make this easier.

Explore Microservices solutions and Microsoft Azure Service Fabric

Simplify building microservice-based applications and lifecycle management

Fast time to market: Service Fabric lets developers focus on building features that add business value to their application, without the overhead of designing and writing additional code to deal with issues of reliability, scalability, or latency in the underlying infrastructure.

Choose your architecture: Build stateless or stateful microservices—an architectural approach where complex applications are composed of small, independently versioned services—to power the most complex, low-latency, data-intensive scenarios and scale them into the cloud with Azure Service Fabric.

Microservice agility: Architecting fine-grained microservice applications allows continuous integration and development practices and accelerates delivery of new functions into the application.

Visual Studio integration: Includes Visual Studio tooling, as well as command line support, so developers can quickly and easily build, test, debug, deploy, and update their Service Fabric applications on single-box, test, and production deployments.

Service Fabric as a microservices platform

Azure Service Fabric emerged from a transition by Microsoft from delivering box products, which were typically monolithic in style, to delivering services. The experience of building and operating large services, such as Azure SQL Database and Azure DocumentDB, shaped Service Fabric. The platform evolved over time as more and more services adopted it. Importantly, Service Fabric had to run not only in Azure but also in standalone Windows Server deployments.

The aim of Service Fabric is to solve the hard problems of building and running a service and utilize infrastructure resources efficiently, so that teams can solve business problems using a microservices approach.

Service Fabric provides two broad areas to help you build applications that use a microservices approach:

A platform that provides system services to deploy, upgrade, detect, and restart failed services, discover service location, manage state, and monitor health. These system services in effect enable many of the characteristics of microservices previously described.
Programming APIs, or frameworks, to help you build applications as microservices: reliable actors and reliable services. Of course, you can choose any code to build your microservice. But these APIs make the job more straightforward, and they integrate with the platform at a deeper level. This way, for example, you can get health and diagnostics information, or you can take advantage of built-in high availability.
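The actor programming model behind reliable actors can be sketched as a mailbox plus single-threaded message processing. This is a generic illustration of the actor idea only; Service Fabric's Reliable Actors API (C#/Java) looks different and adds state persistence, distribution, and failover on top:

```python
import queue
import threading

class CounterActor:
    """Minimal actor sketch: a mailbox plus one worker thread that
    processes messages strictly one at a time, so the actor's state
    is never touched concurrently and needs no locks."""

    def __init__(self):
        self.count = 0
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "increment":
                self.count += 1  # safe: only the worker thread mutates state
            self._mailbox.task_done()

    def tell(self, message):
        """Asynchronous, fire-and-forget send, as in actor systems."""
        self._mailbox.put(message)

actor = CounterActor()
for _ in range(100):
    actor.tell("increment")
actor._mailbox.join()  # drain the mailbox before reading the result
assert actor.count == 100
```

The design choice the model buys you: concurrency is handled by the framework (one message at a time per actor), so the business logic inside the actor stays single-threaded and simple.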

Exploring microservices in a Microsoft landscape

Service Fabric is agnostic about how you build your service, and you can use any technology. However, it does provide built-in programming APIs that make it easier to build microservices.
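Because Service Fabric can host guest executables, any self-contained process with a well-defined interface qualifies as a service. A minimal sketch using only the Python standard library (the /health endpoint and local-only binding are illustrative choices, not a Service Fabric requirement):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """A microservice is, at minimum, a process exposing a well-defined
    interface. This one serves a single /health endpoint and could run
    under Service Fabric as a guest executable, no SDK required."""

    def do_GET(self):
        if self.path == "/health":
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def make_service(port: int = 0) -> HTTPServer:
    """Bind the service; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), HealthHandler)
```

Calling `make_service().serve_forever()` runs the service; an orchestrator such as Service Fabric would own process startup, restart on failure, and the host/container port mapping.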

Key capabilities

By using Service Fabric, you can:

  • Develop massively scalable applications that are self-healing.
  • Develop applications that are composed of microservices by using the Service Fabric programming model. Or, you can simply host guest executables and other application frameworks of your choice, such as ASP.NET Core 1 or Node.js.
  • Develop highly reliable stateless and stateful microservices.
  • Deploy and orchestrate containers that include Windows containers and Docker containers across a cluster. These containers can contain guest executables or reliable stateless and stateful microservices. In either case, you get mapping from container port to host port, container discoverability, and automated failover.
  • Simplify the design of your application by using stateful microservices in place of caches and queues.
  • Deploy to Azure or to on-premises datacenters that run Windows or Linux with zero code changes. Write once, and then deploy anywhere to any Service Fabric cluster.
  • Develop with a "datacenter on your machine" approach. The local development environment is the same code that runs in the Azure datacenters.
  • Deploy applications in seconds.
  • Deploy applications at higher density than virtual machines, deploying hundreds or thousands of applications per machine.
  • Deploy different versions of the same application side by side, and upgrade each application independently.
  • Manage the lifecycle of your stateful applications without any downtime, including breaking and nonbreaking upgrades.
  • Manage applications by using .NET APIs, Java (Linux), PowerShell, Azure command-line interface (Linux), or REST interface.
  • Upgrade and patch microservices within applications independently.
  • Monitor and diagnose the health of your applications and set policies for performing automatic repairs.
  • Scale out or scale in the number of nodes in a cluster, and scale up or scale down the size of each node. As you scale nodes, your applications automatically scale and are distributed according to the available resources.
  • Watch the self-healing resource balancer orchestrate the redistribution of applications across the cluster. Service Fabric recovers from failures and optimizes the distribution of load based on available resources.

Azure Microservices in Practice - Radu Vunvulea

Deliver low-latency performance and efficiency at massive scale

Deliver fast in-place upgrades with zero downtime, auto-scaling, integrated health monitoring, and service healing. Orchestration and automation for building microservices give new levels of app awareness and insight, automating live upgrades with rollback and automatic scale-up and scale-down capabilities.

Microsoft: Building a Massively Scalable System with DataStax and Microsoft's Next Generation PaaS Infrastructure

Plus, Service Fabric solves hard distributed-systems problems such as failover, leader election, and state management, and provides application lifecycle management capabilities so developers don’t have to re-architect applications as usage grows. This includes multi-tenant SaaS applications, Internet-of-Things data gathering and processing, and gaming and media serving.

Proven platform used by Azure and other Microsoft services

Azure Service Fabric was born from years of experience at Microsoft delivering mission-critical cloud services and has been production-proven since 2010. It’s the foundational technology on which we run our Azure core infrastructure, powering services including Skype for Business, Intune, Azure Event Hubs, Azure Data Factory, Azure DocumentDB, Azure SQL Database, and Cortana.

This experience allowed us to design a platform that intrinsically understands the available infrastructure resources and needs of applications, enabling an automatically updating, self-healing behavior that is essential to delivering highly available and durable services at hyperscale.

Azure Service Fabric Overview

As software developers, there is nothing new in how we think about factoring an application into component parts. It is the central paradigm of object orientation, software abstractions, and componentization. Today, this factorization tends to take the form of classes and interfaces between shared libraries and technology layers. Typically, a tiered approach is taken with a back-end store, middle-tier business logic, and a front-end user interface (UI). What has changed over the last few years is that we, as developers, are building distributed applications that are for the cloud and driven by the business.

The changing business needs are:

  • A service that's built and operates at scale to reach customers in new geographical regions (for example).
  • Faster delivery of features and capabilities to be able to respond to customer demands in an agile way.
  • Improved resource utilization to reduce costs.

These business needs are affecting how we build applications.

For more information about the approach of Azure to microservices, read Microservices: An application revolution powered by the cloud.


More Information:









25 October 2016

Hyper-Converged OpenStack on Windows Nano Server 2016

Cloudbase Solutions Announces the Industry’s First Platform for Hyper-Converged OpenStack on Windows Nano Server 2016

The Hyper-Converged OpenStack on Windows Server cloud infrastructure distributes data across individual cloud servers, eliminating the need for expensive dedicated storage hardware. In this configuration every node takes on compute, storage, and networking roles, increasing scalability and fault tolerance while dramatically reducing overall costs.

Hyper-Converged OpenStack on Windows Nano Server 2016

Cloudbase Solutions’ design for the Hyper-Converged data center relies on components that are fully distributed and is entirely based on commodity hardware, offering a remarkably low cost of ownership for the enterprise with the benefit of all the IaaS features offered by OpenStack, for both on-premises and public clouds.

Windows in OpenStack

The core components for this solution are OpenStack, Microsoft’s Windows Nano Server 2016, Hyper-V, Storage Spaces Direct (S2D) and Open vSwitch for Hyper-V, deployed starting from the bare metal up with Cloudbase Solutions’ Juju charms for Windows Server.

Cloudbase Solutions offers the platform as managed or unmanaged, with support for OpenStack and Windows Nano Server 2016, along with orchestration solutions based on OpenStack Heat templates or Juju for all Microsoft based workloads, from Active Directory to SharePoint, Exchange and more!

“The Hyper-Converged infrastructure adds simplicity, increased fault tolerance and scalability to your architecture, which is exactly what modern enterprises are looking for in order to compete efficiently. It’s important for OpenStack customers to know they have choices when it comes to their infrastructure, and we see the Hyper-Converged solution as a key to helping them in achieving that architectural freedom” - said Alessandro Pilotti, Cloudbase Solutions CEO

Manage Nano Server and Windows Server 2016 Hyper-V

About Cloudbase Solutions
Cloudbase Solutions™ is dedicated to cloud computing and interoperability. Our mission is to bridge the modern enterprise and cloud computing worlds by bringing OpenStack to Windows-based infrastructures. This effort starts with developing and maintaining all the crucial Windows and Hyper-V OpenStack components and culminates with a product range which includes orchestration for Hyper-V, SQL Server, Active Directory, Exchange and SharePoint Server via Juju charms and Heat templates.

Furthermore, to solve the complexity of cloud migration, Cloudbase Solutions developed Coriolis, a cloud migration-as-a-service product for migrating existing Windows and Linux workloads between clouds. Cloud migration is a necessity for a large number of use cases, especially for users moving from traditional virtualization technologies like VMware vSphere or Microsoft System Center VMM to Azure / Azure Stack, OpenStack, Amazon AWS or Google Cloud.


Building Your First Ceph Cluster for OpenStack— Fighting for Performance, Solving Tradeoffs

Ceph is a full-featured, yet evolving, software-defined storage (SDS) solution. It’s very popular because of its robust design and scaling capabilities, and it has a thriving open source community. Ceph provides all data access methods (file, object, block) and appeals to IT administrators with its unified storage approach.

In the true spirit of SDS solutions, Ceph can work with commodity hardware; in other words, it is not dependent on any vendor-specific hardware. A Ceph storage cluster is intelligent enough to utilize the storage and compute power of any given hardware, and it provides access to virtualized storage resources through Ceph clients or other standard protocols and interfaces.

Ceph storage clusters are based on the Reliable Autonomic Distributed Object Store (RADOS), which uses the CRUSH algorithm to stripe, distribute, and replicate data. The CRUSH algorithm originated in a PhD thesis by Sage Weil at the University of California, Santa Cruz. Ceph offers several different ways of accessing stored data: file, object, and block.

The power of Ceph can transform your organization’s IT infrastructure and your ability to manage vast amounts of data. If your organization runs applications with different storage interface needs, Ceph is for you! Ceph’s foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides your applications with object, block, and file system storage in a single unified storage cluster—making Ceph flexible, highly reliable and easy for you to manage.
Ceph’s RADOS provides you with extraordinary data storage scalability—thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each one of your applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs. You can use Ceph for free, and deploy it on economical commodity hardware. Ceph is a better way to store data.
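CRUSH's key property is that placement is computed, not looked up: any client can derive an object's OSDs from the cluster map alone. A simplified stand-in using rendezvous (HRW) hashing illustrates this idea; real CRUSH additionally honors failure domains (host, rack) and device weights via placement rules:

```python
import hashlib

def place_object(obj_name: str, osds: list, replicas: int = 3) -> list:
    """Pick replica OSDs for an object with rendezvous (HRW) hashing,
    a simplified stand-in for CRUSH. Like CRUSH, the placement is
    deterministic and computed from cluster membership alone, so any
    client derives the same answer with no central lookup table, and
    losing one OSD only remaps the objects stored on it."""
    def score(osd):
        digest = hashlib.sha256(f"{obj_name}/{osd}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    # Highest-scoring OSDs win; ties are practically impossible.
    return sorted(osds, key=score, reverse=True)[:replicas]

cluster = ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]
primary, *secondaries = place_object("rbd_data.1234", cluster)
```

The design pay-off is the one described above: thousands of clients can address petabytes of data without funneling every I/O through a metadata server.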

OpenStack Australia Day 2016 - Andrew Hatfield, Red Hat: The Future of Cloud Software Defined Storage

The Ceph Storage Cluster
A Ceph storage cluster is a heterogeneous group of compute and storage resources (bare metal servers, virtual machines and even Docker instances), often called Ceph nodes, where each member of the cluster works as either a monitor (MON) or an object storage device (OSD). Ceph clients use the storage cluster to store their data directly as RADOS objects or through virtualized resources such as RBDs (RADOS Block Devices) and other interfaces.

Wicked Easy Ceph Block Storage & OpenStack Deployment with Crowbar

Windows and OpenStack - What's New in Windows Server 2016

Windows and OpenStack: What’s new in Windows Server 2016? - Alessandro Pilotti from ITCamp on Vimeo.

OpenStack is getting big in the enterprise, which is traditionally very Microsoft-centric. This session will show you everything you need to know about Windows in OpenStack! To begin with, we will show how to provision Windows images for OpenStack, including Windows Server 2012 R2, Windows 7, 8.1, and the brand-new Windows Server 2016 Nano Server, for KVM, Hyper-V and ESXi Nova hosts.

Next, we will show how to deploy Windows workloads with Active Directory, SQL Server, SharePoint, and Exchange using Heat templates, Juju, Puppet, and more.

Last but not least, we'll talk about Active Directory integration in Keystone, Hyper-V deployment and Windows bare metal support in Ironic and MaaS. The session will give you a comprehensive view on how well OpenStack and Windows can be integrated, along with a great interoperability story with Linux workloads.

Exploring Nano Server for Windows Server 2016 with Jeffrey Snover

For More Information:












22 September 2016

IBM Power Systems for Big Data and Analytics

IBM Linux Servers Designed to Accelerate Artificial Intelligence, Deep Learning and Advanced Analytics

New IBM POWER8 Chip with NVIDIA NVLink(TM) Enables Data Movement 5x Faster than Any Competing Platform
Systems Deliver Average of 80% More Performance Per Dollar than Latest x86-Based Servers(1)
Expanded Linux Server Lineup Leverages OpenPOWER Innovations

A quick introduction to the IBM Power System S822LC from the IBM Client Center Montpellier

A major achievement stemming from open collaboration is the new IBM Power System S822LC for High Performance Computing server.

IBM Linux on Power Big Data Solutions

IBM Data Engine for Hadoop and Spark – Power Systems Edition

With more and more intelligent and interconnected devices and systems, the data companies are collecting is growing at unprecedented rates. As much as 90% of that data is unstructured, coming from social media, electronic documents, machine data, connected devices, etc., and growing at rates as high as 50% per year. This is big data.

Extracting insights from big data can make your business more agile, more competitive and provide insights that, in the past, were beyond reach. The emergence of recent technologies such as the real-time analytics processing capabilities of stream computing, high speed in-memory analytics using Apache Spark and the massive MapReduce scale-out capabilities of Hadoop® has opened the door to a world of possibilities. This has also created the need for robust infrastructures that combine computing power, memory and data bandwidth to process and move large quantities of data -- fast.
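The MapReduce model that Hadoop scales out across a cluster can be sketched in a single process as map → shuffle → reduce (a word count over illustrative log lines; on a real cluster, each phase runs in parallel across machines):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (word, 1) for every word. On Hadoop this runs in
    parallel, one mapper per input split."""
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all values by key. Hadoop moves these groups
    across the network so each reducer sees one key range."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values independently."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data", "big insights from big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
# counts["big"] == 3, counts["data"] == 2
```

Spark keeps the same map/shuffle/reduce structure but holds intermediate results in memory, which is why it dominates for iterative analytics.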

Understanding the IBM Power Systems Advantage

Based on this need, the IBM Power System S812LC was used to design a solution to create a big data environment built on a heritage of strong resiliency, availability and security -- the IBM Data Engine for Hadoop and Spark - Power Systems Edition.

With a data-centric design, this Linux-based solution offers a tightly-integrated and performance-optimized infrastructure for in-memory Spark and MapReduce-based Hadoop big data workloads. The IBM Data Engine for Hadoop and Spark can be tailored specifically to meet your Big Data workloads by using a simple building block approach to match the mix of memory, networking and storage to application requirements. This approach gives you the best possible infrastructure for your big data workload.

POWER8 Scale-Out: Massive Bandwidth

With a vision for enhanced bandwidth, IBM POWER8 has achieved vast improvements in latency, two-and-a-half times better memory performance, and a lot more.
POWER8 offers 32 channels of DDR memory funneling into the POWER8 processor. This is two times the 16-channel capacity of POWER7, and four times the eight-channel capacity of most competitors.

Move Up to Power8 with Scale Out Servers

The result of a depth and breadth of innovation focused on optimizing for data centers, while increasing efficiency and lowering infrastructure cost, the POWER8 bandwidth contributes to a better system that does more while making technology leadership attainable for customers.

Each POWER8 socket supports up to 1 TB of DRAM in the initial server configurations, yielding 2 TB of capacity in scale-out systems and 16 TB in enterprise systems, and supports up to 230 GB per second of sustained memory bandwidth per socket.

POWER8 was designed for big data, with massive parallelism and bandwidth for real-time results. Coupled with IBM DB2 with BLU Acceleration and Cognos analytics software, POWER8 far outpaces industry-standard options, delivering insights up to 82x faster.

Far more than a function of size, sophisticated innovations in the POWER8 memory organization are designed to enhance both reliability and performance. Key among the innovations:

  • Up to eight high-speed channels, each running at up to 9.6 GHz, for up to 230 GB/s of sustained performance
  • Up to 32 total DDR ports, yielding 410 GB/s peak at the DRAM
  • Up to 1 TB of memory capacity per fully configured processor socket
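These headline numbers can be cross-checked with a back-of-envelope calculation; note the per-port rate below is inferred from the quoted totals, not stated by IBM:

```python
# Back-of-envelope check of the POWER8 memory figures quoted above.
ddr_ports = 32
peak_total_gb_s = 410
per_port_gb_s = peak_total_gb_s / ddr_ports   # inferred: ~12.8 GB/s per DDR port
assert round(per_port_gb_s, 1) == 12.8

# Sustained bandwidth per socket versus peak at the DRAM:
sustained_gb_s = 230
efficiency = sustained_gb_s / peak_total_gb_s  # ~56% of peak sustained
assert 0.5 < efficiency < 0.6
```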

Big Data’s Big Memory requirements call for nothing less than the industry’s most innovative, scalable, and massive bandwidth and capacity. POWER8 thrives on the kinds of complexities that your organization faces in the current environment, with a platform to keep you ahead of the game as unforeseen challenges and opportunities emerge.

Features and benefits

A comprehensive, fully integrated cluster that is designed for ease of procurement, deployment, and operation. It includes all required components for Big Data applications, including servers, network, storage, operating system, management software, Hadoop and Spark software, and runtime libraries.

An application optimized configuration. The configuration of the cluster is carefully designed to optimize application performance and reduce total cost of ownership. The cluster is integrated with IBM Platform™ Cluster Manager, IBM Open Platform with Apache Hadoop and Spark and optionally IBM Spectrum Scale and IBM Spectrum Symphony which include advanced capabilities for storage and resource optimization. This optimized configuration enables users to show results more quickly.

Power S812LC delivers 2.3x better performance per dollar spent for Spark workloads(1)

Advanced technology for performance and robustness. The hardware and software components in this infrastructure are customizable to allow the best performance or the best price/performance ratio.

Big data clusters can start out small and grow as the demands from line of business increase. Choosing an infrastructure that can scale to handle these demands is vital to meeting service level agreements and continuing access to insights. Organizations must also consider the maintenance required. Smart businesses choose Power Systems because they know Power Systems is built for big data workloads that demand high performance and high reliability.

Analytics solutions
Unlock the value of data with an IT infrastructure that provides speed and availability to deliver accelerated insights to the people and processes that need them.

IBM Data Engine for Analytics - Power Systems Edition
A customized infrastructure solution with integrated software optimized for both big data and analytics workloads.

Co-Design Architecture for Exascale

IBM POWER8 as an HPC platform

The State of Linux Containers

IBM Data Engine for NoSQL – Power Systems Edition
Unique technology from IBM delivers dramatic reductions in the cost of large NoSQL databases.

SAP HANA benefits from the enterprise capabilities of Power Systems
SAP HANA runs on all POWER8 servers. Power Systems Solution Editions for SAP HANA BW are easy to order and tailored for quick deployment and rapid time-to-value, while offering flexibility to meet individual client demands.

DB2 with BLU Acceleration on Power Systems
Enable faster insights using analytics queries and reports from data stored in any data warehouse, with a dynamic in-memory columnar solution.

IBM Solution for Analytics – Power Systems Edition
This flexible integrated solution for faster insights includes options for business intelligence and predictive analytics with in-memory data warehouse acceleration.

IBM Data Engine for Hadoop and Spark – Power Systems Edition
A fully integrated Hadoop and Spark solution optimized to simplify and accelerate unstructured big data analytics.

OpenPOWER Update

IBM PureData System for Operational Analytics
Easily deploy, optimize and manage data intensive workloads for operational analytics with an expert integrated system.

IBM DB2 Web Query for i
Help ensure every decision maker across the organization can easily find, analyze and share the information needed to make better, faster decisions.

OpenPOWER Roadmap Toward CORAL

The Quantum Effect: HPC without FLOPS

More Information:

21 August 2016

Why Cortana Analytics Suite

Cortana Analytics Suite (CAS), what can it do for you

Microsoft introduced the Cortana Analytics Suite (CAS) in July 2015 at the Worldwide Partner Conference in Orlando. If you want to learn more, read on.

Cortana Analytics Suite

When Microsoft first announced CAS, it touted the suite as an integrated set of cloud-based services that vaguely promised to be “a huge differentiator for any business.” The suite would be available through a simple monthly subscription and be customizable to fit the needs of different organizations. The company planned to make CAS available that coming fall.

Two months later, Microsoft hosted the first-ever Cortana Analytics Workshop, a gathering of techies that would provide participants with a chance to learn about Microsoft’s advanced analytics vision. The workshop appeared to represent the suite’s official launch.

Microsoft Envision | Impactful analytics using the Cortana Intelligence Suite with EY

At some point during the build-up, Microsoft also set up a slick new website dedicated to the CAS vision ( https://www.microsoft.com/en-us/server-cloud/cortana-analytics-suite/). The website featured rolling graphics with stylized icons, and large bold headlines that emphasized the suite’s imminent importance. Cortana Analytics, it would seem, had officially arrived.

As the architecture diagram above shows, the following are the key pillars of the Cortana Intelligence Suite:

Information Management: Services that capture incoming data from various sources, including streaming data from sensors, devices, and other IoT systems; manage the data sources that make up the enterprise's analytics ecosystem; and orchestrate end-to-end flows that perform data processing and data preparation operations.
Big Data Stores: Services that store and manage data at large scale, that is, big data. These services offer a high degree of elasticity, high processing power, and high throughput with strong performance.
Machine Learning and Analytics: Services for performing advanced analytics, building predictive models, and applying machine learning algorithms to large-scale data, with support for analyzing data of many varieties using languages such as R and Python.
Dashboards and Visualizations: Services for building reports and dashboards that surface insights. This pillar primarily consists of Power BI, which enables highly interactive, visually appealing reports and dashboards; other tools such as SQL Server Reporting Services (SSRS) and Excel can also connect to data in some of these services.
Intelligence: Advanced intelligence services for building smart, interactive applications using text, speech, and other recognition systems.
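The pillars above describe stages of one flow: data is ingested, stored, scored, and surfaced. A toy end-to-end sketch (plain Python, no Azure SDKs; the alert rule is a hypothetical stand-in for a trained model) makes the hand-offs between pillars concrete:

```python
# Toy pipeline mirroring the suite's pillars (illustrative only).

def ingest(raw_events):                      # Information Management
    # Keep only well-formed events from the incoming stream.
    return [e for e in raw_events if "temp" in e]

def store(events, data_store):               # Big Data Stores
    data_store.extend(events)
    return data_store

def score(data_store):                       # Machine Learning and Analytics
    # Hypothetical rule standing in for a trained predictive model.
    return [{"temp": e["temp"], "alert": e["temp"] > 30} for e in data_store]

def report(scored):                          # Dashboards and Visualizations
    alerts = sum(1 for s in scored if s["alert"])
    return f"{alerts} alert(s) out of {len(scored)} reading(s)"

data_store = []
events = ingest([{"temp": 21}, {"temp": 35}, {"bad": True}])
print(report(score(store(events, data_store))))  # 1 alert(s) out of 2 reading(s)
```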

  • “Take action ahead of your competitors by going beyond looking in the rear-view mirror to predicting what’s next.”
  • “Get closer to your customers. Infer their needs through their interaction with natural user interfaces.”
  • “Get things done with Cortana in more helpful, proactive, and natural ways.”

Modern Data Warehousing with the Microsoft Analytics Platform System

Cortana Intelligence Suite Highlights

Here are the highlights of Cortana Intelligence Suite:

  • A fully managed Big Data and Advanced Analytics Suite enabling businesses transform data into intelligent actions.
  • An excellent offering perfectly suited for handling modern day data sources, data formats, and data volumes to gain valuable insights.
  • Offers various preconfigured solutions like Forecasting, Churn, Recommendations, etc.
  • Apart from the big data and analytical services, Cortana Intelligence Suite also includes some of the advanced intelligence services - Cortana, Bot Framework, and Cognitive Services.
  • Contains services to capture the data from a variety of data sources, process and integrate the data, perform advanced analytics, visualize and collaborate, and gain intelligence out of it.
  • Offers all the benefits of Cloud Computing like scale, elasticity, and pay-as-you-go model, etc.

Microsoft Envision | Running a data driven company

Use Cases for the Cortana Intelligence Suite

Cortana Intelligence Suite can address data challenges across many industries, enabling organizations to transform their data into intelligent actions and to be more proactive in day-to-day business operations. Here are a few of the industries where it can be applied.

Financial Services: By monitoring transactions in near real time and analyzing historical data for anomalies and trends, Cortana Intelligence Suite can apply complex machine learning algorithms and predictive models to flag potentially fraudulent transactions and help businesses prevent them, protecting customers' money. The Financial Services sector is vast, and Cortana Intelligence Suite can be used in many scenarios, including credit/debit card fraud, electronic transfer fraud, and phishing attempts to steal confidential customer data.
Retail: Cortana Intelligence Suite can be used across the retail industry in scenarios such as optimizing availability by forecasting demand, ensuring the right products are in the right location at the right time. There are numerous use cases in retail, and Cortana Intelligence Suite can be combined with IoT systems. For instance, with the help of sensors (Beacon technology) we can detect when a customer enters a retail store and, based on the data we hold about that customer, offer targeted discounts informed by the customer's demographics, past purchase history, online browsing behavior (this is where data from outside the enterprise comes into the picture, as discussed in the tip Introduction to Big Data), and other relevant information that helps us understand the customer's preferences.
Healthcare: There are various scenarios in Healthcare where the Cortana Intelligence Suite can be used. Historical data on the utilization of various resources (Rooms, Beds, Other Equipment, etc.) and manpower (Doctors, Nurses, general staff, etc.) can be analyzed to predict the future demand thereby enabling the hospitals to mobilize and optimize the resources and manpower accordingly. Historical patient data can be analyzed in conjunction with weather data to identify the patterns and potential illness that might be caused during particular seasons and help the authorities take preventive measures.
Manufacturing: By constantly monitoring equipment and collecting data over time, the probability of issues occurring can be predicted, and a maintenance schedule can be defined to prevent problems that would otherwise hamper production and day-to-day operations, leading to unhappy customers, lost business, and increased operational costs. Cortana Intelligence Suite fits this scenario well, enabling end-to-end data collection, monitoring, alerting, and proactive actions and decisions.
Public Sector: Cortana Intelligence Suite can improve overall operational efficiency in many areas of the public sector, including public transport, power grids, water supplies, and more. By monitoring resource usage across areas, we can identify usage patterns, predict and forecast demand, and adjust supply accordingly so that there is neither shortage nor waste, improving operational efficiency and customer satisfaction.
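To illustrate the fraud-detection use case above, here is a hedged sketch of its core idea: flagging a transaction as a statistical outlier against a customer's history. A real CAS solution would use trained Azure Machine Learning models; this pure-Python z-score rule only conveys the principle.

```python
# Minimal anomaly check: flag a transaction amount that lies far
# outside the customer's historical spending distribution.
import statistics

def is_suspicious(history, amount, threshold=3.0):
    """Flag `amount` if it lies more than `threshold` standard
    deviations above the mean of past transaction amounts."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return (amount - mean) / stdev > threshold

history = [40.0, 55.0, 38.0, 60.0, 45.0, 52.0]
print(is_suspicious(history, 50.0))    # False: in line with past spending
print(is_suspicious(history, 900.0))   # True: extreme outlier
```

In production such a rule would be one weak signal among many features feeding a predictive model, not a decision on its own.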

Microsoft Envision | ZAP presents: It’s all about the data --big, small, or diverse

These are just a glimpse of the scenarios in each of those sectors; there are many more. Cortana Intelligence Suite can also be applied in countless other sectors, such as Education, Insurance, Marketing, Hospitality, Aviation, and Research.

The Azure side of Cortana Analytics Suite

When it comes to the individual Azure services, we can often find more concrete information than we can with Cortana Analytics. That’s not to say we won’t run into the same type of marketing clutter, but we can usually find details that are a bit more specific (even if it means going outside of Microsoft). What we don’t find are many references to Cortana Analytics, although that doesn’t prevent us from building the types of solutions that the CAS marketing material likes to show off.

The first of the CAS-related services have to do with storing and processing large sets of data:

Azure SQL Data Warehouse: A database service that can distribute workloads across multiple compute nodes in order to process large volumes of relational and non-relational data. The service uses Microsoft’s massively parallel processing (MPP) architecture, along with advanced query optimizers, making it possible to scale out and parallelize complex SQL queries.
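The MPP pattern just described can be sketched in a few lines of plain Python (a conceptual illustration, not the actual engine): rows are hash-distributed across compute nodes, each node aggregates its shard in parallel, and a control step combines the partial results.

```python
# Conceptual MPP sketch: hash-distribute, aggregate locally, combine.
from concurrent.futures import ThreadPoolExecutor

NODES = 4

def distribute(rows, key, nodes=NODES):
    shards = [[] for _ in range(nodes)]
    for row in rows:
        # Same key value always lands on the same node.
        shards[hash(row[key]) % nodes].append(row)
    return shards

def node_sum(shard):
    # Each compute node aggregates only its own shard.
    return sum(row["amount"] for row in shard)

rows = [{"cust": f"c{i % 7}", "amount": i} for i in range(100)]
shards = distribute(rows, "cust")
with ThreadPoolExecutor(max_workers=NODES) as pool:
    partials = list(pool.map(node_sum, shards))
print(sum(partials))  # 4950, same answer as a single-node scan
```

The combined result is identical to a serial scan; the win is that each node only touches a fraction of the data.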

Azure Data Lake Store: A scalable storage repository for data of any size, type, or ingestion speed, regardless of where it originates. The repository uses a Hadoop file system to support compatibility with the Hadoop Distributed File System (HDFS) and offers unlimited storage without restricting file sizes or data volumes.

Azure Data Lake Store is actually part of a larger unit that Microsoft refers to as Azure Data Lake. Not only does it include Data Lake Store, but also Data Lake Analytics and HDInsight, both of which share the CAS label. You can find additional information about the Data Lake services in the Simple-Talk article Azure Data Lake.

The next category of services that fall under the CAS umbrella focus on data management:

Azure Data Factory : A data integration service that uses data flow pipelines to manage and automate the movement and transformation of data. Data Factory orchestrates other services, making it possible to ingest data from on-premises and cloud-based sources, and then transform, analyze, and publish the data. Users can monitor the pipelines from a single unified view.

Azure Data Catalog : A system for registering enterprise data sources, understanding the data in those sources, and consuming the data. The data remains in its location, but the metadata is copied to the catalog, where it is indexed for easy discovery. In addition, data professionals can contribute their knowledge in order to enrich the source metadata.

Azure Event Hubs : An event processing service that can ingest millions of events per second and make them available for storage and analysis. The service can log events in near real time and accept data from a wide range of sources. Event Hubs uses technologies that support low latency and high availability, while providing flexible throttling, authentication, and scalability.
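The key ingestion idea behind Event Hubs can be sketched without the SDK (this is a plain-Python illustration, not the azure-eventhub API): events carrying the same partition key land in the same partition, so per-source ordering is preserved while the partitions themselves can be consumed in parallel.

```python
# Sketch of partitioned event ingestion (illustrative only).
from collections import defaultdict
from hashlib import md5

PARTITIONS = 4

def partition_for(key: str) -> int:
    # Stable hash so a given key always maps to the same partition.
    return int(md5(key.encode()).hexdigest(), 16) % PARTITIONS

hub = defaultdict(list)
for seq, device in enumerate(["dev-a", "dev-b", "dev-a", "dev-c", "dev-a"]):
    hub[partition_for(device)].append((device, seq))

# All dev-a events sit in one partition, in arrival order.
p = partition_for("dev-a")
print([e for e in hub[p] if e[0] == "dev-a"])
# [('dev-a', 0), ('dev-a', 2), ('dev-a', 4)]
```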

Microsoft Envision | Advantage YOU: Be more, do more, with Infosys and Microsoft on your side

For more information about Event Hubs, refer to the Simple-Talk article Azure Event Hubs. In the meantime, here’s a quick overview of the analytic components included in the CAS package:

Azure Machine Learning : A service for building, deploying, and sharing predictive analytic solutions. The service runs predictive models that learn from existing data, making it possible to forecast future behavior and trends. Machine Learning also provides the tools necessary for testing and managing the models as well as deploying them as web services.
Azure Data Lake Analytics : A distributed service for analyzing data of any size, including what is in Data Lake Store. Data Lake Analytics is built on Apache YARN, an application management framework for processing data in Hadoop clusters. Data Lake Analytics also supports U-SQL, a new language that Microsoft developed for writing scalable, distributed queries that analyze data.

Azure HDInsight : A fully managed Hadoop cluster service that supports a wide range of analytic engines, including Spark, Storm, and HBase. Microsoft has updated the service to take advantage of Data Lake Store and to maximize security, scalability, and throughput.
Azure Stream Analytics : A service that supports complex event processing over streaming data. Stream Analytics can handle millions of events per second from a variety of sources, while correlating them across multiple streams. It can also ingest events in real-time, whether from one data stream or multiple streams.
I’ve already mentioned how Data Lake Analytics and HDInsight are part of Azure Data Lake, and I’ve pointed you to a related article. If you want to learn more about Stream Analytics, check out the Simple-Talk article Microsoft Azure Stream Analytics.
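To see what the windowed aggregation performed by Stream Analytics looks like, here is a minimal sketch (plain Python; the real service expresses this in a SQL-like query language): events are grouped into 10-second tumbling windows and counted per window.

```python
# Tumbling-window event counting (illustrative sketch).
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """events: iterable of (timestamp_seconds, payload) pairs."""
    counts = Counter()
    for ts, _payload in events:
        # Each event belongs to exactly one non-overlapping window.
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(sorted(counts.items()))

events = [(1, "a"), (4, "b"), (9, "c"), (12, "d"), (25, "e")]
print(tumbling_window_counts(events))  # {0: 3, 10: 1, 20: 1}
```

Hopping and sliding windows generalize this by letting windows overlap; the tumbling case is the simplest to reason about.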

Azure Stream Analytics

Cortana Analytics Gallery

Another interesting component of the CAS package is the Cortana Analytics Gallery, formerly the Azure Machine Learning Gallery. The gallery provides an online environment for data scientists and developers to share their solutions, particularly those related to machine learning. Microsoft also publishes its own solutions to the site for participants to consume.

The Cortana Analytics Gallery is divided into the following six sections.

Solution Templates : Templates based on industry-specific partner solutions. Currently, the category includes only the Vehicle Telemetry Analytics solution, published by Microsoft this past December. The solution demonstrates how those in the automobile industry can gain real-time and predictive insights into vehicle health and driving habits.
Experiments : Predictive analytic experiments contributed by Microsoft and those in the data science community. The experiments demonstrate advanced machine learning techniques and can be used as a starting point for developing your own solutions. For example, the Telco Customer Churn experiment uses classification algorithms to predict whether a customer will churn.
Machine Learning APIs : APIs that provide access to operationalized predictive analytic solutions. Some of the APIs are referenced within the “Perceptual intelligence” section listed in the table above. For example, the Face APIs, published by Microsoft as part of Microsoft Project Oxford, provide state-of-the-art algorithms for processing face images.
Notebooks : A collection of Jupyter notebooks. The notebooks are integrated within Machine Learning Studio and serve as web applications for running code, visualizing data, and trying out ideas. For example, the notebook Topic Discovery in Twitter Tweets demonstrates how a Jupyter notebook can be used for mining Twitter text.
Tutorials : Tutorials on how to use Cortana Analytics to solve real-world problems. For example, the iPhone app for RRS tutorial describes how to create an iOS app that can consume an Azure ML RRS API using the Xamarin development software that ships with Visual Studio.
Collections : A site for grouping together experiments, templates, APIs, or other items within the Cortana Analytics Gallery.
Although Microsoft has changed the name of the gallery to make it more CAS-friendly, much of the content still focuses on the Machine Learning service. Even so, the gallery could prove to be a valuable resource for organizations jumping aboard the CAS train, particularly once the gallery has gained more momentum.

Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Part 1

Cortana Analytics Workshop: The "Big Data" of the Cortana Analytics Suite, Part 2

More Information:

20 July 2016

Getting Started with Oracle OpenStack

Getting Started with OpenStack in Oracle Solaris 11.3


Getting Started with Oracle OpenStack

Oracle Solaris 11 includes a complete OpenStack distribution called Oracle OpenStack for Oracle Solaris. OpenStack, the popular open source cloud computing platform, provides comprehensive self-service environments for sharing and managing compute, network, and storage resources through a centralized web-based portal.

Oracle Solaris Overview

OpenStack has been integrated into all the core technology foundations of Oracle Solaris, allowing you to set up an enterprise private cloud infrastructure in minutes.

Simplify Cloud Deployment with Oracle

Why OpenStack on Oracle Solaris?

Using OpenStack with Oracle Solaris provides the following advantages:

Industry-proven hypervisor. Oracle Solaris Zones offer significantly lower virtualization overhead, making them a perfect fit for OpenStack compute resources. Oracle Solaris Kernel Zones also provide independent kernel versions without compromise, allowing zones to be patched independently.

Oracle Solaris Simple, Flexible, Fast: Virtualization in 11.3

Secure and compliant application provisioning. 

Oracle - Secure, Containerized and Highly-Available OpenStack on  

The Unified Archive feature of Oracle Solaris enables rapid application deployment in the cloud via a new archive format that enables portability between bare-metal systems and virtualized systems. Instant cloning in the cloud enables you to scale out and to reliably deal with disaster recovery emergencies.

Oracle Solaris Secure Cloud Infrastructure

Unified Archives in Oracle Solaris 11, combined with capabilities such as Immutable Zones for read-only virtualization and the new Oracle Solaris compliance framework, enable administrators to ensure end-to-end integrity and can significantly reduce the ongoing cost of compliance.

Oracle Solaris Build and Run Applications Better on 11.3

Fast, fail-proof cloud updates. Oracle Solaris makes updating OpenStack an easy and fail-proof process, updating a full cloud environment in less than twenty minutes. Through integration with the Oracle Solaris Image Packaging System (IPS), ZFS boot environments ensure quick rollback if anything goes wrong, allowing administrators to get back up and running quickly.
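The rollback guarantee described above rests on a simple mechanism, modeled here as a conceptual sketch (plain Python, not the real beadm/pkg tooling): an update clones the active boot environment, applies changes only to the clone, and rollback is just re-activating the untouched original.

```python
# Conceptual model of ZFS boot environments and fail-proof updates.

class BootEnvironments:
    def __init__(self, initial_packages):
        self.envs = {"solaris": dict(initial_packages)}
        self.active = "solaris"

    def update(self, new_env, changes):
        clone = dict(self.envs[self.active])   # cheap ZFS-style clone
        clone.update(changes)                  # patch only the clone
        self.envs[new_env] = clone
        self.active = new_env                  # boot into the new BE

    def rollback(self, env):
        self.active = env                      # original BE is untouched

be = BootEnvironments({"openstack": "juno"})
be.update("solaris-1", {"openstack": "kilo"})
print(be.envs[be.active]["openstack"])  # kilo
be.rollback("solaris")                  # update misbehaved? boot the old BE
print(be.envs[be.active]["openstack"])  # juno
```

Because the original environment is never modified in place, rollback is instantaneous rather than a lengthy un-install.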

Oracle Solaris Cloud Management and Deployment with OpenStack

Application-driven software-defined networking. Taking advantage of Oracle Solaris network virtualization capabilities, applications can now drive their own behavior for prioritizing network traffic across the cloud. The Elastic Virtual Switch (EVS) feature of Oracle Solaris provides a single point of control and enables the management of tenant networks through VLANs and VXLANs. The networks are flexibly connected to virtualized environments that are created on the compute nodes.

Oracle Solaris Software Integration

Single-vendor solution. Oracle is the #1 enterprise vendor offering a full-stack solution that provides the ability to get end-to-end support from a single vendor for database as a service (DaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), saving significant heartache and cost.

Oracle Solaris 11 includes the OpenStack Juno release (Oracle Solaris 11.2 SRU 10.5 or Oracle Solaris 11.3).

Available OpenStack Services

The following OpenStack services are available in Oracle Solaris 11:

Nova. Nova provides the compute capability in a cloud environment, allowing self-service users to be able to create virtual environments from an allocated pool of resources. A driver for Nova has been written to take advantage of Oracle Solaris non-global zones and kernel zones.

Neutron. Neutron manages networking within an OpenStack cloud. Neutron creates and manages virtual networks across multiple physical nodes so that self-service users can create their own subnets that virtual machines (VMs) can connect to and communicate with. Neutron uses a highly extensible plug-in architecture, allowing complex network topologies to be created to support a cloud environment. A driver for Neutron has been written to take advantage of the network virtualization features of Oracle Solaris 11 including the Elastic Virtual Switch that automatically creates the tenant networks across multiple physical nodes.

Cinder. Cinder is responsible for block storage in the cloud. Storage is presented to the guest VMs as virtualized block devices known as Cinder volumes. There are two classes of storage: ephemeral volumes and persistent volumes. Ephemeral volumes exist only for the lifetime of the VM instance, but will persist across reboots of the VM. Once the instance has been deleted, the storage is also deleted. Persistent volumes are typically created separately and attached to an instance. Cinder drivers have been written to take advantage of the ZFS file system, allowing volumes to be created locally on compute nodes or served remotely via iSCSI or Fibre Channel. Additionally, a Cinder driver exists for Oracle ZFS Storage Appliance.
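The ephemeral-versus-persistent distinction above can be captured in a toy model (plain Python, not the OpenStack API): ephemeral volumes die with their instance, while persistent volumes merely detach and keep their data.

```python
# Toy model of Cinder's two volume classes (illustrative only).

class Cloud:
    def __init__(self):
        # name -> {"persistent": bool, "instance": attached VM or None}
        self.volumes = {}

    def create_volume(self, name, instance=None, persistent=True):
        self.volumes[name] = {"persistent": persistent, "instance": instance}

    def attach(self, name, instance):
        self.volumes[name]["instance"] = instance

    def delete_instance(self, instance):
        for name in list(self.volumes):
            vol = self.volumes[name]
            if vol["instance"] == instance:
                if vol["persistent"]:
                    vol["instance"] = None      # detach; data survives
                else:
                    del self.volumes[name]      # ephemeral: gone with the VM

cloud = Cloud()
cloud.create_volume("scratch", instance="vm1", persistent=False)
cloud.create_volume("data", persistent=True)
cloud.attach("data", "vm1")
cloud.delete_instance("vm1")
print(sorted(cloud.volumes))  # ['data']
```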

Glance. Glance provides image management services within OpenStack with support for the registration, discovery, and delivery of images that are used to install VMs created by Nova. Glance can use different storage back ends to store these images. The primary image format that Oracle Solaris 11 uses is Unified Archives. Unified Archives can be provisioned across both bare-metal and virtual systems, allowing for complete portability in an OpenStack environment.

Keystone. Keystone is the identity service for OpenStack. It provides a central directory of users—mapped to the OpenStack projects they can access—and an authentication system between the OpenStack services.

Horizon. Horizon is the web-based dashboard that allows administrators to manage compute, network, and storage resources in the data center and allocate those resources to multitenant users. Users can then create and destroy VMs in a self-service capacity, determine the networks on which those VMs communicate, and attach storage volumes to those VMs.

Swift. Swift provides object- and file-based storage in OpenStack. Swift provides redundant and scalable storage, with data replicated across distributed storage clusters. If a storage node fails, Swift will quickly replicate its content to other active nodes. Additional storage nodes can be added to the cluster with full horizontal scale. Oracle Solaris 11 supports Swift being hosted in a ZFS environment.

Ironic. Ironic provides bare-metal provisioning in an OpenStack cloud, as opposed to VMs that are handled by Nova. An Ironic driver has been written to take advantage of the Oracle Solaris Automated Installer, which handles multinode provisioning of Oracle Solaris 11 systems.

Heat. Heat provides application orchestration in the cloud, allowing administrators to describe multitier applications by defining a set of resources through a template. As a result, a self-service user can execute this orchestration and have the appropriate compute, network, and storage deployed in the appropriate order.
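The "appropriate order" Heat derives from a template is a dependency ordering. A sketch of that core step (the template dict below is a hypothetical simplification of a HOT template, mapping each resource to the resources it depends on):

```python
# Resolve resource creation order from declared dependencies.
from graphlib import TopologicalSorter

template = {
    "network": [],
    "volume":  [],
    "server":  ["network", "volume"],   # depends_on
    "app":     ["server"],
}

# static_order() yields each resource only after all its dependencies.
order = list(TopologicalSorter(template).static_order())
print(order)  # 'server' appears after 'network' and 'volume', 'app' last
```

Heat then creates compute, network, and storage resources following exactly this kind of ordering, so a self-service user can deploy a whole multi-tier stack in one action.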

Modern Cloud Infrastructure with Oracle Enterprise OpenStack

Oracle Solaris 11 built-in virtualization provides a highly efficient and scalable solution that sits at the core of the platform. With the inclusion of Kernel Zones, Oracle Solaris 11 provides a flexible, cost-efficient, cloud-ready solution perfect for the data center. Enhancements and new features include:

  • Secure Live Migration and OS version flexibility with Oracle Solaris Kernel Zones
  • Cloud ready: a core feature of the OpenStack distribution included in Oracle Solaris 11
  • Rapid adoption with support for Oracle Solaris 10 Zones on Oracle Solaris 11
  • Integration with the Oracle Solaris 11 Software Defined Networking
  • Read-only security with Immutable Zones
  • Eliminate downtime with Live Reconfiguration of Zones
  • Enhanced mobility with Zones on Shared Storage
  • Simple to deploy and update enabled by tight integration into the Lifecycle Management system

Oracle Solaris combines the power of industry standard security features, unique security and anti-malware capabilities, and compliance management tools for low risk application deployments and cloud infrastructure. Oracle hardware systems and software in silicon provide the anti-malware trust anchors, accelerate cryptography, and help protect from memory attacks with ADI, NX, and SMEP.

Oracle Solaris:

  • Provides a more secure enterprise cloud
  • Provides a more secure application lifecycle
  • Provides a more compliant infrastructure
  • Provides a more secure application
  • Provides a more secure infrastructure
  • Is an assured and tested low risk platform

Oracle Solaris Software Integration

More Information:

Here's Your Oracle Solaris 11.3 List of Blog Posts:

Oracle Solaris 11.3 Blog List