• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. Linux on IBM platforms is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Windows Server 2012 onwards, Windows client editions from Windows 7 upward, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

07 June 2016

SQL Server 2016 Finally Released

SQL Server 2016 is Here!


SQL Server 2016 is the latest addition to Microsoft’s data platform, with a variety of new features and enhancements that deliver breakthrough performance, advanced security, and richer, integrated reporting and analytics capabilities. Built using the new rapid-release model, SQL Server 2016 incorporates many features introduced first in the cloud in Microsoft Azure SQL Database. Furthermore, SQL Server 2016 includes the capability to dynamically migrate historical data to the cloud.

SQL Server 2016 General Availability Announcement with Rohan Kumar 

Introducing Microsoft SQL Server 2016 leads you through the major changes in the data platform, whether you are using SQL Server technology on-premises or in the cloud, but it does not cover every new feature added to the platform. Instead, we explain key concepts and provide examples for the more significant features so that you can start experiencing their benefits firsthand.

What's new in SQL Server 2016

Faster queries

When users want data, they want it as fast as you can give it to them. Microsoft SQL Server 2016 includes several options for enabling faster queries. Memory-optimized tables now support even faster online transaction processing (OLTP) workloads, with better throughput as a result of new parallelized operations. For analytic workloads, you can take advantage of updateable, clustered columnstore indexes on memory-optimized tables to achieve queries that are up to one hundred times faster. Not only is the database engine better and faster in SQL Server 2016, but enhancements to the Analysis Services engine also deliver faster performance for both multidimensional and tabular models. The faster you can deliver data to your users, the faster they can use that data to make better decisions for your organization.

In-Memory OLTP enhancements

Introduced in SQL Server 2014, In-Memory OLTP speeds up transactional workloads with high concurrency and heavy latch contention by moving data from disk-based tables to memory-optimized tables and by natively compiling stored procedures.

In-Memory OLTP in SQL Server 2016 

In-Memory OLTP can also help improve the performance of data warehouse staging by using nondurable, memory-optimized tables as staging tables. Although there were many good reasons to use memory-optimized tables in the first release of In-Memory OLTP, several limitations restricted the number of use cases for which it was suitable. In this section, we describe the many enhancements that make it easier to put memory-optimized tables to good use.

SQL Server 2016 Evolution Part 1

Reviewing new features for memory-optimized tables

In SQL Server 2016, you can implement the following features in memory-optimized tables:
  • FOREIGN KEY constraints between memory-optimized tables, as long as the foreign key references a primary key.
  • CHECK constraints.
  • UNIQUE constraints.
  • Triggers (AFTER) for INSERT/UPDATE/DELETE operations, as long as you use WITH NATIVE_COMPILATION.
  • Columns with large object (LOB) types—varchar(max), nvarchar(max), and varbinary(max).
  • Collation using any code page supported by SQL Server.
In addition, indexes for memory-optimized tables now support the following features:
  • UNIQUE indexes.
  • Index keys with character columns using any SQL Server collation.
  • NULLable index key columns.
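Several of these features can be combined in a single table definition. The following sketch uses hypothetical table and column names, and assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup and a memory-optimized dbo.[Order] table whose primary key is OrderID:

```sql
CREATE TABLE dbo.OrderDetail
(
    OrderDetailID  int IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    OrderID        int NOT NULL
        FOREIGN KEY REFERENCES dbo.[Order] (OrderID),          -- FK must reference a primary key
    Quantity       int NOT NULL
        CHECK (Quantity > 0),                                  -- CHECK constraints now supported
    Notes          nvarchar(max) NULL,                         -- LOB columns now supported
    RowGuid        uniqueidentifier NOT NULL
        CONSTRAINT UQ_OrderDetail_RowGuid UNIQUE NONCLUSTERED  -- UNIQUE constraints now supported
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

In SQL Server 2014, every one of the commented constraint types would have been rejected on a memory-optimized table.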

Better security

SQL Server 2016 introduces three new principal security features—Always Encrypted, Row-Level Security, and dynamic data masking. While all these features are security related, each provides a different level of data protection within this latest version of the database platform. Throughout this chapter, we explore the uses of these features, how they work, and when they should be used to protect data in your SQL Server database.
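Always Encrypted is covered in detail below. As a quick taste of Row-Level Security, the following sketch (with hypothetical table, column, and user names) filters a sales table so that each salesperson sees only his or her own rows:

```sql
-- Inline table-valued function that returns a row only when access is allowed
CREATE FUNCTION dbo.fn_SalesFilter (@SalesRep sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE @SalesRep = USER_NAME()        -- the row's owner
       OR USER_NAME() = N'SalesManager'; -- a privileged user sees everything
GO

-- Bind the predicate to the table; SELECT statements are filtered transparently
CREATE SECURITY POLICY dbo.SalesPolicy
    ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep) ON dbo.SalesOrder
    WITH (STATE = ON);
```

Because the policy is enforced inside the database engine, the filtering cannot be bypassed by an application that simply issues its own queries.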

Always Encrypted

Always Encrypted is a client-side encryption technology in which data is automatically encrypted not only when it is written but also when it is read by an approved application. Unlike Transparent Data Encryption, which encrypts the data on disk but allows the data to be read by any application that queries it, Always Encrypted requires your client application to use an Always Encrypted–enabled driver to communicate with the database. By using this driver, the application securely transfers encrypted data to the database, which can then be decrypted only by an application that has access to the encryption key. Any other application querying the data can retrieve the encrypted values, but without the encryption key it cannot use the data, rendering the data useless. Because of this encryption architecture, the SQL Server instance never sees the unencrypted version of the data.

Note: At this time, the only Always Encrypted–enabled drivers are the .NET Framework Data Provider for SQL Server, which requires installation of .NET Framework version 4.6 on the client computer, and the JDBC 6.0 driver. In this chapter, we refer to both of these drivers as the ADO.NET driver for simplicity.
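On the database side, Always Encrypted is declared per column. A minimal sketch, assuming a column master key and a column encryption key named CEK1 have already been provisioned, and using a hypothetical table:

```sql
CREATE TABLE dbo.Patient
(
    PatientID  int IDENTITY PRIMARY KEY,
    SSN        char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,  -- allows equality lookups and joins
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
    BirthDate  date
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = RANDOMIZED,     -- stronger, but no comparisons
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
```

Deterministic encryption on string columns requires a BIN2 collation, which is why the SSN column is declared with Latin1_General_BIN2.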

Higher availability

In a world that is always online, maintaining uptime and streamlining maintenance operations for your mission-critical applications are more important than ever. In SQL Server 2016, the capabilities of the AlwaysOn Availability Group feature continue to evolve from previous versions, enabling you to protect data more easily and flexibly and with greater throughput to support modern storage systems and CPUs. Furthermore, AlwaysOn Availability Groups and AlwaysOn Failover Cluster Instances now have higher security, reliability, and scalability. By running SQL Server 2016 on Windows Server 2016, you have more options for better managing clusters and storage. In this chapter, we introduce the new features that you can use to deploy more robust high-availability solutions.

AlwaysOn Availability Groups

First introduced in SQL Server 2012 Enterprise Edition, the AlwaysOn Availability Groups feature provides data protection by sending transactions from the transaction log on the primary replica to one or more secondary replicas, a process that is conceptually similar to database mirroring. In SQL Server 2014, the significant enhancement to availability groups was the increase in the number of supported secondary replicas from three to eight. SQL Server 2016 includes a number of new enhancements that we explain in this section:

  •  AlwaysOn Basic Availability Groups
  •  Support for group Managed Service Accounts (gMSAs)
  •  Database-level failover
  •  Distributed Transaction Coordinator (DTC) support
  •  Load balancing for readable secondary replicas
  •  Up to three automatic failover targets
  •  Improved log transport performance
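Several of these enhancements surface directly in the DDL. The following sketch, with hypothetical server, endpoint, and database names, creates a Basic Availability Group with database-level failover and per-database DTC support enabled:

```sql
CREATE AVAILABILITY GROUP BAG_Sales
    WITH (BASIC,                -- single-database availability group
          DB_FAILOVER = ON,     -- fail over when the database itself is unhealthy
          DTC_SUPPORT = PER_DB) -- Distributed Transaction Coordinator support
    FOR DATABASE SalesDB
    REPLICA ON
        N'SQLNODE1' WITH (ENDPOINT_URL      = N'TCP://sqlnode1.contoso.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE     = AUTOMATIC),
        N'SQLNODE2' WITH (ENDPOINT_URL      = N'TCP://sqlnode2.contoso.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE     = AUTOMATIC);
```

BASIC, DB_FAILOVER, and DTC_SUPPORT are all new options in SQL Server 2016; in earlier versions, failover decisions were based only on instance health.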

Improved database engine

In past releases of SQL Server, Microsoft has targeted specific areas for improvement. In SQL Server 2005, the storage engine was new. In SQL Server 2008, the emphasis was on server consolidation. Now, in SQL Server 2016, you can find enhanced functionality across the entire database engine. With Microsoft now managing more than one million SQL Server databases through its Database as a Service (DBaaS) offering—Microsoft Azure SQL Database—it is able to respond more quickly to opportunities to enhance the product and validate those enhancements comprehensively before adding features to the on-premises version of SQL Server. SQL Server 2016 is a beneficiary of this new development paradigm and includes many features that are already available in SQL Database. In this chapter, we explore a few of the key new features, which enable you to better manage growing data volumes and changing data systems, manage query performance, and reduce barriers to entry for hybrid cloud architectures.

SQL Server 2016 Evolution Part 2

SQL Server 2016 introduces a new hybrid feature called Stretch Database that combines the power of Azure SQL Database with an on-premises SQL Server instance to provide nearly bottomless storage at a significantly lower cost, plus enterprise-class security and near-zero management overhead. With Stretch Database, you can store cold, infrequently accessed data in Azure, usually with no changes to application code. All administration and security policies are still managed from the same local SQL Server database as before.

Understanding Stretch Database architecture

Enabling Stretch Database for a SQL Server 2016 table creates a new Stretch Database in Azure, an external data source in SQL Server, and a remote endpoint for the database, as shown in Figure 4-7. User logins query the stretch table in the local SQL Server database, and Stretch Database rewrites the query to run local and remote queries according to the locality of the data. Because only system processes can access the external data source and the remote endpoint, user queries cannot be issued directly against the remote database.
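Enabling Stretch is a three-step process: the instance, the database, and the table. A sketch with hypothetical names, assuming a database-scoped credential for the Azure server already exists:

```sql
-- 1. Allow Stretch Database on the instance
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- 2. Link the database to an Azure SQL server
ALTER DATABASE SalesDB
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = N'mystretchserver.database.windows.net',
         CREDENTIAL = [MyStretchCredential]);

-- 3. Start migrating cold rows from a table to Azure
ALTER TABLE dbo.OrderHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));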

Security and Stretch Database

One of the biggest concerns about cloud computing is the security of data leaving an organization’s data center. In addition to the world-class physical security provided at Azure data centers, Stretch Database includes several additional security measures. If required, you have the option to enable Transparent Data Encryption to provide encryption at rest. All traffic into and out of the remote database is encrypted and certificate validation is mandatory. This ensures that data never leaves SQL Server in plain text and the target in Azure is always verified.

Broader data access

As the cost to store data continues to drop and the number of data formats commonly used by applications continues to change, you need the ability both to manage access to historical data relationally and to seamlessly integrate relational data with semistructured and unstructured data. SQL Server 2016 includes several new features that support this evolving environment by providing access to a broader variety of data. The introduction of temporal tables enables you to maintain historical data in the database, to transparently manage data changes, and to easily retrieve data values at a particular point in time. In addition, SQL Server allows you to import JavaScript Object Notation (JSON) data into relational storage, export relational data as JSON structures, and even to parse, aggregate, or filter JSON data. For scalable integration of relational data with semistructured data in Hadoop or Azure storage, you can take advantage of SQL Server PolyBase, which is no longer limited to the massively parallel computing environment that it was when introduced in SQL Server 2014.
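The JSON support mentioned above is exposed through the FOR JSON clause and the OPENJSON function. A minimal sketch with hypothetical table and column names:

```sql
-- Export relational rows as a JSON array
SELECT CustomerID, OrderDate
FROM Sales.[Order]
FOR JSON PATH;

-- Parse a JSON document into rows and typed columns
DECLARE @json nvarchar(max) =
    N'[{"CustomerID":1,"OrderDate":"2016-06-07"},
       {"CustomerID":2,"OrderDate":"2016-06-08"}]';

SELECT CustomerID, OrderDate
FROM OPENJSON(@json)
     WITH (CustomerID int  '$.CustomerID',
           OrderDate  date '$.OrderDate');
```

Unlike XML, JSON is not a native data type in SQL Server 2016; it is stored in ordinary nvarchar columns and handled through these functions.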

Temporal data

A common challenge with data management is deciding how to handle changes to the data. At a minimum, you need an easy way to resolve an accidental change without resorting to a database restore. Sometimes you must be able to provide an audit trail to document how a row changed over time and who changed it. If you have a data warehouse, you might need to track historical changes for slowly changing dimensions. Or you might need to perform a trend analysis to compare values for a category at different points in time or find the value of a business metric at a specific point in time.
To address these various needs for handling changes to data, SQL Server 2016 now supports temporal tables, which were introduced in the ANSI SQL:2011 standard. In addition, Transact-SQL has been extended to support the creation of temporal tables and the querying of these tables relative to a specific point in time.

A temporal table allows you to find the state of data at any point in time. When you create a temporal table, the system actually creates two tables. One table is the current table (also known as the temporal table), and the other is the history table. The history table is created as a page-compressed table by default to reduce storage utilization. As data changes in the current table, the database engine stores a copy of the data as it was prior to the change in the history table.

The use of temporal tables has a few limitations. First, system versioning and the FileTable and FILESTREAM features are incompatible. Second, you cannot use CASCADE options when a temporal table is the referencing table in a foreign-key relationship. Last, you cannot use INSTEAD OF triggers on the current or history table, although you can use AFTER triggers on the current table.
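Putting the pieces together, the following sketch (with hypothetical names) creates a temporal table and then asks what a price was at a given moment:

```sql
CREATE TABLE dbo.Product
(
    ProductID  int           NOT NULL PRIMARY KEY CLUSTERED,
    Price      decimal(10,2) NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));

-- The engine transparently reads the current and history tables as needed
SELECT ProductID, Price
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00'
WHERE ProductID = 1;
```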


PolyBase was introduced in SQL Server 2014 as an interface exclusively for Microsoft Analytics Platform System (APS; formerly known as Parallel Data Warehouse), with which you could access data stored in Hadoop Distributed File System (HDFS) by using SQL syntax in queries.
In SQL Server 2016, you can now use PolyBase to query data in Hadoop or Azure Blob Storage and combine the results with relational data stored in SQL Server. To achieve optimal performance, PolyBase can dynamically create columnstore tables, parallelize data extraction from Hadoop and Azure sources, or push computations on Hadoop-based data to Hadoop clusters as necessary. After you install the PolyBase service and configure PolyBase data objects, your users and applications can access data from nonrelational sources without any special knowledge about Hadoop or blob storage.
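Once PolyBase is configured, querying Hadoop data takes three objects: an external data source, an external file format, and an external table. A sketch with hypothetical names and locations:

```sql
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP,
      LOCATION = 'hdfs://10.0.0.10:8020');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

CREATE EXTERNAL TABLE dbo.WebLog
(
    LogDate    date,
    CustomerID int,
    Hits       int
)
WITH (LOCATION = '/weblogs/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFormat);

-- Join external Hadoop data with a local relational table
SELECT c.CustomerName, SUM(w.Hits) AS TotalHits
FROM dbo.WebLog AS w
JOIN dbo.Customer AS c ON c.CustomerID = w.CustomerID
GROUP BY c.CustomerName;
```

From the user's point of view, dbo.WebLog behaves like any other table, even though its data lives in HDFS.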
Installing PolyBase

SQL Server 2016 - Polybase

You can install only one instance of PolyBase on a single server, which must also have a SQL Server instance installed because the PolyBase installation process adds the following three databases: DWConfiguration, DWDiagnostics, and DWQueue. The installation process also adds the PolyBase engine service and PolyBase data movement service to the server.
Before you can install PolyBase, your computer must meet the following requirements:
  • Installed software: Microsoft .NET Framework 4.5 and Oracle Java SE Runtime Environment (JRE) version 7.51 or higher (64-bit)
  • Minimum memory: 4 GB
  • Minimum hard-disk space: 2 GB
  • TCP/IP connectivity enabled
Polybase: Hadoop Integration in SQL Server PDW V2

To install PolyBase by using the SQL Server Installation Wizard, select PolyBase Query Service For External Data on the Feature Selection page. Then, on the Server Configuration page, you must configure the SQL Server PolyBase engine service and the SQL Server PolyBase data movement service to run under the same account. (If you create a PolyBase scale-out group, you must use the same service account across all instances.) Next, on the PolyBase Configuration page, you specify whether your SQL Server instance is a standalone PolyBase instance or part of a PolyBase scale-out group. As we describe later in this chapter, when you configure a PolyBase scale-out group, you specify whether the current instance is a head node or a compute node. Last, you define a range with a minimum of six ports to allocate to PolyBase.

Scaling out with PolyBase

Because data sets can become quite large in Hadoop or blob storage, you can create a PolyBase scale-out group, as shown in Figure 5-9, to improve performance.

A PolyBase scale-out group has one head node and one or more compute nodes. The head node consists of the SQL Server database engine, the PolyBase engine service, and the PolyBase data movement service, whereas each compute node consists of a database engine and data movement service. The head node receives the PolyBase queries, distributes the work involving external tables to the data movement service on the available compute nodes, receives the results from each compute node, finalizes the results in the database engine, and then returns the results to the requesting client. The data movement service on the head node and compute nodes is responsible for transferring data between the external data sources and SQL Server and between the SQL Server instances on the head and compute nodes.

More analytics

Better and faster analytics capabilities have been built into SQL Server 2016. Enhancements to tabular models provide greater flexibility for the design of models, and an array of new tools helps you develop solutions more quickly and easily. As an option in SQL Server 2016, you can now use SQL Server R Services to build secure, advanced-analytics solutions at enterprise scale. By using R Services, you can explore data and build predictive models by using R functions in-database. You can then deploy these models for production use in applications and reporting tools.

Tabular enhancements

In general, tabular models are relatively easy to develop in SQL Server Analysis Services. You can build such a solution directly from a wide array of sources in their native state without having to create a set of tables as a star schema in a relational database. You can then see the results of your modeling within the design environment. However, there are some inherent limitations in the scalability and complexity of the solutions you can build. In the latest release of SQL Server, some of these limitations have been removed to better support enterprise requirements. In addition, enhancements to the modeling process make controlling the behavior and content of your model easier. In this section, we review the following enhancements that help you build better analytics solutions in SQL Server 2016:
  •  More data sources accessible in DirectQuery mode
  •  Choice of using all, some, or no data during modeling in DirectQuery mode
  •  Calculated tables
  •  Bidirectional cross-filtering
  •  Formula bar enhancements
  •  New Data Analysis Expressions (DAX) functions
  •  Using DAX variables

R integration

R is a popular open-source programming language used by data scientists, statisticians, and data analysts for advanced analytics, data exploration, and machine learning. Despite its popularity, the use of R in an enterprise environment can be challenging. Many tools for R operate in a single-threaded, memory-bound desktop environment, which puts constraints on the volume of data that you can analyze. In addition, moving sensitive data from a server environment to the desktop removes it from the security controls built into the database.

R Services in SQL Server 2016 

SQL Server R Services, the result of Microsoft’s acquisition in 2015 of Revolution Analytics, resolves these challenges by integrating a unique R distribution into the SQL Server platform. You can execute R code directly in a SQL Server database when using R Services (In-Database) and reuse the code in another platform, such as Hadoop. In addition, the workload shifts from the desktop to the server and maintains the necessary levels of security for your data. In Enterprise Edition, R Services performs multithreaded, multicore, and parallelized multiprocessor computations at high speed. Using R Services, you can build intelligent, predictive applications that you can easily deploy to production.
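In-database execution is exposed through the sp_execute_external_script system procedure. A sketch with a hypothetical table, assuming R Services (In-Database) is installed:

```sql
-- R execution must be enabled once per instance
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Pass a T-SQL result set into R, compute in R, return a result set to SQL
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(MeanAmount = mean(InputDataSet$Amount))',
    @input_data_1 = N'SELECT Amount FROM dbo.SalesOrder'
WITH RESULT SETS ((MeanAmount float));
```

The data never leaves the server: the query result is handed to the R runtime in-process, and only the computed result set comes back.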

Installing and configuring R Services

To use SQL Server R Services, you must install a collection of components to prepare a SQL Server instance to support the R distribution. In addition, each client workstation requires an installation of the R distribution and libraries specific to R Services.

Server configuration

R Services is available in the Standard, Developer, and Enterprise editions of SQL Server 2016 or in Express Edition with Advanced Services. Only the Enterprise edition supports execution of R packages in a high-performance, parallel architecture. In the server environment, you install one of the following components from the SQL Server installation media:
  • R Services (In-Database) A database-engine feature that configures the database service to use R jobs and installs extensions to support external scripts and processes. It also downloads Microsoft R Open (MRO), an open-source R distribution. This feature requires you to have a default or named instance of SQL Server 2016.
  • R Services (Standalone) A standalone component that does not require a database-engine instance and is available only in the Enterprise edition of SQL Server 2016. It includes enhanced R packages and connectivity tools from Revolution Analytics and open-source R tools and base packages. Selection of this component also downloads and installs MRO.

Better reporting

For report developers, Reporting Services in SQL Server 2016 has a more modern development environment, two new data visualizations, and improved parameter layout options. In addition, it includes a new development environment to support mobile reports. Users also benefit from a new web portal that supports modern web browsers and mobile access to reports. In this chapter, we’ll explore these new features in detail.

What's New in Microsoft SQL Server 2016 Reporting 

Report content types

This release of Reporting Services includes both enhanced and new report content types:
  •  Paginated reports Paginated reports are the traditional content type for which Reporting Services is especially well suited. You use this content type when you need precise control over the layout, appearance, and behavior of each element in your report. Users can view a paginated report online, export it to another format, or receive it on a scheduled basis by subscribing to the report. A paginated report can consist of a single page or hundreds of pages, based on the data set associated with the report. The need for this type of report persists in most organizations, alongside the other report content types that are now available in the Microsoft reporting platform.
  •  Mobile reports In early 2015, Microsoft acquired Datazen Software to make it easier to deploy reports to mobile devices, regardless of operating system and form factor. This content type is best when you need touch-responsive and easy-to-read reports that are displayed on smaller screens, communicate key metrics effectively at a glance, and support drill-through to view supporting details. In SQL Server 2016, users can view both paginated and mobile reports through the web portal interface of the on-premises report server.
  •  Key performance indicators (KPIs) A KPI is a simple type of report content that you can add to the report server to display metrics and trends at a glance. This content type uses colors to indicate progress toward a goal and an optional visualization to show how values trend over time.

Improved Azure SQL Database

Microsoft Azure SQL Database was one of the first cloud services to offer a secure, robust, and flexible database platform to host applications of all types and sizes. When it was introduced, SQL Database had only a small subset of the features available in the SQL Server database engine. With the introduction of version V12 and features such as elastic database pools, SQL Database is now an enterprise-class platform-as-a-service (PaaS) offering. Furthermore, its rapid development cycle is beneficial to both SQL Database and its on-premises counterpart. By integrating new features into SQL Database ahead of SQL Server, the development team can take advantage of a full testing and telemetry cycle, at scale, that allows them to add features to both products much faster. In fact, several of the features in SQL Server 2016 described in earlier chapters, such as Always Encrypted and Row-Level Security, result from the rapid development cycle of SQL Database.

Introduction to SQL Database

Microsoft Azure SQL Database is one of many PaaS offerings available from Microsoft. It was introduced in March 2009 as a relational database-as-a-service called SQL Azure, but it had a limited feature set and data volume restrictions that were useful only for very specific types of small applications. Since then, SQL Database has evolved to attain greater parity with its on-premises predecessor, SQL Server. If you have yet to implement a cloud strategy for data management because of the initial limitations of SQL Database, now is a good time to become familiar with its latest capabilities and discover how best to start integrating it into your technical infrastructure.

Elastic database features

Microsoft has introduced elastic database features into SQL Database to simplify the implementation and management of software-as-a-service (SaaS) solutions. To optimize and simplify the management of your application, use one or more of the following features:
  •  Elastic scale This feature allows you to grow and shrink the capacity of your database to accommodate different application requirements. One way to manage elasticity is to partition your data across a number of identically structured databases by using a technique called sharding. You use the elastic database tools to easily implement sharding in your database.
  •  Elastic database pool Rather than explicitly allocate DTUs to a SQL Database, you can use an elastic database pool to allocate a common pool of DTU resources to share across multiple databases. That way you can support multiple types of workloads on demand without monitoring your databases individually for changes in performance requirements that necessitate intervention.
  •  Elastic database jobs You use an elastic database job to execute a T-SQL script against all databases in an elastic database pool to simplify administration for repetitive tasks such as rebuilding indexes. SQL Database automatically scales your script and applies built-in retry logic when necessary.
  •  Elastic query When you need to combine data from multiple databases, you can create a single connection string and execute a single query. SQL Database then aggregates the data into one result set.
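An elastic query is built on external tables, much like PolyBase. The sketch below, with hypothetical server, database, and credential names, lets one Azure SQL database query a table that physically lives in another:

```sql
CREATE EXTERNAL DATA SOURCE RemoteCustomers
WITH (TYPE = RDBMS,
      LOCATION = 'myserver.database.windows.net',
      DATABASE_NAME = 'CustomerDB',
      CREDENTIAL = ElasticCredential);

CREATE EXTERNAL TABLE dbo.Customer
(
    CustomerID int,
    Name       nvarchar(100)
)
WITH (DATA_SOURCE = RemoteCustomers);

-- Looks like a local table; the query executes against the remote database
SELECT CustomerID, Name FROM dbo.Customer;
```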

Managing elastic scale

Sharding is not a new concept, but it has traditionally been challenging to implement because it often requires custom code and adds complexity to the application layer. Elastic database tools are available to simplify creating and managing sharded applications in SQL Database by using an elastic database client library or the Split-Merge service. These tools are useful whether you distribute your database across multiple shards or implement one shard per end customer, as shown in Figure 8-7.

You should consider sharding your application if either of the following conditions apply:
  •  The size of application data exceeds the limitations of SQL Database.
  •  Different shards of the database must reside in different geographies for compliance, performance, or geopolitical reasons.

Where You Can Get Additional Information

Below are some additional resources that you can use to find out more information about SQL Server 2016.

SQL Server 2016 Early Access Web Site: https://www.microsoft.com/en/server-cloud/products/sql-server-2016/

SQL Server 2016 data sheet: http://download.microsoft.com/download/F/D/3/FD33C34D-3B65-4DA9-8A9F-0B456656DE3B/SQL_Server_2016_datasheet.pdf


SQL Server 2016 release notes: https://msdn.microsoft.com/en-US/library/dn876712.aspx

What’s new in SQL Server, September Update: https://msdn.microsoft.com/en-US/library/bb500435.aspx

15 May 2016

Blockchain-Powered Internet of Things

IBM Reveals Concept for Blockchain-Powered Internet of Things

What is blockchain?

Blockchain is a technology for a new generation of transactional applications that establishes trust, accountability, and transparency while streamlining business processes. A blockchain has two main concepts: a business network, in which members exchange items of value, and a ledger, which each member possesses and whose content is always in sync with the others.

Blockchain a New Disruption in Financial Services


Blockchain is a design pattern made famous by bitcoin, but its uses go far beyond digital currency. With it, we can re-imagine the world's most fundamental business interactions and open the door to inventing new styles of digital interactions. It has the potential to vastly reduce the cost and complexity of cross-enterprise business processes. The distributed ledger makes it easier to create cost-efficient business networks where virtually anything of value can be tracked and traded, without requiring a central point of control.
The application of this emerging technology is showing great promise across a broad range of business applications.


For example, blockchain allows securities to be settled in minutes instead of days. It can also be used to help companies manage the flow of goods and related payments, or enable manufacturers to share production logs with OEMs and regulators to reduce product recalls.

“ Over the past two decades, the Internet has revolutionized many aspects of business and society–making individuals and organizations more productive. Yet the basic mechanics of how people and organizations execute transactions with one another have not been updated for the 21st century. Blockchain could bring to those processes the openness and efficiency we have come to expect in the Internet Era. ”
—Arvind Krishna, Senior VP, IBM Research

IBM joins Linux Foundation to advance Blockchain

Blockchain has huge potential to transform a wide range of industries. But work needs to be done to arrive at a blockchain fabric that is standards-based and ready for the enterprise. IBM joins the Linux Foundation to help accelerate this exciting technology in a new open source community.

Technical Introduction to IBM's Open Blockchain (OBC)

Watch this animation for a sneak peek at the possibilities and how a blockchain-enabled internet can make a radical difference in your industry. Read this post to see how IBM is contributing to this pursuit: ibm.biz/ThinkBlockchain

Bitcoin explained

The Problem and the solution


Asset ownership and transfer between businesses is currently inefficient, slow, costly, and vulnerable to manipulation. Each party keeps its own ledger, and discrepancies between those ledgers increase settlement times.
A new approach is needed for market enablement in the Internet age.


Blockchain technologies can be used to share a ledger across the business network. The network will be private to the parties concerned, permissioned so only authorized parties are allowed to join, and secure using cryptographic technology to ensure that participants only see what they are allowed to see.
The shared ledger will be more robust, since it is replicated and distributed. All transactions against the ledger will require consensus across the network, where provenance of information is clear and transparent. Transactions will be immutable (unchangeable) and final.
The business network participants will be the same - disintermediation is not a natural consequence of blockchain usage.
Goods and services are provided more efficiently, with the potential to lower costs on all levels.

Key concepts of blockchain

A blockchain has two main concepts: a business network, in which members exchange items of value, and a ledger, which each member possesses and whose content is always in sync with the others.

Understanding Blockchains

A business network

  • A decentralized peer-to-peer architecture with nodes consisting of market participants (such as banks and securities firms).
  • Protocol peers validate and commit transactions in order to reach consensus.

Shared ledger

The shared ledger acts as a single source of truth for businesses transacting on a blockchain:

  • Records all transactions across the business network
  • Is shared among participants
  • Is replicated so each participant has their own copy
  • Is permissioned, so participants see only appropriate transactions

Companies often maintain a separate ledger for each business network in which they participate, using it to record and total the financial transactions of that network.
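To make the immutability and tamper-evidence ideas concrete, here is a minimal sketch in Python. It is purely illustrative and far simpler than a real blockchain fabric (no peers, no consensus), but it shows the core trick: chaining each ledger entry to the hash of its predecessor, so that changing any past record breaks the chain.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class MiniLedger:
    """Toy append-only ledger: each entry is chained to its predecessor."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record: dict):
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any tampered record breaks the chain."""
        prev = "genesis"
        for record, h in self.chain:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = MiniLedger()
ledger.append({"from": "bankA", "to": "bankB", "amount": 100})
ledger.append({"from": "bankB", "to": "bankC", "amount": 40})
print(ledger.verify())            # True
ledger.chain[0][0]["amount"] = 1  # tamper with history
print(ledger.verify())            # False
```

In a real permissioned network, every participant holds a replica of this chain and new entries require consensus, which is what makes the shared ledger robust and its history effectively final.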

The benefits of blockchain

Blockchain can help radically improve industries, beginning with banking and insurance. However, the opportunities for blockchain go far beyond this. We predict that this technology will be used to create smarter, more efficient systems for supply chains, Internet of Things networks, gaming, multimedia rights management, car rental, government proof-of-identity (or license) issuance, and insurance record management.

  • More efficient
  • Less risky
  • More cost-effective
  • Legal contracts
  • Corporate treasury, accounts payable and receivable
  • Trade finance, letters of credit
  • Smart Property
  • International payments
  • Internal cash management

Next Generation IoT Technologies Using The Block Chain

This session includes findings from a recent research study conducted by IBM’s Institute for Business Value on a decentralized Internet of Things. We present a revolutionary approach to addressing the problems of cost, privacy and longevity of smart devices on the Internet of Things, and make the case for rethinking the technology strategy at the foundation of the IoT so that it is secure, scalable and efficient. Our approach leverages the “block chain,” the technology platform underlying Bitcoin, to create a decentralized platform for the Internet of Things.

The session also presents an overview of how Samsung and IBM are thinking about the next generation of IoT infrastructure, why we are using the block chain (derived from Bitcoin), and what to expect from our January proof-of-concept presentation at CES.

IBM Bluemix Garage for Blockchain

Making blockchain real for business

To help accelerate the design and development of blockchain applications, connect with us or try out the IBM Bluemix Garage for Blockchain. The garage combines industry expertise with blockchain technology and proven methods such as Design Thinking and Agile development to deliver business solutions that work. We know it’s important to be close to our customers, so we have IBM Bluemix Garage for Blockchain locations in New York, London, Singapore and Tokyo, as well as Bluemix Garage offices in San Francisco, Toronto, Nice and Melbourne.

Featured sample applications

Simple Data Pipe

Data movement in the cloud made easy
The Simple Data Pipe allows you to connect to data behind web APIs and land it all in one staging ground in its native form using IBM Cloudant NoSQL DB. Bluemix provides prebuilt connections to data sources as well as integrations with both IBM dashDB for data warehousing and IBM Analytics for Apache Spark™ for advanced analytics processing.

Twitter Sentiment Analysis

Decode social conversations using Spark and Watson
Tweets can tell you a lot about how your customers feel. With Twitter Sentiment Analysis, you can easily deploy Spark Streaming along with IBM Watson to analyze emotional, social, and language tones across the Twittersphere. This open source app uses Spark Streaming to capture tweets in real time, score them with the Watson Tone Analyzer service on Bluemix, and visualize the results with an iPython Notebook. It's simple social sentiment scoring, for free!

Getting started with Blockchain 

With the IBM® Blockchain service on Bluemix, you can quickly spin up a blockchain network and circumvent the complexities involved with manually creating a development environment. Rather than creating and managing a network, developers can spend their time generating applications and working with chaincode. The service is a peer-to-peer permissioned network built on top of the Linux Foundation's Hyperledger fabric code.

You can use a blockchain network to exchange financial records through a shared ledger. For more information about shared ledgers and business networks, see the About Blockchain topic.

You can get to the Console through this Link:

To get started, follow these steps to create and deploy an unbound service instance of a Blockchain network. Once complete, you will have your own development environment with validating nodes and a security service. From there, you can deploy chaincode, see results, and build your applications:

From the landing page for the Blockchain DevOps Service, complete the following fields in the Add Service window:

  1. Choose dev from the Space drop-down menu.
  2. Leave the App field as Leave unbound.
  3. Change the Service name to myblockchain123, or some value unique to you.
  4. Leave the Credential name field as its default value.
  5. Leave the Selected Plan as its default value.
  6. Click CREATE.

You are now on the Service Dashboard screen for your new service. From here you can Manage your instance of the network:

  1. Click LAUNCH to see the blockchain monitor for your Blockchain network.

The blockchain monitor displays network details, live logs, current ledger state, APIs, and chaincode templates. Use the dashboard for any of the following functions:

  1. Access Discovery and API routes for the peers on your network.
  2. View any currently-running chaincode containers.
  3. View real-time logs and troubleshoot chaincode that fails to execute.
  4. View the world state for your ledger.
  5. Access the Swagger UI and interact with your network via the REST API.
  6. Deploy one of three available chaincode examples.
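At the time of writing, the Bluemix Blockchain service's REST API (based on the early Hyperledger fabric) accepted JSON-RPC style requests against a /chaincode endpoint for deploy, invoke and query operations. The exact routes and fields depend on your fabric version, so treat the Python helper below as an illustrative sketch that only builds such a request body; the chaincode name "mycc" and the function and argument values are hypothetical.

```python
import json

def chaincode_request(method: str, name: str, fn: str, args: list,
                      request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 body for a fabric /chaincode REST endpoint.
    `method` is 'deploy', 'invoke' or 'query' (an assumption based on
    the early Hyperledger fabric REST interface)."""
    body = {
        "jsonrpc": "2.0",
        "method": method,
        "params": {
            "type": 1,  # 1 = Go (GOLANG) chaincode in the early fabric
            "chaincodeID": {"name": name},
            "ctorMsg": {"function": fn, "args": args},
        },
        "id": request_id,
    }
    return json.dumps(body)

# Query a toy asset 'a' on a hypothetical deployed chaincode named 'mycc':
print(chaincode_request("query", "mycc", "query", ["a"]))
```

You would POST this body to the API route shown in the blockchain monitor's Swagger UI; the monitor's APIs tab lists the exact endpoints your network exposes.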

Jerry Cuomo says Blockchain is OPEN for business

Jerry Cuomo (IBM Fellow, VP, and CTO of Middleware) opens his remarks by saying, "We believe [blockchain] will fundamentally change the way we do business." He says that IBM's efforts in blockchain are "open by design."

Jerry Cuomo starts with some "blockchain 101." He discusses some common use cases that may be radically changed by blockchain technology:

Common blockchain use cases

Jerry Cuomo says the common element in these use cases is that they take several days to "settle," and that "Blockchain can reimagine the world's most fundamental business interactions and open the door to invent new styles of digital interactions." The game-changing element of blockchain, he says, is its ability to reduce the time it takes to settle multiparty transactions. Blockchain can also reduce the cost of these transactions and the risk of tampering with them.

Jerry Cuomo explains the concept of a "ledger" in blockchain technology:

Jerry Cuomo introduces the "Hyperledger Project," which is being launched through the Linux Foundation. Its goal is to make blockchain ready for business. The project is looking at enhancements that will be needed for business applications, such as a permission model and audit mechanisms. Jerry says that IBM has contributed 45,000 lines of code to the Hyperledger Project to help make blockchain technology ready for business.

More Information:

17 April 2016

SUSE Enterprise Linux 12 and Docker Containers

Propel your enterprise to the next level of productivity and competitiveness with SUSE Linux Enterprise 12 Service Pack 1.

Service Pack 1 further enhances SUSE Linux Enterprise, making it the most interoperable platform for mission-critical computing across physical, virtual and cloud environments.

SUSE Linux Enterprise 12 Install and overview | The Advanced Foundation for Enterprise Computing  

Solutions based on SUSE Linux Enterprise 12 Service Pack 1 (SP1) feature unique Docker and hardware support along with new and updated capabilities so you can:
  • Achieve SLAs for application uptime
  • Run highly efficient data center development and operations
  • Bring innovative solutions to market faster

Docker in SUSE Linux Enterprise Server

Increase Uptime

SUSE Linux Enterprise: Downtime isn't an Option

Minimize planned and unplanned downtime and maximize service availability. Take advantage of our rugged reliability, high availability and live kernel patching to meet service-level agreements and keep your business running. Learn more about how SUSE helps you achieve 99.999% availability and move towards zero downtime.
  • SUSE Linux Enterprise runs on a wide variety of hardware platforms, with features that help prevent hardware downtime
  • Maximize service availability with high-availability clustering, geo clustering and live kernel patching
  • Minimize human error with a wide range of tools and services, including system rollback of SUSE Linux Enterprise service packs

SUSE Linux Enterprise Server 12 Zero Downtime

Improve Operational Efficiency

Boost your efficiency by simplifying systems management and by ensuring high levels of resource utilization.
  • Stay ahead on implementations with container technologies. Take advantage of SUSE’s enterprise-ready Docker solution, and see how the ecosystem of SUSE applications creates additional value for your business, so you can focus on building your apps.
  • Save time and resources with JeOS (Just enough Operating System), a lightweight Linux OS that needs fewer resources than the full OS but provides the same enterprise-grade performance and availability.
  • Reduce IT maintenance workload with easy-to-use management tools such as YaST/AutoYaST (single system management), Wicked (network management), and HAWK (cluster resource management).
  • Maximize your efficiency with the Xen and KVM virtualization technologies.

The Evolution of Linux Containers and Integration of Docker with SLES 12 

Accelerate Innovation

Harness the power of the newest CPUs on the market. Get fast, timely access to abundant open source and partner innovations. Reduce time to value through SUSE-certified quality and ease of integration.
  • Get the benefits of the latest open source innovation sooner by updating with modules
  • Get partner innovation quickly through the SUSE SolidDriver Program
  • Reduce time to value through SUSE certifications for hardware and applications

Welcome Docker to SUSE Linux Enterprise Server

Lightweight virtualization is a hot topic these days. Also called “operating system-level virtualization,” it allows you to run multiple applications or systems on one host without a hypervisor. The advantages are obvious: the hypervisor, the layer between the host hardware and the operating system and its applications, is eliminated, allowing much more efficient use of resources. That, in turn, reduces the virtualization overhead while still allowing for separation and isolation of multiple tasks on one host. As a result, lightweight virtualization is very appealing in environments where resource use is critical, such as the server hosting or outsourcing business.

One specific example of operating system-level virtualization is Linux Containers, sometimes called “LXC” for short. We introduced Linux Containers to SUSE customers and users in February 2012 as part of SUSE Linux Enterprise Server 11 SP2. Linux Containers employ techniques such as control groups (cgroups) for resource isolation, controlling CPU, memory, network and block I/O, and namespaces to isolate the process view of the operating system, including users, processes and file systems. This provides advantages similar to those of “regular” virtualization technologies such as KVM or Xen, but with much smaller I/O overhead, storage savings and the ability to apply dynamic parameter changes without rebooting the system. The Linux Containers infrastructure is supported in SUSE Linux Enterprise 11 and will remain supported in SUSE Linux Enterprise 12.
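The namespace isolation described above is easy to observe on any modern Linux system: each process's namespaces are exposed as symlinks under /proc/&lt;pid&gt;/ns. The short Python sketch below (illustrative only) lists the namespaces of the current process; a containerized process would show different namespace identifiers than processes on the host.

```python
import os

def list_namespaces(pid: str = "self") -> dict:
    """Map each namespace type (pid, net, mnt, ...) to its identifier
    for the given process, using /proc/<pid>/ns. Returns {} on systems
    without procfs (e.g. non-Linux), so the sketch degrades gracefully."""
    ns_dir = "/proc/{}/ns".format(pid)
    if not os.path.isdir(ns_dir):
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for ns_type, ns_id in list_namespaces().items():
    # On Linux each value looks like 'pid:[4026531836]'
    print("{:12s} {}".format(ns_type, ns_id))
```

Two processes sharing a namespace identifier see the same view of that resource; giving a container its own pid, net and mnt namespaces is what makes it look like a separate system.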

Full system roll-back and systemd in SUSE 

Now we are taking the next step to further enhance our virtualization strategy and introduce you to Docker. Docker is built on top of Linux Containers with the aim of providing an easy way to deploy and manage applications. It packages an application, including its dependencies, in a container, which then runs like a virtual machine. Such packaging allows for application portability between various hosts, not only across one data center, but also to the cloud. Starting with SUSE Linux Enterprise Server 12, we plan to make Docker available to our customers so they can start using it to build and run their containers.
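As a taste of that packaging model, a container image is described declaratively in a Dockerfile. The fragment below is a hypothetical example, not a SUSE-supplied recipe: it packages a small Python application together with its dependencies; on SLES 12 the base image would typically come from SUSE's own registry, but any base image works the same way.

```dockerfile
# Hypothetical example: package an application together with its
# dependencies in a portable container image. On SLES 12 you would
# typically start from a SUSE-provided base image instead.
FROM opensuse:13.2
RUN zypper --non-interactive install python3
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
```

Build the image once with `docker build -t myapp .` and run it anywhere Docker is available with `docker run myapp`; because the image carries its dependencies with it, the application moves unchanged between hosts, data centers and clouds.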

SUSE Linux Enterprise Live Patching Roadmap: Live Kernel Patching using kGraft   

This is another step in enhancing the SUSE virtualization story, building on what we have already done with Linux Containers. Leveraging the SUSE ecosystem, Docker and Linux Containers are not only a great way to build, deploy and manage applications; the idea also plugs nicely into tools like Open Build Service and Kiwi for easy and powerful image building, and into SUSE Studio, which already offers a similar concept for virtual machines. Docker easily supports rapid prototyping and a fast deployment process; combined with Open Build Service, it is a great tool for developers aiming to support various platforms with a unified tool chain. This is critical for the future, because those platforms increasingly include public, private and hybrid clouds. Combining Linux Containers, Docker, SUSE’s development and deployment infrastructures and SUSE Cloud, our OpenStack-based cloud infrastructure offering, brings flexibility in application deployment to a completely new level.

SUSE Linux Enterprise High Availability Roadmap: Secure your Data and Service from Local to Geo 

Introducing Docker follows the SUSE philosophy by offering choice in the virtualization space, allowing for flexibility, performance and simplicity for Linux in data centers and the cloud.

Securing Your System Hardening and Tweaking SUSE Linux Enterprise Server 12

More Information:

SUSE Embedded Offers a Medical Device Operating System