• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Microsoft Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

22 August 2017

Oracle Database 12c Release 2

Oracle Database 12c Release 2 (12.2) is now available everywhere.

Ask Tom Answer Team (Connor McDonald and Chris Saxon) on Oracle Database 12c Release 2 New Features



Oracle Database 12c Release 2 Architecture Diagram:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/pdf/Database%20Architecture.pdf


The latest generation of the world's most popular database, Oracle Database 12c Release 2 (12.2), is now available everywhere - in the Cloud, with Oracle Cloud at Customer, and on-premises.  This latest release provides organizations of all sizes with access to the world’s fastest, most scalable and reliable database technology in a cost-effective, hybrid Cloud environment. 12.2 also includes a series of innovations that help customers easily transform to the Cloud while preserving their investments in Oracle Database technologies, skills and resources.

Oracle RAC 12c Release 2 New Features

Database Security - Comprehensive Defense in Depth

Partner Webcast – Oracle Identity Cloud Service: Introducing Secure, On-Demand Identity Management



Oracle Database 12c provides multi-layered security including controls to evaluate risks, prevent unauthorized data disclosure, detect and report on database activities and enforce data access controls in the database with data-driven security. Oracle Database 12c Release 2 (12.2), now available in the Cloud and on-premises, introduces new capabilities such as online and offline tablespace encryption and database privilege analysis. Combined with Oracle Key Vault and Oracle Audit Vault and Database Firewall, Oracle Database 12c provides unprecedented defense-in-depth capabilities to help organizations address existing and emerging security and compliance requirements.
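To make this concrete, here is a minimal sketch (assuming a 12.2 database, the cx_Oracle Python driver, and placeholder connection details and object names) of the online tablespace encryption and privilege analysis capabilities mentioned above:

    # Minimal sketch; connection details and object names are placeholders.
    import cx_Oracle

    conn = cx_Oracle.connect("secadmin", "password", "dbhost/pdb1")
    cur = conn.cursor()

    # 12.2 can encrypt an existing tablespace online, while it stays in use.
    cur.execute("ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT")

    # Privilege analysis: record which privileges are actually exercised,
    # so that unused grants can later be identified and revoked.
    cur.execute("""
        BEGIN
          DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
            name => 'dw_capture',
            type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
          DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(name => 'dw_capture');
        END;""")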

Partner Webcast – Enabling Oracle Database High Availability and Disaster Recovery with Oracle Cloud


Database Cloud Services

Oracle Cloud provides several Oracle Cloud Service deployment choices. These choices allow you to start at the cost and capability level suitable to your use case and then give you the flexibility to adapt as your requirements change over time. Choices include: single schemas, dedicated pluggable databases, virtualized databases, bare metal databases and databases running on world class engineered infrastructure.

The Oracle Exadata Cloud Service offers the largest, most business-critical database workloads a place to run in Oracle Cloud. With all the infrastructure components in place, including hardware, networking, storage, database and virtualization, secure, highly available, high-performance database capacity can be provisioned in a few clicks. Exadata Cloud Service is engineered to support OLTP, data warehouse / real-time analytics, and mixed database workloads at any scale. With this service, you maintain control of your database while Oracle manages the hardware, storage and networking infrastructure, letting you focus on growing your business.



https://cloud.oracle.com/database

Oracle Database Exadata Cloud Machine delivers the world’s most advanced database cloud to customers who require their databases to be located on-premises. Exadata Cloud Machine uniquely combines the world’s #1 database technology and Exadata, the most powerful database platform, with the simplicity, agility and elasticity of a cloud-based deployment. It is identical to Oracle’s Exadata public cloud service, but located in customers’ own data centers and managed by Oracle Cloud Experts. Every Oracle Database and Exadata feature and option is included with the Exadata Cloud Machine subscription, ensuring the highest performance, best availability, most effective security and simplest management. Databases deployed on Exadata Cloud Machine are 100% compatible with existing on-premises databases, or databases that are deployed in Oracle’s public cloud. Exadata Cloud Machine is ideal for customers who want cloud benefits but cannot move their databases to the public cloud because of sovereignty laws, industry regulations or corporate policies, and for organizations that find it impractical to move databases away from other tightly coupled on-premises IT infrastructure.

Oracle Database 12c Release 2 Sharded Database Overview and Install (Part 1)


Oracle Sharding Part 2


Oracle Sharding Part 3


Oracle Sharding with Suresh Gandhi

Overview of Oracle’s Big Data Management System

As today's enterprises embrace big data, their information architectures are evolving. The new information architecture in the big data era embraces emerging technologies such as Hadoop, but at the same time leverages the core strengths of previous data warehouse architectures.

Partner Webcast – Oracle Ravello Cloud Service: Easy Deploying of Big Data VM on Cloud



The data warehouse, built upon Oracle Database 12c Release 2 and Exadata, will continue to be the primary analytic database for storing core transactional data: financial records, customer data, point-of-sale data and so forth (see Key Data Warehousing and Big Data Capabilities for more information).

However, the data warehouse will be augmented by a big-data system (built upon Oracle Big Data Appliance), which functions as a ‘data reservoir’. This will be the repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as a repository for more granular transactional data or older transactional data which is not stored in the data warehouse.

Data flows between the big data system and the data warehouse to create a unified foundation: the Oracle Big Data Management System.

The transition from the Enterprise Data Warehouse centric architecture to the Big Data Management System - whether on-premises, in the Cloud, or in hybrid Cloud systems - is going to revolutionize any company's information management architecture. Oracle's Statement of Direction outlines Oracle's vision for delivering innovative new technologies for building the information architecture of tomorrow.

Partner Webcast – Docker Agility in Cloud: Introducing Oracle Container Cloud Service





Big data is in many ways an evolution of data warehousing. To be sure, there are new technologies used for big data, such as Hadoop and NoSQL databases. And the business benefits of big data are potentially revolutionary. However, at its essence, big data requires an architecture that acquires data from multiple data sources, organizes and stores that data in a suitable format for analysis, enables users to efficiently analyze the data and ultimately helps to drive business decisions. These are the exact same principles that IT organizations have been following for data warehouses for years.




The new information architecture that enterprises will pursue in the big data era is an extension of their previous data warehouse architectures. The data warehouse, built upon a relational database, will continue to be the primary analytic database for storing much of a company’s core transactional data, such as financial records, customer data, and sales transactions. The data warehouse will be augmented by a big-data system, which functions as a ‘data lake’. This will be the repository for the new sources of large volumes of data: machine-generated log files, social-media data, and videos and images -- as well as the repository for more granular transactional data or older transactional data which is not stored in the data warehouse. Even though the new information architecture consists of multiple physical data stores (relational, Hadoop, and NoSQL), the logical architecture is a single integrated data platform, spanning the relational data warehouse and the Hadoop-based data lake.

Technologies such as Oracle Big Data SQL make this distributed architecture a reality; Big Data SQL provides data virtualization capabilities, so that SQL can be used to access any data, whether in relational databases or Hadoop or NoSQL. This virtualized SQL layer also enables many other languages and environments, built on top of SQL, to seamlessly access data across the entire big data platform.
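As a hedged illustration of the idea, the sketch below (Python with cx_Oracle) defines a hypothetical external table over a Hive store and then joins it to a relational table in one SQL statement; the names, directory, and access parameters are illustrative rather than taken from a real cluster:

    # Hedged sketch; table names, directory, and access parameters are illustrative.
    import cx_Oracle

    conn = cx_Oracle.connect("dw_user", "password", "dbhost/dw")
    cur = conn.cursor()

    # An external table whose rows actually live in Hadoop (a Hive table).
    cur.execute("""
        CREATE TABLE web_logs (
          log_time  TIMESTAMP,
          user_id   NUMBER,
          url       VARCHAR2(4000)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_HIVE
          DEFAULT DIRECTORY default_dir
          ACCESS PARAMETERS (com.oracle.bigdata.tablename=logs.web_logs)
        )
        REJECT LIMIT UNLIMITED""")

    # One SQL statement now spans the warehouse and the data reservoir.
    cur.execute("""
        SELECT c.cust_name, COUNT(*)
        FROM   customers c JOIN web_logs w ON w.user_id = c.cust_id
        GROUP BY c.cust_name""")
    print(cur.fetchall())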

Oracle Database 12c Release 2 and Oracle Exadata: A Data Warehouse as a Foundation for Big Data

Even as new big data architectures emerge and mature, business users will continue to analyze data by directly leveraging and accessing data warehouses. The rest of this paper describes how Oracle Database 12c Release 2 provides a comprehensive platform for data warehousing that combines industry-leading scalability and performance, deeply-integrated analytics, and advanced workload management – all in a single platform running on an optimized hardware configuration.


Hot cloning and refreshing PDBs in Oracle 12cR2

Exadata

The bedrock of a solid data warehouse solution is a scalable, high-performance hardware infrastructure. One of the long-standing challenges for data warehouses has been to deliver the IO bandwidth necessary for large-scale queries, especially as data volumes and user workloads have continued to increase. While the Oracle Exadata Database Machine is designed to provide the optimal database environment for every enterprise database, the Exadata architecture also provides a uniquely optimized storage solution for data warehousing that delivers order-of-magnitude performance gains for large-scale data warehouse queries and very efficient data storage via compression for large data volumes. A few of the key features of Exadata that are particularly valuable to data warehousing are:

  • » Exadata Smart Scans. With traditional storage, all database intelligence resides on the database servers. However, Exadata has database intelligence built into the storage servers. This allows database operations, and specifically SQL processing, to leverage the CPUs in both the storage servers and database servers to vastly improve performance. The key feature is “Smart Scans”, the technology of offloading some of the data-intensive SQL processing into the Exadata Storage Server: specifically, row filtering (the evaluation of where-clause predicates) and column filtering (the evaluation of the select-list) are executed on the Exadata Storage Server, and a much smaller set of filtered data is returned to the database servers. Smart Scans can improve the performance of large queries by an order of magnitude and, in conjunction with the vastly superior IO bandwidth of Exadata’s architecture, deliver industry-leading performance for large-scale queries.
  • » Exadata Storage Indexes. Completely automatic and transparent, Exadata Storage Indexes maintain each column’s minimum and maximum values of tables residing in the storage server. With this information, Exadata can easily filter out unnecessary data to accelerate query performance.
  • » Hybrid Columnar Compression. Data can be compressed within the Exadata Storage Server into a highly efficient columnar format that provides up to a 10-times compression ratio, without any loss of query performance. And, for pure historical data, a new archival level of Hybrid Columnar Compression can be used that provides compression ratios of up to 40 times (a short usage sketch follows this list).
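A short, hedged usage sketch of the two Hybrid Columnar Compression levels just described, with illustrative table names and placeholder credentials (Python with cx_Oracle):

    # Hedged sketch; table names and credentials are illustrative.
    import cx_Oracle

    conn = cx_Oracle.connect("dw_user", "password", "exadata-db/dw")
    cur = conn.cursor()

    # Warehouse compression, optimized for queries (the ~10x figure above).
    cur.execute("CREATE TABLE sales_hcc COMPRESS FOR QUERY HIGH AS SELECT * FROM sales")

    # Archive compression for cold historical data (the ~40x figure above).
    cur.execute("ALTER TABLE sales_2010 MOVE COMPRESS FOR ARCHIVE HIGH")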

Partner Webcast - Oracle Cloud Machine Technical Overview (Part 1)



Partner Webcast - Oracle Cloud Machine Technical Overview (Part 2)


Oracle Database In-Memory

While Exadata tackles one major requirement for high-performance data warehousing (high-bandwidth IO), Oracle Database In-Memory tackles another requirement: interactive, real-time queries. Reading data from memory can be orders of magnitude faster than reading from disk, but that is only part of the performance benefits of In-Memory: Oracle additionally increases in-memory query performance through innovative memory-optimized performance techniques such as vector processing and an optimized in-memory aggregation algorithm. Key features include:

  • » In-Memory (IM) Column Store. Data is stored in a compressed columnar format when using Oracle Database In-Memory. A columnar format is ideal for analytics, as it allows for faster data retrieval when only a few columns are selected from a table. Columnar data is very amenable to efficient compression; in-memory data is typically compressed 2-20x, which enables larger volumes of raw data to be stored in the in-memory column store.
  • » SIMD Vector Processing. When scanning data stored in the IM column store, Database In-Memory uses SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. In this way, SIMD vector processing enables Oracle Database In-Memory to scan and filter billions of rows per second.
  • » In-Memory Aggregation. Analytic queries require more than just simple filters and joins. They require complex aggregations and summaries. Oracle Database In-Memory provides an aggregation algorithm specifically optimized for the join-and-aggregate operations found in typical star queries. This algorithm allows dimension tables to be joined to the fact table, and the resulting data set aggregated, all in a single in-memory pass of the fact table.

Oracle Database In-Memory is useful for every data-warehousing environment. Oracle Database In-Memory is entirely transparent to applications and tools, so that it is simple to implement. Unlike a pure in-memory database, not all of the objects in an Oracle database need to be populated in the IM column store. The IM column store should be populated with the most performance-critical data, while less performance-critical data can reside on lower cost flash or disk. Thus, even the largest data warehouse can see considerable performance benefits from In-Memory.
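A minimal sketch of that selective population, assuming the IM column store has been sized via INMEMORY_SIZE and using illustrative table names (Python with cx_Oracle):

    # Hedged sketch; table names are illustrative and INMEMORY_SIZE is assumed set.
    import cx_Oracle

    conn = cx_Oracle.connect("dw_user", "password", "dbhost/dw")
    cur = conn.cursor()

    # Populate the hot fact table with query-optimized in-memory compression.
    cur.execute("ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH")

    # Keep a cold table out of the column store; it stays on flash or disk.
    cur.execute("ALTER TABLE sales_archive NO INMEMORY")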

Query Performance

Oracle provides performance optimizations for every type of data warehouse environment. Data warehouse workloads are often complex, with different users running vastly different operations, with similarly different expectations and requirements for query performance. Exadata and In-Memory address many performance challenges, but many other fundamental performance capabilities are necessary for enterprise-wide data warehouse performance.
Oracle meets the demands of data warehouse performance by providing a broad set of optimization techniques for every type of query and workload:

  • » Advanced indexing and aggregation techniques for sub-second response times for reporting and dashboard queries. Oracle’s bitmap and b-tree indexes and materialized views provide developers and DBAs with tools to make pre-defined reports and dashboards execute with fast performance and minimal resource requirements.
  • » Star query optimizations for dimensional queries. Most business intelligence tools have been optimized for star-schema data models. The Oracle Database is highly optimized for these environments; Oracle Database In-Memory provides fast star-query performance by leveraging its in-memory aggregation capabilities. For other database environments, Oracle’s “star transformation” leverages bitmap indexes on the fact table to efficiently join multiple dimension tables in a single processing step. Meanwhile, Oracle OLAP is a complete multidimensional analytic engine embedded in the Oracle Database, storing data within multidimensional cubes inside the database, accessible via SQL. The OLAP environment provides very fast access to aggregate data in a dimensional environment, in addition to sophisticated calculation capabilities (the latter is discussed in a subsequent section of this paper).
  • » Scalable parallelized processing. Parallel execution is one of the fundamental database technologies that enable users to query data volumes of any size. It is the ability to apply multiple CPU and IO resources to the execution of a single database operation. Oracle’s parallel architecture allows any query to be parallelized, and Oracle dynamically chooses the optimal degree of parallelism for every query based on the characteristics of the query, the current workload on the system and the priority of the requesting user.
  • » Partition pruning and partition-wise joins. Partition pruning is perhaps one of the simplest query-optimization techniques, but also one of the most beneficial. Partition pruning enables a query to access only the necessary partitions, rather than an entire table – frequently, partition pruning alone can speed up a query by two orders of magnitude. Partition-wise joins provide similar performance benefits when joining tables that are partitioned by the same key. Together these partitioning optimizations are fundamental for accelerating performance for queries on very large database objects (see the sketch after this list).
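The following sketch, with illustrative table and column names, shows the partition pruning idea: a range-partitioned fact table and a query whose date predicate lets the optimizer touch a single monthly partition instead of the whole table:

    # Hedged sketch; table and column names are illustrative.
    import cx_Oracle

    conn = cx_Oracle.connect("dw_user", "password", "dbhost/dw")
    cur = conn.cursor()

    # A fact table range-partitioned by month (interval partitioning).
    cur.execute("""
        CREATE TABLE sales (
          sale_date DATE,
          amount    NUMBER
        )
        PARTITION BY RANGE (sale_date)
        INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
        (PARTITION p0 VALUES LESS THAN (DATE '2017-01-01'))""")

    # The date predicate maps to one monthly partition, so only that
    # partition's blocks are scanned: partition pruning in action.
    cur.execute("""
        SELECT SUM(amount) FROM sales
        WHERE sale_date >= DATE '2017-06-01'
          AND sale_date <  DATE '2017-07-01'""")
    print(cur.fetchone())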

Oracle Database 12c Release 2 Rapid Home Provisioning and Maintenance


The query performance techniques described here operate in a concerted fashion, and provide multiplicative performance gains. For example, a single query may be improved by 10x performance via partition-pruning, by 5x via parallelism, by 20x via star query optimization, and by 10x via Exadata smart scans – a net improvement of 10,000x compared to a naïve SQL engine.
Orchestrating the query capabilities of the Oracle database are several foundational technologies. Every query running in a data warehouse benefits from:

  • » A query optimizer that determines the best strategy for executing each query, from among all of the execution techniques available to Oracle. Oracle’s query optimizer provides advanced query-transformation capabilities, and, in Oracle Database 12c, the query optimizer adds Adaptive Query Optimization, which enables the optimizer to make run-time adjustments to execution plans.
  • » A sophisticated resource manager for ensuring performance even in databases with complex, heterogeneous workloads. The Database Resource Manager allows end-users to be grouped into ‘resource consumer groups’, and for each group, the database administrator can set policies to govern the amount of CPU and IO resources that can be utilized, as well as specify policies for proactive query governing and for query queuing. With the Database Resource Manager, Oracle provides the capabilities to ensure that the data warehouse can address the requirements of multiple concurrent workloads, so that a single data warehouse platform can, for example, simultaneously service hundreds of online business analysts doing ad hoc analysis in a business intelligence tool, thousands of business users viewing a dashboard, and dozens of data scientists doing deep data exploration (a short sketch follows this list).
  • » Management Packs to automate the ongoing performance tuning of a data warehouse. Based upon the ongoing performance and query workload, management packs provide recommendations for all aspects of performance, including indexes and partitioning.
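As a hedged sketch of the Database Resource Manager flow described above, the following creates a consumer group for ad hoc analysts and caps its CPU share; all names and percentages are illustrative, and a real plan would carry more directives:

    # Hedged sketch; group, plan, and percentage values are illustrative.
    import cx_Oracle

    conn = cx_Oracle.connect("sysadmin", "password", "dbhost/dw")
    cur = conn.cursor()
    cur.execute("""
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
          DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
            consumer_group => 'ADHOC_ANALYSTS',
            comment        => 'Ad hoc BI users');
          DBMS_RESOURCE_MANAGER.CREATE_PLAN(
            plan    => 'DW_PLAN',
            comment => 'Daytime warehouse plan');
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'DW_PLAN',
            group_or_subplan => 'ADHOC_ANALYSTS',
            comment          => 'Cap ad hoc analysis at 25% CPU',
            mgmt_p1          => 25);
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'DW_PLAN',
            group_or_subplan => 'OTHER_GROUPS',
            comment          => 'Everything else',
            mgmt_p1          => 75);
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
        END;""")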

More Information:

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/poster.html

https://docs.oracle.com/en/database/

https://docs.oracle.com/database/122/ADMIN/title.htm

http://docs.oracle.com/database/121/CNCPT/cdbovrvw.htm#CNCPT89234

https://docs.oracle.com/database/122/whatsnew.htm

https://docs.oracle.com/database/122/NEWFT/title.htm

https://docs.oracle.com/database/122/NEWFT/toc.htm

https://docs.oracle.com/database/122/INMEM/title.htm

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-windows-3633015.html

https://docs.oracle.com/database/122/LADBN/toc.htm#LADBN-GUID-2404CE5F-6894-4B26-9213-8A47DC262109

http://www.oracle.com/us/corporate/analystreports/ovum-cloud-first-strategy-oracle-db-3520721.pdf

The NEW Oracle Database Appliance Portfolio   https://go.oracle.com/LP=55375?elqCampaignId=52477&src1=ad:pas:go:dg:oda&src2=wwmk160603p00096c0015&SC=sckw=WWMK160603P00096C0015&mkwid=sFw6OzrF5%7Cpcrid%7C215765003921%7Cpkw%7Coracle%20database%7Cpmt%7Cp%7Cpdv%7Cc%7Csckw=srch:oracle%20database

https://broadcast.oracle.com/odatouchcastEN

Oracle Database 12c Release 2 - Get Started with Oracle Database   https://docs.oracle.com/database/122/index.htm

http://www.oracle.com/technetwork/database/security/overview/index.html

http://www.oracle.com/technetwork/database/bi-datawarehousing/data-warehousing-wp-12c-1896097.pdf

http://www.oracle.com/technetwork/database/upgrade/overview/upgrading-oracle-database-wp-122-3403093.pdf

http://www.oracle.com/technetwork/database/upgrade/overview/index.html

25 July 2017

DB2 12 for z/OS – The #1 Enterprise Database

Some IBM DB2 12 highlights:


  • Improved business insight: highly concurrent queries run up to 100x faster.
  • Faster mobile support: 6 million transactions per minute via RESTful API.
  • Enterprise scalability, reliability and availability for IoT apps: 11.7 million inserts per second, 256 trillion rows per table.
  • Reduced cost: 23 percent lower CPU cost through advanced in-memory techniques.

DB2 12 Overview Nov UG 2016 Final




Links for the above video:
https://www-01.ibm.com/support/docview.wss?uid=swg27047206#db2z12new
http://www.mdug.org/Presentations/DB2%2012%20Overview%20Nov%20UG%202016%20FINAL.pdf


Strategy and Directions for the IBM® Mainframe


Machine Learning for z/OS



Temporal Tables, Transparent Archiving in DB2 for z/OS and IDAA

IBM Z Software z14 Announcement




Throughout the development of the all-new IBM z14, we have worked closely with dozens of clients around the world to understand what they need to accelerate their digital transformation, securely. What we learned was that data security is foundational to everything they do, that they are striving to leverage their data to gain a competitive edge, and that ultimately everybody is trying to move faster to compete at the speed of business.

Data is the New Security Perimeter with Pervasive Encryption



Job #1 is protecting their confidential data, and that of their clients, from both internal and external threats. The z14 introduces pervasive encryption as the new standard, with 100% of data encrypted at rest and in motion, and is uniquely able to bulk encrypt 100% of the data in IBM Information Management System (IMS), IBM DB2 for z/OS and Virtual Storage Access Method (VSAM) with no changes to applications and no impact on SLAs.
IBM MQ for z/OS already encrypts messages from end-to-end with its Advanced Message Security feature. On the new z14, MQ can scale to greater heights with the 7X boost in on-chip encryption performance compared to z13.

Additionally, with secure services containers, z14 can prevent data breaches by rogue administrators by restricting root access via graphical user interfaces. This is one of the many differentiating security features provided with IBM’s Blockchain High Security Business Network delivered in the IBM Cloud.

DB2 12 Technical Overview PART 1


DB2 12 Technical Overview PART 2



Ever-evolving Intelligence with Machine Learning

Data is the world’s next great natural resource. Our clients are looking to gain a competitive edge with the vast amounts of data they have and turn insights into actions in real time when it matters.  IBM Machine Learning for z/OS can decrease the time businesses take to continuously build, train, and deploy intelligent behavioral models by keeping the data on IBM Z where it is secure.  They can also take advantage of IBM DB2 Analytics Accelerator for z/OS’s new Zero Latency technology, which uses a just-in-time protocol for data coherency for analytic requests to train and retrain their models on the fly.

IBM Z provides the agility to continuously deliver new function via microservices, APIs or more traditional applications.
Innovate with Microservices and leverage open source.

Microservices can be built on z14 with Node.js, Java, Go, Swift, Python, Scala, Groovy, Kotlin, Ruby, COBOL, PL/I, and more.  They can be deployed in Docker containers where a single z14 can scale out to 2 million Docker containers.  These services can run up to 5X faster when co-located with the data they need on IBM Z.  The data could be existing data on DB2 or IMS or it could be using open source technologies such as MariaDB, Cassandra, or MongoDB.  On z14, a single instance of MongoDB can hold 17 TB of data without sharding!

What's new from the optimizer in DB2 12 for z/OS?



Another DB2 LUW Version 11.1 highlight is the capability to deploy DB2 pureScale on IBM Power Systems with little endian Linux operating systems. This approach works with both vanilla Transmission Control Protocol/Internet Protocol (TCP/IP)—aka sockets—as well as higher-speed Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) networks. And, expectedly, it provides all of DB2 pureScale’s availability advantages, including online member recovery and rolling updates, and DB2 pureScale’s very strong scalability attributes, as demonstrated by lab throughput-scaling results for an example OLTP workload running both TCP/IP and RoCE.



What's new in the DB2 12 base release

DB2® 12 for z/OS® takes DB2 to a new level, both extending the core capabilities and empowering the future. DB2 12 extends the core with new enhancements to scalability, reliability, efficiency, security, and availability. DB2 12 also empowers the next wave of applications in the cloud, mobile, and analytics spaces.
This information might sometimes also refer to DB2 12 for z/OS as "DB2" or "Version 12."

DB2 12 for z/OS - Catch the wave early and stay ahead!


Continuous delivery and DB2 12 function levels

DB2 12 introduces continuous delivery of new capabilities and enhancements in a single service stream as soon as they are ready. The result is that you can benefit from new capabilities and enhancements without waiting for an entire new release. Function levels enable you to control the timing of the activation and adoption of new features, with the option to continue to apply corrective and preventative service without adopting new feature function.
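A small, hedged sketch of what this looks like from a client session, using the ibm_db Python driver with placeholder connection values; the catalog query assumes the DB2 12 SYSIBM.SYSLEVELUPDATES table, which records function-level activation history:

    # Hedged sketch; connection values are placeholders.
    import ibm_db

    conn = ibm_db.connect(
        "DATABASE=DB2ZLOC;HOSTNAME=zhost;PORT=446;PROTOCOL=TCPIP;"
        "UID=dbauser;PWD=secret;", "", "")

    # Function-level activation history for this subsystem.
    stmt = ibm_db.exec_immediate(conn, "SELECT * FROM SYSIBM.SYSLEVELUPDATES")
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row)
        row = ibm_db.fetch_assoc(stmt)

    # Dynamic SQL can opt in to new-function syntax once the level is active.
    ibm_db.exec_immediate(conn, "SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M500'")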

New capabilities and enhancements in the DB2 12 base release
Most new capabilities in DB2 12 are introduced in DB2 12 function levels. However, some become available immediately in the base DB2 12 release, or when you apply maintenance.

Highlighted new capabilities in the DB2 12 base release
After the initial release, most new capabilities in DB2® 12 are introduced in DB2 12 function levels. However, some new capabilities become available immediately in the base DB2 12 release.

For information about new capabilities and enhancements in DB2 12 function levels, see What's new in DB2 12 function levels. The following sections describe new capabilities and enhancements introduced in the DB2 base (function levels 100 or 500) after general availability of DB2 12.

DevOps with DB2: Automated deployment of applications with IBM UrbanCode Deploy:
With UrbanCode Deploy, you can easily automate the deployment and configuration of database schema changes in DB2 11 and DB2 12. The automation reduces the time, costs, and complexity of deploying and configuring your business-critical apps, getting you to business value faster and more efficiently.

Modern language support for DB2 for z/OS application development:
DB2 11 and DB2 12 now support application development in many modern programming and scripting languages. Application developers can use languages like Python, Perl, and Ruby on Rails to write DB2 for z/OS applications. Getting business value from your mainframe applications is now more accessible than ever before.

DB2 REST services improve efficiency and security:
The DB2 REST service provider, available in DB2 11 and DB2 12, unleashes your enterprise data and applications on DB2 for z/OS for the API economy. Mobile and cloud app developers can efficiently create consumable, scalable, and RESTful services. Mobile and cloud app developers can consume these services to securely interact with business-critical data and transactions, without special DB2 for z/OS expertise.
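As a hedged sketch, a mobile or cloud app might invoke such a service as a plain HTTP POST; the host, port, service path, payload, and credentials below are hypothetical, not an actual DB2 endpoint:

    # Hedged sketch; URL, service name, payload, and credentials are hypothetical.
    import requests

    resp = requests.post(
        "https://zhost:4711/services/CUSTOMERS/getCustomer",  # hypothetical endpoint
        json={"CUSTNO": 1042},        # input host variables of the bound SQL
        auth=("dbauser", "secret"))   # placeholder credentials
    resp.raise_for_status()
    print(resp.json())                # result rows come back as JSON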

Overview of DB2 12 new function availability

The availability of new function depends on the type of enhancement, the activated function level, and the application compatibility levels of applications. In the initial DB2 12 release, most new capabilities are enabled only after the activation of function level 500 or higher.

Virtual storage enhancements
Virtual storage enhancements become available at the activation of the function level that introduces them or higher. Activation of function level 100 introduces all virtual storage enhancements in the initial DB2 12 release. That is, activation of function level 500 introduces no virtual storage enhancements.

Temporal Tables, Transparent Archiving in DB2 for z/OS and IDAA


Subsystem parameters
New subsystem parameter settings are in effect only when the function level that introduced them or a higher function level is activated. All subsystem parameter changes in the initial DB2 12 release take effect in function level 500. For a list of these changes, see Subsystem parameter changes in the DB2 12 base release.

Optimization enhancements
Optimization enhancements become available after the activation of the function level that introduces them or higher, and full prepare of the SQL statements. When a full prepare occurs depends on the statement type:

  • For static SQL statements, after bind or rebind of the package
  • For non-stabilized dynamic SQL statements, immediately, unless the statement is in the dynamic statement cache
  • For stabilized dynamic SQL statements, after invalidation, free, or changed application compatibility level

Activation of function level 100 introduces all optimization enhancements in the initial DB2 12 release. That is, function level 500 introduces no optimization enhancements.

SQL capabilities
New SQL capabilities become available after the activation of the function level that introduces them or higher, for applications that run at the equivalent application compatibility level or higher. New SQL capabilities in the initial DB2 12 release become available in function level 500 for applications that run at the equivalent application compatibility level or higher. You can continue to run SQL statements compatibly with lower function levels, or previous DB2 releases, including DB2 11 and DB2 10.

The demands of the mobile economy and the greater need for faster business insights, combined with the explosive growth of data, present unique opportunities and challenges for companies wanting to take advantage of their mission-critical resources. Built on the proven, trusted availability, security, and scalability of DB2 11 for z/OS and the z Systems platform, the gold standard in the industry, DB2 12 gives you the capabilities needed to securely meet the business demands of mobile workloads and increased mission-critical data. It delivers world-class analytics and OLTP performance in real time.

DB2 for z/OS delivers innovations in these key areas:

Scalable, low-cost, enterprise OLTP and analytics

DB2 12 continues to improve upon the value offered with DB2 11, with further CPU savings and performance improvements through more memory optimization. Compared to DB2 11, DB2 12 clients can achieve up to 10% CPU savings for various traditional OLTP workloads; heavy concurrent INSERT workloads may see higher benefits, with up to 30% CPU savings, and select query workloads utilizing UNION ALL, large sorts, and selective user-defined functions (UDFs) can benefit even more.

DB2 12 provides more cost reduction with more zIIP eligibility for the DB2 REORG and LOAD utilities.

DB2 12 provides deep integration with the IBM z13, offering the following benefits:

  • More efficient use of compression
  • Support for compression of LOB data (also available with the IBM zEnterprise EC12)
  • Faster XML parsing through the use of SIMD technology
Enhancements to compression aid DB2 utility processing by reducing elapsed time and CPU consumption, with the potential to improve data and application availability. Hardware exploitation to support compression of LOB data can significantly reduce storage requirements and improve overall efficiency of LOB processing.

DB2 12 includes the new SQL TRANSFER OWNERSHIP statement, enabling better security and control of objects that contain sensitive data. In addition, DB2 12 enables system administrators to migrate and install DB2 systems while preventing access to user data.
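A hedged sketch of the new statement, issued here through the ibm_db Python driver; the object, role, and connection values are illustrative:

    # Hedged sketch; object, role, and connection values are illustrative.
    import ibm_db

    conn = ibm_db.connect(
        "DATABASE=DB2ZLOC;HOSTNAME=zhost;PORT=446;PROTOCOL=TCPIP;"
        "UID=sysadm;PWD=secret;", "", "")

    # Move ownership of a sensitive table to a role and strip the previous
    # owner's implicitly held privileges in one statement.
    ibm_db.exec_immediate(conn, """
        TRANSFER OWNERSHIP OF TABLE payroll.emp
        TO ROLE payroll_admin
        REVOKE PRIVILEGES""")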

The real-world proven, system-wide resiliency, availability, scalability, and security capabilities of DB2 and z Systems continue to be the industry standard, keeping your business running when other solutions may not. This is especially important as enterprises support dynamic mobile workloads and the explosion of data in their enterprises. DB2 12 continues to excel and extend the unique value of z Systems, while empowering the next wave of applications.

Easy access, easy scale, and easy application development for the mobile enterprise:

In-memory performance improvements

As enterprises manage the emergence of the next generation of mobile applications and the proliferation of the IoT, database management system (DBMS) performance can become a critical success factor. To that end, DB2 12 contains many features that exploit in-memory techniques to deliver world-class performance, including:

  • In-memory fast index traverse
  • Contiguous and larger buffer pools
  • Use of in-memory pipes for improved insert performance
  • Increased sort and hash in-memory to improve sort and join performance
  • Caching the result of UDFs
  • In-memory optimization in Declare Global Temporary Table (DGTT) to improve declare performance
  • In-memory optimization in the Resource Limit Facility to improve RLF checking

DB2 12 offers features to facilitate the successful deployment of new analytics and mobile workloads. Workloads connecting through the cloud or from a mobile device may not have the same performance considerations as enterprise workloads. To that end, DB2 12 has many features to help ensure that new application deployments are successful. Sort-intensive workloads and workloads that use outer joins, UNION ALL, and CASE expressions can experience improved performance and increased CPU parallelism offload to zIIP.

Easy access to your enterprise systems of record

DB2 12 is used to connect RESTful web, mobile, and cloud applications to DB2 for z/OS, providing an environment for service, management, discovery, and invocation. This feature works with IBM z/OS Connect Enterprise Edition (z/OS Connect EE, 5655-CEE) and other RESTful providers to provide a RESTful solution for REST API definition and deployment.

The IBM Data Studio product, which can be used as the front-end tooling to create, deploy, or remove DB2 for z/OS services, is supported. Alternatively, new RESTful management services and BIND support are provided to manage services created in DB2 for z/OS. This capability was first made available in the DB2 Adapter for z/OS Connect feature of the DB2 Accessories Suite for z/OS, V3.3 (5697-Q04) product, working with both DB2 10 for z/OS and DB2 11 for z/OS.

Overview of DB2 features

DB2 12 for z/OS consists of the base DB2 product with a set of optional separately orderable features. Select features QMF Enterprise Edition V12 and QMF Classic Edition V12 are also made available as part of DB2 11 for z/OS (5615-DB2). Some of these features are available at no additional charge and others are chargeable:

Chargeable features for QMF V12 (features of DB2 12 for z/OS and DB2 11 for z/OS)

QMF Enterprise Edition provides a complete business analytics solution for enterprise-wide business information across end-user and database platforms. QMF Enterprise Edition consists of the following capabilities:

  • QMF for TSO and CICS
  • QMF Enhanced Editor (new)
  • QMF Analytics for TSO
  • QMF High Performance Option (HPO)
  • QMF for Workstation
  • QMF for WebSphere
  • QMF Data Service, including QMF Data Service Studio (new)
  • QMF Vision (new)

New enhancements for each capability are as follows:

QMF for TSO and CICS has significant improvements for the QMF for TSO/CICS client.

The QMF process of saving database tables, traditionally accomplished through the QMF SAVE DATA command, has been enhanced. QMF SAVE DATA intermediate results can now be saved to IBM DB2 Analytics Accelerator for z/OS 'Accelerator-only tables'. The ability to save intermediate results in Accelerator-only tables is also available for the command IMPORT TABLE and the new QMF RUN QUERY command with the TABLE keyword. This exploitation of the Accelerator may result in benefits such as improved performance, reduced batch window allocation for QMF applications, and reduced storage requirements.
By using the new TABLE keyword on the RUN QUERY command, you can now save data, using the SAVE DATA command, without needing to return and complete a data object. The RUN QUERY command with the TABLE keyword operates completely within the database to both retrieve data and insert rows without returning a report to the user.
Usability of the TSO client is improved by the enhanced editor feature (see the QMF Enhanced Editor section for more detail).
Both the TSO and CICS clients now have the ability to organize queries, procedures, forms, and analytics into groups called folders, aiding in productivity and usability. QMF commands such as LIST, SAVE, ERASE, and RENAME have been updated to work with folders.
QMF TSO and CICS clients now have additional report preview options. After proper setting of the DSQDC_DISPLAY_RPT global variable, users will be able to enter a report mini-session, where queries can be run to view potential output without actually committing the results. The report mini-session can be useful for running and testing SELECT with change type queries. Upon exiting the report mini-session, the user will be prompted to COMMIT or ROLLBACK the query.
With Version 12, QMF's TSO and CICS clients deliver significant performance and storage improvements.
Using the new QMF program parameter option DSQSMTHD, users can make use of a second database thread. The second thread is used for RUN QUERY and DISPLAY TABLE command processing. Usage of a second database thread can assist with performance issues on SAVE operations with an incomplete report outstanding. Additionally, usage of the second thread can reduce storage requirements for SAVE DATA commands on large report objects, because rows need not reside in storage but can be retrieved from the database and inserted into the new table as needed.
Using the DSQEC_BUFFER_SIZE global variable, the QMF internal storage area used to fetch database row data can be increased. By changing the default from 4 kilobytes to a value up to 256 kilobytes, QMF can increase the amount of data fetched in a single call to the database. Fewer calls to the database reduce the amount of time it takes to complete the report, which can result in significant performance improvements.
QMF's TSO and CICS clients now integrate with QMF Data Service, enabling users of this interface to access a broader range of data sources. The support enables access to z/OS and non-z/OS data sources, including relational and nonrelational data sources (see the QMF Data Service section for a description of accessible data types). This capability is available only through QMF Enterprise Edition.
QMF Enhanced Editor (new) provides usability improvements to the TSO client by bringing customizable highlighting and formatting for SQL syntax, reserved words, functions, and data types, and parenthesis checking. The new query assist feature provides table name suggestions, column name and data type information, and suggested column value information, plus a preview pane.

QMF Analytics for TSO has been enhanced as follows:

  • Three new statistics models have been added: Wilcoxon Signed-Rank Test, Mann-Whitney U Test, and the F-Test model.
  • A user-defined mapping capability has been added. OpenGIS WKT map definitions are available in either DB2 tables or exported data sets, which can be read to format user-specific maps.
  • Maps for Africa, North America, South America, and Germany have been added to the existing library of predefined maps.
  • The ability to choose columns for use in analytical analyses has been improved with enhanced data type targeting and information.
  • Mouse (graphics cursor) support is added for quicker interaction with the QMF Analytics for TSO functionality.
  • Saving analytics has been updated to display a list of existing analytics objects.
QMF for Workstation and QMF for WebSphere add additional support for DB2 Analytics Accelerator and enable QMF objects to be used as virtual tables in QMF Data Service.

Administrators now have the ability to specify whether the DB2 Analytics Accelerator should be used by QMF users when available (by database and query) through new resource limit options on the data source or object.
QMF Workstation and QMF for WebSphere can now write data to the DB2 Analytics Accelerator. Data can be saved as Accelerator-only tables or Accelerator-shadow tables. Queries could then be created against this data, enabling them to take advantage of the DB2 Analytics Accelerator.
QMF will detect DB2 Analytics Accelerator appliances and display these appliances under the data source. Users can also see tables that exist on the DB2 Analytics Accelerator and even add additional tables to the DB2 Analytics Accelerator by dragging and dropping tables into the appliance folder.
QMF-prepped data will be made accessible as virtual tables or stored procedures to external applications through data service connectors such as:
  • Mainframe Data Service for Apache Spark on z/OS
  • Rocket DV
  • Rocket Mainframe Data Service on IBM Bluemix
  • IBM DB2 Analytics Accelerator Loader
QMF Data Service enables DB2 QMF to access numerous data sources and largely eliminates the need to move data in order to perform your analytics. It enables you to obtain real-time analytics insights using a high-performance in-memory mainframe solution.

The need for real-time information requires a high-performance data architecture that can handle the extreme volumes and unique requirements of mainframe data and that is transparent to the business user. DB2 QMF's new data service includes several query optimization features, such as parallel I/O and MapReduce. Multiple parallel threads handle input requests, continually streaming and buffering data to the client. The mainframe MapReduce technology greatly reduces the elapsed time of the query by accessing the database with multiple threads that read the file in parallel.

Data definitions and schema information are extracted from a variety of places to create virtual tables. All of the implementation details are hidden to the user, presented instead as a single logical data source. The logical data source is easily administered through the new Eclipse-based QMF Data Service Studio. With QMF Data Service Studio, DB2 QMF now supports a broader range of data sources, including:

  • Mainframe: relational/nonrelational databases and file structures: ADABAS, DB2, VSAM, and Physical Sequential; CICS and IMS
  • Distributed: databases running on Linux, UNIX, and Microsoft Windows platforms: DB2, Oracle, Informix, Derby, and SQL Server
  • Cloud and big data: cloud-based relational and nonrelational data, and support for Hadoop
Data prepared in QMF will be made accessible as virtual tables to external applications through Data Service connectors such as:

  • Mainframe Data Service for Apache Spark on z/OS
  • Rocket DV
  • Rocket Mainframe Data Service on IBM Bluemix
  • DB2 Analytics Accelerator Loader
QMF Vision (new) is a web client visualization interface that enables you to create, modify, and drill down on data visualizations that are displayed on a dashboard. Users have the ability to drag and drop whatever dimensions or measures are needed, or add more variables for increased drill-down capability. Column, pie, treemap, geo map, line, and scatter charts, and many more chart objects, are available. This gives a business user the ability to analyze data and provide insights that might not be readily apparent.

The most commonly requested guided analytics capabilities, such as outlier detection and cardinality, are now provided out of the box. These capabilities are integrated into the architecture for an intuitive analysis experience. For one-off decision making, you can quickly create simple reports using the tabular chart, which gives you a line-by-line view of summary data. Reports can be formatted to produce multilevel grouping, hierarchical structures, and dynamic cross tabulations, all for greater readability.

This enhancement simplifies the sharing of insights and collaboration with other users. Dashboards can be dropped into the chat window and other users can immediately start collaborating. They can discuss performance results, strategy, and opportunities and discover new insights about the data. Users can connect to new data sources as well as work with existing QMF queries and tables.

QMF Classic Edition supports users working entirely on traditional mainframe terminals and emulators, including IBM Host On Demand, to access DB2 databases. QMF Classic Edition consists of the following capabilities in V12:

  • QMF for TSO and CICS
  • QMF Enhanced Editor
  • QMF Analytics for TSO
  • QMF High Performance Option (HPO)

Get the most out of DB2 for z/OS with modern application development language support



More Information:

http://www.idug.org/p/bl/et/blogid=278&blogaid=593

https://www-01.ibm.com/support/docview.wss?uid=swg27047206

http://www.ibmsystemsmag.com/Blogs/DB2utor/October-2016/Thoughts-on-DB2-12/

http://www.idug.org/p/bl/et/blogid=477&blogaid=495

http://www.ibmbigdatahub.com/blog/new-ibm-db2-release-simplifies-deployment-and-key-management

https://developer.ibm.com/mainframe/2017/07/17/ibm-z-software-z14-announcement/

https://www-03.ibm.com/press/us/en/pressrelease/52805.wss

https://www.ibm.com/us-en/marketplace/z14

https://www-03.ibm.com/systems/z/solutions/enterprise-security.html

https://www.youtube.com/user/IBMDB2forzOS

http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_sm/2/760/ENUS5650-DB2/index.html&request_locale=en


https://www.ibm.com/analytics/us/en/technology/db2/db2-12-for-zos.html

https://www.ibm.com/support/knowledgecenter/SSEPEK_12.0.0/java/src/tpc/imjcc_rjv00010.html

https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.7.0/com.ibm.db2.luw.qb.server.doc/doc/r0008865.html

https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS216-378

http://ibmsystemsmag.com/Blogs/DB2utor/January-2017/DB2-12-REORG-Enhancements/

http://www.ibmsystemsmag.com/Blogs/DB2utor/November-2015/IBM-Launching-DB2-12-for-z-OS-ESP/

https://www.facebook.com/Db2community/

https://www.facebook.com/Db2community/videos/10154713389235872/

https://www.ibm.com/analytics/us/en/events/machine-learning/?cm_mmc=OSocial_Twitter-_-Analytics_Database+-+Data+Warehousing+-+Hadoop-_-WW_WW-_-Twitter+organic&cm_mmca1=000000TA&cm_mmca2=10000659&

https://www-03.ibm.com/services/learning/ites.wss/zz-en?pageType=course_description&cc=&courseCode=CL206G

28 June 2017

Red Hat OpenShift and Orchestrating Containers With KUBERNETES!



OVERVIEW

Kubernetes is a tool for orchestrating and managing Docker containers. Red Hat provides several ways you can use Kubernetes, including:

  • OpenShift Container Platform: Kubernetes is built into OpenShift, allowing you to configure Kubernetes, assign host computers as Kubernetes nodes, deploy containers to those nodes in pods, and manage containers across multiple systems. The OpenShift Container Platform web console provides a browser-based interface to using Kubernetes.
  • Container Development Kit (CDK): The CDK provides Vagrantfiles to launch the CDK with either OpenShift (which includes Kubernetes) or a bare-bones Kubernetes configuration. This gives you the choice of using the OpenShift tools or Kubernetes commands (such as kubectl) to manage Kubernetes.
  • Kubernetes in Red Hat Enterprise Linux: To try out Kubernetes on a standard Red Hat Enterprise Linux server system, you can install a combination of RPM packages and container images to manually set up your own Kubernetes configuration.

Resilient microservices with Kubernetes - Mete Atamel


Kubernetes, or k8s (k, 8 characters, s...get it?), or “kube” if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications. In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, or hybrid clouds.
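For a feel of what "managing those clusters" looks like in practice, here is a minimal sketch using the official Kubernetes Python client, assuming you already have a kubeconfig pointing at a cluster (for example from OpenShift or the CDK):

    # Minimal sketch using the official Kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()   # read cluster credentials from ~/.kube/config
    v1 = client.CoreV1Api()

    # List every pod the cluster is running, across all namespaces.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)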

The Illustrated Children's Guide to Kubernetes


Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.) Google generates more than 2 billion container deployments a week—all powered by an internal platform: Borg. Borg was the predecessor to Kubernetes and the lessons learned from developing Borg over the years became the primary influence behind much of the Kubernetes technology.

Fun fact: The seven spokes in the Kubernetes logo refer to the project’s original name, “Project Seven of Nine.”

Kubernetes & Container Engine


Red Hat was one of the first companies to work with Google on Kubernetes, even prior to launch, and has become the second-leading contributor to the Kubernetes upstream project. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation in 2015.

An Introduction to Kubernetes


Why do you need Kubernetes?

Real production apps span multiple containers. Those containers must be deployed across multiple server hosts. Kubernetes gives you the orchestration and management capabilities required to deploy containers, at scale, for these workloads. Kubernetes orchestration allows you to build application services that span multiple containers, schedule those containers across a cluster, scale those containers, and manage the health of those containers over time.


Kubernetes also needs to integrate with networking, storage, security, telemetry and other services to provide a comprehensive container infrastructure.

Of course, this depends on how you’re using containers in your environment. A rudimentary application of Linux containers treats them as efficient, fast virtual machines. Once you scale this to a production environment and multiple applications, it's clear that you need multiple, colocated containers working together to deliver the individual services. This significantly multiplies the number of containers in your environment and as those containers accumulate, the complexity also grows.

Hands on Kubernetes 


Kubernetes fixes a lot of common problems with container proliferation—sorting containers together into a “pod.” Pods add a layer of abstraction to grouped containers, which helps you schedule workloads and provide necessary services—like networking and storage—to those containers. Other parts of Kubernetes help you load balance across these pods and ensure you have the right number of containers running to support your workloads.
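A minimal sketch of the pod concept using the official Kubernetes Python client: define a single-container pod and ask the cluster to schedule it (names and image are illustrative):

    # Hedged sketch; pod name, labels, and image are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="nginx", image="nginx:1.13")]))

    # Kubernetes picks a node, wires up networking and storage, starts the pod.
    v1.create_namespaced_pod(namespace="default", body=pod)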

With the right implementation of Kubernetes—and with the help of other open source projects like Atomic Registry, Open vSwitch, heapster, OAuth, and SELinux— you can orchestrate all parts of your container infrastructure.

What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines. More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things that other application platforms or management systems let you do, but for your containers.

Red Hat is Driving Kubernetes/Container Security Forward - Clayton Coleman


Kubernetes’ features provide everything you need to deploy containerized applications. Here are the highlights:


  • Container Deployments & Rollout Control. Describe your containers and how many you want with a “Deployment.” Kubernetes will keep those containers running and handle deploying changes (such as updating the image or changing environment variables) with a “rollout.” You can pause, resume, and roll back changes as you like (see the sketch after this list).
  • Resource Bin Packing. You can declare minimum and maximum compute resources (CPU & memory) for your containers. Kubernetes will slot your containers in wherever they fit. This increases your compute efficiency and ultimately lowers costs.
  • Built-in Service Discovery & Autoscaling. Kubernetes can automatically expose your containers to the internet or other containers in the cluster. It automatically load-balances traffic across matching containers. Kubernetes supports service discovery via environment variables and DNS, out of the box. You can also configure CPU-based autoscaling for containers for increased resource utilization.
  • Heterogeneous Clusters. Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-premises, or on bare metal in your datacenter. Simply choose the composition according to your requirements.
  • Persistent Storage. Kubernetes includes support for persistent storage connected to stateless application containers. There is support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and many, many more.
  • High Availability Features. Kubernetes is planet scale. This requires special attention to high availability features such as multi-master support or cluster federation. Cluster federation allows linking clusters together so that if one cluster goes down, containers can automatically move to another cluster.
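Here is a hedged sketch of the "Deployment" described in the first bullet above, again with the official Kubernetes Python client and illustrative names: declare three replicas and let Kubernetes keep them running:

    # Hedged sketch; names, labels, and image are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="nginx", image="nginx:1.13")]))))

    # Kubernetes keeps three replicas running; changing the image later
    # triggers a managed rollout that can be paused or rolled back.
    apps.create_namespaced_deployment(namespace="default", body=deployment)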

These key features make Kubernetes well suited for running different application architectures from monolithic web applications, to highly distributed microservice applications, and even batch driven applications.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running how you deployed them.
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.

Kubernetes, however, relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):


  • Registry, through projects like Atomic Registry or Docker Registry.
  • Networking, through projects like Open vSwitch and intelligent edge routing.
  • Telemetry, through projects such as Heapster, Kibana, Hawkular, and Elasticsearch.
  • Security, through projects like LDAP, SELinux, RBAC, and OAuth, with multi-tenancy layers.
  • Automation, with the addition of Ansible playbooks for installation and cluster life-cycle management.
  • Services, through a rich catalog of prebuilt content for popular application patterns.

You can get all of this, prebuilt and ready to deploy, with Red Hat OpenShift.

Container Management with OpenShift Red Hat - Open Cloud Day 2016



Learn to speak Kubernetes

Like any technology, Kubernetes comes with a vocabulary of its own that can be a barrier to entry. Let's break down some of the more common terms to help you understand Kubernetes.

Master: The machine that controls Kubernetes nodes. This is where all task assignments originate.

Node: These machines perform the requested, assigned tasks. The Kubernetes master controls them.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage away from the underlying container. This lets you move containers around the cluster more easily.

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster (the sketch after these definitions shows one).

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the right pod, no matter where it moves in the cluster or even if it has been replaced.

Kubelet: This service runs on nodes, reads the container manifests, and ensures the defined containers are started and running.

kubectl: This is the command line configuration tool for Kubernetes.
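
To tie a few of these terms together, here is a minimal sketch of a replication controller whose template describes the pod it keeps running; the names and image are placeholders:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                           # hypothetical name
    spec:
      replicas: 2                            # keep two identical copies of the pod running
      selector:
        app: web
      template:                              # the pod being replicated
        metadata:
          labels:
            app: web                         # a service can select pods by this label
        spec:
          containers:
          - name: web
            image: nginx                     # placeholder image; containers in a pod share one IP
            ports:
            - containerPort: 80

You would submit this with kubectl (for example, kubectl create -f web-rc.yaml); the master then assigns the pods to nodes, where each node's kubelet starts them.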

Check out the Kubernetes Reference: https://kubernetes.io/docs/reference/

Using Kubernetes in production

Kubernetes is open source, and as such there is no formalized support structure around the technology itself, at least not one you would trust your business to. If you have an issue with your Kubernetes implementation while running in production, you are not going to be very happy. And your customers probably won't be, either.

Performance and Scalability Tuning Kubernetes for OpenShift and Docker by Jeremy Eder, Red Hat


That’s where Red Hat OpenShift comes in. OpenShift is Kubernetes for the enterprise, and a lot more. OpenShift includes all of the extra pieces of technology that make Kubernetes powerful and viable for the enterprise, including registry, networking, telemetry, security, automation, and services. With OpenShift, your developers can make new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration that can turn a good idea into new business quickly and easily.

Best of all, OpenShift is supported and developed by the #1 leader in open source, Red Hat.


Kubernetes runs on top of an operating system (Red Hat Enterprise Linux Atomic Host, for example) and interacts with pods of containers running on the nodes. The Kubernetes master takes the commands from an administrator (or DevOps team) and relays those instructions to the subservient nodes. This handoff works with a multitude of services to automatically decide which node is best suited for the task. It then allocates resources and assigns the pods in that node to fulfill the requested work.

So, from an infrastructure point of view, there is little change to how you’ve been managing containers. Your control over those containers happens at a higher level, giving you better control without the need to micromanage each separate container or node. Some work is necessary, but it’s mostly a question of assigning a Kubernetes master, defining nodes, and defining pods.
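
Placement is normally left to the scheduler, but you can steer a pod toward particular nodes with labels when a workload needs specific hardware. A minimal sketch, assuming nodes have already been labeled disktype=ssd (the pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: batch-worker                     # hypothetical pod name
    spec:
      nodeSelector:                          # optional hint; without it the scheduler chooses freely
        disktype: ssd                        # assumes: kubectl label nodes <node-name> disktype=ssd
      containers:
      - name: worker
        image: registry.example.com/batch-worker:1.0   # placeholder image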


What about Docker?
Docker still does what it's meant to do. When Kubernetes schedules a pod to a node, the kubelet on that node instructs Docker to launch the specified containers. The kubelet then continuously collects the status of those containers from Docker and aggregates that information in the master. Docker pulls container images onto that node and starts and stops those containers as normal. The difference is that an automated system asks Docker to do those things instead of an admin doing so by hand on every node for every container.


OpenStack Compute for Containers
While many customers are already running containers on Red Hat Enterprise Linux 7 as an OpenStack guest operating system, we are also seeing greater interest in Red Hat Enterprise Linux Atomic Host as a container-optimized guest OS option. And while most customers run their containers in guest VMs driven by Nova, we are also seeing growing interest from customers who want to integrate with OpenStack Ironic to run containers on bare-metal hosts. With OpenStack, customers can manage both virtual and physical compute infrastructure to serve as the foundation for their container application workloads.

Earlier this year we also demonstrated how OpenStack administrators could use Heat to deploy a cluster of Nova instances running Kubernetes. The Heat templates contributed by Red Hat simplify the provisioning of new container host clusters, which are ready to run container workloads orchestrated by Kubernetes. Heat templates also serve as the foundation for the OpenStack Magnum API, which makes container orchestration engines like Kubernetes available as first-class resources in OpenStack. We also recently created Heat templates to deploy OpenShift 3 and added them to the OpenStack Community App Catalog. Our next step is to make elastic provisioning and deprovisioning of Kubernetes nodes based on resource demand a reality.
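
For flavor, here is a minimal sketch of what one node resource in such a Heat (HOT) template can look like. This is not the contributed template itself; the image and flavor names are placeholders, and the cloud-config assumes the kubelet is preinstalled in the image:

    heat_template_version: 2015-04-30

    parameters:
      node_image:
        type: string
        default: rhel-atomic-host            # assumed Glance image name

    resources:
      kube_node:
        type: OS::Nova::Server               # one Nova instance acting as a Kubernetes node
        properties:
          image: { get_param: node_image }
          flavor: m1.medium                  # placeholder flavor
          user_data_format: RAW              # pass the cloud-config straight to cloud-init
          user_data: |
            #cloud-config
            runcmd:
              - systemctl enable kubelet     # assumes kubelet is preinstalled in the image
              - systemctl start kubelet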

Building Clustered Applications with Kubernetes and Docker - Stephen Watt, Red Hat


Linux
Linux is at the foundation of OpenStack and modern container infrastructures. While we are excited to see Microsoft invest in Docker to bring containers to Windows, they are still Linux containers after all. Red Hat's first major contribution was bringing containers to enterprise Linux and RPM-based distributions like Fedora, Red Hat Enterprise Linux and CentOS. Since then we launched Project Atomic and made available Red Hat Enterprise Linux Atomic Host as a lightweight, container-optimized, immutable Linux platform for enterprise customers. With the recent surge in new container-optimized Linux distributions being announced, we see this as more than just a short-term trend. This year we plan to release Red Hat Enterprise Linux Atomic Host 7.2 and talk about how customers are using it as the foundation for containerized application workloads.

Red Hat Container Strategy


Docker
Docker has defined the packaging format and runtime for containers, which has now become the de facto standard for the industry, as embodied in OCI and the runC reference implementation. Red Hat continues to contribute extensively to the Docker project and is now helping to drive governance of OCI and the implementation of runC. We are committed to helping make Docker more secure, both in the container runtime and in container content, and to working with our partners to enable customers to safely containerize their most mission-critical applications.

Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1


Kubernetes
Kubernetes is Red Hat’s choice for container orchestration and management, and it is also seeing significant growth, with more than 500 contributors and nearly 20,000 commits to the Kubernetes project in just over a year. While there is a lot of innovation in the container orchestration space, we see Kubernetes as another emerging standard, given the combination of Google's experience running container workloads at massive scale, Red Hat's contributions and experience making open source work in enterprise environments, and the growing community surrounding it.

Microservices with Docker, Kubernetes, and Jenkins


This “LDK” (Linux, Docker, Kubernetes) stack is the foundation of Red Hat OpenShift 3 and the Atomic Enterprise Platform, announced recently at Red Hat Summit. It's also the foundation of Google Container Engine, which is now generally available, and of other vendor and customer solutions that were featured recently at LinuxCon during the Kubernetes 1.0 launch.

Red Hat has helped drive innovation in this new container stack while also driving integration with OpenStack. We have focused our efforts on integrating with the three core pillars of OpenStack: compute, networking and storage. Here's how:

OpenStack Networking for Containers
Red Hat leverages the Kubernetes networking model to enable networking across multiple containers running across multiple hosts. In Kubernetes, each pod (a group of one or more containers) has its own IP address and can communicate with other pods, regardless of which host they run on. Red Hat integrated RHEL Atomic Host with Flannel for container networking and also developed a new OVS-based SDN solution that is included in OpenShift 3 and the Atomic Enterprise Platform. But in OpenStack environments, users may want to leverage Neutron and its rich ecosystem of networking plugins to handle networking for containers. We've been working in both the OpenStack and Kubernetes communities to integrate Neutron with Kubernetes networking to enable this.

OpenShift Enterprise 3.1 vs kubernetes


OpenStack Storage for Containers
Red Hat also leverages Kubernetes storage volumes to enable users to run stateful services in containers, like databases, message queues and other stateful apps. Users map their containers to persistent storage clusters, leveraging Kubernetes storage plugins like NFS, iSCSI, Gluster, Ceph, and more. The OpenStack Cinder storage plugin, currently under development, will enable users to map to storage volumes managed by OpenStack Cinder.
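
As a sketch of how such a mapping looks, the pair below defines a PersistentVolume backed by the in-tree Cinder plugin and a claim that a database pod could mount; the volume ID is a placeholder for a pre-created Cinder volume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cinder-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      cinder:                                # the OpenStack Cinder volume plugin
        volumeID: "<cinder-volume-id>"       # placeholder: ID of an existing Cinder volume
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: db-data                          # hypothetical claim a database pod would reference
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi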

Linux, Docker, and Kubernetes form the core of Red Hat’s enterprise container infrastructure. This LDK stack integrates with OpenStack’s compute, storage and networking services to provide an infrastructure platform for running containers. In addition to these areas, there are others that we consider critical for enterprises who are building a container-based infrastructure. A few of these include:

  • Container Security – Red Hat is working with Docker and the Open Containers community on container security. Security is commonly cited as one of the leading concerns limiting container adoption, and Red Hat is tackling it on multiple levels. The first is multi-tenant isolation, to help prevent containers from exploiting other containers or the underlying container host. Red Hat contributed SELinux integration to Docker to provide a layered security model for container isolation, and is also contributing to the development of features like privileged containers and user namespaces. The second area is securing container images to verify trusted content, which is another key concern. Red Hat has driven innovation in areas like image signing, scanning and certification, and we recently announced our work with Black Duck to help make application containers free from known vulnerabilities.
  • Enterprise Registry – Red Hat provides a standard Docker registry as a fully integrated component of both OpenShift and Atomic. This enables customers to more securely store and manage their own Docker images for enterprise deployments. Administrators can manage who has access to images, determine which images can be deployed and manage image updates.
  • Logging & Metrics – Red Hat has already integrated the ELK stack with Red Hat Enterprise Linux OpenStack Platform. It is doing the same in OpenShift and Atomic to provide users with aggregate logging for containers. This will enable administrators to get aggregated logs across the platform and also simplify log access for application developers. This work extends into integrated metrics for containerized applications and infrastructure.
  • Container Management – Red Hat CloudForms enables infrastructure and operations teams to manage application workloads across many different deployment fabrics – physical, virtual, public cloud and also private clouds based on OpenStack. CloudForms is being extended to manage container-based workloads in its next release. This will provide a single pane of glass to manage container-based workloads on OpenStack infrastructure.

Ultimately the goal of containers is to provide a better way to package and deploy your applications and enable application developers. Containers provide many benefits to developers like portability, fast deployment times and a broad ecosystem of packaged container images for a wide array of software stacks. As applications become more componentized and highly distributed with the advent of microservices architectures, containers provide an efficient way to deploy these microservices without the overhead of traditional VMs.

Red Hat OpenShift Container Platform Overview


But to provide a robust application platform and enable DevOps and Continuous Delivery, we also need to solve other challenges. Red Hat is tackling many of these in OpenShift, which is a containerized application platform that natively integrates Docker and is built on Red Hat’s enterprise container stack. These challenges include:

Build Automation – Developers moving to containerize their applications will likely need to update their build tools and processes to build container images. Red Hat is working on automating the Docker image build process at scale and has developed innovations like OpenShift source-to-image (S2I), which enables users to push code changes and patches to their application containers without being concerned with the details of Dockerfiles or Docker images (a sketch of an S2I build definition follows these items).
Deployment Automation and CI/CD – Developers will also need to determine how containers will impact their deployment workflows and integrate with their CI/CD systems. Red Hat is working on automating common application deployment patterns with containers, like rolling, canary and A/B deployments. We are also working to enable CI/CD with containers, with work underway in OpenShift upstream projects like Origin and Fabric8.
Containerized Middleware and Data Services – Administrators will need to provide their developers with trusted images to build their applications. Red Hat provides multiple language runtime images in OpenShift, including Java, Node.js, Python, Ruby and more. We are also providing containerized middleware images like JBoss EAP, A-MQ and Fuse, as well as database images from Red Hat's Software Collections, including MongoDB, Postgres and MySQL.
Developer Self Service – Ultimately, developers want to access all of these capabilities without having to call on IT. With OpenShift, developers can access self-service web, CLI and IDE interfaces to build and deploy containerized applications. OpenShift's developer- and application-centric view provides a great complement to OpenStack.
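
As promised above, here is a minimal sketch of an OpenShift 3 source-to-image BuildConfig; the repository URL, application name, and builder image tag are placeholders:

    apiVersion: v1                           # the OpenShift 3 API version
    kind: BuildConfig
    metadata:
      name: my-node-app                      # hypothetical application
    spec:
      source:
        git:
          uri: https://example.com/repos/my-node-app.git   # placeholder repository
      strategy:
        type: Source                         # source-to-image: no Dockerfile needed
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: nodejs:latest              # builder image supplying the Node.js runtime
      output:
        to:
          kind: ImageStreamTag
          name: my-node-app:latest           # the resulting application image

Pushing new code and re-running the build (for example, with oc start-build my-node-app) produces a fresh application image that the deployment patterns above can then roll out.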

Containers Anywhere with OpenShift by Red Hat


This is just a sampling of the work we are doing in containers, and it complements all the great work Red Hat contributes to in the OpenStack community. OpenStack and containers are two examples of the tremendous innovation happening in open source, and this week we are showcasing how great they are together.

More Information:

http://events.linuxfoundation.org/events/cloudnativecon-and-kubecon-north-america

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_orchestrating_containers_with_kubernetes

https://www.redhat.com/en/containers/what-is-kubernetes

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html-single/getting_started_with_kubernetes/

http://rhelblog.redhat.com/tag/kubernetes/

http://redhatstackblog.redhat.com/tag/kubernetes/

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/

https://www.redhat.com/en/services/training/do180-introduction-containers-kubernetes-red-hat-openshift

https://blog.openshift.com/red-hat-chose-kubernetes-openshift/

https://www.openshift.com/container-platform/kubernetes.html

https://www.openshift.com

https://cloudacademy.com/blog/what-is-kubernetes/

https://keithtenzer.com/2015/04/15/containers-at-scale-with-kubernetes-on-openstack/