22 March 2020

OpenShift Hybrid Cloud Vision of Red Hat









OpenShift Container Platform (OCP) is the multi-cloud / hybrid cloud standard



Choosing how to build an open hybrid cloud is perhaps the most strategic decision that CIOs and other IT leaders will make this decade. It’s a choice that will determine their organization’s competitiveness, flexibility, and IT economics for the next ten years.

That’s because, done right, a cloud services model delivers strategic advantages to the organization by redirecting resources from lights-on to innovation and meaningful business outcomes.

Open hybrid cloud and the future of innovative cloud computing

Only an open hybrid cloud delivers on the full strategic business value and promise of cloud computing. Only by embracing clouds that are open across the full gamut of characteristics can organizations ensure that their cloud:


  • Enables portability of applications and data across clouds.
  • Fully leverages existing IT investments and infrastructure and avoids creating new silos.
  • Makes it possible to build an open hybrid cloud that spans physical servers, multiple virtualization platforms, and public clouds running a variety of technology stacks.
  • Allows IT organizations to evolve to the cloud, gaining incremental value at each step along the way.
  • Puts the customer in charge of their own technology strategy.


The Upside Opportunity

When the term “cloud computing” first appeared on the scene, it described a computing utility. The clear historical analog was electricity. Generated by large service providers. Delivered over a grid. Paid for when and in the amount used.

This concept has given rise to public clouds that deliver computing resources in the form commonly called Infrastructure-as-a-Service (IaaS), including offerings based upon OpenStack, the leading open source cloud platform. Many characteristics of these public clouds are compelling relative to some traditional aspects of enterprise IT. Cost per virtual machine can be much lower.

Users, such as developers, can use a credit card to get access to IT resources in minutes, rather than waiting months for a new server to be approved and provisioned. All this in turn leads to new applications and business services coming online more quickly, reducing the time to new revenue streams.

However, at the same time, most organizations are not yet ready to move all of their applications onto public cloud providers. Often this is because of real or perceived concerns around compliance and governance, especially for mission-critical production applications. Nor do public clouds typically provide the ability to customize and optimize around unique business needs.

Whatever the reasons in an individual case, there is great interest in the idea of building hybrid clouds spanning both on-premise and off-premise resources to deliver the best of both worlds — public cloud economics and agility optimized for enterprise needs such as audit, risk management, and strong policy management.

Choosing an open hybrid cloud enables organizations to:


  • Bring new applications and services online more quickly for faster time-to-revenue.
  • Respond more quickly to opportunities and threats.
  • Reduce risk by maintaining ongoing compliance and runtime management while preserving strategic flexibility.

Red Hat OpenShift 4.3

Red Hat announced the general availability of Red Hat OpenShift 4.3, the newest version of the industry’s most comprehensive enterprise Kubernetes platform. With security a paramount need for nearly every enterprise, particularly for organizations in the government, financial services and healthcare sectors, OpenShift 4.3 delivers FIPS (Federal Information Processing Standard) compliant encryption and additional security enhancements to enterprises across industries.



Combined, these new and extended features can help protect sensitive customer data with stronger encryption controls and improve the oversight of access control across applications and the platform itself.

This release also coincides with the general availability of Red Hat OpenShift Container Storage 4, which offers greater portability, simplicity and scale for data-centric Kubernetes workloads.

Encryption to strengthen the security of containerized applications on OpenShift

As a trusted enterprise Kubernetes platform, the latest release of Red Hat OpenShift brings stronger platform security that better meets the needs of enterprises and government organizations handling extremely sensitive data and workloads with FIPS (Federal Information Processing Standard) compliant encryption (FIPS 140-2 Level 1). FIPS validated cryptography is mandatory for US federal departments that encrypt sensitive data. When OpenShift runs on Red Hat Enterprise Linux booted in FIPS mode, OpenShift calls into the Red Hat Enterprise Linux FIPS validated cryptographic libraries. The go-toolset that enables this functionality is available to all Red Hat customers.
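For illustration, FIPS mode is requested at cluster install time through install-config.yaml. A minimal sketch, assuming an AWS deployment (the base domain, cluster name, region, and pull secret are placeholders):

    apiVersion: v1
    baseDomain: example.com            # placeholder base domain
    metadata:
      name: fips-cluster               # placeholder cluster name
    platform:
      aws:
        region: us-east-1
    fips: true                         # boot cluster hosts in FIPS mode so FIPS validated crypto libraries are used
    pullSecret: '<pull-secret>'        # placeholder; supplied by the customer

With fips: true set, the installed Red Hat Enterprise Linux CoreOS hosts boot in FIPS mode, and OpenShift components call into the FIPS validated cryptographic libraries as described above.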

OpenShift 4.3 brings support for encryption of etcd, which provides additional protection for secrets at rest. Customers will have the option to encrypt sensitive data stored in etcd, providing better defense against malicious parties attempting to gain access to data such as secrets and config maps stored in etcd.
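For example, encryption of etcd is switched on through the cluster-scoped APIServer configuration resource; a minimal sketch, applied with oc apply -f or by editing the resource in place with oc edit apiserver:

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      encryption:
        type: aescbc                   # encrypt sensitive resources such as secrets and config maps at rest in etcd

Once set, the API servers begin rewriting the affected resources in encrypted form; until that migration completes, previously stored data remains unencrypted.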

NBDE (Network-Bound Disk Encryption) can be used to automate remote enablement of LUKS (Linux Unified Key Setup-on-disk-format) encrypted volumes, making it easier to protect against physical theft of host storage.
Together, these capabilities enhance OpenShift’s defense-in-depth approach to security.

Better access controls to comply with company security practices

OpenShift is designed to deliver a cloud-like experience across all environments running on the hybrid cloud.

OpenShift 4.3 adds new capabilities and platforms to the installer, helping customers to embrace their company’s best security practices and gain greater access control across hybrid cloud environments. Customers can deploy OpenShift clusters to customer-managed, pre-existing VPN / VPC (Virtual Private Network / Virtual Private Cloud) and subnets on AWS, Microsoft Azure and Google Cloud Platform. They can also install OpenShift clusters with private-facing load balancer endpoints, not publicly accessible from the Internet, on AWS, Azure and GCP.

With “bring your own” VPN / VPC, as well as with support for disconnected installs, users can have more granular control of their OpenShift installations and take advantage of common best practices for security used within their organizations.
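As a sketch of what this looks like at install time, the install-config.yaml fragment below assumes an AWS cluster placed into pre-existing, customer-managed subnets with internal-only endpoints (all identifiers are placeholders):

    apiVersion: v1
    baseDomain: example.com            # placeholder base domain
    metadata:
      name: private-cluster            # placeholder cluster name
    publish: Internal                  # load balancer endpoints are not exposed to the Internet
    platform:
      aws:
        region: us-east-1
        subnets:                       # pre-existing subnets in a customer-managed VPC
          - subnet-<private-az1>
          - subnet-<private-az2>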

In addition, OpenShift admins have access to a new configuration API that allows them to select the cipher suites that are used by the Ingress controller, API server and OAuth Operator for Transport Layer Security (TLS). This new API helps teams adhere to their company security and networking standards easily.
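A minimal sketch of the idea for the Ingress controller, assuming the tlsSecurityProfile field with a Custom profile (the cipher suites shown are illustrative, not a recommendation; substitute your company standard):

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: default
      namespace: openshift-ingress-operator
    spec:
      tlsSecurityProfile:
        type: Custom
        custom:
          minTLSVersion: VersionTLS12  # reject TLS 1.0/1.1 clients
          ciphers:                     # illustrative cipher suite selection
            - ECDHE-ECDSA-AES256-GCM-SHA384
            - ECDHE-RSA-AES256-GCM-SHA384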

OpenShift Container Storage 4 across the cloud

Available alongside OpenShift 4.3 today is Red Hat OpenShift Container Storage 4, which is designed to deliver a comprehensive, multicloud storage experience to users of OpenShift Container Platform. Enhanced with multicloud gateway technology from Red Hat’s acquisition of NooBaa, OpenShift Container Storage 4 offers greater abstraction and flexibility. Customers can choose data services across multiple public clouds, while operating from a unified Kubernetes-based control plane for applications and storage.

To help drive security across disparate cloud environments, this release brings enhanced built-in data protection features, such as encryption, anonymization, key separation and erasure coding. Using the multicloud gateway, developers can more confidently share and access sensitive application data in a more secure, compliant manner across multiple geo-locations and platforms.
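For example, applications typically consume the multicloud gateway by requesting an S3-compatible bucket through an ObjectBucketClaim. A minimal sketch, assuming the NooBaa-backed storage class shipped with OpenShift Container Storage:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: app-bucket-claim
      namespace: my-app                          # placeholder application namespace
    spec:
      generateBucketName: app-bucket             # gateway generates a unique bucket name with this prefix
      storageClassName: openshift-storage.noobaa.io   # assumed multicloud gateway storage class

The claim is fulfilled by the gateway, which hands the application a generated bucket plus access credentials in a ConfigMap and Secret of the same name.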

OpenShift Container Storage 4 is deployed and managed by Operators, bringing automated lifecycle management to the storage layer, and helping with easier day 2 management.
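A sketch of that Operator-driven deployment, assuming the ocs-operator package from the Red Hat catalog (the channel name and target namespace are assumptions for the 4.x stream):

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ocs-operator
      namespace: openshift-storage               # assumed namespace; must exist before subscribing
    spec:
      channel: stable-4.3                        # assumed channel for the OCS 4.x stream
      name: ocs-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace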

Automation to enhance day two operations with OpenShift

OpenShift helps customers maintain control for day two operations and beyond when it comes to managing Kubernetes via enhanced monitoring, visibility and alerting. OpenShift 4.3 extends this commitment to control by making it easier to manage the machines underpinning OpenShift deployments with automated health checking and remediation. This area of automated operations capabilities is especially helpful to monitor for drift in state between machines and nodes.
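Machine health checking is expressed declaratively. A minimal sketch of a MachineHealthCheck for worker machines (the label selector and thresholds are illustrative):

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineHealthCheck
    metadata:
      name: worker-healthcheck
      namespace: openshift-machine-api
    spec:
      selector:
        matchLabels:
          machine.openshift.io/cluster-api-machine-role: worker
      unhealthyConditions:
        - type: Ready
          status: "False"
          timeout: "300s"                        # remediate machines whose node stays NotReady this long
      maxUnhealthy: 40%                          # pause remediation if too many machines are unhealthy at once

When a machine trips the condition, the machine API components delete and re-create it, restoring the desired state without manual intervention.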

OpenShift 4 also enhances automation through Kubernetes Operators. Customers already have access to Certified and community Operators created by Red Hat and ISVs, but customers have also expressed interest in creating Operators for their specific internal needs. With this release, this need is addressed with the ability to register a private Operator catalog within OperatorHub. Customers with air-gapped installs can find this especially useful for taking advantage of Operators in highly secure or sensitive environments.
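A private catalog is registered as a CatalogSource that points at a customer-hosted catalog image. A minimal sketch (the registry host and image name are placeholders):

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: internal-operators
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: registry.example.com/olm/internal-catalog:latest   # placeholder catalog image in an internal registry
      displayName: Internal Operators
      publisher: Example Corp                                   # placeholder publisher

Once created, the Operators packaged in that image appear in OperatorHub alongside the Red Hat and community catalogs.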

With this release the Container Security Operator for Red Hat Quay is generally available on OperatorHub.io and embedded into OperatorHub in Red Hat OpenShift. This brings Quay and Clair vulnerability scanning metadata to Kubernetes and OpenShift. Kubernetes cluster administrators can monitor known container image vulnerabilities in pods running on their Kubernetes cluster. If the container registry supports image scanning, such as Quay with Clair, then the Operator will expose any vulnerabilities found via the Kubernetes API.
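The Operator surfaces its findings as ImageManifestVuln custom resources, so cluster administrators can review them with standard tooling, for example by listing them with oc get imagemanifestvulns --all-namespaces (treat the exact resource and command names as an assumption based on the Operator's CRD).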

OpenShift 4.3 is based on Kubernetes 1.16. Red Hat supports customer upgrades from OpenShift 4.2 to 4.3. Other notable features in OpenShift 4.3 include application monitoring with Prometheus (Technology Preview), forwarding logs off cluster based on log type (Technology Preview), Multus enhancements (IPAM), SR-IOV (generally available), Node Topology Manager (Technology Preview), resizing of persistent volumes with CSI (Technology Preview), iSCSI raw block (generally available), and new extensions and customizations for the OpenShift Console.

OpenShift Hybrid Cloud Vision of Red Hat

EXECUTIVE SUMMARY

Choosing how to build a hybrid cloud is perhaps the most strategic decision IT leaders will make this decade. It is a choice that will determine their organization’s competitiveness, flexibility, and IT economics for the next 10 years.

Public clouds have set the benchmark for on-demand access to resources. But most organizations that use public clouds do so in concert with a variety of on-premise computing resources, albeit modernized and increasingly operated in a manner that provides self-service, dynamic scaling, and policy-based automation. Heterogeneous environments, both public and private, are today’s face of hybrid cloud.

Whatever the optimal mix for a given organization, a well-planned cloud strategy delivers strategic advantages to the business by redirecting resources from lights-on to innovation. But only an open cloud delivers on the full strategic business value and promise of cloud computing. By embracing open clouds, organizations ensure that their cloud:

  • Enables portability of applications and data across clouds.
  • Fully takes advantage of existing IT investments and infrastructure and avoids creating new silos.
  • Makes it possible to build a hybrid cloud that spans physical servers, multiple virtualization platforms, private clouds, and public clouds running a variety of technology stacks.
  • Provides incremental value as new capabilities are added.
  • Puts them in charge of their own technology strategy.

INTRODUCTION

When the term “cloud computing” first appeared on the scene, it described a computing utility. The clear historical analog was electricity. Generated by large service providers. Delivered over a grid. Paid for when and in the amount used. This concept was reflected by the early public clouds that delivered raw computing resources in the form commonly called Infrastructure-as-a-Service (IaaS).

Certain characteristics of these public clouds were compelling, relative to traditional aspects of enterprise IT. Cost per virtual machine could be lower. Users, such as business analysts, could use a credit card to get access to IT resources in minutes, rather than waiting months for a new server to be approved and provisioned. All this in turn led to new applications and business services coming online more quickly and reducing the time to new revenue streams.

However, at the same time, most organizations cannot move all of their applications onto public cloud providers. Often this is because of real or perceived concerns around compliance and governance, especially for critical production applications. Nor do public clouds typically provide the ability to customize and optimize around unique business needs.

A private cloud, typically based on OpenStack® technology, provides a proven option for those who want to maintain direct ownership and control over their systems, or a subset thereof. Certain workloads and data storage may be cheaper on-premise. The ability to customize and co-locate compute and data can simplify integration with existing applications and data stores. And the proper handling of sensitive customer data, including adherence to data locality requirements, always needs to be taken into account.

Private cloud implementations often take place alongside IT optimization projects, such as creating standard operating environments (SOE), tuning and modernizing existing virtualization footprints, and improving management and integration across heterogeneous infrastructures.

Whatever the reasons in an individual case, the reality is that most organizations will have a hybrid and heterogeneous IT environment. Keeping such an environment from fracturing into isolated silos requires embracing openness across multiple dimensions.

Fundamentally, an open hybrid cloud is about helping organizations across all industries:


  • Build new, composable, integrated cloud-native apps for new revenue streams.
  • Develop apps and respond to the market more quickly with DevOps agility.
  • Deploy on a scalable and flexible cloud infrastructure that quickly adapts to change.
  • Protect the business with management, security, and assurance capabilities.

WHY A HYBRID CLOUD?

A hybrid cloud originally just meant a cloud that combined private and public cloud resources. But, as cloud computing has evolved, users think of hybrid in broader terms.

Today, hybrid also covers heterogeneous on-premise resources, including private clouds, traditional virtualization, bare-metal servers, and containers. It encompasses multiple providers and types of public clouds.

In short, IT infrastructures, and the services that run on them, are hybrid across many dimensions. There is a simultaneous requirement in most organizations to both modernize and optimize their software-defined datacenters (SDDC) and deploy new cloud-native infrastructure. Most organizations use services from several public clouds. And there is a widespread need to bridge and integrate across these different infrastructures to allow for consistent processes and business rules, as well as for picking the best infrastructure for a given workload.

However, hybrid should not mean silos of capacity. Adding cloud silos increases complexity rather than reducing it.

This is not to say that we cannot start our journey to a cloud on a subset of infrastructure. In most cases, a pilot project or proof-of-concept using a subset of applications will indeed be the prudent path. The difference is that a proof-of-concept is a first step; a new silo is a dead end.

Taking an open approach to cloud is a key way to avoid a siloed cloud future.

INNOVATION THROUGH OPEN SOURCE

Entire new categories of software are open source by default. That’s because the community development model works. Open source underpins the infrastructure of some of the most sophisticated web-scale companies, like Facebook and Google. Open source stimulates many of the most significant advances in the worlds of cloud infrastructures, cloud-native applications, and big data.

Open source enables contributions and collaboration within communities, with more contributors collaborating with less friction. Furthermore, as new computing architectures and approaches rapidly evolve for cloud computing, big data, and the Internet of Things (IoT), it is also becoming evident that the open source development model is extremely powerful because of how it allows innovations from multiple sources to be recombined and remixed in powerful ways. To give just one example, the complete orchestration, resource placement, and policy-based management of a microservices-based, containerized environment can draw on code from many different communities and combine it in different ways depending upon the requirements.

The open source development model and open source communities help to:


  • Provide the interoperability and workload portability that cloud users need.
  • Enable software-defined, cloud-native infrastructures, their applications, and DevOps processes for developing and operating those applications.
  • Create the bridges between new infrastructures and workloads and classic IT—for example, by connecting back-end systems to new applications through business rules and message buses.
  • Preserve existing investments while providing IT with the strategic flexibility to deploy on their infrastructure of choice, whether physical servers, legacy virtualization, private clouds, or public clouds.

BEYOND OPEN SOURCE IN THE CLOUD

The “open” in open hybrid cloud is about more than open source code. As we have discussed, it is also about engaging with innovative communities. It is about interoperability, workload portability, and strategic flexibility. And it is about making open source suitable for critical deployments through quality assurance and integration, working within upstream projects, and having predictable and stable life-cycle support.


  • Open source allows adopters to control their particular implementation and does not restrict them to the technology and business roadmap of a specific vendor.
  • A viable, independent community is the single most important element of many open source projects. Delivering maximum innovation means having the right structures and organization in place to fully take advantage of the open source development model.
  • Open standards do not necessarily require formal standardization efforts, but they do require a consensus among communities of developers and users. Approaches to interoperability that are not under the control of individual vendors, or tied to specific platforms, offer important flexibility.
  • Freedom to use intellectual property (IP) is needed to use technology without constraints. Even “reasonable and non-discriminatory” license terms can still require permission or impose other restrictions.


Platform choice lets operations and application development teams use the right infrastructure. Tools like cloud management should not be tied to a specific virtualization or other foundational technology. For example, at one time, managing just physical servers and virtual machines was a reasonable goal for a management product. Then private cloud and public cloud. Then more public clouds. Now containers as well.

Portability can be a tradeoff. Sometimes, using a feature that is specific to a particular public cloud provider is the right business decision. Nonetheless, technologies such as container and cloud management platforms can maximize the degree to which applications and services can be deployed across a variety of infrastructure. And redeployed elsewhere if needs or conditions change.

HOW RED HAT DELIVERS OPEN SOURCE VALUE

At Red Hat, our focus is on making open source technologies consumable and supportable by enterprise IT. Red Hat’s business model is 100% open source—no bait-and-switch, and no open core holding back valuable bits as proprietary add-on software.

We collaborate through upstream projects because doing so is at the heart of the economic and business model that makes open source such an effective way to develop software. Working upstream lets Red Hat engage closely with the open source community and influence technology choices in ways that are important to our customers, our partners, and us. It helps ensure that we use the strengths of open source development and maintain the technology expertise to provide fast and knowledgeable product support, while also working with the community to encourage innovation.

Red Hat has a well-established process for turning open source projects into enterprise subscription products that satisfy the demands of some of the most challenging and critical applications in markets such as financial services, government, and telecommunications. Red Hat is also focused on creating value through a portfolio of products and an ecosystem of partners.

CONCLUSION

To meet the challenges brought by the digitization of the business, IT needs to simultaneously close three serious gaps. It needs to build a comprehensive cloud-native infrastructure to close the gap between what the business requires and what traditional IT can deliver. It needs to deliver applications, services, and access to infrastructure that is in line with what both customers and employees have come to expect from consumer devices and public cloud services. And it needs to do this iteratively and quickly, while maintaining and connecting back to the classic IT on which core business services are running.

Individual organizations will achieve these various goals in a variety of ways. But the vast majority will do so in a hybrid manner. They will modernize and optimize existing assets to retain and extend their value. They will build and deploy new cloud-native infrastructures to provide the best platform for quickly and iteratively delivering needed business services for internal and external customers. They will use resources from a variety of public clouds.

But making the most effective use of these disparate types of technology means that taking an open approach to cloud is not a nice-to-have for IT organizations. It is a must-have.

Red Hat has moved to make storage a standard element of a container platform with the release of version 3.10 of Red Hat OpenShift Container Storage (OCS), previously known as Red Hat Container Native Storage.

Irshad Raihan, senior manager for product marketing for Red Hat Storage, says Red Hat decided to rebrand its container storage offering to better reflect its tight integration with the Red Hat OpenShift platform. In addition, the term “container native” continues to lose relevance given all the different flavors of container storage that now exist, adds Raihan.

The latest version of the container storage software from Red Hat adds arbiter volume support to enable high availability with efficient storage utilization and better performance, enhanced storage monitoring and configuration via the Red Hat implementation of the Prometheus container monitoring framework, and block-backed persistent volumes (PVs) that can be applied to both general application workloads and Red Hat OpenShift Container Platform (OCP) infrastructure workloads. Support for PVs is especially critical because, in the case of Red Hat OCS, organizations can deploy more than 1,000 PVs per cluster, which helps to reduce cluster sprawl within the IT environment, says Raihan.

Raihan says Red Hat supports two types of OCS deployments that reflect the changing nature of the relationship between developers and storage administrators. A converged approach enables developers to provision storage resources on a cluster, while the independent approach makes OCS available on legacy storage systems managed by storage administrators. In both cases, storage administrators still manage the underlying physical storage. But in the case of the converged approach, developers can provision storage resources on their own as part of an integrated set of DevOps processes, he says, as sketched below.
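In the converged model, that self-service provisioning is an ordinary persistent volume claim against the OCS storage class. A minimal sketch, assuming the default Gluster-backed storage class name:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
      namespace: my-app                          # placeholder application namespace
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: glusterfs-storage        # assumed OCS/CNS default storage class name
      resources:
        requests:
          storage: 10Gi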

Raihan adds that developers are also pushing their organizations toward the converged approach because of the I/O requirements of containerized applications. That approach also allows organizations to rely more on commodity storage rather than having to acquire and license software for an external storage array, he says, noting the approach also enables IT organizations to extend the skillsets of a Linux administrator instead of having to hire a dedicated storage specialist.

Longer term, Raihan says it’s now only a matter of time before DevOps processes and an emerging set of DataOps processes begin converging. Data scientists are driving adoption of DataOps processes to make it easier for applications to access massive amounts of data. In time, those processes will become integrated with applications being developed that in most cases are trying to access the same data, says Raihan.

As adoption of containers continues to mature across the enterprise, it’s clear that storage issues are now on the cusp of being addressed. Stateful applications based on containers require high-speed access to multiple forms of persistent storage. Sometimes that storage may reside locally or in a public cloud. Regardless of where that data resides, however, the amount of time it takes to provision access to that data is no longer an acceptable bottleneck within the context of a larger set of DevOps processes.



Global hotel group enhances digital guest experience with cloud-based IT

As the hospitality industry expands and evolves, leading global hospitality company Hilton is focused on continuing to enhance its innovative guest services and amenities with digital offerings.

The company decided to build an agile hybrid cloud computing environment to speed application development and deployment with continuous integration and continuous delivery (CI/CD) and automation capabilities.

Built with enterprise Linux® container and management technology from Red Hat, Hilton’s new cloud environment supports an award-winning mobile application with features such as self-service digital room selection, check-in, and keys—with efforts for additional digital features and expansion to new markets underway.


More Information:

https://www.redhat.com/en/topics/cloud

https://www.redhat.com/en/success-stories/hilton

https://www.redhat.com/cms/managed-files/rh-hilton-customer-case-study-f12789cw-201809-en_0.pdf

https://www.hilton.com/en/

https://www.redhat.com/en/blog/customers-realize-multi-cloud-benefits-openshift?source=bloglisting

https://blog.openshift.com/introducing-red-hat-openshift-4-3-to-enhance-kubernetes-security/

https://www.admin-magazine.com/Articles/Red-Hat-s-Cloud-Tools

https://www.redhat.com/en/resources/future-of-cloud-is-open-whitepaper

https://containerjournal.com/topics/container-management/red-hat-advances-container-storage/

https://blog.openshift.com/journey-multi-cloud-environment/

https://blog.openshift.com/journey-openshift-multi-cloud-environment-part-2/

https://blog.openshift.com/journey-openshift-multi-cloud-environment-part-3/

https://hybridcloudjournal.wordpress.com/future-of-the-cloud/


