• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux solutions are also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    Consulting services for Microsoft Windows Server 2012 onwards, Windows 7 and higher on the client, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

21 June 2022

IBM Hybrid Multi-Cloud Strategy

 


Hybrid MultiCloud

IBM Hybrid MultiCloud Strategy

From time to time, we invite industry thought leaders to share their opinions and insights on current technology trends to the IBM Systems IT Infrastructure blog. The opinions in these posts are their own, and do not necessarily reflect the views of IBM.

New technologies breed new buzzwords and terminology, and sometimes it can be difficult to keep up with what it all means. For example, I’m sure you’ve heard the term “hybrid multicloud,” but have you ever really stopped to think about what it means and what it implies for IT in your organizations?

Developing Secure Multi-Cloud Kubernetes Applications


What does it mean?

First let’s take a moment to break down the term Hybrid Multicloud.

Hybrid implies something heterogeneous in origin or composition. In other words, it is something that is composed of multiple other things. Multicloud is pretty simple, and refers to using more than one cloud computing service.

So, when you use the term “hybrid” in conjunction with “multicloud,” it implies an IT infrastructure that uses a mix of on-premises systems and private and/or public clouds from multiple providers.

This is a sensible approach for many organizations because it enables you to maintain and benefit from the systems and data that you have built over time, and to couple them with current best practices for reducing cost and scaling with cloud services where and when it makes sense.

No one single system or technology is the right solution for every project. No matter what the prognosticators are saying, we will not be moving everything to the cloud and abandoning every enterprise computing system we ever built in the past. But the cloud offers economies of scale and flexibility that make it a great addition to the overall IT infrastructure for companies of all sizes.

With a hybrid multicloud approach, you can choose what makes sense for each component, task, and project that you tackle. Maintain existing platforms to benefit from their rich heritage and integrate them with new capabilities and techniques when appropriate.

Another way of saying this is that you utilize the appropriate platform and technology for the task at hand.

The mainframe is a vital component

For large enterprises, the mainframe has been a vital cog in their IT infrastructure for more than 50 years. Mainframes continue to drive a significant portion of mission critical workload for big business.

Mainframes house more of the world’s structured enterprise data than any other platform. A large percentage of all enterprise transactions run on or interact with the mainframe to conduct business. The mainframe is used by 44 out of the top 50 worldwide banks, 10 out of the top 10 insurers and 18 out of the top 25 retailers.[1]

Clearly the mainframe continues to be an important platform for hosting and developing critical business applications. As such, it is a critical component that should be considered for enterprise hybrid multicloud implementations.

Application-Level Data Protection on Kubernetes

Challenges of system change

As we embark on our hybrid multicloud journey, we must embrace the challenges that are involved in integrating, managing, and utilizing a complex heterogeneous system of different platforms and technologies.

The goal is to bring order, control and insight to disparate environments. This means building resiliency and business continuity into the applications and systems. An outage anywhere in the hybrid multicloud should not cause transactions and business to cease operating.

Furthermore, security and data protection must be part of your strategy. Your customers do not care about the technology you use; they expect to be able to access your systems easily and to have their data protected. And with regulations like HIPAA, PCI-DSS, GDPR and more, your hybrid multicloud must be secure.

It is also challenging to manage systems that rely on multiple cloud service providers. Each provider has different configuration and security requirements, along with separate development and deployment techniques.

And let’s not forget that we are integrating many disparate components in a hybrid multicloud infrastructure, not just cloud providers. These are typically implemented, managed, and monitored in different ways, using different technologies. It is imperative that you build and acquire management solutions that can be used to manage and orchestrate the activities and projects across your environment with minimal disruption.

A rigorous plan for choosing multicloud management solutions that understand the cloud providers and on-premises technology that you use can be the difference between success and failure. Plan wisely!

The bottom line

Tackling modern technology is not as simple as “throw out the old and bring in the new.” You have to integrate the old and the new in order to continue to build business value. That means adopting a hybrid multicloud approach. This can deliver the most value to your organization, but it also requires being cognizant of the challenges and making plans to overcome them for your business.

To learn more about IT infrastructure for your hybrid multicloud environment, read this Forrester paper, Assess The Pain-Gain Tradeoff of Multicloud Strategies.

https://www.ibm.com/it-infrastructure/us-en/resources/hybrid-multicloud-infrastructure-strategy/

Multi-Cloud Strategy

What is a Multi-Cloud Strategy?

Why use a Multi-Cloud Strategy?

What are the Benefits of a Multi-Cloud Strategy?

How does a Multi-Cloud Strategy enable Digital Transformation?

How do you Develop a Multi-Cloud Strategy?

What are the Key Success Factors for a Multi-Cloud Strategy?

What is a Multi-Cloud Strategy?

A multi-cloud strategy is the utilization of two or more cloud computing services from any number of cloud providers that are compatible with and extend an organization’s private cloud capabilities. Generally, this means consuming Infrastructure-as-a-Service (IaaS) offerings provided by more than one cloud vendor as well as by on-premises or private cloud infrastructure.

Many organizations adopt a multi-cloud strategy for redundancy or to prevent vendor lock-in, while others adopt a multi-cloud approach for best fit-for-purpose to meet application needs, for example to take advantage of capacity or features available from a particular cloud provider, or to utilize services offered in a particular geography.

Why use a Multi-Cloud Strategy?

Organizations adopt an enterprise multi-cloud strategy for a number of reasons. Utilizing multiple cloud services from a variety of providers offers these advantages, amongst others:

Modernization: As organizations increasingly adopt cloud-native applications based on containers, microservices, and APIs, a multi-cloud strategy gives access to the broadest array of services while composing new applications.

Flexibility and Scalability: Using multiple cloud providers can prevent vendor lock-in, can provide leverage during vendor negotiations, and can also expose the organization to new capabilities unique to a second or third provider. Additionally, as demand varies, multi-cloud providers can support increases or decreases in capacity virtually instantaneously.

Enhance Best Practices: Apply best practices learned working with one cloud to other public and private clouds.

Regulatory Compliance: Not all cloud providers provide services or store data in every geography. A multi-cloud strategy can help ensure that an organization is in compliance with the broad range of regulatory and governance mandates, such as GDPR in Europe.

Deploy resilient and secure Kubernetes apps across multi-cloud

What are the Benefits of a Multi-Cloud Strategy?

Agility and Choice: Organizations adopting a multi-cloud strategy can support the needs of an entire application portfolio, and overcome challenges of legacy infrastructure and limited in-house capacity to achieve agility and flexibility needed to remain competitive in their markets. A solid multi-cloud strategy enables organizations to methodically migrate workloads and modernize their application portfolio with cloud-specific services best suited for each application.

Utilize Best of Breed Services: Organizations can pick the best cloud platform that offers the best possible technology solution at the most attractive price. Organizations can select from the physical location, database, service level agreement, pricing, and performance characteristics of each provider while crafting an overall cloud solution to meet pressing business needs.

Modernization and Innovation: Modern orchestration tools can automate management of a multi-cloud strategy, including cloud and on-premises workloads. This can free up valuable IT resources to focus on code modernization and innovation based on new services, products, and platforms that become available on a continual basis.

Enhanced Security: Multi-cloud strategies often include adopting a zero-trust approach to cloud security, which can help ensure the security of every cloud transaction and interaction. Although every major cloud provider offers state of the art physical security, logical security remains the responsibility of each organization using cloud providers for their IaaS platforms.

Price Negotiations: Utilizing multiple cloud providers offers pricing leverage to organizations, as providers are increasingly under competitive pressure to offer IaaS services to an increasingly savvy customer base. Organizations can compare different providers to secure the best possible price and payment terms for each contract.

Risk Reduction: Utilizing multiple cloud providers helps protect against infrastructure failure or cyber-attack. Organizations can rapidly failover workloads from one cloud provider to another and fail them back once the problem is solved.

Introduction to VMware Multi-Cloud Architecture and Strategy

How does a Multi-Cloud Strategy enable Digital Transformation?

Digital Transformation is achieved by utilizing applications to deliver services to customers, and to optimize business processes and supply chain operations. As organizations undertake their digital transformation journey, application modernization and a multi-cloud strategy support the needs of applications, both new and old. Digital transformation and application modernization are ongoing processes, not one-time tasks, and so new services and products offered by a range of cloud providers will factor into the continuous improvement of enterprise applications as digital transformation evolves into digital maturity.

IT organizations may find that certain workloads perform better on a given platform, while others work better with a service that is uniquely offered by a specific vendor. A multi-cloud strategy enables the development of the best possible platform for a given function.

How do you Develop a Multi-Cloud Strategy?

Organizations should start their multi-cloud strategy by first assessing application needs, as well as technical and business requirements (both cloud and on-premises based), to understand the motivation for adopting a multi-cloud strategy. Popular motivators include:

  • Lowering overall infrastructure costs by migrating workloads to the cloud provider with the most aggressive pricing models
  • Speeding application delivery by provisioning development resources when needed
  • Driving IT efficiency by freeing up manpower formerly utilized managing on-premises resources
  • Moving from CapEx to OpEx by eliminating in-house infrastructure entirely

Once needs are assessed, organizations should plan which cloud services will best fill those needs. A multi-cloud strategy should consider:

  • Existing applications, and whether they currently reside with a cloud provider
  • The unique benefits of each cloud provider and how they map to current needs
  • The overall relationship with the existing cloud provider portfolio
  • Whether there are concerns regarding vendor lock-in
  • Strategic or business benefits of a multi-cloud strategy, such as compliance or governance issues that would be solved or addressed

It is important to consider what roadblocks could impede a multi-cloud strategy. One of the major issues is siloed data that is locked into standalone databases, data warehouses, or data lakes with both structured and unstructured data, and block storage used for persistent volumes, all of which can be difficult to migrate. Organizations must also ensure that there is no more than one authoritative instance of any data set; otherwise it will be impossible to determine which copy is the ‘source of truth’ and which is an echo. Also, different cloud providers have different architectures and constructs that prevent simple migration of workloads, unless there is an abstraction layer that provides a consistent infrastructure environment.

Organizations should plan on implementing a multi-cloud governance strategy to ensure that policies are applied uniformly enterprise-wide and that business units are not utilizing ‘shadow IT’ resources instead of utilizing sanctioned platforms.

In this manner, IT becomes more of a broker than a developer, making cloud resources available and applying policies and best practices to ensure that each instance and deployment adheres to defined policies.

A major issue to avoid is utilizing older offerings or platform-as-a-Service (PaaS) when simple compute is required. Although PaaS offers many benefits, most offerings are not easily portable between cloud providers and should be avoided. Since many organizations utilize a multi-cloud strategy as part of an overall modernization effort, PaaS deployments should be migrated to containerized applications which inherently support multi-cloud strategies.

Finally, when selecting services, avoid the need to find the exact perfect match for every application or function. Platforms that meet all the defined needs are all an organization needs; searching for the ultimate cloud provider offering for a given application can lead to adoption of a number of one-off providers when the job could have been done just as well with existing cloud partner offerings. The old adage that ‘99 percent done is done’ should be applied.

Organizations should then develop multi-cloud pilots to gain competency in taking a multi-cloud strategy from planning to execution, including offering the necessary training and education for all stakeholders on what will change in their day-to-day activities.

Normalizing Multi-cloud Security Notifications

What are the Key Success Factors for a Multi-Cloud Strategy?

Know the Why of multi-cloud. Organizations must keep their objectives top of mind, whether it is modernization, cost savings, reducing vendor lock-in or eliminating on-premises IT infrastructure. This also should include buy-in from all stakeholders including executives.

Keep an eye on costs. Cloud platforms are different. Without an abstraction layer or another way to create consistent operations, the costs of operations, security, and governance can grow with the addition of each cloud.

Plan for needed skills. Multi-cloud adds complexity; it can be two to three times more complex than utilizing a traditional single-sourced cloud environment. Although management tools can mitigate some of this complexity, new skills will be required to manage a multi-cloud environment and to take advantage of the benefits of cloud-native application strategies. Whether these skills come from training existing teams, hiring from outside, or leveraging integration partners, they will be required to get a multi-cloud strategy off the ground.

Measure Progress. Organization leaders will want to determine if a multi-cloud strategy is achieving its stated goals. Look for ways to measure the payback of this approach, either through return on investment (ROI) or by demonstrating reduced total cost of ownership (TCO) for IT over a given timeframe.

Document and report on outcomes and share the reports with stakeholders to grow confidence in the strategy enterprise-wide.

Think Modernization. If achieving modern, cloud-native operations is a goal, embrace modernization and encourage thinking outside the box as development, DevOps and deployment times all accelerate. Innovation that leads to better employee and customer engagement can pay off in improved revenue and profits, so embrace new methods of interacting such as chatbots and mobile applications.

Mastering the Hybrid Multicloud World

It’s Critical that On Premises and Cloud Work Together

It’s easy to see why today’s organizations are flocking to the cloud. Hyperscalers give software developers access to a wide range of resources for creating and managing applications. They also enable rapid scaling, and they foster innovation by making it easy for developers to incorporate new features. As millions of customers provide feedback, new iterations are constantly being built and deployed.

For organizations, it makes sense to take advantage of the cloud’s innovations and scaling capabilities by using SaaS applications for standard business functions such as Customer Relationship Management (CRM), office productivity software, and video conferencing. Fifty-four percent of businesses say they have moved on-premises applications to the cloud, and 46% have created applications expressly built for cloud use, according to the IDG 2020 Cloud Computing Survey.

In the multi-tenant public cloud, organizations avoid the heavy capital expense of purchasing infrastructure, and pay-as-you-go pricing allows them to avoid spending money on unused capacity.

Still, many organizations prefer to hold the more individualized and sensitive parts of their business processes – applications controlling finance, manufacturing, or supply chain operations, for example – in the data center.  This hybrid cloud model allows IT to focus on hosting internally the services that make the company unique.

Hybrid- and Multi-Cloud by design - IBM Cloud and your journey to Cloud

From Hybrid Cloud to Multicloud

The norm for today’s enterprises is multicloud. Fifty-five percent of companies are using at least two public clouds in addition to their own data centers, the IDG survey found.

One reason is that AWS, Microsoft Azure, and the Google Cloud each have different features and pricing structures. IT managers make choices based on the performance and services a platform offers, which vary according to application type. IT leaders also optimize costs by selecting the storage options best suited to their needs.

And because the public cloud is a dynamic environment, with providers continually creating new services, a multicloud strategy allows organizations to avoid vendor lock-in and take advantage of these innovations as they are introduced.

Multi-Cloud Kubernetes Management and Operations


Management and Data Challenges

The sprawling multi-cloud-and-on-premises environment gives IT leaders a wide array of choices for managing resources and data. While having more options is a boon, 46% of technology leaders in the survey said it has also increased management complexity.

IT teams must constantly evaluate the environment and decide where it is best to locate workloads. Some decisions are relatively straightforward. Security or compliance regulations keep certain applications on premises. Another big issue is lag. Kim Stevenson, Senior Vice President and General Manager of NetApp Foundational Data Services, points out that “Some applications don’t tolerate even a nanosecond of latency.”

But for many applications, decisions aren’t so clear-cut. Technology leaders must weigh their options, running calculations to determine the advantages and disadvantages of on premises versus Cloud A or Cloud B.

Sometimes it makes sense to move applications permanently to the cloud. Other times, it may be better to shuttle them between cloud and data center as the organization grows, tries out new services, or responds to changing demands.

“If you’re in retail, you need to do a lot more credit card processing on Black Friday. If you’re an accounting firm, you need to do a lot of tax application processing in the first quarter. At the end of the fiscal year, you may want to tier off older data to object storage,” Stevenson says.

But applications and data don’t always move smoothly between the on-premises environment and the cloud. Inconsistent data formatting can lead to confusion and errors.

For example, dates can be expressed in several different formats, making data containing them difficult to transfer.  Customer records may contain 16 characters in some data stores and 20 in others. If a company moves them from a 20-character to a 16-character format, IT must pause to determine whether any important information will be lost, and if so, what to do about it.
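
To make this concrete, here is a minimal Python sketch (with hypothetical field names, formats, and widths) of the kind of normalization step IT might run before moving records between the data center and a cloud data store: dates are converted to a single ISO 8601 format, and customer identifiers are padded to a common width rather than silently truncated.

    from datetime import datetime

    # Hypothetical source formats seen across different data stores.
    DATE_FORMATS = ("%d/%m/%Y", "%m-%d-%Y", "%Y%m%d")

    def normalize_date(value: str) -> str:
        """Convert a date string in any known format to ISO 8601 (YYYY-MM-DD)."""
        for fmt in DATE_FORMATS:
            try:
                return datetime.strptime(value, fmt).date().isoformat()
            except ValueError:
                continue
        raise ValueError(f"Unrecognized date format: {value!r}")

    def normalize_customer_id(value: str, width: int = 20) -> str:
        """Pad or validate a customer ID so every store uses the same width."""
        if len(value) > width:
            # Flag the record instead of silently truncating, so nothing is lost.
            raise ValueError(f"ID {value!r} longer than {width} characters")
        return value.rjust(width, "0")

    record = {"order_date": "31/12/2021", "customer_id": "A12345"}
    record["order_date"] = normalize_date(record["order_date"])
    record["customer_id"] = normalize_customer_id(record["customer_id"])
    print(record)  # {'order_date': '2021-12-31', 'customer_id': '00000000000000A12345'}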

Because data about application use and costs is scattered across public clouds and the data center, it’s tough for IT to see the big picture. Different clouds use different management tools, making it even harder to gain visibility into actual IT resource usage and to forecast spend predictably.

Taming Multi-Cloud, Hybrid Cloud, Docker, and Kubernetes

Improving Operations with Unified Management

Today’s technology makes managing the multicloud, hybrid environment much easier. Solutions such as NetApp ONTAP standardize data architecture, so companies can move applications at will and automatically tier off old data to cheaper storage without worrying about quality control. They have strong and consistent security protections surrounding their data wherever it goes.

IT leaders can also see and manage infrastructure both at home and across multiple public clouds – all from one central control plane. A unified management platform also enables cloud features like automation and advanced AI algorithms to be extended to applications in the data center.

“A single management console helps you do two things,” Stevenson says. “It diagnoses problems and shows you where they’re located, and it gives you the tools to solve them.”

Administrators can manage everything with a single toolset, making training easier and avoiding the confusion that can arise when switching among on-premises and public clouds.

Managers can view resources across the entire organization or parse them according to business unit or service type. This unparalleled visibility enables them to avoid guesswork when creating a technology strategy, as well as make informed decisions based on reliable and timely operational data.

The state of the cloud csa survey webinar

Businesses can also increase agility by scaling compute and storage resources separately, helping them respond better to shifting workloads and customer demands. Remote teams can collaborate seamlessly using data from both on-premises storage and the cloud.

Making Better Choices

The hybrid, multicloud environment gives companies choices, but without a coherent framework, conflicts and inefficiencies are bound to arise.

Today’s technology allows IT leaders to literally see what they’re doing and judge how one move on the chessboard will affect other pieces of the business. Whether they’re building their own private clouds or deploying resources in public ones, they can make sound, data-driven decisions about operations, costs, scaling, and services. By bringing the best capabilities of the cloud to the data center, IT leaders can finally achieve their elusive goal of aligning IT strategy with business strategy.

“The cloud and the on-premises environments will continue to coexist for a long time,” Stevenson says. “Organizations that enable them to work well together will realize the full benefits of both, giving them a competitive edge.”

Multi-Cloud Connectivity and Security Needs of Kubernetes Applications

Application initiatives are driving better business outcomes, an elevated customer experience, innovative digital services, and the anywhere workforce. Organizations surveyed by VMware report that 90% of app initiatives are focused on modernization(1). Using a container-based microservices architecture and Kubernetes, app modernization enables rapid feature releases, higher resiliency, and on-demand scalability. This approach can break apps into thousands of microservices deployed across a heterogeneous and often distributed environment. VMware research also shows 80% of surveyed customers today deploy applications in a distributed model across data center, cloud, and edge(2).

Enterprises are deploying their applications across multiple clusters in the data center and across multiple public or private clouds (as an extension of on-premises infrastructure) to support disaster avoidance, cost reduction, regulatory compliance, and more.

 Applications Deployed in a Distributed Model

Fig 1: Drivers for Multi-Cloud Transformation 


The Challenges in Transitioning to Modern Apps 

While app teams can quickly develop and validate Kubernetes applications in dev environments, a very different set of security, connectivity, and operational considerations awaits networking and operations teams deploying applications to production environments. These teams face new challenges as they transition to production with existing applications — even more so when applications are distributed across multiple infrastructures, clusters, and clouds. Some of these challenges include:

Application connectivity across multi-cluster, multi-cloud, and VM environments 

Application teams developing new applications using a microservices architecture need to be concerned about how to enable connectivity between microservices deployed as containers and distributed across multiple clouds and hybrid environments (data centers and public clouds).

Private cloud in the Hybrid Era

Additionally, some of these application components reside in VM environments. For example, a new eCommerce app designed with a microservices architecture may need to contact a database running in a VMware vSphere environment or in the cloud. The lack of seamless connectivity between these heterogeneous environments (container-based vs. VM-based) is one of the reasons that prevent enterprises from meeting time-to-market requirements and slows down their app modernization initiatives, as they are unable to re-use their existing application components.

Consistent end-to-end security policies and access controls 

With heterogeneous application architectures and infrastructure environments, the trusted perimeter has dissolved, and enterprises are seeing breaches that continue to grow via exploits, vulnerabilities, phishing attacks, and more. Modern applications raise several security challenges, such as how to secure connectivity not only from end-users into Kubernetes clusters, but across clusters, availability zones, and sites and between containerized and virtual machine environments.


Fig 2: Increased Attack Surface 

Teams need to more effectively ensure that users are given the right access permissions to applications; that application components are properly ring-fenced; and that communications across hybrid infrastructures and workloads are secured. Identity based on IP addresses, and intent based on ports, are insufficient for modern applications. What is needed is end-to-end deep visibility from end-users to apps to data, and an extension of the principles of zero trust network access (ZTNA) to these modern applications.

Operational complexity — multiple disjointed products, no end-to-end observability 

The responsibility for secure, highly available production rollouts of Kubernetes falls on application platform teams. However, they are confronted with a vast array of open-source components that must be stitched together to achieve connectivity, availability, security, and observability — including global and local load balancers, ingress controllers, WAF, IPAM, DNS, sidecar proxies, policy frameworks, identity frameworks, and more.

Multiple disjointed products, no end-to-end observability

Fig 3: Multiple components need to be managed separately

Platform teams need a way to centrally control traffic management and security policies across the full application operating environment. They also need a way to gain end-to-end visibility across multiple K8s environments and entire application topologies, including application dependencies, metrics, traces, and logs. The end result of this complexity is usually a compromise of only partial visibility, automation, and scalability, which causes many projects to fail.

All these challenges and more are driving us to further evolve our networking and security thinking for modern apps. We simply cannot afford to continue to rely solely on the network architectures of the last decade. More versatile and flexible models are needed to address connectivity, security, and operational requirements in this rapidly evolving world.

VMware Modern Apps Connectivity Solution   

VMware is introducing a new solution that brings together the advanced capabilities of Tanzu Service Mesh and VMware NSX Advanced Load Balancer (formerly Avi Networks) to address today’s unique enterprise challenges.

The VMware Modern Apps Connectivity solution offers a rich set of integrated application delivery services through unified policies, monitoring, visualizations, and observability. These services include enterprise-grade L4 load balancing, ingress controller, global load balancing (GSLB), web application security, integrated IPAM and DNS, end-to-end service visibility and encryption, and an extensible policy framework for intelligent traffic management and security. Through the integrated solution, operators can centrally manage end-to-end application traffic routing, resiliency, and security policies via Tanzu Service Mesh.

This solution speeds the path to app modernization with connectivity and better security across hybrid environments and hybrid app architectures. It is built on cloud-native principles and enables a set of important use-cases that automates the process of connecting, observing, scaling, and better-securing applications across multi-site environments and clouds.

VMware Modern Apps Connectivity Solution  

The VMware Modern App Connectivity solution works with VMware Tanzu, Amazon EKS, and upstream Kubernetes today, and is in preview with Red Hat OpenShift, Microsoft Azure AKS, and Google GKE(3). As a leader in delivering the Virtual Cloud Network, VMware understands the challenges of creating operationally simple models for modern app connectivity and security. The solution closes the dev-to-production gap caused by the do-it-yourself approach forced on many networking teams who are under pressure to launch reliable, business-critical services that work consistently across heterogeneous architectures and environments.

More Information:

https://blogs.vmware.com/networkvirtualization/2021/05/multi-cloud-connectivity-security-kubernetes.html/

https://open-security-summit.org/sessions/2021/mini-summits/nov/kubernetes/developing-secure-multi-cloud-applications/

https://www.redhat.com/en/about/press-releases/red-hat-extends-foundation-multicloud-transformation-and-hybrid-innovation-latest-version-red-hat-enterprise-linux

https://www.ibm.com/cloud/blog/distributed-cloud-vs-hybrid-cloud-vs-multicloud-vs-edge-computing-part-1

https://www.ibm.com/blogs/systems/hybrid-multicloud-a-mouthful-but-the-right-approach/

https://www.ibm.com/cloud/architecture/architectures/public-cloud/

 https://www.ibm.com/blogs/think/2019/11/how-a-hybrid-multicloud-strategy-can-overcome-the-cloud-paradox/


23 May 2022

What's new in Red Hat Enterprise Linux 9

 


RHEL 9’s Release Indicates a Turning Point for Red Hat

Can RHEL 9 put the company's CentOS debacle behind it? The future's at least more secure.

The latest release of Red Hat Enterprise Linux version 9 into production marks a significant point in the company’s history. It’s the first version preceded in developmental terms by CentOS Stream, an OS that’s production-ready yet is something of a late-stage testing ground for new-ish features (trialed initially in Fedora) that will eventually percolate to grown-up Red Hat Enterprise Linux.

Red Hat Enterprise Linux 9: Stable anywhere. Available everywhere.


The biggest changes in RHEL 9 are in security and compliance. The latter in particular was for so long the ugly step-sister of enterprise Linux, yet it is increasingly becoming a core pillar on which businesses can operate legally and more securely.

To help companies do more than engage in box-ticking exercises for governance like PCI-DSS compliance, security options now include smartcard authentication, more detailed SSSD logging, use of OpenSSL 3 by default, and removal of root password access to a RHEL box via SSH. Kernel patches can now also be applied on servers without rebooting, from a sysadmin's web console, and there are built-in checks against hardware-layer vulnerabilities like Meltdown and Spectre.

There are improvements for Red Hat-flavored containers in Podman, and UBI images have been updated in their standard, micro, mini, and init forms. Container validation is improved, so there’s less danger of time-poor developers pulling rogue containers from spoofed domains.

Red Hat’s official press releases of RHEL version 9 stress the edge capabilities of the OS under the hood, making it easier for organizations to create canonical images that can be rolled out quickly at scale. There’s also a Podman roll-back capability that detects if new containers won’t start and will quietly replace the new with the (working) old.

Of interest to developers are newer default versions of Python (3.9) and GCC (11), plus the latest versions of Rust and Go. Applications in Flatpaks are fully welcomed (the current vogue for immutable Linux distributions takes another step towards the mainstream), but RPMs are clearly not going anywhere just yet.

Red Hat’s other significant turning point is that RHEL 9 might just draw to a close the absolute class-A public relations SNAFU the company presided over when CentOS was discontinued. Or, to be more particular, when it was transitioned from an OS running in parallel with RHEL to a leading-edge, semi-rolling version of the more stable, licensed, production-ready RHEL OS.


The phrase “mis-communication” tends to cover up any number of mistakes in business environments, ranging from a misdirected email, to, in Red Hat’s case, a full-on mishandling of product announcements that had incendiary effects in the business technology community.

But “mis-communications” aside, the Red Hat stable’s lineup appears to be more settled and accepted than a year ago. Registered users can run RHEL on a dozen or so instances without forking out for license fees, and Stream 8 is gradually finding itself in production too. The company’s Matthew Hicks (executive VP for products and technologies) said, “[…] Red Hat Enterprise Linux 9 extends wherever needed […], pairing the trusted backbone of enterprise Linux with the innovative catalysts of open source communities.” The community’s “innovative catalysts” have only just finished licking the wounds inflicted by a Red Hat marketing division that many would expect to have experienced a few personnel changes of late.

Red Hat Insights Overview

Red Hat Enterprise Linux (RHEL) 9 is now generally available (GA). This release is designed to meet the needs of the hybrid cloud environment, and is ready for you to develop and deploy from the edge to the cloud. It can run your code efficiently whether deployed on physical infrastructure, in a virtual machine, or in containers built from Red Hat Universal Base Images (UBIs).

RHEL 9 can be downloaded for free as part of the Red Hat Developer program subscription. In this article, you'll learn some of the ways that RHEL 9 can improve the developer experience.

Get access to the latest language runtimes and tools

Red Hat Enterprise Linux 9 is built with a number of the latest runtimes and compilers, including GCC 11.2 and updated versions of LLVM (12.0.1), Rust (1.54.0), and Go (1.16.6), enabling developers to modernize their applications.

RHEL 9 ships with updated versions of core developer toolchains such as GCC (11.2), glibc (2.34), and binutils (2.35). The new features in the GCC compiler help users better track code flow, improve debugging options, and write optimized code for efficient hardware usage. The new GCC compiler comes with modifications for C and C++ code compilation, along with new debugging messages for logs. That gives developers a better handle on how their code performs.

With next-generation application streams, developers will have more choices when it comes to versions of popular languages and tools. Red Hat Enterprise Linux 9 improves the application streams experience by providing initial application stream versions that can be installed as RPM packages using the traditional yum install command. Developers can select from multiple versions of user-space components as application streams that are easy to update, providing greater flexibility to customize RHEL for their development environment. Application stream contents also include tools and applications that move very fast and are updated frequently. These application streams, called rolling streams, are fully supported for the full life of RHEL 9.

In the Clouds (E23) | Red Hat Enterprise Linux 9 preview

Red Hat Enterprise Linux 9 extends RHEL 8's module packaging features. With RHEL 9, all packaging methods, such as Red Hat Software Collections, Flatpaks, and traditional RPMs, have been incorporated into application streams, making it easier for developers to use their preferred packages.

Support for newer versions of language runtimes

Python 3.9 gets lifetime support in Red Hat Enterprise Linux 9 and comes with a host of new features, including timezone-aware timestamps, new string prefix and suffix methods, dictionary union operations, high-performance parsers, multiprocessing improvements, and more. These features will help developers modernize their applications easily.
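
As a quick illustration, this short Python 3.9 sketch exercises three of those additions: the zoneinfo module for timezone-aware timestamps, the new removeprefix()/removesuffix() string methods, and the dictionary union operator. The package and stream names are purely illustrative.

    from datetime import datetime
    from zoneinfo import ZoneInfo  # new in Python 3.9

    # Timezone-aware timestamp using the IANA time zone database
    now = datetime.now(tz=ZoneInfo("Europe/Amsterdam"))
    print(now.isoformat())

    # New string prefix/suffix removal methods
    package = "rhel9-podman.rpm"
    print(package.removeprefix("rhel9-").removesuffix(".rpm"))  # podman

    # Dictionary union operator
    defaults = {"arch": "x86_64", "stream": "stable"}
    overrides = {"stream": "rolling"}
    print(defaults | overrides)  # {'arch': 'x86_64', 'stream': 'rolling'}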

Node.js 16 provides changes that include an upgrade of the V8 engine to version 9.2, a new Timer Promises API, a new experimental web streams API, and support for npm package manager version 7.20.3. Node.js is now compatible with OpenSSL 3.0.

Ruby 3.0.2 provides several performance improvements, along with bug and security fixes. Some of the important improvements include concurrency and parallelism, static analysis, pattern matching with case/in expressions, redesigned one-line pattern matching, and find pattern matching.

Perl 5.32 provides a number of bug fixes and enhancements, including Unicode version 13, a new experimental infix operator, faster feature checks, and more.

PHP 8.0 provides several bug fixes and enhancements, such as the use of structured metadata syntax, named arguments that are order-independent, improved performance through just-in-time compilation, and more.

High Performance Computing (HPC) with Red Hat


Build Red Hat Enterprise Linux images for development and testing

Image builder is a tool that allows users to create custom RHEL system images in a variety of formats for major and minor releases. These images are compatible with major cloud providers and the virtualization technologies popular in the market. This enables users to quickly spin up customized RHEL development environments on local, on-premises, or cloud platforms.

With image builder, custom filesystem configurations can be specified in blueprints to create images with a specific disk layout, instead of using the default layout configuration.

Image builder can be used to create bootable ISO installer images. These images consist of a tarball that contains a root filesystem that you can use to install directly to a bare metal server, which is ideal for bringing up test hardware for edge developments.

Monitor and maintain Red Hat Enterprise Linux environments

The Red Hat Enterprise Linux 9 web console has an enhanced performance metrics page that helps identify potential causes of high CPU, memory, disk, and network resource usage spikes. In addition, subsystem metrics can be easily exported to a Grafana server.

RHEL 9 also now supports kernel live patching via the web console. The latest critical kernel security patches and updates can be applied immediately without any need for scheduled downtime, and without disrupting ongoing development or production applications.

Build containers with Universal Base Images

Red Hat Enterprise Linux 9 ships with control groups version 2 (cgroup v2) enabled by default and a recent release of Podman with improved defaults. Signature and container short-name validation are enabled by default, and containerized applications can be tested on the out-of-the-box RHEL 9 configuration.

The RHEL 9 UBI is available in standard, micro, minimal or init image configurations, which range in size from as small as 7.5MB up to 80MB. Learn more about how to build, run, and manage containers.

Identity and security

With Red Hat Enterprise Linux 9, root user authentication with a password over SSH has been disabled by default. The OpenSSH default configuration disallows root user login with a password, thereby preventing attackers from gaining access through brute-force password attacks. Instead of using the root password, developers can access remote development environments using SSH keys to log in.
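
As an illustration of key-based access, the following Python sketch uses the third-party paramiko library (an assumption for this example, not something RHEL ships or requires); the hostname, user, and key path are placeholders.

    import paramiko

    # Connect with an SSH key instead of a root password, which RHEL 9
    # disallows by default for the root account.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        hostname="rhel9.example.com",              # placeholder host
        username="devuser",                        # unprivileged user, not root
        key_filename="/home/devuser/.ssh/id_ed25519",
    )
    stdin, stdout, stderr = client.exec_command("cat /etc/redhat-release")
    print(stdout.read().decode().strip())
    client.close()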

OpenSSL 3.0 adds a provider concept, a new versioning scheme, and an improved HTTPS. Providers are collections of algorithm implementations. Developers can programmatically invoke any providers based on application requirements. Built-in RHEL utilities have been recompiled to utilize OpenSSL 3. This allows users to take advantage of new security ciphers for encrypting and protecting information.
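
A quick way to check which OpenSSL library a Python runtime on RHEL 9 is linked against is to inspect the ssl module, as in this small sketch:

    import ssl

    # Report the OpenSSL library this interpreter was built against.
    print(ssl.OPENSSL_VERSION)        # e.g. an "OpenSSL 3.0.x" string on RHEL 9
    print(ssl.OPENSSL_VERSION_INFO)   # version tuple; the first field should be 3

    # Default SSL contexts pick up the system-wide crypto policies.
    ctx = ssl.create_default_context()
    print(ctx.minimum_version)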

We are excited to announce the availability of Red Hat Enterprise Linux 9 (RHEL 9), the latest release of the world’s leading enterprise Linux platform. RHEL 9 provides a more flexible and stable foundation to support hybrid cloud innovation and a faster, more consistent experience for deploying applications and critical workloads across physical, virtual, private and public cloud and edge deployments.

Proactive Threat Hunting in Red Hat Environments With CrowdStrike


What’s new?

RHEL 9 includes features and enhancements to help achieve long-term IT success by using a common, flexible foundation to support innovation and accelerate time to market.

Primary features and benefits

Here are a few highlights of what’s included in RHEL 9.

A new platform for developers today and in the future

- Completing the migration to Python 3, version 3.9 will be the default Python for the life of RHEL 9. Python 3.9 brings several new enhancements, including timezone-aware timestamps, new string prefix and suffix methods, and dictionary union operations to help developers modernize existing apps.

- RHEL 9 is also built with GCC 11 and the latest versions of LLVM, Rust and Go compilers. RHEL 9 is based on glibc 2.34 for 10+ years of enterprise-class platform stability.

- And finally, for the first time in RHEL, Link Time Optimization (LTO) will be enabled by default in userspace for deeper optimization of application code to help build smaller, more efficient executables.

Easy contribution path to future versions of RHEL

Organizations can now develop, test and contribute to a continuously-delivered distribution that tracks just ahead of RHEL. CentOS Stream, an upstream open source development platform, provides a seamless contribution path to the next minor release. RHEL 9 is the first RHEL major release built from CentOS Stream, and the RHEL 9 Beta was first available as CentOS Stream 9. All future RHEL 9 releases will be built from CentOS Stream.

Next-generation application streams

Building on the introduction of application streams and module packaging in RHEL 8, all packaging methods in RHEL 9 are incorporated into application streams, including modules, SCLs, Flatpaks and traditional RPMs, making them much easier to use.

Continuing commitment to multiple architecture support

Open source software gives users greater control over their digital future by preventing workloads from being locked into a specific vendor. RHEL extends this control beyond the source code by supporting diverse CPU architectures for users whose business environments keep evolving. Whether you're running your workload on x86_64, aarch64, IBM POWER9 and Power10, or IBM Z, we have you covered.

Container improvements

If you're building applications with universal base image (UBI) container images, you'll want to check out the RHEL 9 UBI images. The standard UBI image is available, as are micro, minimal and the init image. To get the entire experience, test the UBI images on a fully subscribed RHEL 9 container host, allowing you to pull additional RPMs from the RHEL 9 repositories.

Ansible 101: An introduction to Automating Everything with Red Hat Training.

RHEL for edge

RHEL 9 introduces automatic container updates and rollbacks, which expands the capacity to update container images automatically. Podman can now detect if an updated container fails to start and automatically roll the configuration back. Together with existing OS-level rollbacks, this provides new levels of reliability for applications.

Image Builder as-a-Service

Enhancements to Image Builder in RHEL 9 help organizations save time and drive system consistency at scale. With the new Image Builder as-a-Service, organizations can now build a standardized and optimized operating system image through our hosted service and deploy it to a cloud provider of choice.

Identity and security

New capabilities added to RHEL 9 help simplify how organizations manage security and compliance when deploying new systems or managing existing infrastructure. RHEL 9 now offers Integrity Measurement Architecture (IMA) to dynamically verify the integrity of the OS to detect if it has been compromised. RHEL 9 has also been enhanced to include digital signatures and hashes that help organizations detect rogue modifications across the infrastructure.

Automation and management

Organizations now have access to the enhanced performance metrics page in the RHEL 9 web console to help identify potential causes of high CPU, memory, disk and network resource usage spikes. In addition, customers can more easily export metrics to a Grafana server. Kernel live patch management is also available via the web console to significantly reduce the complexity of performing critical maintenance. The console also adds a simplified interface for applying kernel updates without using command line tooling. 

Why You Should Migrate to Red Hat Linux from CentOS

Predictive analytics

Red Hat Insights now encompasses Resource Optimization, which enables right-sizing RHEL in the public cloud. Resource Optimization does this by evaluating performance metrics to identify workload utilization. Insights then provides visibility and recommendations for optimizing to a more suitable instance for the workload needs. Insights also adds Malware Detection, a security assessment that analyzes RHEL systems across the enterprise for known malware signatures and provides detailed visibility into the risk.

Red Hat Enterprise Linux (RHEL) has been the Linux for business for a generation now. Today, RHEL touches more than $13 trillion of the global economy. Remember when people used to think Linux couldn't handle big business? Ha! With the release of RHEL 9 at the Red Hat Summit in Boston, Red Hat improved its offerings from the open hybrid cloud to bare metal servers to cloud providers and the farthest edge of enterprise networks. 

Customers want better security, and with RHEL 9 Red Hat delivers it. Beyond the usual RHEL hardening, testing, and vulnerability scanning, RHEL 9 incorporates features that help address hardware-level security vulnerabilities like Spectre and Meltdown. This includes capabilities to help user-space processes create memory areas that are inaccessible to potentially malicious code. The platform provides readiness for customer security requirements as well, supporting PCI-DSS, HIPAA, and more.

Specific security features:

- Smart Card authentication: Users can make use of smart card authentication to access remote hosts through the RHEL web console (Sudo, SSH, etc.).

- Additional security profiles: These improve security intelligence gathering and remediation services, such as Red Hat Insights and Red Hat Satellite, with support for security standards such as PCI-DSS and HIPAA.

- Detailed SSSD logging: SSSD, the enterprise single-sign-on framework, now includes more details for event logging. This includes time to complete tasks, errors, authentication flow, and more. New search capabilities also enable you to analyze performance and configuration issues.

- Integrated OpenSSL 3: RHEL 9 supports the new OpenSSL 3 cryptographic framework. RHEL's built-in utilities have been recompiled to utilize OpenSSL 3.

- SSH root password login disabled by default: Yes, I know you ssh into your server with root passwords all the time, but it's never been a smart idea. By default, RHEL 9 won't let you do this. Yes, this is annoying, but it's even more annoying to hackers trying to log in as `root` using brute-force password attacks. All in all, this is a win in my book.

In this release, Red Hat also introduces Integrity Measurement Architecture (IMA) digital hashes and signatures. With IMA, users can verify the integrity of the operating system with digital signatures and hashes. With this, you can detect rogue infrastructure modifications, so you can stop system compromises in their tracks.

MEC: Multi-access Edge Computing: state of play from ETSI MEC and network automation perspectives

Red Hat is also adopting, via Kubernetes, Sigstore for signing artifacts and verifying signatures. Sigstore is a free software signing service that improves software supply chain security by making it easy to sign release files, container images, and binaries cryptographically. Once signed, the signing record is kept in a tamper-proof public log. Sigstore will be free to use by all developers and software providers. This gives software artifacts a safer chain of custody that can be secured and traced back to their source. Looking ahead, Red Hat will adopt Sigstore in OpenShift, Podman, and other container technologies.

This release has many new edge features. These include:

- Comprehensive edge management, delivered as a service, to oversee and scale remote deployments with greater control and security functionality, encompassing zero-touch provisioning, system health visibility, and more responsive vulnerability mitigations, all from a single interface.

- Automatic container roll-back with Podman, RHEL's integrated container management technology. This automatically detects if a newly-updated container fails to start. In this case, it then rolls the container back to the previous working version.

- The new RHEL also includes an expanded set of RHEL System Roles. These enable you to create specific system configurations automatically. So, for instance, if you need RHEL set up just for Postfix, high-availability clusters, a firewall, Microsoft SQL Server, or a web console, you're covered.

- Besides roles, RHEL 9 makes it easier to build new images: You can build RHEL 8 and RHEL 9 images via a single build node. It also includes better support for customized file systems (non-LVM mount points) and bare-metal deployments.

- If you're building Universal Base Image (UBI) containers, you can create them not only with standard UBI images but with micro, minimal, and init images as well. You'll need a fully subscribed RHEL 9 container host to do this, which enables you to pull additional RPMs from the RHEL 9 repositories.

- RHEL 9 now uses cgroup v2 by default, and Podman, Red Hat's daemonless drop-in replacement for Docker, uses signature and short-name (e.g., ubi8 instead of registry.access.redhat.com/ubi8/ubi) validation by default when pulling container images.

- And, of course, Red Hat being Red Hat, RHEL 9 ships with GCC 11 and the latest versions of the LLVM, Rust, and Go compilers, and Python 3.9 is RHEL 9's default version of Python.

Thinking of the console, the new RHEL also supports kernel live patching from the console. With this, you can apply patches across large, distributed system deployments without having to write a shell program. And, since it's live patching, your RHEL instances can keep running even as they're being patched.

Put it all together, and you get a solid business Linux for any purpose. Usually, we wait before moving from one major release to another. This time you may want to go ahead and jump to RHEL 9 sooner rather than later.

More Information:

https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/try-it

https://developers.redhat.com/articles/2022/05/18/whats-new-red-hat-enterprise-linux-9

https://www.redhat.com/en/blog/hot-presses-red-hat-enterprise-linux-9

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9

https://www.zdnet.com/article/red-hat-enterprise-linux-9-security-baked-in/

https://www.unixsysadmin.com/rhel-9-resources/


25 April 2022

Azure DDOS Security and DDOS Attack Prevention

 

Azure DDoS Protection | Distributed Denial of Service

Azure Security Center

Fundamental best practices

The following sections give prescriptive guidance to build DDoS-resilient services on Azure.

Design for security

Ensure that security is a priority throughout the entire lifecycle of an application, from design and implementation to deployment and operations. Applications can have bugs that allow a relatively low volume of requests to use an inordinate amount of resources, resulting in a service outage.

To help protect a service running on Microsoft Azure, you should have a good understanding of your application architecture and focus on the five pillars of software quality. You should know typical traffic volumes, the connectivity model between the application and other applications, and the service endpoints that are exposed to the public internet.

Ensuring that an application is resilient enough to handle a denial of service that's targeted at the application itself is most important. Security and privacy are built into the Azure platform, beginning with the Security Development Lifecycle (SDL). The SDL addresses security at every development phase and ensures that Azure is continually updated to make it even more secure.

Microsoft Security Virtual Training Day: Security, Compliance and Identity Fundamentals 1

Design for scalability

Scalability is how well a system can handle increased load. Design your applications to scale horizontally to meet the demand of an amplified load, specifically in the event of a DDoS attack. If your application depends on a single instance of a service, it creates a single point of failure. Provisioning multiple instances makes your system more resilient and more scalable.

For Azure App Service, select an App Service plan that offers multiple instances. For Azure Cloud Services, configure each of your roles to use multiple instances. For Azure Virtual Machines, ensure that your virtual machine (VM) architecture includes more than one VM and that each VM is included in an availability set. We recommend using virtual machine scale sets for autoscaling capabilities.
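
As a hedged illustration only (the subscription ID is a placeholder, and the use of the azure-identity and azure-mgmt-compute packages is an assumption of this example), the following Python sketch lists virtual machine scale sets and their current capacity, which is one simple way to confirm that a workload is not depending on a single instance:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Placeholder subscription ID; authentication uses the standard Azure credential chain.
    subscription_id = "00000000-0000-0000-0000-000000000000"
    compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

    # List every VM scale set in the subscription and its current instance count.
    for vmss in compute.virtual_machine_scale_sets.list_all():
        print(f"{vmss.name}: {vmss.sku.capacity} instance(s) in {vmss.location}")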

Defense in depth

The idea behind defense in depth is to manage risk by using diverse defensive strategies. Layering security defenses in an application reduces the chance of a successful attack. We recommend that you implement secure designs for your applications by using the built-in capabilities of the Azure platform.

For example, the risk of attack increases with the size (surface area) of the application. You can reduce the surface area by using an approval list to close down the exposed IP address space and listening ports that are not needed on the load balancers (Azure Load Balancer and Azure Application Gateway). Network security groups (NSGs) are another way to reduce the attack surface. You can use service tags and application security groups to minimize complexity for creating security rules and configuring network security, as a natural extension of an application’s structure.
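
As a concrete illustration of reducing the attack surface with service tags, here is a minimal sketch assuming the azure-mgmt-network Python SDK. The resource group, NSG name, rule name, and port are hypothetical; the idea is that only the traffic you actually need (HTTPS in this example) is allowed in.

```python
# Hypothetical sketch: an NSG rule that admits only inbound HTTPS from the
# Internet service tag to the VirtualNetwork service tag, keeping every other
# listening port closed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.security_rules.begin_create_or_update(
    resource_group_name="rg-ddos-demo",        # hypothetical names
    network_security_group_name="nsg-web",
    security_rule_name="allow-https-inbound",
    security_rule_parameters={
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "Internet",             # service tag
        "source_port_range": "*",
        "destination_address_prefix": "VirtualNetwork",  # service tag
        "destination_port_range": "443",                 # only the port that is needed
    },
).result()
```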

You should deploy Azure services in a virtual network whenever possible. This practice allows service resources to communicate through private IP addresses. Azure service traffic from a virtual network uses public IP addresses as source IP addresses by default. Using service endpoints will switch service traffic to use virtual network private addresses as the source IP addresses when they're accessing the Azure service from a virtual network.
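
The following minimal sketch, again assuming azure-mgmt-network and using hypothetical resource names and address prefixes, enables a service endpoint for Azure Storage on a subnet so that traffic to the service uses private source addresses.

```python
# Hypothetical sketch: add a Microsoft.Storage service endpoint to a subnet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.subnets.begin_create_or_update(
    resource_group_name="rg-ddos-demo",
    virtual_network_name="vnet-app",
    subnet_name="snet-backend",
    subnet_parameters={
        "address_prefix": "10.0.1.0/24",
        "service_endpoints": [{"service": "Microsoft.Storage"}],
    },
).result()
```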

We often see customers' on-premises resources getting attacked along with their resources in Azure. If you're connecting an on-premises environment to Azure, we recommend that you minimize exposure of on-premises resources to the public internet. You can use the scale and advanced DDoS protection capabilities of Azure by deploying your well-known public entities in Azure. Because these publicly accessible entities are often a target for DDoS attacks, putting them in Azure reduces the impact on your on-premises resources.

Azure DDoS Protection | Distributed Denial of Service

Azure DDoS Protection Standard features

The following sections outline the key features of the Azure DDoS Protection Standard service.

Always-on traffic monitoring

DDoS Protection Standard monitors actual traffic utilization and constantly compares it against the thresholds defined in the DDoS Policy. When the traffic threshold is exceeded, DDoS mitigation is initiated automatically. When traffic returns below the thresholds, the mitigation is stopped.

Azure DDoS Protection Standard Mitigation

During mitigation, traffic sent to the protected resource is redirected by the DDoS protection service and several checks are performed, such as:

  • Ensure packets conform to internet specifications and are not malformed.
  • Interact with the client to determine whether the traffic is potentially spoofed (e.g., SYN auth, SYN cookie, or dropping a packet so the source retransmits it).
  • Rate-limit packets, if no other enforcement method can be applied.

DDoS protection drops attack traffic and forwards the remaining traffic to its intended destination. Within a few minutes of attack detection, you are notified through Azure Monitor metrics. By configuring diagnostic logging on DDoS Protection Standard telemetry, you can write the logs to Azure Storage, Event Hubs, or Azure Monitor logs for later analysis. Metric data in Azure Monitor for DDoS Protection Standard is retained for 30 days.
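
As one way to capture that telemetry, here is a minimal sketch assuming the azure-mgmt-monitor Python SDK: it enables DDoS diagnostic logs on a protected public IP and sends them to a Log Analytics workspace. The resource IDs are placeholders, and the log category names are assumptions to verify against your subscription.

```python
# Hypothetical sketch: route DDoS Protection logs and metrics for a public IP
# to a Log Analytics workspace for retention beyond the 30-day metric window.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

monitor_client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

public_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-ddos-demo"
    "/providers/Microsoft.Network/publicIPAddresses/pip-app"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-ddos-demo"
    "/providers/Microsoft.OperationalInsights/workspaces/law-ddos"
)

monitor_client.diagnostic_settings.create_or_update(
    resource_uri=public_ip_id,
    name="ddos-diagnostics",
    parameters={
        "workspace_id": workspace_id,
        "logs": [
            {"category": "DDoSProtectionNotifications", "enabled": True},
            {"category": "DDoSMitigationFlowLogs", "enabled": True},
            {"category": "DDoSMitigationReports", "enabled": True},
        ],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```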

Adaptive real time tuning

The complexity of attacks (for example, multi-vector DDoS attacks) and the application-specific behaviors of tenants call for per-customer, tailored protection policies. The service accomplishes this by using two insights:

  • Automatic learning of per-customer (per public IP) traffic patterns for Layer 3 and 4.
  • Minimizing false positives, considering that the scale of Azure allows it to absorb a significant amount of traffic.

Diagram of how DDoS Protection Standard works, with "Policy Generation" circled

DDoS Protection telemetry, monitoring, and alerting

DDoS Protection Standard exposes rich telemetry via Azure Monitor. You can configure alerts for any of the Azure Monitor metrics that DDoS Protection uses. You can integrate logging with Splunk (Azure Event Hubs), Azure Monitor logs, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.

DDoS mitigation policies

In the Azure portal, select Monitor > Metrics. In the Metrics pane, select the resource group, select a resource type of Public IP Address, and select your Azure public IP address. DDoS metrics are visible in the Available metrics pane.

DDoS Protection Standard applies three autotuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the metric Inbound packets to trigger DDoS mitigation.

Available metrics and metrics chart

The policy thresholds are autoconfigured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.

Metric for an IP address under DDoS attack

If the public IP address is under attack, the value for the metric Under DDoS attack or not changes to 1 as DDoS Protection performs mitigation on the attack traffic.

We recommend configuring an alert on this metric. You'll then be notified when there’s an active DDoS mitigation performed on your public IP address.
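
Below is a minimal sketch of such an alert, assuming the azure-mgmt-monitor Python SDK. The metric name IfUnderDDoSAttack (the portal's "Under DDoS attack or not"), the resource IDs, and the action group are assumptions you would substitute with your own values.

```python
# Hypothetical sketch: fire an alert as soon as DDoS mitigation starts on a
# protected public IP address.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
    MetricAlertAction,
)

monitor_client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

public_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-ddos-demo"
    "/providers/Microsoft.Network/publicIPAddresses/pip-app"
)
action_group_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-ddos-demo"
    "/providers/microsoft.insights/actionGroups/ag-oncall"
)

monitor_client.metric_alerts.create_or_update(
    resource_group_name="rg-ddos-demo",
    rule_name="alert-under-ddos-attack",
    parameters=MetricAlertResource(
        location="global",
        description="Fires when DDoS mitigation starts on the public IP",
        severity=1,
        enabled=True,
        scopes=[public_ip_id],
        evaluation_frequency="PT1M",
        window_size="PT5M",
        criteria=MetricAlertSingleResourceMultipleMetricCriteria(
            all_of=[
                MetricCriteria(
                    name="under-attack",
                    metric_name="IfUnderDDoSAttack",  # "Under DDoS attack or not"
                    operator="GreaterThanOrEqual",
                    threshold=1,
                    time_aggregation="Maximum",
                )
            ]
        ),
        actions=[MetricAlertAction(action_group_id=action_group_id)],
    ),
)
```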

For more information, see Manage Azure DDoS Protection Standard using the Azure portal.

Azure Network Security: DDoS Protection

Web application firewall for resource attacks

Specific to resource attacks at the application layer, you should configure a web application firewall (WAF) to help secure web applications. A WAF inspects inbound web traffic to block SQL injections, cross-site scripting, DDoS, and other Layer 7 attacks. Azure provides WAF as a feature of Application Gateway for centralized protection of your web applications from common exploits and vulnerabilities. There are other WAF offerings available from Azure partners that might be more suitable for your needs via the Azure Marketplace.

Even web application firewalls are susceptible to volumetric and state exhaustion attacks. We strongly recommend enabling DDoS Protection Standard on the WAF virtual network to help protect from volumetric and protocol attacks. For more information, see the DDoS Protection reference architectures section.
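
For illustration, here is a minimal sketch, assuming the azure-mgmt-network Python SDK, of a WAF policy in Prevention mode with an OWASP managed rule set that could then be attached to an Application Gateway; the names, region, and rule-set version are assumptions.

```python
# Hypothetical sketch: create a WAF policy that blocks common Layer 7 attacks
# such as SQL injection and cross-site scripting.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.web_application_firewall_policies.create_or_update(
    resource_group_name="rg-ddos-demo",
    policy_name="waf-policy-web",
    parameters={
        "location": "westeurope",
        "policy_settings": {"state": "Enabled", "mode": "Prevention"},
        "managed_rules": {
            "managed_rule_sets": [
                {"rule_set_type": "OWASP", "rule_set_version": "3.2"}
            ]
        },
    },
)
```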

Protection Planning

Planning and preparation are crucial to understand how a system will perform during a DDoS attack. Designing an incident management response plan is part of this effort.

If you have DDoS Protection Standard, make sure that it's enabled on the virtual network of internet-facing endpoints. Configuring DDoS alerts helps you constantly watch for any potential attacks on your infrastructure.
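
To show what enabling this looks like outside the portal, here is a minimal sketch assuming the azure-mgmt-network Python SDK: it creates a DDoS protection plan and turns it on for a virtual network. The names, region, and address space are illustrative.

```python
# Hypothetical sketch: create a DDoS protection plan and enable it on a VNet
# that hosts internet-facing endpoints.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

plan = network_client.ddos_protection_plans.begin_create_or_update(
    resource_group_name="rg-ddos-demo",
    ddos_protection_plan_name="ddos-plan",
    parameters={"location": "westeurope"},
).result()

network_client.virtual_networks.begin_create_or_update(
    resource_group_name="rg-ddos-demo",
    virtual_network_name="vnet-app",
    parameters={
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "enable_ddos_protection": True,
        "ddos_protection_plan": {"id": plan.id},
    },
).result()
```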

Monitor your applications independently. Understand the normal behavior of an application. Prepare to act if the application is not behaving as expected during a DDoS attack.

Receiving Distributed Denial of Service (DDoS) attack threats?

DDoS threats have seen a significant rise in frequency lately, and Microsoft stopped numerous large-scale DDoS attacks last year. This guide provides an overview of what Microsoft provides at the platform level, information on recent mitigations, and best practices.

Getting started with Azure DDoS Protection - Azure Network Security webinar

Microsoft DDoS platform

Microsoft provides robust protection against layer 3 (L3) and layer 4 (L4) DDoS attacks, which include TCP SYN floods, new-connection floods, and UDP/ICMP/TCP floods.

Microsoft DDoS Protection utilizes Azure's global deployment scale, is distributed in nature, and offers 60 Tbps of global attack mitigation capacity.

All Microsoft services (including Microsoft 365, Azure, and Xbox) are protected by platform-level DDoS protection. Microsoft's cloud services are intentionally built to support high loads, which helps to protect against application-level DDoS attacks.

All Azure public endpoint VIPs (virtual IP addresses) are guarded at platform-safe thresholds. The protection extends to traffic flows inbound from the internet, outbound to the internet, and from region to region.

Microsoft uses standard detection and mitigation techniques such as SYN cookies, rate limiting, and connection limits to protect against DDoS attacks. To support automated protections, a cross-workload DDoS incident response team identifies the roles and responsibilities across teams, the criteria for escalations, and the protocols for incident handling across affected teams.

Microsoft also takes a proactive approach to DDoS defense. Botnets are a common source of command and control for conducting DDoS attacks to amplify attacks and maintain anonymity. The Microsoft Digital Crimes Unit (DCU) focuses on identifying, investigating, and disrupting malware distribution and communications infrastructure to reduce the scale and impact of botnets.

Despite the evolving challenges in the cyber landscape, the Azure DDoS Protection team has successfully mitigated some of the largest DDoS attacks ever recorded, both in Azure and industry-wide.

In October 2021, Microsoft reported a 2.4 terabit per second (Tbps) DDoS attack in Azure that was successfully mitigated. Since then, Microsoft has mitigated three larger attacks.

In November 2021, Microsoft mitigated a DDoS attack with a throughput of 3.47 Tbps and a packet rate of 340 million packets per second (pps), targeting an Azure customer in Asia. As of February 2022, this is believed to be the largest attack ever reported in history. It was a distributed attack originating from approximately 10,000 sources and from multiple countries across the globe, including the United States, China, South Korea, Russia, Thailand, India, Vietnam, Iran, Indonesia, and Taiwan.

Azure Network Security webinar: Getting started with Azure DDoS Protection

Protect your applications in Azure against DDoS attacks in three steps:

Customers can protect their Azure workloads by onboarding to Azure DDoS Protection Standard. For web workloads, we recommend using a web application firewall in conjunction with DDoS Protection Standard for comprehensive L3-L7 protection.

1. Evaluate risks for your Azure applications. This is the time to understand the scope of your risk from a DDoS attack if you haven’t done so already.

a. If there are virtual networks with applications exposed over the public internet, we strongly recommend enabling DDoS Protection on those virtual networks. Resources in a virtual network that require protection against DDoS attacks include Azure Application Gateway and Azure Web Application Firewall (WAF), Azure Load Balancer, virtual machines, Azure Bastion, Kubernetes, and Azure Firewall. Review "DDoS Protection reference architectures" for more details on reference architectures to protect resources in virtual networks against DDoS attacks.

Enabling DDoS Protection Standard on a VNet

2. Validate your assumptions. Planning and preparation are crucial to understanding how a system will perform during a DDoS attack. Be proactive in defending against DDoS attacks; don't wait for an attack to happen and then act.

a. It is essential that you understand the normal behavior of an application and prepare to act if the application is not behaving as expected during a DDoS attack. Have monitors configured for your business-critical applications that mimic client behavior and notify you when relevant anomalies are detected. Refer to monitoring and diagnostics best practices to gain insights on the health of your application.

b. Azure Application Insights is an extensible application performance management (APM) service for web developers on multiple platforms. Use Application Insights to monitor your live web application. It automatically detects performance anomalies. It includes analytics tools to help you diagnose issues and to understand what users do with your app. It's designed to help you continuously improve performance and usability.

c. Finally, test your assumptions about how your services will respond to an attack by generating traffic against your applications to simulate a DDoS attack. Don't wait for an actual attack to happen! We have partnered with Ixia, a Keysight company, to provide a self-service traffic generator (BreakingPoint Cloud) that allows Azure DDoS Protection customers to simulate DDoS test traffic against their Azure public endpoints.

3. Configure alerts and attack analytics. Azure DDoS Protection identifies and mitigates DDoS attacks without any user intervention.

a. To get notified when there's an active mitigation for a protected public IP, we recommend configuring an alert on the metric Under DDoS attack or not. DDoS attack mitigation alerts are automatically sent to Microsoft Defender for Cloud.

b. You should also configure attack analytics to understand the scale of the attack, traffic being dropped, and other details.
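
As a complement to the portal views, here is a minimal sketch assuming the azure-monitor-query Python SDK that reads back attack metrics for a protected public IP; the metric names IfUnderDDoSAttack and PacketsDroppedDDoS are assumptions that should be checked against the Available metrics pane.

```python
# Hypothetical sketch: query the last hour of DDoS metrics for a public IP in
# five-minute buckets.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

metrics_client = MetricsQueryClient(DefaultAzureCredential())

public_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-ddos-demo"
    "/providers/Microsoft.Network/publicIPAddresses/pip-app"
)

response = metrics_client.query_resource(
    public_ip_id,
    metric_names=["IfUnderDDoSAttack", "PacketsDroppedDDoS"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.maximum)
```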

Microsoft Azure Security Overview

DDoS attack analytics

Best practices to be followed

  • Provision enough service capacity and enable autoscaling to absorb the initial burst of a DDoS attack.
  • Reduce attack surfaces; re-evaluate the public endpoints and decide whether they need to be publicly accessible.
  • If applicable, configure network security groups to further lock down exposed surfaces.
  • If IIS (Internet Information Services) is used, leverage IIS Dynamic IP Address Restrictions to control traffic from malicious IPs.
  • Set up monitoring and alerting if you have not done so already.

Some of the counters to monitor:

  • TCP connections established
  • Web current connections
  • Web connection attempts

Optionally, use third-party security offerings, such as web application firewalls or inline virtual appliances, from the Azure Marketplace for additional L7 protection that is not covered via Azure DDoS Protection and Azure WAF (Azure Web Application Firewall).

When to contact Microsoft support

  • During a DDoS attack, you find that the performance of the protected resource is severely degraded, or the resource is not available. Review step two above on configuring monitors to detect resource availability and performance issues.
  • You think your resource is under DDoS attack, but the DDoS Protection service is not mitigating the attack effectively.
  • You're planning a viral event that will significantly increase your network traffic.

Microsoft denial-of-service Defense Strategy

Denial-of-service Defense Strategy

Microsoft's strategy to defend against network-based distributed denial-of-service (DDoS) attacks is unique due to a large global footprint, allowing Microsoft to utilize strategies and techniques that are unavailable to most other organizations. Additionally, Microsoft contributes to and draws from collective knowledge aggregated by an extensive threat intelligence network, which includes Microsoft partners and the broader internet security community. This intelligence, along with information gathered from online services and Microsoft's global customer base, continuously improves Microsoft's DDoS defense system that protects all of Microsoft online services' assets.

The cornerstone of Microsoft's DDoS strategy is global presence. Microsoft engages with Internet providers, peering providers (public and private), and private corporations all over the world. This engagement gives Microsoft a significant Internet presence and enables Microsoft to absorb attacks across a large surface area.

As Microsoft's edge capacity has grown over time, the significance of attacks against individual edges has substantially diminished. Because of this decrease, Microsoft has separated the detection and mitigation components of its DDoS prevention system. Microsoft deploys multi-tiered detection systems at regional datacenters to detect attacks closer to their saturation points while maintaining global mitigation at the edge nodes. This strategy ensures that Microsoft services can handle multiple simultaneous attacks.

One of the most effective and low-cost defenses employed by Microsoft against DDoS attacks is reducing service attack surfaces. Unwanted traffic is dropped at the network edge instead of analyzing, processing, and scrubbing the data inline.

At the interface with the public network, Microsoft uses special-purpose security devices for firewall, network address translation, and IP filtering functions. Microsoft also uses global equal-cost multi-path (ECMP) routing. Global ECMP routing is a network framework to ensure that there are multiple global paths to reach a service. With multiple paths to each service, DDoS attacks are limited to the region from which the attack originates. Other regions should be unaffected by the attack, as end users would use other paths to reach the service in those regions. Microsoft has also developed internal DDoS correlation and detection systems that use flow data, performance metrics, and other information to rapidly detect DDoS attacks.

Use Azure Security Center to prevent, detect, and respond to threats

To further protect cloud services, Microsoft uses Azure DDoS Protection, a DDoS defense system built into Microsoft Azure's continuous monitoring and penetration-testing processes. Azure DDoS Protection is designed not only to withstand external attacks, but also attacks from other Azure tenants. Azure uses standard detection and mitigation techniques such as SYN cookies, rate limiting, and connection limits to protect against DDoS attacks. To support automated protections, a cross-workload DDoS incident response team identifies the roles and responsibilities across teams, the criteria for escalations, and the protocols for incident handling across affected teams.

Most DDoS attacks launched against targets are at the Network (L3) and Transport (L4) layers of the Open Systems Interconnection (OSI) model. Attacks directed at the L3 and L4 layers are designed to flood a network interface or service with attack traffic to overwhelm resources and deny the ability to respond to legitimate traffic. To guard against L3 and L4 attacks, Microsoft's DDoS solutions use traffic sampling data from datacenter routers to safeguard the infrastructure and customer targets. Traffic sampling data is analyzed by a network monitoring service to detect attacks. When an attack is detected, automated defense mechanisms kick in to mitigate the attack and ensure that attack traffic directed at one customer does not result in collateral damage or diminished network quality of service for other customers.

Microsoft also takes an offensive approach to DDoS defense. Botnets are a common source of command and control for conducting DDoS attacks to amplify attacks and maintain anonymity. The Microsoft Digital Crimes Unit (DCU) focuses on identifying, investigating, and disrupting malware distribution and communications infrastructure to reduce the scale and impact of botnets.

Azure Network Security webinar: Safeguards for Successful Azure DDoS Protection Standard Deployment

Application-level Defenses

Microsoft's cloud services are intentionally built to support high loads, which help to protect against application-level DDoS attacks. Microsoft's scaled-out architecture distributes services across multiple global datacenters with regional isolation and workload-specific throttling features for relevant workloads.

Each customer's country or region, which the customer's administrator identifies during the initial configuration of the services, determines the primary storage location for that customer's data. Customer data is replicated between redundant datacenters according to a primary/backup strategy. A primary datacenter hosts the application software along with all the primary customer data running on the software. A backup datacenter provides automatic failover. If the primary datacenter ceases to function for any reason, requests are redirected to the copy of the software and customer data in the backup datacenter. At any given time, customer data may be processed in either the primary or the backup datacenter. Distributing data across multiple datacenters reduces the affected surface area in case one datacenter is attacked. Furthermore, the services in the affected datacenter can be quickly redirected to the secondary datacenter to maintain availability during an attack and redirected back to the primary datacenter once an attack has been mitigated.

As another mitigation against DDoS attacks, individual workloads include built-in features that manage resource utilization. For example, the throttling mechanisms in Exchange Online and SharePoint Online are part of a multi-layered approach to defending against DDoS attacks.

Azure SQL Database has an extra layer of security in the form of a gateway service called DoSGuard that tracks failed login attempts based on IP address. If the threshold for failed login attempts from the same IP is reached, DoSGuard blocks the address for a pre-determined amount of time.
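
To make the DoSGuard idea concrete, here is a toy Python sketch of a threshold-based blocker: count failed logins per source IP and block the address for a fixed period once a threshold is crossed. This is purely illustrative and not Microsoft's implementation; the threshold and block duration are arbitrary assumptions.

```python
# Toy sketch of a DoSGuard-style rule: too many failed logins from one IP
# leads to a temporary block.
import time
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 10   # failures allowed before blocking (assumption)
BLOCK_SECONDS = 300           # how long a blocked IP stays blocked (assumption)

failed_attempts = defaultdict(int)  # ip -> consecutive failure count
blocked_until = {}                  # ip -> unblock timestamp

def is_blocked(ip: str) -> bool:
    """Return True while the IP is inside its block window."""
    until = blocked_until.get(ip)
    if until is None:
        return False
    if time.time() >= until:
        # Block expired; reset the counters for this IP.
        del blocked_until[ip]
        failed_attempts[ip] = 0
        return False
    return True

def record_login(ip: str, success: bool) -> None:
    """Update counters after a login attempt and block on repeated failures."""
    if success:
        failed_attempts[ip] = 0
        return
    failed_attempts[ip] += 1
    if failed_attempts[ip] >= FAILED_LOGIN_THRESHOLD:
        blocked_until[ip] = time.time() + BLOCK_SECONDS

# Example: repeated failures from the same address trigger a block.
for _ in range(FAILED_LOGIN_THRESHOLD):
    record_login("203.0.113.7", success=False)
print(is_blocked("203.0.113.7"))  # True
```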

Resources

  • Azure DDoS Protection Standard overview
  • Azure DDoS Protection Standard fundamental best practices
  • Components of a DDoS response strategy

Azure DDoS Protection Service preview

Announcing DDoS Protection preview for Azure

Microsoft Azure

Distributed Denial of Service (DDoS) attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. These concerns are justified as the number of documented DDoS attacks grew 380% in Q1 2017 over Q1 2016 according to data from Nexusguard. In October 2016, a number of popular websites were impacted by a massive cyberattack consisting of multiple denial of service attacks. It’s estimated that up to one third of all Internet downtime incidents are related to DDoS attacks.

As the types and sophistication of network attacks increase, Azure is committed to providing customers with solutions that continue to protect the security and availability of applications on Azure. Security and availability in the cloud are a shared responsibility. Azure provides platform-level capabilities and design best practices for customers to adopt and apply to their application designs so they can meet their business objectives.

Today we're excited to announce the preview of Azure DDoS Protection Standard. This service is integrated with virtual networks and provides protection for Azure applications from the impacts of DDoS attacks. It enables additional application-specific tuning, alerting, and telemetry features beyond the basic DDoS protection that is included automatically in the Azure platform.

Azure DDoS Protection Service offerings

Azure DDoS Protection Basic service

Basic protection is integrated into the Azure platform by default and at no additional cost. The full scale and capacity of Azure's globally deployed network provides defense against common network-layer attacks through always-on traffic monitoring and real-time mitigation. No user configuration or application changes are required to enable DDoS Protection Basic.

Global DDOS Mitigation Presence

Azure DDoS Protection Standard service

Azure DDoS Protection Standard is a new offering which provides additional DDoS mitigation capabilities and is automatically tuned to protect your specific Azure resources. Protection is simple to enable on any new or existing Virtual Network and requires no application or resource changes. Standard utilizes dedicated monitoring and machine learning to configure DDoS protection policies tuned to your Virtual Network. This additional protection is achieved by profiling your application’s normal traffic patterns, intelligently detecting malicious traffic and mitigating attacks as soon as they are detected. DDoS Protection Standard provides attack telemetry views through Azure Monitor, enabling alerting when your application is under attack. Integrated Layer 7 application protection can be provided by Application Gateway WAF.

Azure Network Security | EMEA Security Days April 11-12, 2022

Azure DDoS Protection Standard service features

Native Platform Integration

Azure DDoS Protection is natively integrated into Azure and includes configuration through the Azure Portal and PowerShell when you enable it on a Virtual Network (VNet).

Turn Key Protection

Simplified provisioning immediately protects all resources in a Virtual Network with no additional application changes required.


Always on monitoring

When DDoS Protection is enabled, your application traffic patterns are continuously monitored for indicators of attacks.

Adaptive tuning

DDoS Protection understands your resources and resource configuration and customizes the DDoS protection policy for your virtual network. Machine learning algorithms set and adjust protection policies as traffic patterns change over time. Protection policies define protection limits, and mitigation is performed when actual network traffic exceeds the policy thresholds.


L3 to L7 Protection with Application Gateway

The Azure DDoS Protection service, in combination with the Application Gateway web application firewall, provides protection against common web vulnerabilities and attacks:

  • Request rate-limiting
  • HTTP Protocol Violations
  • HTTP Protocol Anomalies
  • SQL Injection
  • Cross-site scripting

DDoS Protection telemetry, monitoring & alerting

Rich telemetry is exposed via Azure Monitor, including detailed metrics for the duration of a DDoS attack. Alerting can be configured for any of the Azure Monitor metrics exposed by DDoS Protection. Logging can be further integrated with Splunk (via Azure Event Hubs), OMS Log Analytics, and Azure Storage for advanced analysis via the Azure Monitor Diagnostics interface.

Cost protection

When the DDoS Protection service goes GA, Cost Protection will provide resource credits for scale-out during a documented attack.

Azure DDoS Protection Standard service availability

Azure DDoS Protection is now available for preview in select regions in US, Europe, and Asia. For details, see DDoS Protection.

Azure DDoS Protection Standard overview

Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns facing customers that are moving their applications to the cloud. A DDoS attack attempts to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet.

Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.

Features

- Native platform integration: Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration.

- Turnkey protection: Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required.

- Always-on traffic monitoring: Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it is detected.

- Adaptive tuning: Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time.

- Multi-Layered protection: When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure Application Gateway WAF SKU as well as third-party web application firewall offerings available in the Azure Marketplace.

- Extensive mitigation scale: Over 60 different attack types can be mitigated, with global capacity, to protect against the largest known DDoS attacks.

- Attack analytics: Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to Microsoft Sentinel or an offline security information and event management (SIEM) system for near real-time monitoring during an attack.

- Attack metrics: Summarized metrics from each attack are accessible through Azure Monitor.

- Attack alerting: Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal.

- DDoS Rapid Response: Engage the DDoS Protection Rapid Response (DRR) team for help with attack investigation and analysis. To learn more, see DDoS Rapid Response.

- Cost guarantee: Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks.



More Information:

https://azure.microsoft.com/en-us/services/ddos-protection

https://docs.microsoft.com/en-us/learn/modules/perimeter-security

https://blog.sflow.com/2014/07/ddos-mitigation-with-cumulus-linux.html

https://azure.microsoft.com/en-us/blog/azure-ddos-protection-service-preview

https://docs.microsoft.com/en-us/azure/ddos-protection/manage-ddos-protection

https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-reference-architectures

https://docs.microsoft.com/en-us/azure/ddos-protection/ddos-protection-standard-features

https://azure.microsoft.com/en-us/blog/microsoft-ddos-protection-response-guide

https://azure.microsoft.com/en-us/blog/azure-ddos-protection-2021-q3-and-q4-ddos-attack-trends

https://docs.microsoft.com/en-us/compliance/assurance/assurance-microsoft-dos-defense-strategy