• IBM Consulting

    DBA Consulting can help you with IBM BI and Web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions. And of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For Microsoft Server 2012 onwards, Microsoft Client Windows 7 and higher, Microsoft Cloud Services (Azure,Office 365, etc.) related consulting services.

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web Development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

22 November 2020

Supercomputer Fugaku

About the Project

The supercomputer Fugaku development plan, initiated by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in 2014, set the goal of developing: (1) the next-generation flagship supercomputer of Japan (the successor to the K computer); and (2) a wide range of applications that will address social and scientific issues of high priority.

RIKEN Center for Computational Science (R-CCS) has been appointed to lead the development of Fugaku, with the aim of starting public service in FY2021. We are committed to developing a world-leading, versatile supercomputer and its applications, building on the research, technologies, and experience obtained through the use of the K computer.

The MEXT, R-CCS and its corporate partner will collaborate with several research institutions and universities to co-design the system and applications in order to address the high priority social and scientific issues identified by the MEXT.

Outline of the Development of the Supercomputer Fugaku

The supercomputer Fugaku will be developed based on the following guiding principles: 

Top priority on problem-solving research
During development, highest priority will be given to creating a system capable of contributing to the solution of various scientific and societal issues. For this, the hardware and software will be developed in a coordinated way (Co-design), with the aim to make it usable in a variety of fields. 

World-leading performance
Create the most advanced general-use system in the world.

Improve performance through international cooperation
While leveraging Japan’s strengths, cooperate internationally to achieve world-leading technologies of the highest quality and become the international standard.

Continue the legacy of the K computer
Make the fullest use of the technologies, human resources, and applications of the K computer project for developing the Fugaku system.

Introduction to Fujitsu ARM A64FX

Japan’s Fugaku gains title as world’s fastest supercomputer

The supercomputer Fugaku, which is being developed jointly by RIKEN and Fujitsu Limited based on Arm® technology, has taken the top spot on the Top500 list, a ranking of the world’s fastest supercomputers. It also swept the other rankings of supercomputer performance, taking first place on the HPCG, a ranking of supercomputers running real-world applications, HPL-AI, which ranks supercomputers based on their performance capabilities for tasks typically used in artificial intelligence applications, and Graph 500, which ranks systems based on data-intensive loads. This is the first time in history that the same supercomputer has become No. 1 on Top500, HPCG, and Graph500 simultaneously. The awards were announced on June 22 at the ISC High Performance 2020 Digital, an international high-performance computing conference.

On the Top500, it achieved a LINPACK score of 415.53 petaflops, a much higher score than the 148.6 petaflops of its nearest competitor, Summit in the United States, using 152,064 of its eventual 158,976 nodes. This marks the first time a Japanese system has taken the top ranking since June 2011, when the K computer—Fugaku’s predecessor—took first place. On HPCG, it scored 13,400 teraflops using 138,240 nodes, and on HPL-AI it gained a score of 1.421 exaflops—the first time a computer has ever earned an exascale rating on any list—using 126,720 nodes.

The top ranking on Graph 500 was won by a collaboration involving RIKEN, Kyushu University, Fixstars Corporation, and Fujitsu Limited. Using 92,160 nodes, it solved a breadth-first search of an enormous graph with 1.1 trillion nodes and 17.6 trillion edges in approximately 0.25 seconds, earning it a score of 70,980 gigaTEPS, more than doubling the 31,303 gigaTEPS scored by the K computer and far surpassing China’s Sunway TaihuLight, which is currently second on the list with 23,756 gigaTEPS.
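The workload behind that Graph 500 score is a breadth-first search measured in traversed edges per second (TEPS). The tiny graph and the bookkeeping below are toy assumptions for illustration only, not the benchmark's actual kernel, which uses a generated scale-free graph and stricter counting rules:

```python
from collections import deque
import time

def bfs(adj, source):
    """Breadth-first search over an adjacency list.

    Returns the parent of each reached vertex and the number of
    adjacency entries scanned (a simplified stand-in for the
    'traversed edges' counted by Graph 500)."""
    parent = {source: source}
    frontier = deque([source])
    edges_traversed = 0
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges_traversed += 1
            if w not in parent:
                parent[w] = v
                frontier.append(w)
    return parent, edges_traversed

# Tiny undirected example graph, stored as adjacency lists.
adj = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2, 4],
    4: [3],
}

start = time.perf_counter()
parent, edges = bfs(adj, 0)
elapsed = time.perf_counter() - start
teps = edges / elapsed  # traversed edges per second; Graph 500 reports gigaTEPS
```

Scaled up to 17.6 trillion edges in about a quarter of a second, the same edges-over-time arithmetic yields tens of thousands of gigaTEPS, which is the order of magnitude Fugaku achieved.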

Fugaku, which is currently installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan, is being developed under a national plan to design Japan’s next generation flagship supercomputer and to carry out a wide range of applications that will address high-priority social and scientific issues. It will be put to use in applications aimed at achieving the Society 5.0 plan, by running applications in areas such as drug discovery; personalized and preventive medicine; simulations of natural disasters; weather and climate forecasting; energy creation, storage, and use; development of clean energy; new material development; new design and production processes; and—as a purely scientific endeavor—elucidation of the fundamental laws and evolution of the universe. In addition, Fugaku is currently being used on an experimental basis for research on COVID-19, including on diagnostics, therapeutics, and simulations of the spread of the virus. The new supercomputer is scheduled to begin full operation in fiscal 2021 (which starts in April 2021).

According to Satoshi Matsuoka, director of RIKEN R-CCS, “Ten years after the initial concept was proposed, and six years after the official start of the project, Fugaku is now near completion. Fugaku was developed based on the idea of achieving high performance on a variety of applications of great public interest, such as the achievement of Society 5.0, and we are very happy that it has shown itself to be outstanding on all the major supercomputer benchmarks. In addition to its use as a supercomputer, I hope that the leading-edge IT developed for it will contribute to major advances on difficult social challenges such as COVID-19.”

According to Naoki Shinjo, Corporate Executive Officer of Fujitsu Limited, “I believe that our decision to use a co-design process for Fugaku, which involved working with RIKEN and other parties to create the system, was a key to our winning the top position on a number of rankings. I am particularly proud that we were able to do this just one month after the delivery of the system was finished, even during the COVID-19 crisis. I would like to express our sincere gratitude to RIKEN and all the other parties for their generous cooperation and support. I very much hope that Fugaku will show itself to be highly effective in real-world applications and will help to realize Society 5.0.”
“The supercomputer Fugaku illustrates a dramatic shift in the type of compute that has been traditionally used in these powerful machines, and it is proof of the innovation that can happen with flexible computing solutions driven by a strong ecosystem,” said Rene Haas, President, IPG, Arm. “For Arm, this achievement showcases the power efficiency, performance and scalability of our compute platform, which spans from smartphones to the world’s fastest supercomputer. We congratulate RIKEN and Fujitsu Limited for challenging the status quo and showing the world what is possible in Arm-based high-performance computing.”

Fujitsu High Performance CPU for the Post K Computer

The most important thing you need to understand about the role Arm processor architecture plays in any computing or communications market — smartphones, personal computers, servers, or otherwise — is this: Arm Holdings, Ltd., which is based in Cambridge, UK, designs the components of processors for others to build. Arm owns these designs, along with the architecture of their instruction sets, such as 64-bit ARM64. Its business model is to license the intellectual property (IP) for these components and the instruction set to other companies, enabling them to build systems around them that incorporate their own designs as well as Arm's. For its customers who build systems around these chips, Arm has done the hard part for them.

Arm Holdings, Ltd. does not manufacture its own chips. It has no fabrication facilities of its own. Instead, it licenses these rights to other companies, which Arm Holdings calls "partners." They utilize Arm's architectural model as a kind of template, building systems that use Arm cores as their central processors.

A64fx and Fugaku - A Game Changing, HPC / AI Optimized Arm CPU to enable Exascale Performance

These Arm partners are allowed to design the rest of their own systems, perhaps manufacture those systems -- or outsource their production to others -- and then sell them as their own. Many Samsung and Apple smartphones and tablets, and essentially all devices produced by Qualcomm, utilize some Arm intellectual property. A new wave of servers produced with Arm-based systems-on-a-chip (SoC) has already made headway in competing against x86, especially with low-power or special-use models. Each device incorporating an Arm processor tends to be its own unique system, like the multi-part Qualcomm Snapdragon 845 mobile processor depicted above. (Qualcomm announced its 865 Plus 5G mobile platform in early July.)

Last August, Arm announced it had signed a partnership agreement with the US Defense Dept.'s DARPA agency, giving Pentagon research teams access to Arm's technology portfolio for research purposes.

Arm processors: Everything you need to know

CPU? GPU? This new ARM chip is BOTH


On September 13, Nvidia announced a deal to acquire Arm Holdings, Ltd. from its parent company, Tokyo-based SoftBank Group Corp., in a cash and stock exchange valued at $40 billion. The deal is pending regulatory review in the European Union, United States, Japan, and China, in separate processes that could take as long as 18 months to conclude.

In a September 14 press conference, Nvidia CEO Jensen Huang told reporters his intention is to maintain Arm's current business model, without influencing its current mix of partners. However, Huang also stated his intention to "add" access to Nvidia's GPU technology to Arm's portfolio of IP offered to partners, giving Arm licensees access to Nvidia designs. What was unclear at the time the deal was announced is what a prospective partner would want with a GPU design, besides the opportunity to compete against Nvidia.

Arm designs are created with the intention of being mixed-and-matched in various configurations, depending on the unique needs of its partners. The Arm Foundry program is a partnership between Arm Holdings and fabricators of semiconductors, such as Taiwan-based TSMC and US-based Intel, giving licensees multiple options for producing systems that incorporate Arm technology.  (Prior to the September announcement, when Arm was considered for sale, rumored potential buyers included TSMC and Samsung.)  By comparison, Nvidia produces exclusive GPU designs, with the intention of being exclusively produced at a foundry of its choosing — originally IBM, then largely TSMC, and most recently Samsung. Nvidia's designs are expressly intended for these particular foundries — for instance, to take advantage of Samsung's Extreme Ultra-Violet (EUV) lithography process.


After a colossal $40 billion deal with GPU maker Nvidia closes in 2021 or early 2022, there’s a good chance Arm’s intellectual property may be part of every widely distributed processor that is not x86.



An x86-based PC or server is built to some common set of specifications for performance and compatibility. Such a PC isn't so much designed as assembled. This keeps costs low for hardware vendors, but it also relegates most of the innovation and feature-level premiums to software, and perhaps a few nuances of implementation. The x86 device ecosystem is populated by interchangeable parts, at least insofar as architecture is concerned (granted, AMD and Intel processors have not been socket-compatible for quite some time). 

The Arm ecosystem is populated by some of the same components, such as memory, storage, and interfaces, but otherwise by complete systems designed and optimized for the components they utilize.

This does not necessarily give Arm devices, appliances, or servers any automatic advantage over Intel and AMD. Intel and x86 have been dominant in the computing processor space for the better part of four decades, and Arm chips have existed in one form or another for nearly all of that time -- since 1985. Arm's entire history has been about finding success in markets that x86 technology had not fully exploited, in which x86 was showing weakness, or to which x86 simply could not be adapted.

In tablet computers, more recently in data center servers, and soon once again in desktop and laptop computers, the vendor of an Arm-based device or system is no longer relegated to being simply an assembler of parts. This makes any direct, unit-to-unit comparison of Arm vs. x86 processor components somewhat frivolous, as a device or system based on one could easily and consistently outperform the other, depending on how that system was designed, assembled, and even packaged.

The class of processor now known as GPU originated as a graphics co-processor for PCs, and is still prominently used for that purpose. However, largely due to the influence of Nvidia in the artificial intelligence space, the GPU has come to be regarded as one class of general-purpose accelerator, as well as a principal computing component in supercomputers — coupled with, rather than subordinate to, their CPUs. The GPU's strong suit is its ability to execute many clusters of instructions, or threads, in parallel, greatly accelerating many academic tasks.

By definition and by design, an Arm processor is not a GPU, though a system could be constructed using both. Last November, Nvidia announced its introduction of a reference platform enabling systems architects to couple Arm-based server designs with Nvidia GPU accelerators.

The Tofu Interconnect D


Apple CEO Tim Cook announces his company's chip manufacturing unit at WWDC 2020.

Apple Silicon is the phrase Apple presently uses to describe its own processor production, beginning last June with Apple's announcement of the replacement of its x86 Mac processor line. In its place, in Mac laptop units that are reportedly already shipping, will be a new system-on-a-chip called A12Z, code-named "Bionic," produced by Apple using the 64-bit instruction set licensed to it by Arm Holdings. In this case, Arm isn't the designer, but the producer of the instruction set around which Apple makes its original design. Apple is widely expected to choose TSMC as the fabricator for its A12Z.

For MacOS 11 to continue to run software compiled for Intel processors, the new Apple system will run a kind of "just-in-time" instruction translator called Rosetta 2. Rather than run an old MacOS image in a virtual machine, the new OS will run a live x86 machine code translator that re-fashions x86 code into what Apple now calls Universal 2 binary code -- an intermediate-level code that can still be made to run on older Intel-based Macs -- in real-time. That code will run in what sources outside of Apple call an "emulator," but which isn't really an emulator in that it doesn't simulate the execution of code in an actual, physical machine (there is no "Universal 2" chip).
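The idea of a binary translator, which rewrites one instruction set's code into another's before execution, can be sketched in miniature. Everything below is a toy model with invented opcodes; Rosetta 2's actual translation of x86 machine code is vastly more involved:

```python
# Toy sketch of binary translation: rewrite a made-up "source ISA"
# instruction stream into a "target ISA", then execute the result.
# Opcode names and the register model are invented for illustration.

SRC_TO_TGT = {"MOV": "mov", "ADD": "add", "MUL": "mul"}

def translate(block):
    """Map each source-ISA instruction tuple to its target-ISA form."""
    return [(SRC_TO_TGT[op], *args) for op, *args in block]

def execute(block):
    """Run the translated block on a dict of named registers."""
    regs = {}
    for op, dst, src in block:
        if op == "mov":           # load an immediate into a register
            regs[dst] = src
        elif op == "add":         # dst <- dst + src
            regs[dst] = regs[dst] + regs[src]
        elif op == "mul":         # dst <- dst * src
            regs[dst] = regs[dst] * regs[src]
    return regs

source_code = [("MOV", "r0", 6), ("MOV", "r1", 7), ("MUL", "r0", "r1")]
regs = execute(translate(source_code))  # r0 ends up holding 42
```

A "just-in-time" translator does the same rewriting lazily, block by block as code is first reached, and caches the results so hot code is only translated once.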

The first results of independent performance benchmarks comparing an iPad Pro using the A12Z chip planned for the first Arm-based Macs, against Microsoft Surface models, looked promising. Geekbench results give the Bionic-powered tablet a multi-core processing score of 4669 (higher is better), versus 2966 for the SQ1-powered Surface Pro X, and 3033 for the Core i5-powered Surface Pro 6.

Apple's newly claimed ability to produce its own SoC for Mac, just as it does for iPhone and iPad, could save the company over time as much as 60 percent on production costs, according to its own estimates. Of course, Apple is typically tight-lipped as to how it arrives at that estimate, and how long such savings will take to be realized.

The relationship between Apple and Arm Holdings dates back to 1990, when Apple Computer UK became a founding co-stakeholder. The other co-partners at that time were the Arm concept's originator, Acorn Computers Ltd. (more about Acorn later) and custom semiconductor maker VLSI Technology (named for the common semiconductor manufacturing process called "very large-scale integration"). Today, Arm Holdings is a wholly-owned subsidiary of SoftBank, which announced its intent to purchase the licensor in July 2016. At the time, the acquisition deal was the largest for a Europe-based technology firm.


The maker of an Intel- or AMD-based x86 computer neither designs nor owns any portion of the intellectual property for the CPU. It also cannot reproduce x86 IP for its own purposes. "Intel Inside" is a seal certifying a license for the device manufacturer to build a machine around Intel's processor. An Arm-based device, by contrast, may be designed to incorporate the processor, perhaps even making adaptations to its architecture and functionality. For that reason, rather than a "central processing unit" (CPU), an Arm processor is instead called a system-on-a-chip (SoC). Much of the functionality of the device may be fabricated onto the chip itself, cohabiting the die with Arm's exclusive cores, rather than built around the chip in separate processors, accelerators, or expansions.

As a result, a device run by an Arm processor, such as one of the Cortex series, is a different order of machine from one run by an Intel Xeon or an AMD Epyc. It means something quite different to be an original device based around an Arm chip. Most importantly from a manufacturer's perspective, it means a somewhat different, and hopefully more manageable, supply chain. Since Arm has no interest in marketing itself to end-users, you don't typically hear much about "Arm Inside."

Equally important, however, is the fact that an Arm chip is not necessarily a central processor. Depending on the design of its system, it can be the heart of a device controller, a microcontroller (MCU), or some other subordinate component in a system.

Perhaps the best explanation of Arm's business model, as well as its relationship with its own intellectual property, is to be found in a 2002 filing with the US Securities and Exchange Commission:

We take great care to establish and maintain the proprietary integrity of our products. We focus on designing and implementing our products in a "cleanroom" fashion, without the use of intellectual property belonging to other third parties, except under strictly maintained procedures and express license rights. In the event that we discover that a third party has intellectual property protections covering a product that we are interested in developing, we would take steps to either purchase a license to use the technology or work around the technology in developing our own solution so as to avoid infringement of that other company's intellectual property rights. Notwithstanding such efforts, third parties may yet make claims that we have infringed their proprietary rights, which we would defend.

What types of Arm processors are produced today?

To stay competitive, Arm offers a variety of processor core styles or series. Some are marketed for a variety of use cases; others are earmarked for just one or two. It's important to note here that Intel uses the term "microarchitecture," and sometimes by extension "architecture," to refer to the specific stage of evolution of its processors' features and functionality -- for example, its most recently shipped generation of Xeon server processors is a microarchitecture Intel has codenamed Cascade Lake. By comparison, Arm architecture encompasses the entire history of Arm RISC processors. Each iteration of this architecture has been called a variety of things, but most recently a series. All that having been said, Arm processors' instruction sets have evolved at their own pace, with each iteration generally referred to using the same abbreviation Intel uses for x86: ISA. And yes, here the "A" stands for "architecture."

Intel manufactures Celeron, Core, and Xeon processors for very different classes of customers; AMD manufactures Ryzen for desktop and laptop computers, and Epyc for servers. By contrast, Arm produces designs for complete processors that may be used by partners as-is, or customized by those partners for their own purposes. Here are the principal Arm Holdings, Ltd. designs at the time of this publication:

  • Cortex-A has been marketed as the workhorse of the Arm family, with the "A" in this instance standing for application. As originally conceived, the client looking to build a system around Cortex-A had a particular application in mind for it, such as a digital audio amplifier, digital video processor, the microcontroller for a fire suppression system, or a sophisticated heart rate monitor. As things turned out, Cortex-A ended up being the heart of two emerging classes of device: Single-board computers capable of being programmed for a variety of applications, such as cash register processing; and most importantly of all, smartphones. Importantly, Cortex-A processors include memory management units (MMU) on-chip. Decades ago, it was the inclusion of the MMU on-chip in Intel's 80286 CPU that changed the game in its competition against Motorola chips, which at that time powered the Macintosh. The principal tool in Cortex-A's arsenal is its advanced single-instruction, multiple-data (SIMD) instruction set, code-named NEON, which executes instructions like accessing memory and processing data in parallel over a larger set of vectors. Imagine pulling into a filling station and loading up with enough fuel for 8 or 16 tanks, and you'll get the basic idea.
  • Cortex-R is a class of processor with a much narrower set of use cases: Mainly microcontroller applications that require real-time processing. One big case-in-point is 4G LTE and 5G modems, where time (or what a music composer might more accurately call "tempo") is a critical factor in achieving modulation. Cortex-R's architecture is tailored in such a way that it responds to interrupts -- the requests for attention that trigger processes to run -- not only quickly but predictably. This enables R to run more consistently and deterministically and is one reason why Arm is promoting its use as a high-capacity storage controller for solid-state flash memory.
  • Cortex-M is a more miniaturized form factor, making it more suitable for tight spaces: For example, automotive control and braking systems, and high-definition digital cameras with image recognition. A principal use for M is as a digital signal processor (DSP), which responds to and manages analog signals for applications such as sound synthesis, voice recognition, and radar. Since 2018, Arm has taken to referring to all its Cortex series collectively under the umbrella term Cosmos.
  • Ethos-N is a series of processors specifically intended for applications that may involve machine learning or some other form of neural network processing. Arm calls this series a neural processor, although it's not quite the same class as Google's tensor processing unit, which Google itself admits is actually a co-processor and not a stand-alone controller [PDF]. Arm's concept of the neural processor includes routines used in drawing logical inferences from data, which are the building blocks of artificial intelligence used in image and pattern recognition, as well as machine learning.
  • Ethos-U is a slimmed-down edition of Ethos-N that is designed to work more like a co-processor, particularly in conjunction with Cortex-A.
  • Neoverse, launched in October 2018, represents a new and more concentrated effort by Arm to design cores that are more applicable in servers and the data centers that host them -- especially the smaller varieties. The term Arm uses in marketing Neoverse is "infrastructure" -- without being too specific, but still targeting the emerging use cases for mini and micro data centers stationed at the "customer edge," closer to where end-users will actually consume processor power.
  • SecurCore is a class of processor designed by Arm exclusively for use in smart card, USB-based certification, and embedded security applications.
These are series whose designs are licensed for others to produce processors and microcontrollers. All this being said, Arm also licenses certain custom and semi-custom versions of its architecture exclusively, enabling these clients to build unique processors that are available to no other producer. These special clients include:

  • Apple, which has fabricated for itself a variety of Arm-based designs over the years for iPhone and iPad, and announced last June an entirely new SoC for Mac (see above);
  • Marvell, which acquired chip maker Cavium in November 2017, and has since doubled down on investments in the ThunderX series of processors originally designed for Cavium;
  • Nvidia, which co-designed two processor series with Arm, the most recent of which is called Carmel. Known generally as a GPU producer, Nvidia leverages the Carmel design to produce its 64-bit Tegra Xavier SoC. That chip powers the company's small-form-factor edge computing device, called Jetson AGX Xavier.
  • Samsung, which produces a variety of 32-bit and 64-bit Arm processors for its entire consumer electronics line, under the internal brand Exynos. Some have used a Samsung core design called Mongoose, while most others have utilized versions of Cortex-A. Notably (or perhaps notoriously) Samsung manufactures variations of its Galaxy Note, Galaxy S, and Galaxy A series smartphones with either its own Exynos SoCs (outside the US) or Qualcomm Snapdragons (the US only).
  • Qualcomm, whose most recent Snapdragon SoC models utilize a core design called Kryo, which is a semi-custom variation of Cortex-A. Earlier Snapdragon models were based on a core design called Krait, which was still officially an Arm-based SoC even though it was a purely Qualcomm design. Analysts estimate that the Snapdragon 855, 855 Plus, and 865 together power more than half of the world's 5G smartphones. Qualcomm did try producing Arm chips for data center servers, launching a product line called Centriq in November 2017, but it began winding down production of that line in December 2018, turning over the rights to continue its production to China-based Huaxintong Semiconductor (HXT), at the time a joint venture partner. That partnership was terminated the following April.
  • Ampere Computing, a startup launched by ex-Intel president Renee James, produces a super-high core-count server processor line called Altra. The 128-core Altra Max edition will begin sampling in Q4 2020, notwithstanding the pandemic.
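The NEON-style SIMD execution described above for Cortex-A can be pictured as lane-wise arithmetic: one instruction applied across a small fixed-width vector of operands. The pure-Python sketch below only models that lane-wise behavior (with an assumed width of four 32-bit lanes per 128-bit register); it does not, of course, reproduce the hardware speedup:

```python
# SIMD in miniature: one "instruction" operates on LANES values at once,
# instead of one scalar at a time. NEON's 128-bit registers hold, e.g.,
# four 32-bit floats; the lane count here is an illustrative assumption.

LANES = 4  # 128-bit register / 32-bit floats

def simd_add(a, b):
    """Lane-wise add of two equal-length 'registers'."""
    assert len(a) == len(b) == LANES
    return [x + y for x, y in zip(a, b)]

def vector_add(a, b):
    """Add two long arrays one LANES-wide chunk at a time."""
    out = []
    for i in range(0, len(a), LANES):
        out.extend(simd_add(a[i:i + LANES], b[i:i + LANES]))
    return out

# Eight scalar additions issued as just two SIMD operations.
result = vector_add([1.0] * 8, [2.0] * 8)
```

This is the "fuel for 8 or 16 tanks" analogy from the Cortex-A description: the per-instruction overhead is paid once per lane group rather than once per element.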


Technically speaking, the class of processor to which an Arm chip belongs is an application-specific integrated circuit (ASIC). Consider a hardware platform whose common element is a set of processing cores. That's not too difficult; it describes essentially every device ever manufactured. Now miniaturize those components so that they all fit on one die -- on the same physical platform -- interconnected using an exclusive mesh bus, and you have the makings of an SoC.


For a computer, the application program is rendered as software. In many appliances such as Internet routers, front-door security systems, and "smart" HDTVs, the memory in which operations programs are stored is non-volatile, so we often call it firmware. In a device whose core processor is an ASIC, its main functionality is rendered onto the chip as a permanent component. So the functionality that makes a device a "system" shares the die with the processor cores, and an Arm chip can have dozens of those.

Some analysis firms have taken to using the broad phrase applications processor, or AP, to refer to ASICs, but this has not caught on generally. In more casual use, an SoC is also called a chipset, even though in recent years, more often than not, the number of chips in the set is just one. In general use, a chipset is a set of one or more processors that collectively function as a complete system. A CPU executes the main program, while a chipset manages attached components and communicates with the user. On a PC motherboard, the chipset is separate from the CPU. On an SoC, the main processor and the system components share the same die.

What makes Arm processor architecture unique?

The "R" in "Arm" actually stands for another acronym: Reduced Instruction Set Computer (RISC). Its purpose is to leverage the efficiency of simplicity, rendering all of the processor's functionality on a single chip. Keeping a processor's instruction set small means it can be encoded using fewer bits, reducing memory consumption as well as execution cycle time. Back in 1982, students at the University of California, Berkeley, were able to produce the first working RISC architectures by judiciously selecting which functions would be used most often, and rendering only those in hardware -- with the remaining functions rendered as software. Indeed, that's what makes an SoC with a set of small cores feasible: Relegating as much functionality to software as possible.

Retroactively, architectures such as x86, which adopted strategies quite opposite to RISC, were dubbed Complex Instruction Set Computers (CISC), although Intel has historically avoided using that term for itself. The power of x86 comes from being able to accomplish so much with just a single instruction. For instance, with Intel's vector processing, it's possible to execute 16 single-precision math operations, or 8 double-precision operations, simultaneously; here, the vector acts as a kind of "skewer," if you will, poking through all the operands in a parallel operation and racking them up.

That makes complex math easier, at least conceptually. With a RISC system, math operations are decomposed into fundamentals. Everything that would happen automatically with a CISC architecture -- for example, clearing up the active registers when a process is completed -- takes a full, recorded step with RISC. However, because fewer bits (binary digits) are required to encapsulate the entire RISC instruction set, it may end up taking about as many bits in the end to encode a sequence of fundamental operations in a RISC processor -- perhaps even fewer -- than a complex CISC instruction where all the properties and arguments are piled together in a big clump.
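The decomposition described above can be made concrete with a toy register machine. Everything here is invented for illustration: a single CISC-style memory-to-memory add becomes an explicit load / load / add / store sequence of simple RISC-style steps:

```python
# Sketch of RISC decomposition: one conceptual CISC instruction,
# "ADD z, x, y" (memory to memory), becomes four simple instructions
# on a load/store register machine. Opcode names are made up.

memory = {"x": 5, "y": 7, "z": 0}
regs = {}

def run(program):
    for op, a, b in program:
        if op == "LOAD":       # register <- memory
            regs[a] = memory[b]
        elif op == "ADD":      # register <- register + register
            regs[a] = regs[a] + regs[b]
        elif op == "STORE":    # memory <- register
            memory[a] = regs[b]

run([
    ("LOAD",  "r1", "x"),
    ("LOAD",  "r2", "y"),
    ("ADD",   "r1", "r2"),
    ("STORE", "z",  "r1"),
])  # memory["z"] is now 12
```

Each of the four steps is trivially simple to decode and execute, which is exactly the trade the RISC approach makes: more instructions, but each one cheap and uniform.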

Intel can demonstrate, and has demonstrated, very complex instructions with higher performance statistics than the same processes on Arm processors or other RISC chips. But such performance gains sometimes come at an overall performance cost for the rest of the system, making RISC architectures somewhat more efficient than CISC at general-purpose tasks.

Then there's the issue of customization. Intel enhances its more premium CPUs with functionality by way of programs that would normally be rendered as software, but are instead embedded as microcode. These are routines designed to be quickly executed at the machine code level, and that can be referenced by that code indirectly, by name. This way, for example, a program that needs to invoke a common method for decrypting messages on a network can address very fast processor code, very close to where that code will be executed. (Conveniently, many of the routines that end up in microcode are the ones often employed in performance benchmarks.) These microcode routines are stored in read-only memory (ROM) near the x86 cores.

An Arm processor, by contrast, does not use digital microcode in its on-die memory. The current implementation of Arm's alternative is a concept called custom instructions [PDF]. It enables the inclusion of completely client-customizable, on-die modules, whose logic is effectively "pre-decoded." These modules are represented in the above Arm diagram by the green boxes. All the program has to do to invoke this logic is cue up a dependent instruction for the processor core, which passes control to the custom module as though it were another arithmetic logic unit (ALU). Arm asks partners who want to implement custom modules to present it with a configuration file and to map out the custom data path from the core to the custom ALU. Using just these items, the core can determine the dependencies and instruction-interlocking mechanisms for itself.

This is how an Arm partner builds up an exclusive design for itself, using Arm cores as its starting ingredients.

Although Arm did not create the concept of RISC, it had a great deal to do with realizing the concept and making it publicly available. One branch of the original Berkeley architecture to have emerged as a project in its own right is RISC-V, whose core specification was made open source under the Creative Commons 4.0 license. Nvidia was a founding member of the RISC-V Foundation, along with Qualcomm, Samsung, Huawei, and Micron Technology, among others. When asked, Nvidia CEO Jensen Huang indicated he intends for his company to continue contributing to the RISC-V effort, maintaining that its ecosystem is naturally separate from that of Arm.

The rising prospects for Arm in servers

RIKEN Center for Computational Science
Just last month, a Fujitsu Arm-powered supercomputer named Fugaku, built for Japan's RIKEN Center for Computational Science, seized the #1 spot on the semi-annual Top 500 Supercomputer list.

But of all the differences between an x86 CPU and an Arm SoC, this may be the only one that matters to a data center's facilities manager: Given any pair of samples of both classes of processor, it's the Arm chip that is least likely to require an active cooling system. Put another way, if you open up your smartphone, chances are you won't find a fan. Or a liquid cooling apparatus.

The buildout of 5G wireless technology is, ironically enough, expanding the buildout of fiber optic connectivity to locations near the "customer edge" -- the furthest point from the network operations center. This opens up the opportunity to station edge computing devices and servers at or near such points, but preferably without the heat exchanger units that typically accompany racks of x86 servers.

Bamboo Systems

This is where startups such as Bamboo Systems come in. Radical reductions in the size and power requirements for cooling systems enable server designers to devise new ways to think "out-of-the-box" -- for instance, by shrinking the box. A Bamboo server node is a card not much larger than the span of most folks' hands, eight of which may be securely installed in a 1U box that typically supports 1, maybe 2, x86 servers. Bamboo aims to produce servers, the company says, that use as little as one-fifth the rack space and consume one-fourth the power, of x86 racks with comparable performance levels.
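A back-of-envelope check of those density figures, treating the numbers quoted above (8 Bamboo nodes per 1U versus at most 2 x86 servers per 1U, and a 40-server workload) purely as illustrative assumptions:

```python
# Rack-space comparison using the densities quoted in the article.
bamboo_nodes_per_u = 8         # Bamboo: eight server cards in one 1U box
x86_servers_per_u = 2          # the optimistic x86 case (1U typically holds 1-2)

servers_needed = 40            # hypothetical workload size
bamboo_rack_units = servers_needed / bamboo_nodes_per_u   # 5 U
x86_rack_units = servers_needed / x86_servers_per_u       # 20 U
print(bamboo_rack_units / x86_rack_units)                 # 0.25 -- a quarter of the space
```

Even against the densest x86 case, the claimed layout needs a quarter of the rack units; against a 1-server-per-U baseline, an eighth.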

Where did Arm processors come from?
An Acorn. Indeed, that's what the "A" originally stood for.

Back in 1981, a Cambridge, UK-based company called Acorn Computers was marketing a microcomputer (what we used to call "PCs" back before IBM popularized the term) based on MOS Technology's 6502 processor -- the chip that had powered the venerable Apple II, the Commodore 64, and the Atari 400 and 800. Although the name "Acorn" was a clever trick to appear earlier on an alphabetized list than "Apple," its computer had been partly subsidized by the BBC and was thus known nationwide as the BBC Micro.

All 6502-based machines used an 8-bit processor architecture, and in 1981, Intel was working toward a fully compatible 16-bit architecture to replace the 8088 used in the IBM PC. The following year, Intel's 80286 would enable IBM to produce its PC AT, on which MS-DOS, and all the software that ran on DOS, would work without having to be changed or recompiled for the 16-bit architecture. It was a tremendous success, and Motorola could not match it. Although Apple's first Macintosh was based on the 16-bit Motorola 68000 series, that architecture was only "inspired" by the earlier 8-bit design, not compatible with it. (Apple would eventually produce the 16-bit Apple IIGS based on the 65C816 processor, but only after several months spent waiting for the chip's maker to ship a working test model. The IIGS did have an "Apple II" step-down mode, but technically not full compatibility.)

Acorn's engineers wanted a way forward, and Motorola was leaving them at a dead end. After experimenting with a surprisingly fast co-processor interface for the 6502 called the Tube that just wasn't fast enough, they opted to take the plunge with a full 32-bit pipeline. Following the lead of the Berkeley RISC project, in 1983 they built a simulator for a processor they called Arm1 that was so simple, it ran on the BASIC language interpreter of the BBC Micro (albeit not at speed). Collaborating with VLSI Technology, they produced their first working Arm1 silicon two years later, with a 6 MHz clock speed. It used so little power that, as one project engineer tells the story, they noticed one day that the chip was running without its power supply connected; it was actually being powered by leakage from the power rails leading to the I/O chip.

At this early stage, the Arm1, Arm2, and Arm3 processors were all technically CPUs, not SoCs. Yet in the same sense that today's Intel Core processors are architectural successors of its original 4004, Cortex-A is the architectural successor to Arm1.

More Information:

Supercomputer Fugaku Documents

19 October 2020

IBM Reveals Next-Generation IBM POWER10 Processor


New CPU co-optimized for Red Hat OpenShift for enterprise hybrid cloud

IBM revealed the next generation of its IBM POWER central processing unit (CPU) family: IBM POWER10. 

OpenPOWER Summit 2020 Sponsor Showcase: IBM POWER10

Intel and AMD have some fresh competition in the enterprise and data center markets as IBM just launched its next-generation Power10 processor.

The Power9 processor was introduced back in 2017. It's a 14nm processor that was used in the Summit supercomputer, which held the top spot as the world's fastest supercomputer from June 2018 to June 2020. Now IBM is set to replace Power9 with the company's first 7nm processor, Power10, which will be manufactured through a partnership with Samsung.

Power10 promises some massive improvements over Power9. IBM claims a 3x improvement in both capacity and processor energy efficiency over its previous chip generation within the same power envelope. Power10 also includes a new feature called "memory inception," allowing clusters of physical memory to be shared across a pool of systems. Each system in the pool can access all of the memory, and memory clusters can be scaled up to petabytes in size.

IBM says there's up to a 20x improvement in speed for artificial intelligence workloads compared to Power9, and there's also been a focus on bolstering security. IBM added "quadruple the number of AES encryption engines per core" while also anticipating "future cryptographic standards like quantum-safe cryptography and fully homomorphic encryption."

"Enterprise-grade hybrid clouds require a robust on-premises and off-site architecture inclusive of hardware and co-optimized software," said Stephen Leonard, GM of IBM Cognitive Systems. "With IBM POWER10 we've designed the premier processor for enterprise hybrid cloud, delivering the performance and security that clients expect from IBM. With our stated goal of making Red Hat OpenShift the default choice for hybrid cloud, IBM POWER10 brings hardware-based capacity and security enhancements for containers to the IT infrastructure level."

Considering that the Summit supercomputer has dropped only to second place on the fastest list, and still counts as the fifth most efficient supercomputer operating today, it seems likely that a supercomputer using Power10 processors will appear and jump straight to the top of the charts within a few years.


Japan's ARM-Based 'Fugaku' System Now the World's Fastest Supercomputer

Microsoft's Powerful Supercomputer Will Supercharge AI for Azure Developers

Supercomputers Taken Offline After Hackers Secretly Install Cryptocurrency Miners

Designed to offer a platform to meet the unique needs of enterprise hybrid cloud computing, the IBM POWER10 processor uses a design focused on energy efficiency and performance in a 7nm form factor with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the IBM POWER9 processor.1

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor is an important evolution in IBM's roadmap for POWER. Systems taking advantage of IBM POWER10 are expected to be available in the second half of 2021. Some of the new processor innovations include:

  • IBM's First Commercialized 7nm Processor, expected to deliver up to a 3x improvement in capacity and processor energy efficiency within the same power envelope as IBM POWER9, allowing for greater performance.1
  • Support for Multi-Petabyte Memory Clusters with a breakthrough new technology called Memory Inception, designed to improve cloud capacity and economics for memory-intensive workloads from ISVs like SAP, the SAS Institute, and others, as well as large-model AI inference.
  • New Hardware-Enabled Security Capabilities, including transparent memory encryption designed to support end-to-end security. The IBM POWER10 processor is engineered to achieve significantly faster encryption performance, with quadruple the number of AES encryption engines per core compared to IBM POWER9, for today's most demanding standards and anticipated future cryptographic standards like quantum-safe cryptography and fully homomorphic encryption. It also brings new enhancements to container security.
  • New Processor Core Architectures in the IBM POWER10 processor with an embedded Matrix Math Accelerator, which is extrapolated to provide 10x, 15x, and 20x faster AI inference for FP32, BFloat16, and INT8 calculations per socket, respectively, than the IBM POWER9 processor, to infuse AI into business applications and drive greater insights.


IBM's POWER10 Processor - William Starke & Brian W. Thompto, IBM

IBM POWER10 7nm Form Factor Delivers Energy Efficiency and Capacity Gains

IBM POWER10 is IBM's first commercialized processor built using 7nm process technology. IBM Research has been partnering with Samsung Electronics Co., Ltd. on research and development for more than a decade, including demonstration of the semiconductor industry's first 7nm test chips through IBM's Research Alliance.

With this updated technology and a focus on designing for performance and efficiency, IBM POWER10 is expected to deliver up to a 3x gain in processor energy efficiency per socket, increasing workload capacity in the same power envelope as IBM POWER9. This anticipated improvement in capacity is designed to allow IBM POWER10-based systems to support up to 3x increases in users, workloads and OpenShift container density for hybrid cloud workloads as compared to IBM POWER9-based systems. 

This can affect multiple datacenter attributes to drive greater efficiency and reduce costs, such as space and energy use, while also allowing hybrid cloud users to achieve more work in a smaller footprint.

Hardware Enhancements to Further Secure the Hybrid Cloud

IBM POWER10 offers hardware memory encryption for end-to-end security and faster cryptography performance thanks to additional AES encryption engines for both today's leading encryption standards as well as anticipated future encryption protocols like quantum-safe cryptography and fully homomorphic encryption.

Further, to address new security considerations associated with the higher density of containers, IBM POWER10 is designed to deliver new hardware-enforced container protection and isolation capabilities co-developed with the IBM POWER10 firmware. If a container were to be compromised, the POWER10 processor is designed to be able to prevent other containers in the same Virtual Machine (VM) from being affected by the same intrusion.

Cyberattacks are continuing to evolve, and newly discovered vulnerabilities can cause disruptions as organizations wait for fixes. To better enable clients to proactively defend against certain new application vulnerabilities in real-time, IBM POWER10 is designed to give users dynamic execution register control, meaning users could design applications that are more resistant to attacks with minimal performance loss.

Multi-Petabyte Size Memory Clustering Gives Flexibility for Multiple Hybrid Deployments

IBM POWER has long been a leader in supporting a wide range of flexible deployments for hybrid cloud and on-premises workloads through a combination of hardware and software capabilities. The IBM POWER10 processor is designed to elevate this with the ability to pool or cluster physical memory across IBM POWER10-based systems, once available, in a variety of configurations. In a breakthrough new technology called Memory Inception, the IBM POWER10 processor is designed to allow any of the IBM POWER10 processor-based systems in a cluster to access and share each other's memory, creating multi-Petabyte sized memory clusters.

For both cloud users and providers, Memory Inception offers the potential to drive cost and energy savings, as cloud providers can offer more capability using fewer servers, while cloud users can lease fewer resources to meet their IT needs. 

Infusing AI into the Enterprise Hybrid Cloud to Drive Deeper Insights

As AI continues to be more and more embedded into business applications in transactional and analytical workflows, AI inferencing is becoming central to enterprise applications. The IBM POWER10 processor is designed to enhance in-core AI inferencing capability without requiring additional specialized hardware.

With an embedded Matrix Math Accelerator, the IBM POWER10 processor is expected to achieve 10x, 15x, and 20x faster AI inference for FP32, BFloat16 and INT8 calculations respectively to improve performance for enterprise AI inference workloads as compared to IBM POWER9,2 helping enterprises take the AI models they trained and put them to work in the field. With IBM's broad portfolio of AI software, IBM POWER10 is expected to help infuse AI workloads into typical enterprise applications to glean more impactful insights from data.

Building the Enterprise Hybrid Cloud of the Future

With hardware co-optimized for Red Hat OpenShift, IBM POWER10-based servers will deliver the future of the hybrid cloud when they become available in the second half of 2021. Samsung Electronics will manufacture the IBM POWER10 processor, combining Samsung's industry-leading semiconductor manufacturing technology with IBM's CPU designs.

OpenPOWER Summit EU 2019: Microwatt: Make Your Own POWER CPU

IBM today introduced its next-generation Power10 microprocessor, a 7nm device manufactured by Samsung. The chip features a new microarchitecture, broad new memory support, PCIe Gen 5 connectivity, hardware-enabled security, impressive energy efficiency, and a host of other improvements. Unveiled at the annual Hot Chips conference (virtual this year), Power10 won’t turn up in IBM systems until this time next year. IBM didn’t disclose when the chip would be available to other systems makers.

IBM says Power10 offers a ~3x performance gain and ~2.6x core efficiency gain over Power9. No benchmarks against non-IBM chips were presented. Power9, of course, was introduced in 2017 and manufactured by GlobalFoundries on a 14nm process. While the move to a 7nm process provides many of Power10’s gains, there are also significant new features, not least what IBM calls Memory Inception, which allows Power10 to access up to “multi petabytes” of pooled memory from diverse sources.

“You’re able to kind of trick a system into thinking that memory in another system belongs to this system. It isn’t like traditional [techniques] of doing an RDMA over InfiniBand to get access to people’s memory. This is programs running on my computer [that] can do load-store access directly, coherently,” said William Starke, IBM distinguished engineer and a Power10 architect, in a pre-briefing. “They use their caches [to] play with memory as if it’s in my system, even if it’s bridged by a cable over to another system. If we’re using short-reach cabling, we can actually do this with only 50-to-100 nanoseconds of additional latency. We’re not talking about adding a microsecond or something like you might have over an RDMA.”

IBM is promoting Inception as a major achievement.

“HP came out with their big thing a few years ago. They called it The Machine, and it was going to be their way of revolutionizing things, largely by disaggregating memory. Intel, you’ve seen from their charts talking about the Rack Scale architectures they’re evolving toward. Well, this is IBM’s version of this, and we have it today, in silicon. We are announcing we are able to take things outside of the system and aggregate multiple systems together to directly share memory.”

OpenPOWER Summit NA 2019: An Overview of the Self Boot Engine (SBE) in POWER9 base OpenPOWER Systems

Memory Inception is just one of many interesting features of Power10, which has roughly 18 billion transistors. IBM plans to offer two core types – SMT4 and SMT8 (four- and eight-way simultaneous multithreading); IBM focused on the latter in today’s presentation. There are 16 cores on the chip, with on- and off-chip bandwidth via the OMI interface, PowerAXON (for attaching OpenCAPI accelerators), or the PCIe 5 interface, all of which are shown delivering up to 1 terabyte per second on IBM’s slides.

CXL interconnect is not supported by Power10, which is perhaps surprising given the increasingly favorable comments about CXL from IBM over the past year.

Starke said as part of a Slack conversation tied to Hot Chips, “Does POWER10 support CXL? No, it does not. IBM created OpenCAPI because we believe in Open, and we have 10+ years of experience in this space that we want to share with the industry. We know that an asymmetric, host-dominant attach is the only way to make these things work across multiple companies. We are encouraged to see the same underpinnings in CXL. It’s open. It’s asymmetric. So it’s built on the right foundations. We are CXL members and we want to bring our know-how into CXL. But right now, CXL is a few years behind OpenCAPI. Until it catches up, we cannot afford to take a step backwards. Right now OpenCAPI provides a great opportunity to get in front of things that will become more mainstream as CXL matures.”

Below is the block diagram of IBM’s new Power10 chip showing major architecture elements.

How open is OpenPOWER? - DevConf.CZ 2020

The process shrink plays a role in allowing IBM to offer the two packaging options shown below.

IBM is offering two versions of the processor module, something it was able to do primarily because of the energy-efficiency gains. “We’re bringing out a single-chip module. There is one Power10 chip exposing all those high-bandwidth interfaces, so very high bandwidth-per-compute type of characteristics. [O]n the upper right you can see [it]. We build a 16-socket, large system that’s very robustly scalable. We’ve enjoyed success over the last several generations with this type of offering, and Power10 is going to be no different.

“On the bottom you see something a little new. We can basically take two Power10 processor chips and cram them into the same form factor where we used to put just one Power9 processor. We’re taking 1,200 square millimeters of silicon and putting it into the same form factor. That’s going to be very valuable in compute-dense, energy-dense, volumetric-space-dense cloud configurations, where we can build systems ranging from one to four sockets, where those are dual-chip-module sockets as shown.”

IBM POWER10 technical preview of chip capabilities

It will be interesting to see what sort of traction the two different offerings gain among non-IBM systems builders as well as hyperscalers. Broadly, IBM is positioning Power10 as a strong fit for hybrid cloud, AI, and HPC environments. Hardware and firmware enhancements were made to support security, containerization, and inferencing, with IBM pointedly suggesting Power10 will be able to handle most inferencing workloads as well as GPUs do.

Talking about security, Satya Sharma, IBM Fellow and CTO, IBM Cognitive Systems, said “Power10 implements transparent memory encryption, which is memory encryption without any performance degradation. When you do memory encryption in software, it usually leads to performance degradation. Power10 implements transparent hardware memory encryption.”

Sharma cited similar features for containers and accelerated cryptographic standards. IBM’s official announcement says Power10 is designed to deliver hardware-enforced container protection and isolation optimized with the IBM firmware, and that Power10 can encrypt data 40 percent faster than Power9.

Architecture innovations in POWER ISA v3.01 and POWER10

IBM also reports Power10 delivers a 10x-to-20x advantage over Power9 on inferencing workloads. Memory bandwidth and new instructions helped achieve those gains. One example is a new purpose-built matrix math accelerator, tailored to the demands of machine learning and deep learning inference, with support for a wide range of AI data types.

Focusing for a moment on dense-math-engine microarchitecture, Brian Thompto, distinguished engineer and Power10 designer, noted, “We also focused on algorithms that were hungry for flops, such as the matrix math utilized in deep learning. Every core has built in matrix math acceleration and efficiently performs matrix outer product operations. These operations were optimized across a wide range of data types. Recognizing that various precisions can be best suited for specific machine learning algorithms, we included very broad support: double precision, single precision, two flavors of half-precision doing both IEEE and bfloat16, as well as reduced precision integer 16-, eight-, and four-bit. The result is 64 flops per cycle, double precision, and up to one K flops per cycle of reduced precision per SMT core. These operations were tailor made to be efficient while applying machine learning.

“At the socket level, you get 10 times the performance per socket for double- and single-precision, and using reduced precision, bfloat16 sped up to over 15x and INT8 inference sped up to over 20x over Power9.” More broadly, he said, “We have a host of new capabilities in ISA version 3.1. This is the new instruction set architecture that supports Power10 and is contributed to the OpenPOWER Foundation. The new ISA supports 64-bit prefixed instructions in a RISC-friendly way. This is in addition to the classic way that we’ve delivered 32-bit instructions for many decades. It opens the door to adding new capabilities such as new addressing modes, as well as providing rich new opcode space for future expansion.”
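The outer-product operation Thompto describes can be sketched in NumPy: a matrix multiply is built up as a sum of rank-1 (outer-product) updates into an accumulator, which is the building block a matrix math unit performs per instruction. The 2x2 shapes here are illustrative, not the hardware tile size:

```python
import numpy as np

# Rank-1 (outer-product) accumulation: ACC += a[:, k] (outer) b[k, :].
# Repeating this over the shared dimension k reproduces a full matmul.
a = np.array([[1., 2.],
              [3., 4.]], dtype=np.float32)
b = np.array([[5., 6.],
              [7., 8.]], dtype=np.float32)

acc = np.zeros((2, 2), dtype=np.float32)
for k in range(2):                       # one outer-product update per k
    acc += np.outer(a[:, k], b[k, :])

print(np.allclose(acc, a @ b))           # True: same result as a matmul
```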

POWER Up Your Insights - IBM System Summit

IBM promises 1000-qubit quantum computer—a milestone—by 2023

IBM today, for the first time, published its road map for the future of its quantum computing hardware. There is a lot to digest here, but the most important news in the short term is that the company believes it is on its way to building a quantum processor with more than 1,000 qubits — and somewhere between 10 and 50 logical qubits — by the end of 2023.

Currently, the company’s quantum processors top out at 65 qubits. It plans to launch a 127-qubit processor next year and a 433-qubit machine in 2022. To get to this point, IBM is also building a completely new dilution refrigerator to house these larger chips, as well as the technology to connect multiple of these units to build a system akin to today’s multi-core architectures in classical chips.

IBM research director Dario Gil believes that 2023 will be an inflection point in the industry, with the road to the 1,121-qubit machine driving improvements across the stack. The most important — and ambitious — of these performance improvements that IBM is trying to execute is bringing down the error rate from about 1% today to something closer to 0.0001%. But looking at the trajectory of where its machines were just a few years ago, that’s the number the line is pointing toward.

Q-CTRL  and Quantum Machines, two of the better-known startups in the quantum control ecosystem, today announced a new partnership that will see Quantum Machines  integrate Q-CTRL‘s quantum firmware into Quantum Machines’ Quantum Orchestration hardware and software solution.

Building quantum computers takes so much specialized knowledge that it’s no surprise that we are now seeing some of the best-of-breed startups cooperate — and that’s pretty much why these two companies are now working together and why we’ll likely see more of these collaborations over time.

“The motivation [for quantum computing] is this immense computational power that we could get from quantum computers and while it exists, we didn’t make it happen yet. We don’t have full-fledged quantum computers yet,” Itamar Sivan, the co-founder and CEO of Quantum Machines, told me.

IBM Power10 A Glimpse Into the Future of Servers

For 20 years scientists and engineers have been saying that “someday” they’ll build a full-fledged quantum computer able to perform useful calculations that would overwhelm any conventional supercomputer. But current machines contain just a few dozen quantum bits, or qubits, too few to do anything dazzling. Today, IBM made its aspirations more concrete by publicly announcing a “road map” for the development of its quantum computers, including the ambitious goal of building one containing 1000 qubits by 2023. IBM’s current largest quantum computer, revealed this month, contains 65 qubits.

“We’re very excited,” says Prineha Narang, co-founder and chief technology officer of Aliro Quantum, a startup that specializes in code that helps higher level software efficiently run on different quantum computers. “We didn’t know the specific milestones and numbers that they’ve announced,” she says. The plan includes building intermediate-size machines of 127 and 433 qubits in 2021 and 2022, respectively, and envisions following up with a million-qubit machine at some unspecified date. Dario Gil, IBM’s director of research, says he is confident his team can keep to the schedule. “A road map is more than a plan and a PowerPoint presentation,” he says. “It’s execution.”

IBM is not the only company with a road map to build a full-fledged quantum computer—a machine that would take advantage of the strange rules of quantum mechanics to breeze through certain computations that just overwhelm conventional computers. At least in terms of public relations, IBM has been playing catch-up to Google, which 1 year ago grabbed headlines when the company announced its researchers had used their 53-qubit quantum computer to solve a particular abstract problem that they claimed would overwhelm any conventional computer—reaching a milestone known as quantum supremacy. Google has its own plan to build a million-qubit quantum computer within 10 years, as Hartmut Neven, who leads Google’s quantum computing effort, explained in an April interview, although he declined to reveal a specific timeline for advances.

AI in Automobile :Solutions for ADAS and AI data engineering using OpenPOWER/POWER systems

IBM’s declared timeline comes with an obvious risk that everyone will know if it misses its milestones. But the company decided to reveal its plans so that its clients and collaborators would know what to expect. Dozens of quantum-computing startup companies use IBM’s current machines to develop their own software products, and knowing IBM’s milestones should help developers better tailor their efforts to the hardware, Gil says.

One company joining those efforts is Q-CTRL, which develops software to optimize the control and performance of the individual qubits. The IBM announcement shows venture capitalists the company is serious about developing the challenging technology, says Michael Biercuk, founder and CEO of Q-CTRL. “It’s relevant to convincing investors that this large hardware manufacturer is pushing hard on this and investing significant resources,” he says.

A 1000-qubit machine is a particularly important milestone in the development of a full-fledged quantum computer, researchers say. Such a machine would still be 1000 times too small to fulfill quantum computing’s full potential—such as breaking current internet encryption schemes—but it would be big enough to spot and correct the myriad errors that ordinarily plague the finicky quantum bits.

IBM Power Systems at FIS InFocus 2019

A bit in an ordinary computer is an electrical switch that can be set to either zero or one. In contrast, a qubit is a quantum device—in IBM’s and Google’s machines, each is a tiny circuit of superconducting metal chilled to nearly absolute zero—that can be set to zero, one, or, thanks to the strange rules of quantum mechanics, zero and one at the same time. But the slightest interaction with the environment tends to distort those delicate two-ways-at-once states, so researchers have developed error-correction protocols that spread information ordinarily encoded in a single physical qubit across many of them, in such a way that the state of that “logical qubit” can be maintained indefinitely.
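A minimal classical analogue of that redundancy idea is a 3-bit repetition code: one logical bit is spread across several physical bits and recovered by majority vote after noise flips one of them. Real quantum codes are far subtler, since qubits cannot simply be copied and phase errors must also be handled, but the principle of protecting information through redundancy is the same:

```python
# 3-bit repetition code: encode one logical bit into three physical bits,
# then recover it by majority vote even after a single bit flip.
def encode(bit):
    return [bit, bit, bit]

def majority_decode(bits):
    return 1 if sum(bits) >= 2 else 0

physical = encode(1)
physical[0] ^= 1                   # noise flips one physical bit -> [0, 1, 1]
print(majority_decode(physical))   # logical value 1 survives the error
```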

With their planned 1121-qubit machine, IBM researchers would be able to maintain a handful of logical qubits and make them interact, says Jay Gambetta, a physicist who leads IBM’s quantum computing efforts. That’s exactly what will be required to start to make a full-fledged quantum computer with thousands of logical qubits. Such a machine would mark an “inflection point” in which researchers’ focus would switch from beating down the error rate in the individual qubits to optimizing the architecture and performance of the entire system, Gambetta says.

IBM is already preparing a jumbo liquid-helium refrigerator, or cryostat, to hold a quantum computer with 1 million qubits. The IBM road map doesn’t specify when such a machine could be built. But if company researchers really can build a 1000-qubit computer in the next 2 years, that ultimate goal will sound far less fantastical than it does now.

IBM Power Systems at the heart of Cognitive Solutions


18 September 2020

Get the cloud you want

The new IBM Z single-frame and multi-frame systems can transform your application and data portfolio with innovative cloud native development, data privacy, security, and cyber resiliency – all delivered in a hybrid cloud environment.

SUSE, IBM Z and LinuxONE: Celebrating the First 20 Years

In 1999, as the digitized world approached the year 2000, all thoughts were on “Y2K,” and what it would mean for the millions of lines of code that ran the computers for business and government. Would switching the two-digit year from “99” to “00” cause massive systems failures and chaos? Would our ATMs stop working, and airplanes fall out of the sky? In the midst of this perceived crisis, hardly anybody was thinking about the future impact of porting Linux to the mainframe.

The new IBM z15 Part 1 - 4/15/20

Skip ahead 20 years, and many technological changes have fundamentally altered the ways we live, work and see the world. Camera phones have given everyone the power to document our times. The internet has gone from a “like to have” to an essential component of consumer-facing commerce, back-end operations, education, social services and even medicine. Far from being the “end of the world” for digital devices, the year 2000 marked the beginning of an unprecedented boom in innovation and collaboration – perhaps none more profound in the world of business IT than combining the power of the IBM mainframe with the possibilities of Linux.

The new IBM z15 Part 2 - 4/23/20

The concept of open source software was still in its infancy when Linux began making significant inroads into commercial data centers – assuming increasingly critical roles in business operations. Opportunities for innovation were wide open as SUSE introduced commercial Linux to the IBM S/390 mainframe – now IBM Z – in the fall of 2000. Since then, SUSE and IBM have continued to watch Linux grow and gain acceptance throughout the business world.

Today, more businesses choose SUSE Linux Enterprise Server (SLES) for IBM Z and LinuxONE than any other Linux for running workloads on IBM mainframes. The reasons are self-evident. SLES is optimized for IBM mainframes, and businesses want to focus on serving their customers – not managing their IT. And the proven engineering excellence of the IBM mainframe, combined with the agility and business value of Linux, enable businesses to accelerate innovation to keep pace with constantly changing market dynamics.

Despite the tendency of some to dismiss mature technologies, neither Linux nor the mainframe could in any way be considered “outmoded” or “antique.” The issue isn’t age. It’s quality, reliability, security and an ongoing ability to innovate and adapt to change. As countless commoditization cycles within the IT industry have written lesser technologies into the history books, Linux on the mainframe is enabling businesses to write the next chapter in their story of digital transformation. Indeed, Gartner’s 2019 assessment that “open source is becoming the backbone for driving digital innovation” speaks both to the innovative capabilities of Linux and to the continued reliability and security of the IBM mainframe.

Introducing the new IBM z15 T02  

Many of the world’s exciting innovations in business IT are being developed and deployed via Linux on the mainframe. Traditional industries such as finance and retail are finding new ways to remain competitive using hybrid cloud and AI – running on Linux and the mainframe – to make the most of their data in service to their customers. Meanwhile, newer types of workloads to manage digital currencies, global scientific research and emerging industries continue to demand the highly available, reliable and secure computing provided by Linux on the mainframe.

The distinguishing features of Linux and the mainframe will enable both to play essential roles in mission-critical business operations for many years to come. We’ve come a long way in our first 20 years, and have much to be proud of. But envisioning the possibilities of the next 20 years makes us even more excited about the future of Linux on IBM Z.

LinuxONE: Determining Your Own IT Strategy

Dogs are supposed to wag their tails, not the other way around. Yet, too many enterprises have found themselves in situations where their IT infrastructure dictates the way they run their business.

Introducing the new IBM z15 T02

Lest anyone forget, the purpose of IT is to enable and support business outcomes, not to determine or limit them. But how do enterprises get control of their IT when it’s already up and running, and comprises piece parts from a variety of vendors? How do enterprises migrate their mission-critical operations to a hybrid cloud environment so they can move quickly and manage growth?

With the introduction of LinuxONE five years ago, IBM answered those questions with secure, reliable and scalable architecture, complementing the capabilities of the underlying architecture in unique ways. Massive power, extreme virtualization, and open and disruptive technologies make the combination greater than the sum of its parts.

Unlike other Linux platforms, LinuxONE lets users scale up without disruption. Having this “cloud in a box” capability means enterprises can add database layers and new applications to their IT infrastructure without taking everything offline. They can change their tires and even upgrade their horsepower while staying in the race—critical capabilities in any industry with constantly changing demands.

The key is being able to define what’s valuable to an organization, versus what an IT platform will let it do. With its values determined, the enterprise is free to establish its own cloud roadmap, manage its own cloud services consumption, and position itself for innovation and market disruption.

LinuxONE represents the culmination of years of innovation and integration in optimizing open source workloads on a trusted architecture. Add to that the capabilities of Red Hat OpenShift, and you have a hybrid cloud infrastructure that:

  • Optimizes the value of existing IT infrastructure
  • Hosts mission-critical applications while protecting sensitive data
  • Maintains security and scalability in the public cloud
  • Enables “write once/run anywhere” application portability
  • Installs and upgrades without disrupting ongoing business processes

Enterprises with traditional workloads can capitalize on the elegance of managing and scaling their cloud-native system from a single control point that enables previously unheard-of agility in their digital reinvention. Businesses with emerging workloads—such as enterprises in the Confidential Computing space—can count on the secure service containers and hardware security modules of LinuxONE to establish and build trust in their marketplace relationships. And all users can benefit from the systems’ containerization, encryption and virtualization that allow them to maintain control of their own security keys. In other words, the enterprise—not the IT infrastructure—is in charge.

As LinuxONE celebrates its fifth anniversary, it has emerged as a “lighthouse” platform of global collaboration to simplify IT management, even as tasks have become more complex. As a result, we stand on the verge of a period of dramatic change, in which AI running on hybrid cloud will enable breakthroughs in classical computing and its tremendous potential to improve countless aspects of our lives. For businesses and governments making this important journey, LinuxONE is an essential partner to progress.

IBM and Red Hat: Nearly two decades of Linux innovation across computing architectures

In the decades since its inception, Linux has become synonymous with collaboration, from both a technical and an organizational standpoint. This community work, from independent contributors, end users and IT vendors, has helped Linux adapt and embrace change, rather than fight it. A powerful example of this collaboration was the launch of Red Hat Enterprise Linux (RHEL) 2.1 in 2002, heralding the march of Linux across the enterprise world. Today, Red Hat Enterprise Linux is a bellwether for Linux in production systems, serving as the world’s leading enterprise Linux platform to power organizations across the world and across the open hybrid cloud.

All of this innovation and industry leadership wouldn’t have been possible without a strong partner ecosystem, including the close ties we’ve long had with IBM. IBM was one of the first major technology players to recognize the value in Linux, especially RHEL. As IBM Z and IBM LinuxONE celebrate 20 years of powering enterprise IT today, this benchmark provides further validation of the need for enterprise-grade Linux across architectures, especially as the requirements of modern businesses change dynamically.

One Linux platform, spanning the mainframe to the open hybrid cloud
For more than five years, Red Hat’s vision of IT’s future has rested in the hybrid cloud, where operations and services don’t fully reside in a corporate datacenter or in a public cloud environment. While the open hybrid cloud provides a myriad of benefits, from greater control over resources to extended flexibility and scalability, it also delivers choice: choice of architecture, choice of cloud provider and choice of workload.

RHEL encompasses a vast selection of certified hardware configurations and environments, including IBM Z and LinuxONE - this ecosystem recently expanded to include IBM z15 and LinuxONE III single frame systems. Working with IBM as a long-time partner, we’ve optimized RHEL across nearly all computing architectures, from mainframes and Power systems to x86 and Arm processors. It’s this ability to deliver choice that makes RHEL an ideal backbone for the hybrid cloud.

Linux is just the beginning
Linux is crucial to the success of the hybrid cloud, but it’s just the first step. RHEL lays the foundation for organizations to extend their operations into new environments, like public cloud, or new technologies, like Kubernetes. Choice remains key throughout this evolution, as innovation is worth nothing if it cannot answer the specific and evolving needs of individual enterprises.

RHEL is the starting point for Red Hat’s open innovation, including Red Hat OpenShift. Again, thanks to our close collaboration with IBM, the value of RHEL, OpenShift and Red Hat’s open hybrid cloud technologies encompasses IBM Z and LinuxONE systems. This makes it easier for organizations to use their existing investments in IBM’s powerful, scalable mainframe technologies while still taking advantage of cloud-native technologies.

Supporting IT choice and supporting IT’s future
The open hybrid cloud isn’t a set of technologies delivered in a box - rather, it’s an organizational strategy that brings the power and flexibility of new infrastructure and emerging technologies to wherever the best footprint is for a given enterprise’s needs. IBM Z and LinuxONE represent a powerful architecture for organizations to build out modern, forward-looking datacenter implementations, while RHEL provides the common plane to unite these advanced systems with the next wave of open source innovations, including Red Hat OpenShift.

Twenty years of open source software for IBM Z and LinuxONE

It’s been 20 years since IBM first released Linux on IBM Z, so I thought it appropriate to mark the occasion by exploring the history, the details, and the large ecosystem of open source software that’s now available for the IBM Z and LinuxONE platforms.

IBM has deep roots in the open source community. We have been backing emerging communities from a very early stage — including the Linux Foundation, the Apache Software Foundation, and the Eclipse Foundation. This includes years of contributions to the development of open source code, licenses, advocating for open governance, and open standards in addition to being an active contributor to many projects.

As open source continues to gain momentum in the software world, we see growth reflected across different hardware and processor architectures. The processor architecture for IBM Z and LinuxONE is known as s390x.

If you’re new to these two hardware platforms, they are commonly known as mainframes. IBM Z has had a tremendous evolution with world-class, enterprise-grade features for performance, security, reliability, and scale. The latest version, IBM z15, can co-locate different operating systems including Linux, z/OS, z/VSE, and z/TPF. The LinuxONE III model has the same features as IBM Z, but was designed exclusively for the Linux operating system, including most commercial and open source Linux distributions.

When we talk about commonalities, there’s one that is not very well known related to mainframes — open source software. Did you know that open source software (OSS) for mainframes existed as far back as 1955? SHARE, a volunteer-run user group, was founded in 1955 to share technical information related to mainframe software. They created an open SHARE library with available source code, and undertook distributed development. It was not called “open source” back then, but we can consider that one of the early origins of open source.

Open source software, Linux, and IBM

The popularity of open source software originated in large part as a result of years of cultural evolution through sharing libraries across all programming languages. Innovating and sharing software with reusable functionality has become a common practice led by open source communities and some of the largest organizations in the world. Another factor is that all of the latest technologies are being developed in the open — AI, machine learning, blockchain, virtual reality, and autonomous cars, just to name a few.

As mentioned earlier, open source is not new to mainframes — another example is Linux, which has been used for more than 20 years. In 1999, IBM published a collection of patches and additions to the Linux kernel to facilitate the use of Linux in IBM Z. Then, in 2000, more features were added to the mainframes, including the Integrated Facility for Linux (IFL), which hosts Linux with or without hypervisors for virtual machines (VMs).

Over the last 20+ years, IBM has committed significant resources to Linux. In 2000, IBM announced a $1 billion investment to make Linux a key part of the company strategy, establishing IBM as a champion for contributions to the Linux kernel and subsystems.

One of IBM’s key contributions to Linux has always been enhancements that take advantage of the unique capabilities of the mainframe. Today, IBM Z and LinuxONE run a much-improved open source Linux that allows amazing technology for high I/O transactions, cryptographic capabilities, scalability, reliability, compression, and performance.

A wide range of commercial and open source Linux distributions is available for IBM Z and LinuxONE: Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Ubuntu, Fedora, Debian, openSUSE, CentOS, Alpine, and ClefOS.

The use of Linux over the course of 20 years has opened the doors to a vast ecosystem of open source software for IBM Z and LinuxONE.

The open source software ecosystem for S390x
Today, in line with its commitment to Linux, IBM contributes to many open source projects. In fact, together with Red Hat, which is now part of IBM, it has the largest number of active open source contributors in the world — an amazing feat.

Because IBM is committed to the goal of continuing to develop the open source software ecosystem for IBM Z and LinuxONE, the company has teams of full-time developers that contribute upstream back to open source communities. In general terms, all you need is a Linux distribution compiled for s390x; then, if you want to port existing software, you will have to build or compile it again on IBM Z or LinuxONE.

Open source communities and IBM upstream developers address technical items specific to s390x, especially when related to existing open source software for x86 processors that need to be ported and validated on an IBM Z or LinuxONE (s390x).

Technical considerations for porting OSS to S390x
First, it’s important to note that most software recompiles or builds with minimal to no changes; however, x86-specific components will cause compilation or runtime errors. In those cases, code needs to be added to make those libraries or components work on s390x.

s390x uses big-endian byte ordering. The big-endian scheme stores the most significant byte (MSB) first, while the little-endian scheme used by Arm and x86 processors stores the least significant byte (LSB) first. What this means is that if the software performs low-level byte manipulation that assumes a little-endian layout, the code needs to be adjusted to handle big-endian order so the application continues to work properly on the mainframe.
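The byte-order difference is easy to demonstrate with Python’s `struct` module. This small sketch shows how the same 32-bit integer is laid out MSB-first versus LSB-first, and why portable code should pack and unpack with an explicit byte order rather than relying on the host CPU’s native order:

```python
import struct
import sys

value = 0x12345678

big    = struct.pack(">I", value)  # big-endian, as on s390x: MSB first
little = struct.pack("<I", value)  # little-endian, as on x86/Arm: LSB first

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Native order follows the host CPU; on s390x, sys.byteorder == 'big'.
native = struct.pack("=I", value)
print(sys.byteorder, native.hex())

# Portable code reads and writes with an explicit '<' or '>' so the same
# bytes mean the same number on both architectures.
assert struct.unpack(">I", big)[0] == struct.unpack("<I", little)[0] == value
```

Code that serializes data with an explicit byte order (as most network protocols and file formats do) recompiles and runs unchanged; only code that assumes the in-memory little-endian layout of x86 needs adjustment.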

The same considerations apply to library dependencies (transitive libraries) in which functionality specific to other processor architectures needs to change to work on s390x.

Every tool, script, and piece of software is different, but for the most part, the previous technical considerations apply, and in many cases, no code changes are required — all you have to do is build or compile the software again.

Growing the open source ecosystem
There you have it! Coding and building OSS are basically the same on any platform. The use of Linux and re-use of open source technologies, together with commonly used open source development tools and languages, have helped to grow the ecosystem of OSS for IBM Z and LinuxONE. We have seen more interest in recent months, and we are looking forward to having more OSS (especially in the AI space) being available for s390x.

Open source on IBM z/OS is a topic for another blog post, but it too is seeing growth including Linux Foundation projects like Zowe.

Open source ecosystem - logos

We invite you to participate. We have a growing community, and there are resources available for you to try in the IBM LinuxONE Community Cloud as well as a variety of other resources listed in this blog post. Developers and enterprises are sure to enjoy the benefits of working in a familiar open source environment.

Explore and make use of the advanced capabilities of the IBM z15
More than any other platform, the z15 offers a high-value architecture that can give you the cloud you want with the privacy and security you need. From the hundreds of microprocessors to the Z software stack, the z15 is built to be open, secure, resilient, and flexible from the ground up.

The z15 is offered as a single air-cooled 19-inch frame called the z15 T02, or as a multi-frame (1 to 4 19-inch frames) called the z15 T01.

The IBM Redbooks team brought together experts from around the world to help you explore and realize the potential of the IBM z15. Let IBM Redbooks guide you through the opportunities that this new technology can bring to your business.


The new IBM Z single-frame and multi-frame systems bring security, privacy and resiliency to your hybrid cloud infrastructure


The New IBM z15 T02 Mainframe

Observations On IBM’s Announcement

As anticipated, IBM announced the follow-on to the z14 ZR1 server, the z15 model T02. This class of server from IBM is designed to deliver the scalability and granularity required for small and medium-sized enterprises across all industries. IBM architected this server by building off the enterprise-class chipset introduced in the Enterprise Class counterpart known as the z15 model T01. It is interesting to note that the 19-inch form factor found in the z15 T01 was first introduced in the prior small and medium-sized enterprise-class mainframe server – the ZR1.

If you have not kept up to speed on mainframe technology, things have changed dramatically. The image below highlights what IBM’s latest Mainframe family looks like. The number of racks in the T01 model is a function of the size of the client configuration. The T02 will always be a single rack system.

Lasting Technologies Innovate

Due to the recent changes in our world, it is comforting to know that the mainframe is the most secure, reliable, available and innovative platform, and it continues to support the backbone of our economy. It is amazing to see the IBM Z enhancements made over just the last decade, as highlighted below.

Mainframes have continued to see new innovations and technology capabilities over the years. For example, you can see above how the number of cores and memory continues to increase with each server generation.

Network of Experts: Bodo Hoppe and IBM z15 – a developer perspective | IBM Client Center Boeblingen

For those platform nay-sayers that claim the “mainframe is dead,” why are the memory and core numbers increasing? The answer is simple; it is because clients are still dependent on this platform, and their workloads are continuing to grow and demand more resources.

What a great technology story! The first IBM mainframe was introduced in 1964. It just goes to show you that the mainframe is a lasting technology and that IBM and its partners will continue to innovate on the platform, while still preserving its core values.


Enterprise Modernization And The T02

During the uncertainty in our world right now, many states and their mainframe environments have made headlines. Several states are scrambling to locate programming talent to scale their legacy mainframe applications, which are written in Enterprise COBOL. These applications support the unemployment systems, which are seeing a dramatic spike in claim submissions [1]. Having read these articles, there appear to be common themes – the organizations decided to no longer invest in the platform, complacency may have set in, or some organizations favored a workload refactor + re-engineering approach. Organizations that embark on a transformation journey focus on the promise of reduced costs, improved customer experience and revenue growth. The challenge is that none of those benefits are realized until the activity results in true Maintenance and Operation (M&O) of the refactored workload. That can happen, but it takes a concerted investment effort and time.

Interesting Features Of IBM Z15 T02

Turning our attention back to IBM’s announcement, this new server offers five hardware models and more than 250 unique software capacity settings, providing a highly granular and scalable system. The base single-engine speed of 98 MIPS is found on the A01; the same full-speed unit (Z01) climbs to 1761 MIPS, up from 1570 MIPS on the prior generation. The server clock speed held steady at 4.5 GHz, yet the average single-core performance increases 14% when compared to the ZR1. Also seeing increases are memory configurations and the number of cores available within a single system image for Linux-centric workloads. Docker containers can be deployed natively on this system, and doing so would allow your microservices to access native z/OS services within the same LPAR. Talk about zero network latency!

The T02 server also includes key innovative features. One such feature is known as Compression Acceleration, and a second is Instant Recovery. Let’s briefly review both within the context of the T02.

Compression Acceleration

Compression Acceleration is made possible by a new on-chip accelerator known as the Nest Acceleration Unit (NXU). The NXU implements DEFLATE, an industry-standard compression algorithm, and supports the gzip format, which wraps a DEFLATE stream with a CRC-32 integrity check for added reliability. On-chip compression provides high compression ratios and operates in one of two modes: synchronous or asynchronous. Synchronous requests go straight through to the on-chip accelerator; asynchronous mode requires a corresponding priced feature on z/OS.
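The relationship between DEFLATE and gzip can be sketched in software with Python’s `zlib` module. This is only an illustrative software analogy for the formats the NXU accelerates in hardware: gzip is a raw DEFLATE stream wrapped with a small header and a CRC-32 trailer, which is where its extra integrity checking comes from.

```python
import zlib

data = b"IBM Z on-chip compression example. " * 200

# Raw DEFLATE: just the compressed bit stream, no container (wbits=-15).
deflater = zlib.compressobj(level=6, wbits=-15)
raw_deflate = deflater.compress(data) + deflater.flush()

# gzip format: the same DEFLATE stream wrapped with a 10-byte header and an
# 8-byte trailer containing a CRC-32 checksum (wbits=16 + 15).
gzipper = zlib.compressobj(level=6, wbits=16 + 15)
gz = gzipper.compress(data) + gzipper.flush()

print(len(data), len(raw_deflate), len(gz))        # gzip is slightly larger
assert len(gz) > len(raw_deflate)                  # header + CRC trailer overhead
assert zlib.decompress(gz, wbits=16 + 15) == data  # round trip verifies the CRC
```

Because the formats are standard, data compressed by the on-chip accelerator can be decompressed by ordinary zlib/gzip tooling on any platform, and vice versa.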

Instant Recovery
Did you know that the mainframe system you currently own embraces a three-pronged system availability strategy? IBM designs the mainframe around three principles:

  • Begin with a mindset that centers on keeping the system up and running.
  • Eliminate both unplanned and planned outages.
  • Architect the hardware and OS so that your applications remain available should an LPAR become unavailable for any reason, such as to apply maintenance. IBM instrumentation includes support for rolling IPLs and platform clustering technology.

To further strengthen this focus on system availability, another new feature, IBM System Recovery Boost, has been released.

According to IBM[i], System Recovery Boost is an innovative solution that diminishes the impact of downtime, planned or unplanned, so you can restore service and recover workloads substantially faster than on previous IBM Z generations with zero increase in IBM software MSU consumption or cost.

With System Recovery Boost, you can use your already-entitled Central Processors and zIIPs to unleash additional processing capacity on an LPAR-by-LPAR basis, providing a fixed-duration performance boost to reduce the length and mitigate the impact of downtime:
  • Faster shutdown
  • Faster GDPS automation actions
  • Faster restart & recovery
  • Faster elimination of workload backlog
Key features include:
  • Speed Boost: Enables general-purpose processors on sub-capacity machine models to run at full-capacity speed in the image(s) being boosted.
  • zIIP Boost: Provides additional capacity and parallelism by enabling general-purpose workloads to run on zIIP processors that are available to the image(s) being boosted.
Let’s dive further into this. How does the operating system know you are shutting down one of your LPARs and that a shutdown boost period of 30 minutes should start? It’s quite simple. At shutdown time, the operator has to explicitly activate the boost by starting the new started procedure IEASDBS (Shut Down Boost Start).

Upon re-IPL of that same LPAR, Boost would be “On by Default” for that image, offering up sixty minutes of boosted capacity to get the operating system and subsystems up. During the Boost period time, workloads will also continue processing at an accelerated pace.

For those familiar with this platform, you know that zIIPs traditionally only host DRDA, IPSec and IBM Db2 utility workloads, along with non-IBM Software solutions that have chosen to leverage the zIIP API. During System Recovery Boost, if you have at least one zIIP engine available to the LPAR, it can run both traditional zIIP-only workloads as well as General Purpose CP Workload. IBM dubbed this capability CP Blurring.  Just like Speed Boost, zIIP Boost will last thirty minutes on shutdown and sixty minutes on restart.

What runs on the zIIP during the boost period? The short answer: any program! [2]

On-Demand Webinar: Preparing Enterprise IT for the Next 50 Years of the Mainframe

Announcing IBM z15 Model T02, IBM LinuxONE III Model LT2 and IBM Secure Execution for Linux

Every day, clients of all sizes are examining their hybrid IT environments, looking for flexibility, responsiveness and ways to cut costs to fuel their digital transformations. To help address these needs, today IBM is making two announcements. The first is two new single-frame, air-cooled platforms – IBM z15 Model T02 and IBM LinuxONE III Model LT2 – designed to build on the capabilities of z15. The second is IBM Secure Execution for Linux, a new offering designed to help protect from internal and external threats across the hybrid cloud. The platforms and offering will become generally available on May 15, 2020.

Expanding privacy with IBM Secure Execution for Linux

According to the Ponemon Institute’s 2020 Cost of an Insider Breach Report[1] sponsored by IBM, insider threats are steadily increasing. From 2016 to 2019, the average number of incidents involving employee or contractor negligence has increased from 10.5 to 14.5–and the average number of credential theft incidents per company has tripled over the past three years, from 1.0 to 3.2.[2] IBM Secure Execution for Linux helps to mitigate these concerns by enabling clients to isolate large numbers of workloads with granularity and at scale, within a trusted execution environment available on all members of the z15 and LinuxONE III families.

Read the Ponemon Institute Report https://www.ibm.com/downloads/cas/LQZ4RONE

For clients with highly sensitive workloads such as cryptocurrency and blockchain services, keeping data secure is even more critical. That’s why IBM Secure Execution for Linux works by establishing secured enclaves that can scale to host these sensitive workloads and provide both enterprise-grade confidentiality and protection for sensitive and regulated data. For our clients, this is the latest step toward delivering a highly secure platform for mission-critical workloads.

For years, Vicom has worked with LinuxONE and Linux® on Z to solve clients’ business challenges as a reseller and integrator. On learning how IBM Secure Execution for Linux can help clients, Tom Amodio, President, Vicom Infinity said, “IBM’s Secure Execution, and the evolution of confidential computing on LinuxONE, give our clients the confidence they need to build and deploy secure hybrid clouds at scale.”

Simplifying your regulatory requirements for highly sensitive workloads
In addition to the growing risk of insider threats, our clients are also facing complexity around new compliance regulations such as GDPR and the California Consumer Privacy Act, demonstrating that workload isolation and separation of control are becoming even more important for companies of all sizes to ensure the integrity of each application and its data across platforms. IBM Secure Execution for Linux provides an alternative to air-gapped or separated dedicated hardware typically required for sensitive workloads.

TechU Talks Replay: Introducing IBM z15 Data Privacy Passports - 4/9/20

Learn more about IBM Storage offerings for IBM Z

Delivering cyber resiliency and flexible compute

Building on recent announcements around encrypting everywhere, cloud-native and IBM Z Instant Recovery capabilities, as well as support for Red Hat OpenShift Container Platform and Red Hat Ansible Certified Content for IBM Z, these two new members of the IBM Z and LinuxONE families bring new cyber resiliency and flexible compute capabilities to clients including:

  • Enterprise Key Management Foundation – Web Edition: provides centralized, secured management of keys for robust IBM z/OS® management.
  • Flexible compute: Increased core and memory density with a two central processor complex (CPC) drawer design provides increased physical capacity and an enhanced high availability option. Clients can have up to 3 I/O drawers and can now support up to 40 crypto processors.
  • Red Hat OpenShift Container Platform 4.3: The latest release, planned for general availability this month on IBM Z and LinuxONE.

Complementary IBM Storage enhancements

In addition, IBM also announced new updates to our IBM Storage offerings for IBM Z. The IBM DS8900F all-flash array and IBM TS7700 virtual tape library both now offer smaller footprint options. This week the TS7700 family announced a smaller footprint, with flexible configurations for businesses of all sizes and different needs that can be mounted in an industry-standard 19-inch rack.

More Information: