20 October 2021

Neuromorphic Photonic Computing and Better AI

 

Taking Neuromorphic Computing to the Next Level with Loihi 2

Intel Labs’ new Loihi 2 research chip outperforms its predecessor by up to 10x and comes with an open-source, community-driven neuromorphic computing framework

Today, Intel introduced Loihi 2, its second-generation neuromorphic research chip, and Lava, an open-source software framework for developing neuro-inspired applications. Their introduction signals Intel’s ongoing progress in advancing neuromorphic technology.

“Loihi 2 and Lava harvest insights from several years of collaborative research using Loihi. Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency constrained intelligent computing applications. We are open sourcing Lava to address the need for software convergence, benchmarking, and cross-platform collaboration in the field, and to accelerate our progress toward commercial viability.”

–Mike Davies, director of Intel’s Neuromorphic Computing Lab



Why It Matters: Neuromorphic computing, which draws insights from neuroscience to create chips that function more like the biological brain, aspires to deliver orders of magnitude improvements in energy efficiency, speed of computation and efficiency of learning across a range of edge applications: from vision, voice and gesture recognition to search retrieval, robotics, and constrained optimization problems.



Applications Intel and its partners have demonstrated to date include robotic arms, neuromorphic skins and olfactory sensing.

About Loihi 2: The research chip incorporates learnings from three years of use with the first-generation research chip and leverages progress in Intel’s process technology and asynchronous design methods.

Advances in Loihi 2 allow the architecture to support new classes of neuro-inspired algorithms and applications, while providing up to 10 times faster processing1, up to 15 times greater resource density2 with up to 1 million neurons per chip, and improved energy efficiency. Benefitting from a close collaboration with Intel’s Technology Development Group, Loihi 2 has been fabricated with a pre-production version of the Intel 4 process, which underscores the health and progress of Intel 4. The use of extreme ultraviolet (EUV) lithography in Intel 4 has simplified the layout design rules compared to past process technologies. This has made it possible to rapidly develop Loihi 2.

The Lava software framework addresses the need for a common software framework in the neuromorphic research community. As an open, modular, and extensible framework, Lava will allow researchers and application developers to build on each other’s progress and converge on a common set of tools, methods, and libraries. Lava runs seamlessly on heterogeneous architectures across conventional and neuromorphic processors, enabling cross-platform execution and interoperability with a variety of artificial intelligence, neuromorphic and robotics frameworks. Developers can begin building neuromorphic applications without access to specialized neuromorphic hardware and can contribute to the Lava code base, including porting it to run on other platforms.
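As a concrete illustration of the programming model, here is a minimal sketch of a small Lava process graph run entirely in CPU simulation, based on the early Lava tutorials. The module paths, class names, and parameters (LIF, Dense, RunSteps, Loihi1SimCfg) are quoted from memory of those tutorials and may differ between Lava releases, so treat this as indicative rather than definitive.

    import numpy as np
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg

    # Two populations of leaky integrate-and-fire neurons joined by a dense
    # synaptic connection; everything here runs on a conventional CPU.
    pre = LIF(shape=(3,))
    conn = Dense(weights=10 * np.eye(3))
    post = LIF(shape=(3,))

    pre.s_out.connect(conn.s_in)      # spikes from 'pre' drive the synapses
    conn.a_out.connect(post.a_in)     # synaptic currents feed 'post'

    # Run the whole process graph for 100 algorithmic time steps in simulation.
    post.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
    print(post.v.get())               # inspect membrane voltages after the run
    post.stop()

The intent, as described above, is that the same process graph can later be mapped onto Loihi hardware without rewriting the application code.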


"Investigators at Los Alamos National Laboratory have been using the Loihi neuromorphic platform to investigate the trade-offs between quantum and neuromorphic computing, as well as implementing learning processes on-chip,” said Dr. Gerd J. Kunde, staff scientist, Los Alamos National Laboratory. “This research has shown some exciting equivalences between spiking neural networks and quantum annealing approaches for solving hard optimization problems. We have also demonstrated that the backpropagation algorithm, a foundational building block for training neural networks and previously believed not to be implementable on neuromorphic architectures, can be realized efficiently on Loihi. Our team is excited to continue this research with the second generation Loihi 2 chip."

About Key Breakthroughs: Loihi 2 and Lava provide tools for researchers to develop and characterize new neuro-inspired applications for real-time processing, problem-solving, adaptation and learning. Notable highlights include:

  • Faster and more general optimization: Loihi 2’s greater programmability will allow a wider class of difficult optimization problems to be supported, including real-time optimization, planning, and decision-making from edge to datacenter systems.
  • New approaches for continual and associative learning: Loihi 2 improves support for advanced learning methods, including variations of backpropagation, the workhorse algorithm of deep learning. This expands the scope of adaptation and data-efficient learning algorithms that can be supported by low-power form factors operating in online settings.
  • Novel neural networks trainable by deep learning: Fully programmable neuron models and generalized spike messaging in Loihi 2 open the door to a wide range of new neural network models that can be trained with deep learning. Early evaluations suggest over 60 times fewer ops per inference on Loihi 2 compared to standard deep networks running on the original Loihi, without loss in accuracy3.
  • Seamless integration with real-world robotics systems, conventional processors, and novel sensors: Loihi 2 addresses a practical limitation of Loihi by incorporating faster, more flexible, and more standard input/output interfaces. Loihi 2 chips will support Ethernet interfaces, glueless integration with a wider range of event-based vision sensors, and larger meshed networks of Loihi 2 chips.

More details may be found in the Loihi 2/Lava technical brief.

About the Intel Neuromorphic Research Community: The Intel Neuromorphic Research Community (INRC) has grown to nearly 150 members, with several new additions this year, including Ford, Georgia Institute of Technology, Southwest Research Institute (SwRI) and Teledyne-FLIR. New partners join a robust community of academic, government and industry partners that are working with Intel to drive advances in real-world commercial usages of neuromorphic computing. (Read what our partners are saying about Loihi technology.)

“Advances like the new Loihi 2 chip and the Lava API are important steps forward in neuromorphic computing,” said Edy Liongosari, chief research scientist and managing director at Accenture Labs. “Next-generation neuromorphic architecture will be crucial for Accenture Labs’ research on brain-inspired computer vision algorithms for intelligent edge computing that could power future extended-reality headsets or intelligent mobile robots. The new chip provides features that will make it more efficient for hyper-dimensional computing and can enable more advanced on-chip learning, while the Lava API provides developers with a simpler and more streamlined interface to build neuromorphic systems.”


About the Path to Commercialization: Advancing neuromorphic computing from laboratory research to commercially viable technology is a three-pronged effort. It requires continual iterative improvement of neuromorphic hardware in response to the results of algorithmic and application research; development of a common cross-platform software framework so developers can benchmark, integrate, and improve on the best algorithmic ideas from different groups; and deep collaborations across industry, academia and governments to build a rich, productive neuromorphic ecosystem for exploring commercial use cases that offer near-term business value.

Today’s announcements from Intel span all these areas, putting new tools into the hands of an expanding ecosystem of neuromorphic researchers engaged in re-thinking computing from its foundations to deliver breakthroughs in intelligent information processing.

What’s Next: Intel currently offers two Loihi 2-based neuromorphic systems through the Neuromorphic Research Cloud to engaged members of the INRC: Oheo Gulch, a single-chip system for early evaluation, and Kapoho Point, an eight-chip system that will be available soon.

Introduction

Recent breakthroughs in AI have swelled our appetite for intelligence in computing devices at all scales and form factors. This new intelligence ranges from recommendation systems, automated call centers, and gaming systems in the data center, to autonomous vehicles and robots, to more intuitive and predictive interfacing with our personal computing devices, to smart city and road infrastructure that responds immediately to emergencies.

Meanwhile, as today’s AI technology matures, a clear view of its limitations is emerging. While deep neural networks (DNNs) demonstrate a near limitless capacity to scale to solve large problems, these gains come at a very high price in computational power and pre-collected data. Many emerging AI applications, especially those that must operate in unpredictable real-world environments with power, latency, and data constraints, require fundamentally new approaches.

Neuromorphic computing represents a fundamental rethinking of computer architecture at the transistor level, inspired by the form and function of the brain’s biological neural networks. Despite many decades of progress in computing, biological neural circuits remain unrivaled in their ability to intelligently process, respond to, and learn from real-world data at microwatt power levels and millisecond response times. Guided by the principles of biological neural computation, neuromorphic computing intentionally departs from the familiar algorithms and programming abstractions of conventional computing so it can unlock orders of magnitude gains in efficiency and performance compared to conventional architectures. The goal is to discover a computer architecture that is inherently suited for the full breadth of intelligent information processing that living brains effortlessly support.


Three Years of Loihi Research

Intel Labs is pioneering research that drives the evolution of compute and algorithms toward next-generation AI. In 2018, Intel Labs launched the Intel Neuromorphic Research Community (Intel NRC) and released the Loihi research processor for external use. The Loihi chip represented a milestone in the neuromorphic research field. It incorporated self-learning capabilities, novel neuron models, asynchronous spike-based communication, and many other properties inspired from neuroscience modeling, with leading silicon integration scale and circuit speeds. Over the past three years, Intel NRC members have evaluated Loihi in a wide range of application demonstrations. Some examples include:

• Adaptive robot arm control
• Visual-tactile sensory perception
• Learning and recognizing new odors and gestures
• Drone motor control with state-of-the-art latency in response to visual input
• Fast database similarity search
• Modeling diffusion processes for scientific computing applications
• Solving hard optimization problems such as railway scheduling

In most of these demonstrations, Loihi consumes far less than 1 watt of power, compared to the tens to hundreds of watts that standard CPU and GPU solutions consume.

With relative gains often reaching several orders of magnitude, these Loihi demonstrations represent breakthroughs in energy efficiency.1 Furthermore, for the best applications, Loihi simultaneously demonstrates state-of-the-art response times to arriving data samples, while also adapting and learning from incoming data streams. 



This combination of low power and low latency, with continuous adaptation, has the potential to bring new intelligent functionality to power- and latency-constrained systems at a scale and versatility beyond what any other programmable architecture supports today. Loihi has also exposed limitations and weaknesses found in today’s neuromorphic computing approaches.

While Loihi has one of the most flexible feature sets of any neuromorphic chip, many of the more promising applications stretch the range of its capabilities, such as its supported neuron models and learning rules. Interfacing with conventional sensors, processors, and data formats proved to be a challenge and often a bottleneck for performance. 

While Loihi applications show good scalability in large-scale systems such as the 768-chip Pohoiki Springs system, with gains often increasing relative to conventional solutions at larger scales, congestion in inter-chip links limited application performance. Loihi’s integrated compute-and-memory architecture foregoes off-chip DRAM memory, so scaling up workloads requires increasing the number of Loihi chips in an application. This means the economic viability of the technology depends on achieving significant improvements in the resource density of neuromorphic chips to minimize the number of required chips in commercial deployments. 



One of the biggest challenges holding back the commercialization of neuromorphic technology is the lack of software maturity and convergence. Since neuromorphic architecture is fundamentally incompatible with standard programming models, including today’s machine-learning and AI frameworks in wide use, neuromorphic software and application development is often fragmented across research teams, with different groups taking different approaches and often reinventing common functionality. 

A single, common software framework for neuromorphic computing that supports the full range of approaches pursued by the research community and presents compelling, productive abstractions to application developers has yet to emerge.

The Nx SDK software developed by Intel Labs for programming Loihi focused on low-level programming abstractions and did not attempt to address the larger community’s need for a more comprehensive and open neuromorphic software framework that runs on a wide range of platforms and allows contributions from throughout the community. This changes with the release of Lava.



Loihi 2: A New Generation of Neuromorphic Computing Architecture 

Building on the insights gained from the research performed on the Loihi chip, Intel Labs introduces Loihi 2. A complete tour of the new features, optimizations, and innovations of this chip is provided in the final section. Here are some highlights:

 • Generalized event-based messaging. Loihi originally supported only binary-valued spike messages. Loihi 2 permits spikes to carry integer-valued payloads with little extra cost in either performance or energy. These generalized spike messages support event-based messaging, preserving the desirable sparse and time-coded communication properties of spiking neural networks (SNNs), while also providing greater numerical precision.

 • Greater neuron model programmability. Loihi was specialized for a specific SNN model. Loihi 2 now implements its neuron models with a programmable pipeline in each neuromorphic core to support common arithmetic, comparison, and program control flow instructions. Loihi 2’s programmability greatly expands its range of neuron models without compromising performance or efficiency compared to Loihi, thereby enabling a richer space of use cases and applications.

 • Enhanced learning capabilities. Loihi primarily supported two-factor learning rules on its synapses, with a third modulatory term available from nonlocalized “reward” broadcasts. Loihi 2 allows networks to map localized “third factors” to specific synapses. This provides support for many of the latest neuro-inspired learning algorithms under study, including approximations of the error backpropagation algorithm, the workhorse of deep learning. While Loihi was able to prototype some of these algorithms in proof-of-concept demonstrations, Loihi 2 will be able to scale these examples up, for example, so new gestures can be learned faster with a greater range of presented hand motions. 

 • Numerous capacity optimizations to improve resource density. Loihi 2 has been fabricated with a preproduction version of the Intel 4 process to address the need to achieve greater application scales within a single neuromorphic chip. Loihi 2 also incorporates numerous architectural optimizations to compress and maximize the efficiency of each chip’s neural memory resources. Together, these innovations improve the overall resource density of Intel’s neuromorphic silicon architecture from 2x to over 160x, depending on properties of the programmed networks. 

 • Faster circuit speeds. Loihi 2’s asynchronous circuits have been fully redesigned and optimized, improving on Loihi down to the lowest levels of pipeline sequencing. This has provided gains in processing speeds from 2x for simple neuron state updates to 5x for synaptic operations to 10x for spike generation.2 Loihi 2 supports minimum chip-wide time steps under 200ns; it can now process neuromorphic networks up to 5000x faster than biological neurons. 

 • Interface improvements. Loihi 2 offers more standard chip interfaces than Loihi. These interfaces are both faster and higher-radix. Loihi 2 chips support 4x faster asynchronous chip-to-chip signaling bandwidths,3 a destination spike broadcast feature that reduces interchip bandwidth utilization by 10x or more in common networks,4 and three-dimensional mesh network topologies with six scalability ports per chip. Loihi 2 supports glueless integration with a wider range of both standard chips, over its new Ethernet interface, as well as emerging event-based vision (and other) sensor devices. 


Using these enhancements, Loihi 2 now supports a new deep neural network (DNN) implementation known as the Sigma-Delta Neural Network (SDNN) that provides great gains in speed and efficiency compared to the rate-coded spiking neural network approach commonly used on Loihi. SDNNs compute graded activation values in the same way that conventional DNNs do, but they only communicate significant changes as they happen in a sparse, event-driven manner. Simulation characterizations show that SDNNs on Loihi 2 can improve on Loihi’s rate-coded SNNs for DNN inference workloads by over 10x in both inference speeds and energy efficiency.
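To make the sigma-delta idea concrete, here is a small, purely illustrative Python sketch of delta-change messaging: a unit computes a graded activation as usual but only transmits when the value has changed by more than a threshold. The function name, threshold, and data are invented for illustration and do not reflect Loihi 2's actual SDNN implementation.

    import numpy as np

    def sigma_delta_encode(activations, last_sent, threshold=0.1):
        # Only transmit values whose change since the last transmission is "significant".
        delta = activations - last_sent
        send = np.abs(delta) > threshold
        messages = np.where(send, delta, 0.0)   # graded payloads (integer-valued on Loihi 2)
        last_sent = last_sent + messages        # receivers reconstruct activations by accumulating deltas
        return messages, last_sent

    last_sent = np.zeros(5)
    activations = np.cumsum(np.random.default_rng(1).normal(0, 0.05, (10, 5)), axis=0)
    for t, acts in enumerate(activations):
        msgs, last_sent = sigma_delta_encode(acts, last_sent)
        print(f"t={t}: sent {np.count_nonzero(msgs)} of 5 values")

Because most activations change little between consecutive inputs, most time steps send far fewer than five messages, which is where the sparsity and efficiency gains come from.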

A First Tour of Loihi 2 

 Loihi 2 has the same base architecture as its predecessor Loihi, but comes with several improvements to extend its functionality, improve its flexibility, increase its capacity, accelerate its performance, and make it easier to both scale and integrate into a larger system (see Figure 1). 

Base Architecture

Building on the strengths of its predecessor, each Loihi 2 chip consists of microprocessor cores and up to 128 fully asynchronous neuron cores connected by a network-on-chip (NoC). The neuron cores are optimized for neuromorphic workloads, each implementing a group of spiking neurons, including all synapses connecting to the neurons. All communication between neuron cores is in the form of spike messages. The number of embedded microprocessor cores has doubled from three in Loihi to six in Loihi 2. Microprocessor cores are optimized for spike-based communication and execute standard C code to assist with data I/O as well as network configuration, management, and monitoring. Parallel I/O interfaces extend the on-chip mesh across multiple chips (up to 16,384) with direct pin-to-pin wiring between neighbors.


New Functionality

Loihi 2 supports fully programmable neuron models with graded spikes. Each neuron model takes the form of a program, which is a short sequence of microcode instructions describing the behavior of a single neuron. The microcode instruction set supports bitwise and basic math operations in addition to conditional branching, memory access, and specialized instructions for spike generation and probing.
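As a rough mental model of what such a per-neuron "program" might look like, the following toy Python interpreter runs a short instruction sequence over a neuron's registers. The opcodes, register names, and fixed-point constants are invented for illustration; they are not Loihi 2's actual microcode instruction set.

    def run_neuron_program(program, regs):
        # Interpret a tiny instruction sequence for one neuron and return its output spike.
        out_spike = 0
        for op, *args in program:
            if op == "decay":    regs[args[0]] = (regs[args[0]] * args[1]) >> 12   # fixed-point multiply
            elif op == "add":    regs[args[0]] += regs[args[1]]
            elif op == "cmp_ge": regs["flag"] = int(regs[args[0]] >= args[1])      # threshold comparison
            elif op == "spike":  out_spike = regs[args[0]] if regs["flag"] else 0  # graded spike payload
            elif op == "reset":  regs[args[0]] = 0 if regs["flag"] else regs[args[0]]
        return out_spike, regs

    # A minimal leaky integrate-and-fire-like neuron: decay, integrate input, compare, spike, reset.
    program = [("decay", "v", 3686), ("add", "v", "u"),
               ("cmp_ge", "v", 4096), ("spike", "v"), ("reset", "v")]
    spike, regs = run_neuron_program(program, {"v": 2000, "u": 2500, "flag": 0})
    print(spike, regs)   # emits a graded spike because the threshold was crossed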

The second-generation “Loihi” processor from Intel has been made available to advance research into neuromorphic computing approaches that more closely mimic the behavior of biological cognitive processes. Loihi 2 outperforms the previous chip version in terms of density, energy efficiency, and other factors. This is part of an effort to create semiconductors that are more like a biological brain, which might lead to significant improvements in computer performance and efficiency.


The first generation of artificial intelligence was built on the foundation of defining rules and emulating classical logic to arrive at rational conclusions within a narrowly defined problem domain; it was well suited to monitoring and optimizing operations. The second generation is dominated by deep learning networks that analyze the contents of data and is mostly concerned with sensing and perception. The third generation of AI focuses on drawing parallels to human cognitive processes, such as interpretation and autonomous adaptation.

This is achieved by simulating neurons firing in the same way as humans’ nervous systems do, a method known as neuromorphic computing.

Neuromorphic computing is not a new concept. It was first suggested in the 1980s by Carver Mead, who coined the phrase “neuromorphic engineering.” Mead spent more than four decades building analog systems that mimicked human senses and processing mechanisms, including sensation, sight, hearing, and thinking. Neuromorphic computing is a subset of neuromorphic engineering that focuses on the “thinking” and “processing” capabilities of such human-like systems. Today, neuromorphic computing is gaining traction as the next milestone in artificial intelligence technology.


In 2017, Intel released the first-generation Loihi chip, a 14-nanometer chip with a 60 mm² die. It has more than 2 billion transistors, three orchestrating Lakemont cores, 128 neuromorphic cores, and a configurable microcode engine for on-chip training of asynchronous spiking neural networks. Spiking neural networks allow Loihi to be entirely asynchronous and event-driven, rather than being active and updating on a synchronized clock signal. When charge builds up in a neuron, “spikes” are sent along active synapses. These spikes are largely time-based, with timing recorded as part of the data. When the spikes accumulated in a neuron over a period of time cross a threshold, the core fires its own spikes to its linked neurons.
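The integrate-and-fire behavior described above can be sketched in a few lines of Python. This is a didactic toy model (arbitrary leak factor, threshold, and random input), not Loihi's actual neuron circuit, but it shows the same event-driven pattern: charge accumulates silently, and communication happens only when a threshold is crossed.

    import numpy as np

    def lif_step(v, input_current, leak=0.9, threshold=1.0):
        v = leak * v + input_current      # leak old charge, integrate new input
        spiked = v >= threshold           # fire only when the threshold is crossed
        v = np.where(spiked, 0.0, v)      # reset neurons that fired
        return v, spiked

    v = np.zeros(4)                       # membrane potentials of 4 toy neurons
    rng = np.random.default_rng(0)
    for t in range(20):
        v, spikes = lif_step(v, rng.uniform(0.0, 0.4, size=4))
        if spikes.any():
            print(f"t={t}: neurons {np.flatnonzero(spikes).tolist()} spiked")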

Loihi 2 retains 128 neuromorphic cores, but each core now supports eight times as many neurons and synapses. Each of the 128 cores has 192 KB of flexible memory, and each neuron may now be assigned up to 4,096 states depending on the model, compared to the previous limit of 24. The neuron model is now fully programmable, somewhat like an FPGA, which gives the chip more versatility and allows for new kinds of neuromorphic applications.

One of the drawbacks of Loihi was that spike signals were not programmable and carried no payload or range of values. Loihi 2 addresses these issues while also providing 2-10x faster circuits (2x for neuron state updates, up to 10x for spike generation), eight times more neurons, and four times more link bandwidth for increased scalability.

Loihi 2 was created using the Intel 4 pre-production process and benefited from the use of EUV technology in that node. The Intel 4 process allowed the die size to be halved, from 60 mm² to 31 mm², while the transistor count rose to 2.3 billion. In comparison to previous process technologies, the use of extreme ultraviolet (EUV) lithography in Intel 4 has simplified the layout design rules. This has allowed Loihi 2 to be developed quickly.


Support for three-factor learning rules has been added to the Loihi 2 architecture, as well as improved synaptic (internal interconnections) compression for quicker internal data transmission. Loihi 2 also features parallel off-chip connections (that enable the same types of compression as internal synapses) that may be utilized to extend an on-chip mesh network across many physical chips to create a very powerful neuromorphic computer system. Loihi 2 also features new approaches for continual and associative learning. Furthermore, the chip features 10GbE, GPIO, and SPI interfaces to make it easier to integrate Loihi 2 with traditional systems.
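For readers unfamiliar with the term, a three-factor rule makes each weight update depend on pre-synaptic activity, post-synaptic activity, and a third modulatory signal such as a reward or error term, which Loihi 2 can route to specific synapses. The sketch below is a generic illustration of that structure in Python; the variable names and the outer-product form are illustrative, not Loihi 2's on-chip learning engine.

    import numpy as np

    def three_factor_update(w, pre_trace, post_trace, third_factor, lr=0.01):
        # Eligibility from pre/post activity, gated and signed by the third factor.
        return w + lr * third_factor * np.outer(post_trace, pre_trace)

    w = np.zeros((3, 4))                      # 3 post-neurons x 4 pre-neurons
    pre = np.array([1.0, 0.0, 0.5, 0.0])      # pre-synaptic spike traces
    post = np.array([0.2, 1.0, 0.0])          # post-synaptic spike traces
    error = np.array([[0.5], [-0.3], [0.0]])  # modulatory "third factor" per post-neuron
    w = three_factor_update(w, pre, post, error)
    print(w)

With a localized rather than globally broadcast third factor, rules of this shape can approximate error backpropagation, which is why the feature matters for deep-learning-style training.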

Loihi 2 further improves flexibility by integrating faster, standardized I/O interfaces that support Ethernet connections, vision sensors, and bigger mesh networks. These improvements are intended to improve the chip’s compatibility with robots and sensors, which have long been a part of Loihi’s use cases.

Another significant change is in the portion of the processor that evaluates a neuron’s state before deciding whether to transmit a spike. In the original processor, users had to express such decisions with a simple bit of arithmetic; in Loihi 2, a programmable pipeline also lets them perform comparisons and control the flow of instructions.


Intel claims Loihi 2’s enhanced architecture allows it to support back-propagation, a key component of many AI models, which may help accelerate the commercialization of neuromorphic chips. Loihi 2 has also been shown to execute inference, the calculations AI models use to interpret data, with up to 60 times fewer operations per inference compared to Loihi, without any loss in accuracy.

The Neuromorphic Research Cloud is presently offering two Loihi 2-based neuromorphic devices to researchers. These are:

Oheo Gulch, a single-chip add-in card with an Intel Arria 10 FPGA for interfacing with Loihi 2, which will be used for early evaluation.

Kapoho Point, a system board that mounts eight Loihi 2 chips in a 4×4-inch form factor, which will be available shortly. It will have GPIO pins along with “standard synchronous and asynchronous interfaces” that allow it to be used with devices such as sensors and actuators for embedded robotics applications.

These systems will be available via a cloud service to members of the Intel Neuromorphic Research Community (INRC), while Lava is freely available via GitHub.

Intel has also created Lava to address the requirement for software convergence, benchmarking, and cross-platform collaboration in the realm of neuromorphic computing. As an open, modular, and extensible framework, it will enable academics and application developers to build on one another’s efforts and eventually converge on a common set of tools, techniques, and libraries.


Lava operates on a range of conventional and neuromorphic processor architectures, allowing for cross-platform execution and compatibility with a variety of artificial intelligence, neuromorphic, and robotics frameworks. Users can get the Lava Software Framework for free on GitHub.

Edy Liongosari, chief research scientist and managing director at Accenture Labs, believes that advances like the new Loihi 2 chip and the Lava API will be crucial to the future of neuromorphic computing. “Next-generation neuromorphic architecture will be crucial for Accenture Labs’ research on brain-inspired computer vision algorithms for intelligent edge computing that could power future extended-reality headsets or intelligent mobile robots,” says Liongosari.

For now, Loihi 2 has piqued the interest of the Queensland University of Technology. The university is looking to work on more sophisticated neural modules to aid in the implementation of biologically inspired navigation and map-formation algorithms. The first-generation Loihi is already being used at Los Alamos National Laboratory to study trade-offs between quantum and neuromorphic computing, and to demonstrate that the backpropagation algorithm, which is used to train neural networks, can be implemented on neuromorphic hardware.



Intel has unveiled its second-generation neuromorphic computing chip, Loihi 2, the first chip to be built on its Intel 4 process technology. Designed for research into cutting-edge neuromorphic neural networks, Loihi 2 brings a range of improvements. They include a new instruction set for neurons that provides more programmability, allowing spikes to have integer values beyond just 1 and 0, and the ability to scale into three-dimensional meshes of chips for larger systems.

The chipmaker also unveiled Lava, an open-source software framework for developing neuro-inspired applications. Intel hopes to engage neuromorphic researchers in development of Lava, which when up and running will allow research teams to build on each other’s work.

Loihi is Intel’s version of what neuromorphic hardware, designed for brain-inspired spiking neural networks (SNNs), should look like. SNNs are used in event-based computing, in which the timing of input spikes encodes the information. In general, spikes that arrive sooner have more computational effect than those arriving later.


Intel’s Loihi 2 second-generation neuromorphic processor. (Source: Intel)

Among the key differences between neuromorphic hardware and standard CPUs is fine-grained distribution of memory, meaning Loihi’s memory is embedded into individual cores. Since Loihi’s spikes rely on timing, the architecture is asynchronous.

“In neuromorphic computing, the computation is emerging through the interaction between these dynamical elements,” explained Mike Davies, director of Intel’s Neuromorphic Computing Lab. “In this case, it’s neurons that have this dynamical property of adapting online to the input it receives, and the programmer may not know the precise trajectory of steps that the chip will go through to arrive at an answer.

“It goes through a dynamical process of self-organizing its states and it settles into some new condition. That final fixed point as we call it, or equilibrium state, is what is encoding the answer to the problem that you want to solve,” Davies added. “So it’s very fundamentally different from how we even think about computing in other architectures.”

First-generation Loihi chips have thus far been demonstrated in a variety of research applications, including adaptive robot arm control, where the motion adapts to changes in the system, reducing friction and wear on the arm. Loihi is able to adapt its control algorithm to compensate for errors or unpredictable behavior, enabling robots to operate with the desired accuracy. Loihi has also been used in a system that recognizes different smells. In this scenario, it can learn and detect new odors much more efficiently than a deep learning-based equivalent. A project with Deutsche Bahn also used Loihi for train scheduling. The system reacted quickly to changes such as track closures or stalled trains.

Second-gen features

Built on a pre-production version of the Intel 4 process, Loihi 2 aims to increase programmability and performance without compromising energy efficiency. Like its predecessor, it typically consumes around 100 mW (up to 1 W).

An increase in resource density is one of the most important changes; while the chip still incorporates 128 cores, the neuron count jumps by a factor of eight.

“Getting to a higher amount of storage, neurons and synapses in a single chip is essential for the commercial viability… and commercializing them in a way that makes sense for customer applications,” said Davies.

Loihi 2 features. (Source: Intel)

With Loihi 1, workloads would often map onto the architecture in non-optimal ways. For example, the neuron count would often max out while free memory was still available. The amount of memory in Loihi 2 is similar in total, but has been broken up into memory banks that are more flexible. Additional compression has been added to network parameters to minimize the amount of memory required for larger models. This frees up memory that can be reallocated for neurons.
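The article does not describe Loihi 2's on-chip encoding, but a generic example shows why compressing sparse connectivity frees memory: store only the nonzero synapses and their indices (here with SciPy's CSR format) instead of a dense weight matrix. The 5% connection density below is an arbitrary illustrative figure.

    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)
    mask = rng.random((1000, 1000)) < 0.05                 # ~5% of possible synapses exist
    dense_w = rng.normal(size=(1000, 1000)) * mask
    csr_w = sparse.csr_matrix(dense_w)                     # keep only nonzeros plus indices

    dense_bytes = dense_w.nbytes
    csr_bytes = csr_w.data.nbytes + csr_w.indices.nbytes + csr_w.indptr.nbytes
    print(f"dense: {dense_bytes/1e6:.1f} MB, compressed: {csr_bytes/1e6:.1f} MB")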

The upshot is that Loihi 2 can tackle larger problems with the same amount of memory, delivering a roughly 15-fold increase in neural network capacity per square millimeter of chip area, bearing in mind that die area is halved overall by the new process technology.
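As a back-of-envelope consistency check using figures quoted elsewhere in this article (about eight times as many neurons per core across the same 128 cores, on a die that shrank from roughly 60 mm² to 31 mm²), the per-area gain works out to about 15x:

    neuron_gain = 8          # ~8x more neurons per chip (same core count)
    area_ratio = 60 / 31     # die shrink from ~60 mm^2 to ~31 mm^2
    print(f"capacity per mm^2: ~{neuron_gain * area_ratio:.1f}x")   # ~15.5x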

Neuron programmability

Programmability is another important architectural modification. Neurons that were previously fixed-function, though configurable, in Loihi 1 gain a full instruction set in Loihi 2. The instruction set includes common arithmetic, comparison and program control flow instructions. That level of programmability would allow varied SNN types to be run more efficiently.

“This is a kind of microcode that allows us to program almost arbitrary neuron models,” Davies said. “This covers the limits of Loihi [1], and where generally we’re finding more application value could be unlocked with even more complex and richer neuron models, which is not what we were expecting at the beginning of Loihi. But now we can actually encompass that full extent of neuron models that our partners are trying to investigate, and what the computational neuroscience domain [is] proposing and characterizing.”

The Loihi 2 die is the first to be fabricated on a pre-production version of Intel 4 process technology. (Source: Intel)


For Loihi 2, the idea of spikes has also been generalized. Loihi 1 employed strict binary spikes to mirror what is seen in biology, where spikes have no magnitude. All information is represented by spike timing, and earlier spikes would have greater computational effect than later spikes. In Loihi 2, spikes carry a configurable integer payload available to the programmable neuron model. While biological brains don’t do this, Davies said it was relatively easy for Intel to add to the silicon architecture without compromising performance.

“This is an instance where we’re departing from the strict biological fidelity, specifically because we understand what the importance is, the time-coding aspect of it,” he said. “But [we realized] that we can do better, and we can solve the same problems with fewer resources if we have this extra magnitude that can be sent alongside with this spike.”

Generalized event-based messaging is key to Loihi 2’s support of a deep neural network called the sigma-delta neural network (SDNN), which is much faster than the timing approach used on Loihi 1. SDNNs compute graded-activation values in the same way that conventional DNNs do, but only communicate significant changes as they happen in a sparse, event-driven manner.

3D Scaling

Loihi 2 is billed as up to 10 times faster than its predecessor at the circuit level. Combined with functional improvements, the design can deliver up to 10X speed gains, Davies claimed. Loihi 2 supports minimum chip-wide time steps under 200ns; it can also process neuromorphic networks up to 5,000 times faster than biological neurons.


The new chip also features scalability ports which allow Intel to scale neural networks into the third dimension. Without external memory on which to run larger neural networks, Loihi 1 required multiple devices (such as in Intel’s 768-Loihi chip system, Pohoiki Springs). Planar meshes of Loihi 1 chips become 3D meshes in Loihi 2. Meanwhile, chip-to-chip bandwidth has been improved by a factor of four, with compression and new protocols providing one-tenth the redundant spike traffic sent between chips. Davies said the combined capacity boost is around 60-fold for most workloads, avoiding bottlenecks caused by inter-chip links.

Also supported is three-factor learning, which is popular in cutting-edge neuromorphic algorithm research. The same modification, which maps third factors to specific synapses, can be used to approximate back-propagation, the training method used in deep learning. That creates new ways of learning via Loihi.

Loihi 2 will be available to researchers as a single-chip board for developing edge applications (Oheo Gulch). It will also be offered as an eight-chip board intended to scale for more demanding applications. (Source: Intel)

Lava

The Lava software framework rounds out the Loihi enhancements. The open-source project is available to the neuromorphic research community.

“Software continues to hold back the field,” Davies said. “There hasn’t been a lot of progress, not at the same pace as the hardware over the past several years. And there hasn’t been an emergence of a single software framework, as we’ve seen in the deep learning world where we have TensorFlow and PyTorch gathering huge momentum and a user base.”

While Intel has a portfolio of applications demonstrated for Loihi, code sharing among development teams has been limited. That makes it harder for developers to build on progress made elsewhere.

Davies described Lava as a new project rather than a product, intended as a way to build a framework that supports Loihi researchers working on a range of algorithms. While Lava is aimed at event-based asynchronous message passing, it will also support heterogeneous execution. That allows researchers to develop applications that initially run on CPUs. With access to Loihi hardware, researchers can then map parts of the workload onto the neuromorphic chip. The hope is that this approach will help lower the barrier to entry.

“We see a need for convergence and a communal development here towards this greater goal which is going to be necessary for commercializing neuromorphic technology,” Davies said.

Loihi 2 will be used by researchers developing advanced neuromorphic algorithms. Oheo Gulch, a single-chip system for lab testing, will initially be available to researchers, followed by Kapoho Point, an eight-chip Loihi 2 version of Kapoho Bay. Kapoho Point includes an Ethernet interface designed to allow boards to be stacked for applications such as robotics requiring more computing power.

More Information:

https://www.youtube.com/c/PhotonicsResearchGroupUGentimec/videos

https://ecosystem.photonhub.eu/trainings/product/?action=view&id_form=7&id_form_data=14

https://aip.scitation.org/doi/10.1063/5.0047946

https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

https://www.intel.com/content/www/us/en/newsroom/resources/press-kits-neuromorphic-computing.html

https://www.photonics.com/Articles/Neuromorphic_Processing_Set_to_Propel_Growth_in_AI/a66821

https://www.embedded.com/intel-offers-loihi-2-neuromorphic-chip-and-software-framework/

https://github.com/Linaro/lava



