• IBM Consulting

DBA Consulting can help you with IBM BI and Web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions. And of course also for the great Red Hat products, such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

For consulting services related to Microsoft Server 2012 onwards, Microsoft Windows 7 clients and higher, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

13 December 2019

QISKit open source project

IBM's QISKit open source project

Qiskit is an open-source framework for working with quantum computers at the level of circuits, pulses, and algorithms.

"The Arrival of Quantum Computing" by Will Zeng

A central goal of Qiskit is to build a software stack that makes it easy for anyone to use quantum computers. However, Qiskit also aims to facilitate research on the most important open issues facing quantum computation today.

You can use Qiskit to easily design experiments and run them on simulators and real quantum computers.

Qiskit Quantum Computing tech talk

Qiskit consists of four foundational elements:

Qiskit Terra: Composing quantum programs at the level of circuits and pulses; the code foundation of the stack

Qiskit Aer: Accelerating development via simulators, emulators, and debuggers

Qiskit Ignis: Addressing noise and errors

Qiskit Aqua: Building algorithms and applications

Introduction to Quantum Computer

The Qiskit Elements


Terra, the ‘earth’ element, is the foundation on which the rest of Qiskit lies. Terra provides a bedrock for composing quantum programs at the level of circuits and pulses, to optimize them for the constraints of a particular device, and to manage the execution of batches of experiments on remote-access devices. Terra defines the interfaces for a desirable end-user experience, as well as the efficient handling of layers of optimization, pulse scheduling and backend communication.

Using QISkit: The SDK for Quantum Computing

Qiskit Terra is organized in six main modules:

Circuit A quantum circuit is a model for quantum computing in which a computation is performed as a sequence of quantum operations (usually gates) on a register of qubits. A quantum circuit usually starts with the qubits in the |0,…,0> state, and the gates evolve the qubits into states that cannot be efficiently represented on a classical computer. To extract information about the state, a quantum circuit must include a measurement that maps the outcomes (possibly random, due to the fundamental nature of quantum systems) to classical registers, which can be efficiently represented.
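As a minimal stdlib sketch (deliberately not the Qiskit API) of what such a circuit computes, the following evolves two qubits from |00> through a Hadamard and a CNOT, producing a Bell state whose measurement probabilities could then be mapped into classical registers:

```python
import math

# Two-qubit state as amplitudes over |00>, |01>, |10>, |11> (little-endian).
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def apply_h_on_qubit0(s):
    """Hadamard on qubit 0 (the rightmost bit of each basis label)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1]),
            h * (s[2] + s[3]), h * (s[2] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control: flips qubit 1 when qubit 0 is 1."""
    return [s[0], s[3], s[2], s[1]]

state = apply_cnot(apply_h_on_qubit0(state))
probs = {format(i, "02b"): a * a for i, a in enumerate(state)}
# A measurement samples from these probabilities into classical registers.
print(probs)  # |00> and |11> each carry probability ~0.5
```

Real Qiskit circuits build the same structure with gate calls on a `QuantumCircuit` object; the point here is only the circuit-then-measurement model described above.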

Pulse A pulse schedule is a set of pulses sent to a quantum experiment, each applied to a channel (an experimental input line). This is a lower level than circuits and requires each gate in the circuit to be represented as a set of pulses. At this level, experiments can be designed to reduce errors (dynamical decoupling, error mitigation, and optimal pulse shapes).

Transpiler A major part of research on quantum computing is working out how to run quantum circuits on real devices. In these devices, experimental errors and decoherence introduce errors during computation. Thus, to obtain a robust implementation, it is essential to reduce the number of gates and the overall running time of the quantum circuit. The transpiler introduces the concept of a pass manager to allow users to explore optimizations and find better quantum circuits for a given algorithm. We call it a transpiler because the end result is still a circuit.
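As a toy illustration (not Terra's actual pass API), a "pass" is just a circuit-to-circuit transformation, and a pass manager chains passes. Here a hypothetical pass cancels adjacent self-inverse gates to cut gate count, and the result is still a circuit:

```python
def cancel_adjacent_inverses(gates):
    """Remove pairs of identical self-inverse gates (H, X, CX)
    acting back-to-back on the same qubits."""
    out = []
    for g in gates:
        if out and out[-1] == g and g[0] in {"h", "x", "cx"}:
            out.pop()          # g cancels the previous identical gate
        else:
            out.append(g)
    return out

def run_pass_manager(passes, gates):
    """Apply each pass in order; the output is still a circuit."""
    for p in passes:
        gates = p(gates)
    return gates

circuit = [("h", 0), ("x", 1), ("x", 1), ("cx", 0, 1)]
optimized = run_pass_manager([cancel_adjacent_inverses], circuit)
print(optimized)  # [('h', 0), ('cx', 0, 1)]
```

Terra's real passes are far richer (mapping to device topology, approximate synthesis, resource estimation), but they share this shape: circuit in, circuit out, orchestrated by a pass manager.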

The Arrival of Quantum Computing – Quantum Networks

Providers Once the user has made the circuits to run on a backend, they need a convenient way of working with them. In Terra we do this using four parts:

A Provider is an entity that provides access to a group of different backends (for example, backends available through the IBM Q Experience). It interacts with those backends to, for example, find out which ones are available, or retrieve an instance of a particular backend.

A Backend represents either a simulator or a real quantum computer and is responsible for running quantum circuits and returning results. Backends have a run method which takes a qobj as input and returns a BaseJob object. This object allows asynchronous execution, so results can be retrieved from the backend once the job completes.

Job instances can be thought of as the “ticket” for a submitted job. They report the execution’s state at a given point in time (for example, whether the job is queued, running, or has failed) and also allow control over the job.

Result. Once a job has finished, Terra allows the results to be obtained from the remote backends using result = job.result(). This result object holds the quantum data, and the most common way of interacting with it is result.get_counts(circuit). This method lets the user obtain the raw counts from the quantum circuit and use them for further analysis with the quantum information tools provided by Terra.
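The four parts above can be sketched with plain Python stand-ins. These are hypothetical classes mirroring the interfaces just described (provider → backend → job → result), not Terra's real implementations; the fake backend simply samples a Bell-state distribution:

```python
import random

class Result:
    def __init__(self, counts):
        self._counts = counts
    def get_counts(self, circuit=None):
        return self._counts

class Job:
    """The 'ticket' for a submitted circuit; here it finishes instantly."""
    def __init__(self, counts):
        self._result = Result(counts)
        self.status = "DONE"
    def result(self):
        return self._result

class FakeBackend:
    """Stand-in backend: run() takes a circuit and returns a Job."""
    def run(self, circuit, shots=1024, seed=7):
        rng = random.Random(seed)
        counts = {"00": 0, "11": 0}
        for _ in range(shots):
            counts[rng.choice(["00", "11"])] += 1
        return Job(counts)

class Provider:
    """Knows which backends exist and hands out instances of them."""
    def backends(self):
        return ["fake_qasm_simulator"]
    def get_backend(self, name):
        return FakeBackend()

backend = Provider().get_backend("fake_qasm_simulator")
job = backend.run(circuit=None, shots=1024)
counts = job.result().get_counts()
print(counts, sum(counts.values()))  # counts total 1024 shots
```

The calling pattern at the bottom (get_backend, run, job.result(), get_counts) is the same user flow Terra exposes against real IBM Q devices.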

Quantum Information To perform more advanced algorithms and analysis of the circuits run on the quantum computer, it is important to have tools to implement simple quantum information tasks. These include methods to both estimate metrics and generate quantum states, operations, and channels.

IBM Q Quantum Computing

Visualization In Terra we have many tools to visualize a quantum circuit. This allows a quick inspection of the quantum circuit to make sure it is what the user wanted to implement. There are text, Python, and LaTeX versions. Once the circuit has run, it is important to be able to view the output. There is a simple function (plot_histogram) to plot the results from a quantum circuit, including an interactive version. There are also functions, plot_state and plot_bloch_vector, that allow the plotting of a quantum state. These functions are usually used with the statevector_simulator backend, but can also be applied to real data after running state tomography experiments (Ignis).

Aer, the ‘air’ element, permeates all Qiskit elements. To really speed up development of quantum computers we need better simulators, emulators and debuggers. Aer helps us understand the limits of classical processors by demonstrating to what extent they can mimic quantum computation. Furthermore, we can use Aer to verify that current and near-future quantum computers function correctly. This can be done by stretching the limits of simulation, and by simulating the effects of realistic noise on the computation.

Aer provides a high performance simulator framework for quantum circuits using the Qiskit software stack. It contains optimized C++ simulator backends for executing circuits compiled in Terra. Aer also provides tools for constructing highly configurable noise models for performing realistic noisy simulations of the errors that occur during execution on real devices.

Quantum Computing: Technology, Market and Ecosystem Overview

Qiskit Aer includes three high performance simulator backends:

Qasm Simulator
Allows ideal and noisy multi-shot execution of Qiskit circuits and returns counts or memory. There are multiple methods that can be used to simulate different circuits more efficiently. These include:

statevector - Uses a dense statevector simulation.

stabilizer - Uses a Clifford stabilizer state simulator that is only valid for Clifford circuits and noise models.

extended_stabilizer - Uses an approximate simulator that decomposes circuits into stabilizer state terms, the number of which grows with the number of non-Clifford gates.

matrix_product_state - Uses a Matrix Product State (MPS) simulator.
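The contrast between the multi-shot Qasm simulator (which returns counts) and the single-shot Statevector simulator (which returns amplitudes) can be sketched in stdlib Python. This hypothetical helper assumes an ideal, noise-free run: it samples bitstrings from the probability distribution a statevector defines:

```python
import random

def sample_counts(statevector, shots, seed=42):
    """Draw 'shots' measurement outcomes from |amplitude|^2 weights."""
    n_states = len(statevector)
    width = n_states.bit_length() - 1          # number of qubits
    probs = [abs(a) ** 2 for a in statevector]
    rng = random.Random(seed)
    outcomes = rng.choices(range(n_states), weights=probs, k=shots)
    counts = {}
    for i in outcomes:
        key = format(i, "0{}b".format(width))  # e.g. index 3 -> '11'
        counts[key] = counts.get(key, 0) + 1
    return counts

bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]  # (|00> + |11>)/sqrt(2)
counts = sample_counts(bell, shots=1024)
print(counts)  # roughly half '00', half '11'
```

A statevector-style backend would return `bell` itself; a qasm-style backend returns something shaped like `counts`. Aer's real backends do this in optimized C++ and can additionally fold in noise models.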

Statevector Simulator
Allows ideal single-shot execution of Qiskit circuits and returns the final statevector of the simulator after the circuit has been applied.

Unitary Simulator
Allows ideal single-shot execution of Qiskit circuits and returns the final unitary matrix of the circuit itself. Note that the circuit cannot contain measure or reset operations for this backend.

Ignis, the ‘fire’ element, is dedicated to fighting noise and errors and to forging a new path. This includes better characterization of errors, improving gates, and computing in the presence of noise. Ignis is meant for those who want to design quantum error correction codes, or who wish to study ways to characterize errors through methods such as tomography, or even to find a better way for using gates by exploring dynamical decoupling and optimal control.

Ignis provides code for users to easily generate circuits for specific experiments given a minimal set of user input parameters. Ignis code contains three fundamental building blocks:

The circuits module provides the code to generate the list of circuits for a particular Ignis experiment based on a minimal set of user parameters. These are then run on Terra or Aer.

The results of an Ignis experiment are passed to the Fitters module where they are analyzed and fit according to the physics model describing the experiment. Fitters can plot the data plus fit and output a list of parameters.

For certain Ignis experiments, the fitters can output a Filter object. Filters can be used to mitigate errors in other experiments using the calibration results of an Ignis experiment.

Qiskit Ignis is organized into three types of experiments that can be performed:

Characterization experiments are designed to measure parameters in the system such as noise parameters (T1, T2-star, T2), Hamiltonian parameters such as the ZZ interaction rate and control errors in the gates.

Verification experiments are designed to verify gate and small circuit performance. Verification includes state and process tomography, quantum volume and randomized benchmarking (RB). These experiments provide the information to determine performance metrics such as the gate fidelity.

Mitigation experiments run calibration circuits that are analyzed to generate mitigation routines that can be applied to arbitrary sets of results run on the same backend. Ignis code will generate a list of circuits that run calibration measurements. The results of these measurements will be processed by a Fitter, which will output a Filter that can be used to apply mitigation to other results.
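For a single qubit, the calibrate-then-filter idea above can be sketched as inverting a measured assignment matrix. The calibration numbers below are synthetic, not real device data, and this is a bare-bones illustration rather than Ignis's actual Fitter/Filter code:

```python
def mitigate(counts, p0_given0, p1_given1):
    """Invert the 2x2 assignment matrix
    A = [[p(read 0|prep 0), p(read 0|prep 1)],
         [p(read 1|prep 0), p(read 1|prep 1)]]
    estimated from calibration circuits, to undo readout error."""
    a = p0_given0
    b = 1.0 - p1_given1   # p(read 0 | prepared 1)
    c = 1.0 - p0_given0   # p(read 1 | prepared 0)
    d = p1_given1
    det = a * d - b * c
    n0, n1 = counts.get("0", 0), counts.get("1", 0)
    return {"0": (d * n0 - b * n1) / det,
            "1": (-c * n0 + a * n1) / det}

# Calibration said: prepared |0> reads 0 95% of the time, |1> reads 1 90%.
noisy = {"0": 950, "1": 50}   # raw counts from 1000 shots of an ideal |0>
print(mitigate(noisy, 0.95, 0.90))  # recovers ~{'0': 1000, '1': 0}
```

Ignis generalizes this to multi-qubit assignment matrices and wraps the inversion in a Filter object that can be applied to any later results from the same backend.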

Aqua, the ‘water’ element, is the element of life. To make quantum computing live up to its expectations, we need to find real-world applications. Aqua is where algorithms for quantum computers are built. These algorithms can be used to build applications for quantum computing. Aqua is accessible to domain experts in chemistry, optimization, finance and AI, who want to explore the benefits of using quantum computers as accelerators for specific computational tasks.

Problems that may benefit from the power of quantum computing have been identified in numerous domains, such as Chemistry, Artificial Intelligence (AI), Optimization and Finance. Quantum computing, however, requires very specialized skills. To address the needs of the vast population of practitioners who want to use and contribute to quantum computing at various levels of the software stack, we have created Qiskit Aqua.

Programming Existing Quantum Computers

Development Strategy

We are going to look out 12 months to establish a set of goals we want to work towards. When planning, we typically look at potential work from the perspective of the elements.

Qiskit Terra
In 2018 we worked on formalizing the backends and user flow in Qiskit Terra. The basic idea is as follows: the user designs a quantum circuit and then, through a set of transpiler passes, rewrites the circuit to run on different backends with different optimizations. We also introduced the concept of a provider, whose role is to supply backends for the user to run quantum circuits on. The provider API we have defined at version one supplies a set of schemas to verify that the provider and its backends are Terra-compatible.

In 2019, we have many extensions planned. These include:

Add passes to the transpiler. The goal here is to be more efficient in circuit depth, as well as to add passes that find approximate circuits and provide resource estimations.

Introduce a circuit foundry and circuit API. The goal is to make sure that a user can easily build complex circuits from operations. Some of these include adding controls and power to operations, and inserting unitary matrices directly.

Provide an API for OpenPulse. Now that OpenPulse is defined, and the IBM Q provider can accept it, we plan to build out the pulse features. These will include a scheduler and tools for building experiments out of pulses. Also included will be tools for mapping between experiments with gates (QASM) to experiments with pulses.

Qiskit Aer
The first release of Qiskit Aer was made available at the end of 2018. It included C++ implementations of QASM, statevector, and unitary simulators. These are the core of Qiskit Aer, and replace the simulators that existed in Terra. The QASM simulator includes a customizable general (Kraus) noise model, and all simulators include CPU parallelization through the OpenMP library.

In 2019, Aer will be extended in many ways:

Optimize simulators. We are going to start profiling the simulators and work on making them faster. This will include automatic settings for backend configuration and OpenMP parallelization configuration based on the input Qobj and available hardware.

Classical simulation algorithms for quantum computational supremacy experiments

Develop additional simulator backends. We will include several approximate simulator backends that are more efficient for specific subclasses of circuits, such as the T-gate simulator, which works on Clifford and T gates (with low T-depth), and a stabilizer simulator, which works just on Clifford gates.

Add noise approximation tools. We plan to add tools for mapping general (Kraus) noise models to approximate noise models that can be implemented on approximate backends (for example, a noise model containing only mixed Clifford and reset errors).

Qiskit Ignis
This year, we are going to release the first version of Qiskit Ignis. The goal of Ignis is to be a set of tools for characterization of errors, improving gates, and enhancing computation in the presence of noise. Examples of these tools include optimal control, dynamical decoupling, and error mitigation.

In 2019, Ignis will include tools for:

quantum state/process tomography

randomized benchmarking over different groups

optimal control (e.g., pulse shaping)

dynamical decoupling

circuit randomization

error mitigation (to improve results for quantum chemistry experiments)

Qiskit Aqua
Aqua is an open-source library of quantum algorithms and applications, introduced in June 2018. As a library of quantum algorithms, Aqua comes with a rich set of quantum algorithms of general applicability (such as VQE, QAOA, Grover’s Search, Amplitude Estimation, and Phase Estimation) and domain-specific algorithms (such as the Support Vector Machine (SVM) Quantum Kernel and Variational algorithms, suitable for supervised learning). In addition, Aqua includes algorithm-supporting components, such as optimizers, variational forms, oracles, Quantum Fourier Transforms, feature maps, multiclass classification extension algorithms, uncertainty problems, and random distributions. As a framework for quantum applications, Aqua provides support for Chemistry (released separately as the Qiskit Chemistry component), as well as Artificial Intelligence (AI), Optimization, and Finance. Aqua is extensible across multiple domains, and has been designed and structured as a framework that allows researchers to contribute their own implementations of new algorithms and algorithm-supporting components.

Over the course of 2019, we are planning to enrich Aqua as follows:

We will include several new quantum algorithms, such as Deutsch-Jozsa, Simon’s, Bernstein-Vazirani, and Harrow, Hassidim, and Lloyd (HHL).

We will improve the performance of quantum algorithms on top of both simulators and real hardware.

We will provide better support for execution on real quantum hardware.

We will increase the set of problems supported by the AI, Optimization and Finance applications of Aqua.


These are examples of just some of the work we will be focusing on in the next 12 months. We will continuously adapt the plan based on feedback. Please follow along and let us know what you think!

IBM Quantum Computing


The Qiskit project is made up of several elements, each providing different functionality. Each is independently useful and can be used on its own, but for convenience we provide this repository and meta-package as a single entry point to install all the elements at once. This simplifies the install process and provides a unified interface to end users. However, because each Qiskit element has its own releases and versions, some care is needed when dealing with versions across the different repositories. This document outlines the guidelines for dealing with versions and releases of both the Qiskit elements and the meta-package.

Quantum programming

For the rest of this guide, the standard Semantic Versioning nomenclature Major.Minor.Patch will be used to refer to the different components of a version number. For example, if the version number is 0.7.1, then the major version is 0, the minor version is 7, and the patch version is 1.

Meta-package Version
The Qiskit meta-package version is an independent value that is determined by the releases of each of the elements being tracked. Each time we push a release to a tracked component (or add an element), the meta-package requirements and version will need to be updated and a new release published. The timing should be coordinated with the release of the elements to ensure that meta-package releases track element releases.

Adding New Elements
When a new Qiskit element is being added to the meta-package requirements, we need to increase the Minor version of the meta-package.

For example, suppose the meta-package is tracking two elements, qiskit-aer and qiskit-terra, and its version is 0.7.4. Then we release a new element, qiskit-ignis, that we intend to include in the meta-package. When we add the new element to the meta-package, we increase its version to 0.8.0.

Patch Version Increases
When any Qiskit element already tracked by the meta-package releases a patch version to fix bugs, we also need to bump the requirement in setup.py and then increase the patch version of the meta-package.

For example, suppose the meta-package is tracking three elements, qiskit-terra==0.8.1, qiskit-aer==0.2.1, and qiskit-ignis==0.1.4, with the current version 0.9.6. When qiskit-terra releases a new patch version, 0.8.2, to fix a bug, the meta-package also needs to increase its patch version and release, becoming 0.9.7.

Additionally, there are occasionally packaging or other bugs in the meta-package itself that need to be fixed by pushing new releases. When those are encountered, we should increase the patch version to differentiate the fix from the broken release. Do not delete the broken release, or any old release, from PyPI under any circumstances; instead, just increase the patch version and upload a new release.

Minor Version Increases
Besides adding a new element to the meta-package, the minor version of the meta-package should also be increased any time a minor version is increased in a tracked element.

For example, suppose the meta-package is tracking two elements, qiskit-terra==0.7.0 and qiskit-aer==0.1.1, and the current version is 0.7.5. When the qiskit-aer element releases 0.2.0, we need to increase the meta-package version to 0.8.0 to correspond to the new release.

Major Version Increases
The major version is different from the other version number components. Unlike the minor and patch versions, which are updated in lock step with each tracked element, the major version is only increased when all tracked versions are bumped (at least before 1.0.0). Right now all the elements still have a major version of 0, and until each tracked element in the meta-repository is marked as stable by bumping its major version to >=1, the meta-package should not increase its major version.

How the major version component will be tracked once all the elements are at >=1.0.0 has not been decided yet.
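The bump rules in the sections above can be condensed into a small helper. This is illustrative only (the real process is manual and coordinated with element releases), but it reproduces the worked examples from the text:

```python
def bump_meta_version(version, change):
    """Apply the meta-package rules: a new element or a minor bump in a
    tracked element raises the meta-package minor version; an element
    patch release (or a meta-package packaging fix) raises the patch
    version. Pre-1.0 policy; major bumps are out of scope here."""
    major, minor, patch = (int(x) for x in version.split("."))
    if change in ("new_element", "element_minor"):
        return "{}.{}.0".format(major, minor + 1)
    if change in ("element_patch", "meta_packaging_fix"):
        return "{}.{}.{}".format(major, minor, patch + 1)
    raise ValueError("unknown change type: " + change)

# The worked examples from the text:
print(bump_meta_version("0.7.4", "new_element"))    # 0.8.0
print(bump_meta_version("0.9.6", "element_patch"))  # 0.9.7
print(bump_meta_version("0.7.5", "element_minor"))  # 0.8.0
```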

Qiskit Element Requirement Tracking
While not strictly related to the meta-package and Qiskit versioning, how we track the element versions in the meta-package’s requirements list is important. Each element listed in setup.py should be pinned to a single version. This means that each version of Qiskit should install only a single version of each tracked element. For example, the requirements list at any given point should look something like:

requirements = [
    "qiskit-terra==0.8.1",
    "qiskit-aer==0.2.1",
    "qiskit-ignis==0.1.4",
]

This is to aid in debugging, but also to make tracking the versions across multiple elements more transparent.

It is also worth pointing out that the order in which we install the elements is critically important. pip does not have a real dependency solver, which means the installation order matters. So if there are overlapping requirement versions between elements, or dependencies between elements, we need to ensure that the order of the requirements list installs everything as expected. If the order needs to be changed because of some install-time incompatibility, it should be noted clearly.

Small quantum computers and big classical data

Module Status
Qiskit is developing so fast that it is hard to keep all the different parts of the API supported across versions. We do our best, and we follow the rule that for one minor version update, for example 0.6 to 0.7, we will keep the API working with a deprecation warning. Please don’t ignore these warnings. Sometimes there are cases in which this can’t be done, and for these we will give a detailed outline in the release history.

That being said, as we work towards Qiskit 1.0 some modules have become stable, and the table below is our attempt to label them.


There are three providers that come with the default installation of Qiskit:

Basic Aer Provider
This provider simulates ideal quantum circuits and has three backends. As Aer becomes more stable and runs on any operating system, this provider will be removed.

Aer Provider
This is a more advanced simulator that is written in C++. It runs faster than Basic Aer and also allows you to add noise to your circuits. This lets you explore what happens to your circuits under realistic models of the experiments, and design experiments that might be more resilient to the noise in today’s quantum computers.

IBM Q Provider
This provider gives you access to real experiments. You will need an IBM Q Experience account to use it. It also provides an online HPC simulator, a hosted version of the Aer Provider.

Community Extensions
Qiskit has been designed with modularity in mind. It is extensible in many different ways; on this page, we highlight the ways in which the Qiskit community has engaged with Qiskit and developed extensions and packages on top of it.

The Qiskit base provider is an entity that provides access to a group of different backends (for example, backends available through IBM Q). It interacts with those backends to do many things: find out which ones are available, retrieve an instance of a particular backend, get backend properties and configurations, and handle running and working with jobs.

Additional providers
Decision diagram-based quantum simulator

- Organization: Johannes Kepler University, Linz, Austria (Alwin Zulehner and Robert Wille)
- Description: A local provider which allows Qiskit to use decision diagram-based quantum simulation
- Qiskit Version: 0.7
- More info: Webpage at JKU, Medium Blog and Github Repo
Quantum Inspire

- Organization: QuTech-Delft
- Description: A provider for the Quantum Inspire backend
- Qiskit Version: 0.7
- More info: Medium Blog and Github.
Circuit optimization is at the heart of making quantum computing feasible on actual hardware. A central component of Qiskit is the transpiler, which is a framework for manipulating quantum circuits according to certain transformations (known as transpiler passes). The transpiler enables users to create customized sets of passes, orchestrated by a pass manager, to transform the circuit according to the rules specified by the passes. In addition, the transpiler architecture is designed for modularity and extensibility, enabling Qiskit users to write their own passes, use them in the pass manager, and combine them with existing passes. In this way, the transpiler architecture opens up the door for research into aggressive optimization of quantum circuits.

Additional passes
t|ket〉 optimization & routing pass

- Organization: Cambridge Quantum Computing
- Description: Transpiler pass for circuit optimization and mapping to a backend using CQC’s t|ket〉 compiler.
- Qiskit Version: 0.7
- More info: Tutorial Notebook and Github.
Extending Qiskit with new tools and functionality is an important part of building a community. These tools can be new visualizations, Slack integration, Jupyter extensions, and much more.

Project Highlight: Quantum Computing Meets Machine Learning

If learning is the first step towards intelligence, it’s no wonder we’re sending machines to school.
Machine learning, specifically, is the self-learning process by which machines learn from patterns rather than (in the ideal case) asking humans for assistance. Seen as a subset of artificial intelligence, machine learning has been gaining traction in the development community, and many frameworks are now available.

Quantum Programming A New Approach to Solve Complex Problems Francisco Gálvez Ramirez IBM Staff fjgramirez@es.ibm.com

And soon, you may have a machine learning framework available in your favorite quantum computing framework!

The winning project of 2019 Qiskit Camp Europe, QizGloria, is a hybrid quantum-classical machine learning interface with full Qiskit and PyTorch capabilities. PyTorch is a machine learning library that, like Qiskit, is free and open-source. By integrating the Qiskit and PyTorch frameworks during the 24-hour hackathon, the QizGloria group demonstrated that you can use the best of the quantum and classical worlds for machine learning. The project is still undergoing modifications but may soon be integrated into Qiskit Aqua.
Below, we interview the four members of the QizGloria group about their project, their experiences, and their future outlook on the field. Interviews are edited for clarity.
Why did you think to combine Qiskit, a quantum-computing framework, with PyTorch, a machine-learning framework?

Controlling a Quantum Computer with Code

Karel Dumon: Classical machine learning is currently benefiting hugely from the open-source community, and this is something we want to leverage in quantum too. Our project focuses on the potential application of quantum computing for machine learning, but also on the use of machine learning to help progress quantum computing itself. Through our project, we hope to make it easier for machine learning developers to explore the quantum world.
Patrick Huembeli: To that effect, it makes Qiskit very accessible for people with a classical machine learning background — they can treat the quantum nodes just as another layer of their machine learning algorithm.

Amira Abbas: In that sense, this project bridges the gap between two communities, machine learning and quantum computing, whose research could seriously complement each other instead of diverging.
How do you think your integration will benefit the Qiskit community?

Dumon: There are a lot of open-source tools available for both quantum computing and machine learning, but those integrations do not provide the optimal synergy between the two worlds. What we tried to build is a tighter integration between Qiskit and PyTorch (an open-source machine learning framework from Facebook) that makes optimal use of the existing capabilities.

Isaac Turtletaub: In quantum computing, we commonly have circuits that need to be optimized with a classical computer. PyTorch is one of the largest machine learning libraries out there, and opens up the possibilities of using deep learning for optimizing quantum circuits.

Do you plan to continue working on this project?

Dumon: During the hackathon, we built the bridge between the two worlds, and showcased some possibilities — but we definitely believe that this is just the beginning of what is possible! While our Qiskit Camp submission was a proof-of-concept, we are currently working with the Qiskit team to include our work in the Qiskit Aqua codebase.

Turtletaub: I plan on continuing to work on this project by contributing to a generalized interface between PyTorch and Qiskit, allowing this to work on any variational quantum circuit. I hope collaborating with the IBM coaches will let all Qiskitters take advantage of our project.

Abbas: We also plan on writing a chapter on hybrid quantum-classical machine learning using PyTorch for the open-source Qiskit textbook, and we have created a pull request for this on GitHub.
What is one of the more difficult challenges still ahead?

Huembeli: Getting the parameter binding of Qiskit right. This will be very important if we want to continue this project. This has to be thought through very well.
In what other ways could this project be expanded?
Turtletaub: This project could be expanded by not only opening up Qiskit to PyTorch, but to another machine learning library, such as TensorFlow.

Huembeli: And if we integrate it well into Qiskit, people will be able to add any nice classical machine learning feature to Qiskit. There is really no limit of applications.

Abbas: Since everything is open source, members of the community can contribute to the code (via pull requests) and add functionalities; make things more efficient, and even create more tutorials demonstrating new ideas or research.

Dumon: We hope that others start playing around with our code and help shape the idea further. This is at the core of the open-source spirit.

And on another topic — Qiskit Camp Europe — what was your favorite part?

Huembeli: The hackathon. It was amazing to see what you can get done in 24 hours.

Turtletaub: My favorite aspect was being able to meet people interested in quantum computing from all across the world and being able to collaborate with some of the top researchers and engineers at IBM.

Abbas: Hands down, my favourite aspect of the hackathon was the people. Coming from South Africa, I was really worried I wouldn’t fit in or be good enough because I’m just a master’s student from the University of KwaZulu-Natal with no undergraduate experience in physics. But as soon as I arrived, I realised that the intention of others at the camp wasn’t to undermine others’ capabilities or differences, but to highlight them and use them to build beautiful applications with Qiskit. There were people from all types of backgrounds with differing levels of experience, and all so helpful, open and keen to learn. I was blown away by the creativity of the projects and I am convinced that the world of quantum computing has a very bright future if these are some of the individuals contributing to it.


More Information:

Qiskit Open Source

19 November 2019

What is Azure Synapse Analytics (formerly SQL DW)?

What is Azure Synapse Analytics 

Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.

Vision Keynote with Satya Nadella | Microsoft Ignite 2019

Azure Synapse has four components:

  • SQL Analytics: Complete T-SQL based analytics
      • SQL pool (pay per DWU provisioned) – Generally Available
      • SQL on-demand (pay per TB processed) – (Preview)
  • Spark: Deeply integrated Apache Spark (Preview)
  • Data Integration: Hybrid data integration (Preview)
  • Studio: Unified user experience (Preview)

To access the preview features of Azure Synapse, request access here. Microsoft will triage all requests and respond as soon as possible.

SQL Analytics and SQL pool in Azure Synapse

SQL Analytics refers to the enterprise data warehousing features that are generally available with Azure Synapse.

Azure Synapse Analytics - Next-gen Azure SQL Data Warehouse

SQL pool represents a collection of analytic resources that are provisioned when using SQL Analytics. The size of a SQL pool is determined by Data Warehousing Units (DWU).

Import big data with simple PolyBase T-SQL queries, and then use the power of MPP to run high-performance analytics. As you integrate and analyze, SQL Analytics will become the single version of truth your business can count on for faster and more robust insights.

Modern Data Warehouse overview | Azure SQL Data Warehouse

In a cloud data solution, data is ingested into big data stores from a variety of sources. Once in a big data store, Hadoop, Spark, and machine learning algorithms prepare and train the data. When the data is ready for complex analysis, SQL Analytics uses PolyBase to query the big data stores. PolyBase uses standard T-SQL queries to bring the data into SQL Analytics tables.

Azure data platform overview

SQL Analytics stores data in relational tables with columnar storage. This format significantly reduces the data storage costs, and improves query performance. Once data is stored, you can run analytics at massive scale. Compared to traditional database systems, analysis queries finish in seconds instead of minutes, or hours instead of days.

The analysis results can go to worldwide reporting databases or applications. Business analysts can then gain insights to make well-informed business decisions.

Azure Synapse Analytics (formerly SQL DW) architecture


On November fourth, Microsoft announced Azure Synapse Analytics, the next evolution of Azure SQL Data Warehouse: a single service that brings enterprise data warehousing and Big Data analytics together, with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.

With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language. This can be done using either serverless on-demand queries for data exploration and ad hoc analysis or provisioned resources for your most demanding data warehousing needs. A single service for any workload.

In fact, it’s the first and only analytics system to have run all the TPC-H queries at petabyte-scale. For current SQL Data Warehouse customers, you can continue running your existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new preview capabilities when they become generally available. You can sign up to preview new features like Serverless on-demand query, Azure Synapse studio, and Apache Spark™ integration.

Building a modern data warehouse

Taking SQL beyond data warehousing

A cloud native, distributed SQL processing engine is at the foundation of Azure Synapse and is what enables the service to support the most demanding enterprise data warehousing workloads. This week at Ignite we introduced a number of exciting features to make data warehousing with Azure Synapse easier and allow organizations to use SQL for a broader set of analytics use cases.

Unlock powerful insights faster from all data
Azure Synapse deeply integrates with Power BI and Azure Machine Learning to drive insights for all users, from data scientists coding with statistics to the business user with Power BI. And to make all types of analytics possible, we’re announcing native and built-in prediction support, as well as runtime-level improvements to how Azure Synapse handles streaming data, Parquet files, and PolyBase. Let’s dive into more detail:

With the native PREDICT statement, you can score machine learning models within your data warehouse—avoiding the need for large and complex data movement. The PREDICT function (available in preview) relies on an open model framework and takes user data as input to generate predictions. Users can convert existing models trained in Azure Machine Learning, Apache Spark™, or other frameworks into an internal format representation without having to start from scratch, accelerating time to insight.

Azure SQL Database & Azure SQL Data Warehouse

We’ve enabled direct streaming ingestion support and the ability to execute analytical queries over streaming data. Capabilities such as joins across multiple streaming inputs, aggregations within one or more streaming inputs, transformation of semi-structured data, and multiple temporal windows are all supported directly in your data warehousing environment (available in preview). For streaming ingestion, customers can integrate with Event Hubs (including Event Hubs for Kafka) and IoT Hubs.

We’re also removing the barrier that inhibits securely and easily sharing data inside or outside your organization with Azure Data Share integration for sharing both data lake and data warehouse data.

Modern Data Warehouse Overview

By using new ParquetDirect technology, we are making interactive queries over the data lake a reality (in preview). It’s designed to access Parquet files with native support directly built into the engine. Through improved data scan rates, intelligent data caching, and columnstore batch processing, we’ve improved PolyBase execution by over 13x.

Introducing the modern data warehouse solution pattern with Azure SQL Data Warehouse

Workload isolation
To support customers as they democratize their data warehouses, we are announcing new features for intelligent workload management. The new Workload Isolation functionality allows you to manage the execution of heterogeneous workloads while providing flexibility and control over data warehouse resources. This leads to improved execution predictability and enhances the ability to satisfy predefined SLAs.

COPY statement
Analyzing petabyte-scale data requires ingesting petabyte-scale data. To streamline the data ingestion process, we are introducing a simple and flexible COPY statement. With only one command, Azure Synapse now enables data to be seamlessly ingested into a data warehouse in a fast and secure manner.

This new COPY statement enables using a single T-SQL statement to load data, parse standard CSV files, and more.

COPY statement sample code:

COPY INTO dbo.[FactOnlineSales] FROM 'https://contoso.blob.core.windows.net/Sales/'

Safe keeping for data with unmatched security
Azure has the most advanced security and privacy features in the market. These features are built into the fabric of Azure Synapse, such as automated threat detection and always-on data encryption. And for fine-grained access control, businesses can ensure data stays safe and private using column-level security, native row-level security, and dynamic data masking (now generally available) to automatically protect sensitive data in real time.

To further enhance security and privacy, we are introducing Azure Private Link. It provides a secure and scalable way to consume deployed resources from your own Azure Virtual Network (VNet). A secure connection is established using a consent-based call flow. Once established, all data that flows between Azure Synapse and service consumers is isolated from the internet and stays on the Microsoft network. There is no longer a need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.

SQL Analytics MPP architecture components

SQL Analytics leverages a scale-out architecture to distribute computational processing of data across multiple nodes. The unit of scale is an abstraction of compute power known as a data warehouse unit. Compute is separate from storage, which enables you to scale compute independently of the data in your system.

AI for Intelligent Cloud and Intelligent Edge: Discover, Deploy, and Manage with Azure ML Services

SQL Analytics uses a node-based architecture. Applications connect and issue T-SQL commands to a Control node, which is the single point of entry for SQL Analytics. The Control node runs the MPP engine which optimizes queries for parallel processing, and then passes operations to Compute nodes to do their work in parallel.

The Compute nodes store all user data in Azure Storage and run the parallel queries. The Data Movement Service (DMS) is a system-level internal service that moves data across the nodes as necessary to run queries in parallel and return accurate results.
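A minimal sketch of why DMS exists, using hypothetical tables and a stand-in distribution function (the real hash is internal to the service): joining on a column that is not the table's distribution column forces a shuffle so that matching rows end up on the same node before the join runs locally in parallel.

```python
NUM_DISTRIBUTIONS = 4  # kept tiny for illustration; SQL Analytics uses 60

def home(key):
    # Stand-in for the service's internal distribution hash.
    return key % NUM_DISTRIBUTIONS

# Hypothetical tables: payments is NOT distributed on customer_id,
# so a join on customer_id cannot run locally as stored.
payments = [{"payment_id": p, "customer_id": p * 3 % 8} for p in range(8)]

# A "shuffle move": DMS relocates each payments row to the distribution
# that owns its customer_id.
shuffled = {}
for row in payments:
    shuffled.setdefault(home(row["customer_id"]), []).append(row)

# After the move, every distribution holds exactly the rows it can join
# locally, and the parallel query proceeds without further movement.
for d, rows in shuffled.items():
    assert all(home(r["customer_id"]) == d for r in rows)
```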

With decoupled storage and compute, when using SQL Analytics one can:

  • Independently size compute power irrespective of your storage needs.
  • Grow or shrink compute power, within a SQL pool (data warehouse), without moving data.
  • Pause compute capacity while leaving data intact, so you only pay for storage.
  • Resume compute capacity during operational hours.

Data Warehousing And Big Data Analytics in Azure Basics Tutorial

Azure storage

SQL Analytics leverages Azure storage to keep your user data safe. Since your data is stored and managed by Azure storage, there is a separate charge for your storage consumption. The data itself is sharded into distributions to optimize the performance of the system. You can choose which sharding pattern to use to distribute the data when you define the table. These sharding patterns are supported:

  • Hash
  • Round Robin
  • Replicate

Control node

The Control node is the brain of the architecture. It is the front end that interacts with all applications and connections. The MPP engine runs on the Control node to optimize and coordinate parallel queries. When you submit a T-SQL query to SQL Analytics, the Control node transforms it into queries that run against each distribution in parallel.
Compute nodes

The Compute nodes provide the computational power. Distributions map to Compute nodes for processing. As you pay for more compute resources, SQL Analytics re-maps the distributions to the available Compute nodes. The number of compute nodes ranges from 1 to 60, and is determined by the service level for SQL Analytics.
Each Compute node has a node ID that is visible in system views. You can see the Compute node ID by looking for the node_id column in system views whose names begin with sys.pdw_nodes. For a list of these system views, see MPP system views.
Data Movement Service

Data Movement Service (DMS) is the data transport technology that coordinates data movement between the Compute nodes. Some queries require data movement to ensure the parallel queries return accurate results. When data movement is required, DMS ensures the right data gets to the right location.

Machine Learning and AI

A distribution is the basic unit of storage and processing for parallel queries that run on distributed data. When SQL Analytics runs a query, the work is divided into 60 smaller queries that run in parallel.
Each of the 60 smaller queries runs on one of the data distributions. Each Compute node manages one or more of the 60 distributions. A SQL pool with maximum compute resources has one distribution per Compute node. A SQL pool with minimum compute resources has all the distributions on one compute node.

Hash-distributed tables

A hash-distributed table can deliver the highest query performance for joins and aggregations on large tables.
To shard data into a hash-distributed table, SQL Analytics uses a hash function to deterministically assign each row to one distribution. In the table definition, one of the columns is designated as the distribution column. The hash function uses the values in the distribution column to assign each row to a distribution.

The following diagram illustrates how a full (non-distributed) table gets stored as a hash-distributed table.

Distributed table

  • Each row belongs to one distribution.
  • A deterministic hash algorithm assigns each row to one distribution.
  • The number of table rows per distribution varies, as shown by the different sizes of tables.

There are performance considerations for the selection of a distribution column, such as distinctness, data skew, and the types of queries that run on the system.
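The deterministic assignment described above can be sketched in a few lines. This is an illustration only: the actual hash function is internal to SQL Analytics, and md5 plus a modulo is merely a stand-in.

```python
import hashlib

NUM_DISTRIBUTIONS = 60

def assign_distribution(distribution_column_value):
    # Deterministic: the same key always hashes to the same distribution.
    digest = hashlib.md5(str(distribution_column_value).encode()).hexdigest()
    return int(digest, 16) % NUM_DISTRIBUTIONS

# Hypothetical table with customer_id as the designated distribution column.
rows = [{"customer_id": i, "amount": i * 10} for i in range(1000)]

buckets = {}
for row in rows:
    d = assign_distribution(row["customer_id"])
    buckets.setdefault(d, []).append(row)

# Every row lands in exactly one of the 60 distributions; re-hashing the
# same key always yields the same distribution, which is what makes
# co-located joins on the distribution column possible.
assert assign_distribution(42) == assign_distribution(42)
print(len(buckets))  # number of distributions actually populated
```

Skew follows directly from this scheme: if many rows share one distribution-column value, they all hash to the same distribution, which is why distinctness matters when choosing the column.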

Round-robin distributed tables
A round-robin table is the simplest table to create and delivers fast performance when used as a staging table for loads.
A round-robin distributed table distributes data evenly across the table but without any further optimization. A distribution is first chosen at random, and then buffers of rows are assigned to distributions sequentially. It is quick to load data into a round-robin table, but query performance can often be better with hash-distributed tables. Joins on round-robin tables require reshuffling data, which takes additional time.
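A hypothetical sketch of the round-robin behavior just described: a random starting distribution, then sequential assignment of row buffers. The spread is perfectly even but unrelated to the data values, which is why a later join must reshuffle.

```python
import random

NUM_DISTRIBUTIONS = 60

def round_robin_assign(buffers):
    # A starting distribution is chosen at random, then buffers are
    # assigned to distributions sequentially.
    start = random.randrange(NUM_DISTRIBUTIONS)
    return [(start + i) % NUM_DISTRIBUTIONS for i in range(len(buffers))]

buffers = [f"buffer_{i}" for i in range(120)]
assignment = round_robin_assign(buffers)

# 120 buffers over 60 distributions: exactly two buffers each, regardless
# of what the rows contain -- fast to load, but no co-location for joins.
counts = {d: assignment.count(d) for d in set(assignment)}
assert all(c == 2 for c in counts.values())
```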

Replicated Tables
A replicated table provides the fastest query performance for small tables.
A table that is replicated caches a full copy of the table on each compute node. Consequently, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables: extra storage is required, and there is additional overhead incurred when writing data, which makes replicating large tables impractical.
The diagram below shows a replicated table which is cached on the first distribution on each compute node.


Compare price-performance of Azure Synapse Analytics and Google BigQuery

Azure Synapse Analytics (formerly Azure SQL Data Warehouse) outperforms Google BigQuery in all TPC-H and TPC-DS* benchmark queries. Azure Synapse Analytics consistently demonstrated better price-performance compared with BigQuery, and costs up to 94 percent less when running TPC-H* benchmark queries.

*Performance and price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in January 2019 for the TPC-H benchmark report and March 2019 for the TPC-DS benchmark report. Analytics in Azure is up to 14 times faster and costs 94 percent less, according to the TPC-H benchmark, and is up to 12 times faster and costs 73 percent less, according to the TPC-DS benchmark. Benchmark data is taken from recognized industry standards, TPC Benchmark™ H (TPC-H) and TPC Benchmark™ DS (TPC-DS), and is based on query execution performance testing of 66 queries for TPC-H and 309 queries for TPC-DS, conducted by GigaOm in January 2019 and March 2019, respectively; testing commissioned by Microsoft. Price-performance is calculated by GigaOm as the TPC-H/TPC-DS metric of cost of ownership divided by composite query. Prices are based on publicly available US pricing as of January 2019 for TPC-H queries and March 2019 for TPC-DS queries. Actual performance and prices may vary. Learn more about the GigaOm benchmark study

QSSUG: Azure Cognitive Services – The Rise of the Machines

Forrester interviewed four customers and surveyed 364 others on their use of Azure analytics with Power BI. Of those surveyed customers, 85 percent agreed or strongly agreed that well-integrated analytics databases and storage, a data management stack, and business intelligence tools were beneficial to their organization. Customers also reported a 21.9 percent average reduction in the overall cost of Microsoft analytics and BI offerings when compared to alternative analytics solutions.

Based on the companies interviewed and surveyed, Forrester projects that a Microsoft analytics and business intelligence (BI) solution could provide:

  • Benefits of $22.1 million over three years versus costs of $6 million, resulting in a net present value of $16.1 million and a projected return on investment of 271 percent.
  • Reduced total cost of ownership by 25.7 percent.
  • Better overall analytics system performance with improved data security, enhanced decision making, and democratized data access.

Modern Data Warehousing with BigQuery (Cloud Next '19)

Analytics in Azure is up to 14x faster and costs 94% less than other cloud providers. Why go anywhere else?

Julia White, Corporate Vice President, Microsoft Azure
It’s true. With the volume and complexity of data rapidly increasing, performance and security are critical requirements for analytics. But not all analytics services are built equal. And not all cloud storage is built for analytics.

Only Azure provides the most comprehensive set of analytics services from data ingestion to storage to data warehousing to machine learning and BI. Each of these services has been finely tuned to provide industry-leading performance, security, and ease of use, at unmatched value. In short, Azure has you covered.

Unparalleled price-performance

When it comes to analytics, price-performance is key. In July 2018, GigaOm published a study that showed that Azure SQL Data Warehouse was 67 percent faster and 23 percent cheaper than Amazon Web Service RedShift.

That was then. Today, we’re even better!

In the most recent study by GigaOm, they found that Azure SQL Data Warehouse is now outperforming the competition by up to a whopping 14x. No one else has produced independent, industry-accepted benchmarks like these. Not AWS Redshift or Google BigQuery. And the best part? Azure is up to 94 percent cheaper.

This industry leading price-performance extends to the rest of our analytics stack. This includes Azure Data Lake Storage, our cloud data storage service, and Azure Databricks, our big data processing service. Customers like Newell Brands – worldwide marketer of consumer and commercial products such as Rubbermaid, Mr. Coffee and Oster – recently moved their workload to Azure and realized significant improvements.

“Azure Data Lake Storage will streamline our analytics process and deliver better end to end performance with lower cost.” 
– Danny Siegel, Vice President of Information Delivery Systems, Newell Brands
Secure cloud analytics

All the price-performance in the world means nothing without security. Make the comparison and you will see Azure is the most trusted cloud in the market. Azure has the most comprehensive set of compliance offerings, including more certifications than any other cloud vendor, combined with advanced identity governance and access management through Active Directory integration.

For analytics, we have developed additional capabilities to meet customers’ most stringent security requirements. Azure Data Lake Storage provides multi-layered security including POSIX compliant file and folder permissions and at-rest encryption. Similarly, Azure SQL Data Warehouse utilizes machine learning to provide the most comprehensive set of security capabilities across data protection, access control, authentication, network security, and automatic threat detection.

Insights for all

What’s the best complement to Azure Analytics’ unmatched price-performance and security? The answer is Microsoft Power BI.

Power BI’s ease of use enables everyone in your organization to benefit from our analytics stack. Employees can get their insights in seconds from all enterprise data stored in Azure. And without limitations on concurrency, Power BI can be used across teams to create the most beautiful visualizations that deliver powerful insights.

Leveraging Microsoft’s Common Data Model, Power BI users can easily access and analyze enterprise data using a common data schema without needing complex data transformation. Customers looking for petabyte-scale analytics can leverage Power BI Aggregations with Azure SQL Data Warehouse for rapid query. Better yet, Power BI users can easily apply sophisticated AI models built with Azure. Powerful insights easily accessible to all.

Customers like Heathrow Airport, one of the busiest airports in the world, are empowering their employees with powerful insights:

“With Power BI, we can very quickly connect to a wide range of data sources with very little effort and use this data to run Heathrow more smoothly than ever before. Every day, we experience a huge amount of variability in our business. With Azure, we’re getting to the point where we can anticipate passenger flow and stay ahead of disruption that causes stress for passengers and employees.”
– Stuart Birrell, Chief Information Officer, Heathrow Airport
Code-free modern data warehouse using Azure SQL DW and Data Factory | Azure Friday


We continue to focus on making Azure the best place for your data and analytics. Our priority is to meet your needs for today and tomorrow.

So, we are excited to make the following announcements:

  • General availability of Azure Data Lake Storage: The first cloud storage that combines the best of a hierarchical file system and blob storage.
  • General availability of Azure Data Explorer: A fast, fully managed service that simplifies ad hoc and interactive analysis over telemetry, time-series, and log data. This service, which powers other Azure services like Log Analytics, App Insights, and Time Series Insights, is useful for querying streaming data to identify trends, detect anomalies, and diagnose problems.
  • Preview of the new Mapping Data Flow capability in Azure Data Factory: Mapping Data Flow provides a visual, zero-code experience to help data engineers easily build data transformations. This complements Azure Data Factory’s code-first experience and enables data engineers of all skill levels to collaborate and build powerful hybrid data transformation pipelines.

Azure provides the most comprehensive platform for analytics. With these updates, Azure solidifies its leadership in analytics.


22 October 2019

Google Claims Quantum Supremacy - Not so Fast Says IBM, but are they Right?

What Google's Quantum Supremacy Claim Means for Quantum Computing

Leaked details about Google's quantum supremacy experiment stirred up a media frenzy about the next quantum computing milestone

The Limits of Quantum Computers

Google’s claim to have demonstrated quantum supremacy—one of the earliest and most hotly anticipated milestones on the long road toward practical quantum computing—was supposed to make its official debut in a prestigious science journal. Instead, an early leak of the research paper has sparked a frenzy of media coverage and some misinformed speculation about when quantum computers will be ready to crack the world’s computer security algorithms.

Google’s new Bristlecone processor brings it one step closer to quantum supremacy

The moment when quantum computing can seriously threaten to compromise the security of digital communications remains many years, if not decades, in the future. But the leaked draft of Google’s paper likely represents the first experimental proof of the long-held theoretical premise that quantum computers can outperform even the most powerful modern supercomputers on certain tasks, experts say. Such a demonstration of quantum supremacy is a long-awaited signpost showing researchers that they’re on the right path to the promised land of practical quantum computers.

“For those of us who work in quantum computing, the achievement of quantum supremacy is a huge and very welcome milestone,” says Scott Aaronson, a computer scientist and director of the Quantum Information Center at the University of Texas at Austin, who was not involved in Google’s research. “And it’s not a surprise—it’s something we all expected was coming in a matter of a couple of years at most.”

The Complexity of Quantum Sampling QIP 2018 Michael Bremner

What Is Quantum Computing? 

Quantum computing harnesses the rules of quantum physics that hold sway over some of the smallest particles in the universe in order to build devices very different from today’s “classical” computer chips used in smartphones and laptops. Instead of classical computing’s binary bits of information that can only exist in one of two basic states, a quantum computer relies on quantum bits (qubits) that can exist in many different possible states. It’s a bit like having a classical computing coin that can only go “heads” or “tails” versus a quantum computing marble that can roll around and take on many different positions relative to its “heads” or “tails” hemispheres.
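The coin-versus-marble analogy can be made concrete in a few lines of Python. This is a toy illustration only: a qubit is represented as a normalized pair of complex amplitudes over the "heads" (|0⟩) and "tails" (|1⟩) states, and measurement probabilities are the squared magnitudes of those amplitudes.

```python
import cmath
import math

def qubit(alpha, beta):
    # A qubit state is a normalized pair of complex amplitudes (alpha, beta).
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    return (alpha / norm, beta / norm)

# A classical bit is exactly "heads" or "tails":
heads = qubit(1, 0)

# The "rolling marble": an equal superposition with a relative phase,
# a state no classical bit can represent.
marble = qubit(1, cmath.exp(1j * math.pi / 4))

# Measurement probabilities come from squared amplitude magnitudes and
# always sum to 1.
p0, p1 = abs(marble[0]) ** 2, abs(marble[1]) ** 2
assert abs(p0 + p1 - 1) < 1e-12
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

The phase carried by `marble` does not change the measurement probabilities here, but it is exactly the kind of information that interference between entangled qubits exploits.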

Because each qubit can hold many different states of information, multiple qubits connected through quantum entanglement hold the promise of speedily performing complex computing operations that might take thousands or millions of years on modern supercomputers. To build such quantum computers, some research labs have been using lasers and electric fields to trap and manipulate atoms as individual qubits.

Quantum Computing and Quantum Supremacy

Other groups such as the Google AI Quantum Lab led by John Martinis at the University of California, Santa Barbara, have been experimenting with qubits made of loops of superconducting metal. It’s this approach that enabled Google and its research collaborators to demonstrate quantum supremacy based on a 54-qubit array laid out in a flat, rectangular arrangement—although one qubit turned out defective and reduced the number of working qubits to 53. (Google did not respond to a request for comment.)

“For the past year or two, we had a very good idea that it was going to be the Google group, because they were the ones who were really explicitly targeting this goal in all their work,” Aaronson says. “They are also on the forefront of building the hardware.”

D Wave Webinar: A Machine of a Different Kind, Quantum Computing, 2019

Google’s Quantum Supremacy Experiment
Google’s experiment tested whether the company’s quantum computing device, named Sycamore, could correctly produce samples from a random quantum circuit—the equivalent of verifying the results from the quantum version of a random number generator. In this case, the quantum circuit consisted of a certain random sequence of single- and two-qubit logical operations, with up to 20 such operations (known as “gates”) randomly strung together.

The Sycamore quantum computing device sampled the random quantum circuit one million times in just three minutes and 20 seconds. When the team simulated the same quantum circuit on classical computers, it found that even the Summit supercomputer that is currently ranked as the most powerful in the world would require approximately 10,000 years to perform the same task.
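A drastically scaled-down sketch of that task: the toy below simulates a random circuit on just 2 qubits (Sycamore used 53, which is precisely what makes classical simulation intractable) and then samples from its output distribution. The gate choices are illustrative, not Sycamore's actual gate set.

```python
import math
import random

random.seed(0)

# State vector over the basis |00>, |01>, |10>, |11>, starting in |00>.
state = [1.0, 0.0, 0.0, 0.0]

def apply_1q(state, gate, qubit):
    # Apply a 2x2 single-qubit gate to one qubit of the 2-qubit state.
    new = state[:]
    for i in range(4):
        if (i >> qubit) & 1 == 0:
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[j] = gate[1][0] * a + gate[1][1] * b
    return new

def random_rotation():
    # A random real rotation (a valid, if simple, single-qubit unitary).
    t = random.uniform(0, 2 * math.pi)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def apply_cz(state):
    # Entangling controlled-Z gate: phase flip on |11>.
    new = state[:]
    new[3] = -new[3]
    return new

# Randomly strung-together cycles of single-qubit gates plus an entangler.
for _ in range(20):
    state = apply_1q(state, random_rotation(), 0)
    state = apply_1q(state, random_rotation(), 1)
    state = apply_cz(state)

probs = [abs(a) ** 2 for a in state]
assert abs(sum(probs) - 1) < 1e-9

# Sampling from the circuit's output distribution -- the task Sycamore
# performed a million times in 200 seconds.
samples = random.choices(range(4), weights=probs, k=10)
print(samples)
```

On 2 qubits the state vector has 4 entries; on 53 qubits it has 2^53 (about 9 quadrillion), which is why the equivalent classical simulation lands on a supercomputer.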

“There are many in the classical computer community, who don't understand quantum theory, who have claimed that quantum computers are not more powerful than classical computers,” says Jonathan Dowling, a professor in theoretical physics and member of the Quantum Science and Technologies Group at Louisiana State University in Baton Rouge. “This experiment pokes a stick into their eyes.”


“This is not the top of Mount Everest, but it’s certainly crossing a pretty big peak along the way.”
—Daniel Lidar, University of Southern California
In a twist that even Google probably didn’t see coming, a draft of the paper describing the company’s quantum supremacy experiment leaked early when someone—possibly a research collaborator at the NASA Ames Research Center—uploaded the paper to the NASA Technical Reports Server. It might have sat there unnoticed before being hastily removed, if not for Google’s own search engine algorithm, which plucked the paper from its obscure server and emailed it to Dowling and anyone else who had signed up for Google Scholar alerts related to quantum computing.

The random number generator experiment may seem like an arbitrary benchmark for quantum supremacy without much practical application. But Aaronson has recently proposed that such a random quantum circuit could become the basis of a certified randomness protocol that could prove very useful for certain cryptocurrencies and cryptographic protocols. Beyond this very specific application, he suggests that future quantum computing experiments could aim to perform a useful quantum simulation of complex systems such as those found in condensed matter physics.

Introduction to Quantum Computing

What’s Next for Quantum Computing?
Google’s apparent achievement doesn’t rule out the possibility of another research group developing a better classical computing algorithm that eventually solves the random number generator challenge faster than Google’s current quantum computing device. But even if that happens, quantum computing capabilities are expected to greatly outpace classical computing’s much more limited growth as time goes on.

“This horse race between classical computing and quantum computing is going to continue,” says Daniel Lidar, director of the Center for Quantum Information Science and Technology at the University of Southern California in Los Angeles. “Eventually though, because quantum computers that have sufficiently high fidelity components just scale better as far as we know—exponentially better for some problems—eventually it’s going to become impossible for classical computers to keep up.”

Google’s team has even coined a term to describe how quickly quantum computing could gain on classical computing: “Neven’s Law.” Unlike Moore’s Law, which predicts that classical computing power approximately doubles every two years (exponential growth), Neven’s Law describes how quantum computing seems to gain power far more rapidly, through double exponential growth.

“If you’ve ever plotted a double exponential [on a graph], it looks like the line is zero and then you hit the corner of a box and you go straight up,” says Andrew Sornborger, a theoretical physicist who studies quantum computers at Los Alamos National Laboratory in New Mexico. “And so before and after, it’s not so much like an evolution, it’s more like an event—before you hit the corner and after you hit the corner.”
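The “corner of a box” behavior Sornborger describes is easy to see numerically: compare single-exponential growth (the Moore’s Law pattern) with double-exponential growth (the Neven’s Law pattern) over just six steps.

```python
# Single exponential: power doubles each step (2**n).
# Double exponential: the exponent itself doubles (2**(2**n)).
steps = range(1, 7)
moore = [2 ** n for n in steps]
neven = [2 ** (2 ** n) for n in steps]

print(moore)  # [2, 4, 8, 16, 32, 64]
print(neven)  # [4, 16, 256, 65536, 4294967296, 18446744073709551616]
```

After six steps the single exponential has grown 64-fold, while the double exponential has already passed 10^19, which is why a plot of it looks flat and then "goes straight up."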

Quantum computing’s exponential growth advantage has the potential to transform certain areas of scientific research and real-world applications in the long run. For example, Sornborger anticipates being able to use future quantum computers to perform far more complex simulations that go well beyond anything that’s possible with today’s best supercomputers.

The Integration Algorithm: a quantum computer could integrate a function in less computational time than a classical computer.

Wanted: Quantum Error Correction
Another long-term expectation is that a practical, general-purpose quantum computer could someday crack the standard digital codes used to safeguard computer security and the Internet. That possibility triggered premature alarm bells from conspiracy theorists and at least one U.S. presidential candidate when news first broke about Google’s quantum supremacy experiment via the Financial Times. (The growing swirl of online speculation eventually prompted Junye Huang, a Ph.D. candidate at the National University of Singapore, to share a copy of the leaked Google paper on his Google Drive account.)

In fact, the U.S. government is already taking steps to prepare for the future possibility of practical quantum computing breaking modern cryptography standards. The U.S. National Institute of Standards and Technology has been overseeing a process that challenges cryptography researchers to develop and test quantum-resistant algorithms that can continue to keep global communications secure.
The moment when quantum computing can seriously threaten to compromise the security of digital communications remains many years, if not decades, in the future.
The apparent quantum supremacy achievement marks just the first of many steps necessary to develop practical quantum computers. The fragility of qubits makes it challenging to maintain specific quantum states over longer periods of time when performing computational operations. That means it’s far from easy to cobble together large arrays involving the thousands or even millions of qubits that will likely be necessary for practical, general-purpose quantum computing.

Quantum computing

Such huge qubit arrays will require error correction techniques that can detect and fix errors in the many individual qubits working together. A practical quantum computer will need to have full error correction and prove itself fault tolerant—immune to the errors in logical operations and qubit measurements—in order to truly unleash the power of quantum computing, Lidar says.

Many experts think the next big quantum computing milestone will be a successful demonstration of error correction in a quantum computing device that also achieves quantum supremacy. Google’s team is well-positioned to shoot for that goal given that its quantum computing architecture showcased in the latest experiment is built to accommodate "surface code” error correction. But it will almost certainly have plenty of company on the road ahead as many researchers look beyond quantum supremacy to the next milestones.

“You take one step at a time and you get to the top of Mount Everest,” Lidar says. “This is not the top of Mount Everest, but it’s certainly crossing a pretty big peak along the way.”

This could be the dawn of a new era in computing. Google has claimed that its quantum computer performed a calculation that would be practically impossible for even the best supercomputer – in other words, it has attained quantum supremacy.

If true, it is big news. Quantum computers have the potential to change the way we design new materials, work out logistics, build artificial intelligence and break encryption. That is why firms like Google, Intel and IBM – along with plenty of start-ups – have been racing to reach this crucial milestone.

The development at Google is, however, shrouded in intrigue. A paper containing details of the work was posted to a NASA server last week, before being quickly removed. Several media outlets reported on the rumours, but Google hasn’t commented on them.

Read more: Revealed: Google’s plan for quantum computer supremacy
A copy of the paper seen by New Scientist contains details of a quantum processor called Sycamore that contains 54 superconducting quantum bits, or qubits. It claims that Sycamore has achieved quantum supremacy. The paper identifies only one author: John Martinis at the University of California, Santa Barbara, who is known to have partnered with Google to build the hardware for a quantum computer.

“This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper says.

Google appears to have partnered with NASA to help test its quantum computer. In 2018, the two organisations made an agreement to do this, so the news isn’t entirely unexpected.

Making an impossible universe with IBM's quantum processor

The paper describes how Google’s quantum processor tackled a random sampling problem – that is, checking that a set of numbers has a truly random distribution. This is very difficult for a traditional computer when there are a lot of numbers involved.

But Sycamore does things differently. Although one of its qubits didn’t work, the remaining 53 were quantum entangled with one another and used to generate a set of binary digits and check that their distribution was truly random. The paper calculates that the task would have taken Summit, the world’s most powerful supercomputer, 10,000 years – but Sycamore did it in 3 minutes and 20 seconds.
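Google’s actual experiment used cross-entropy benchmarking over deep random circuits; as a drastically simplified sketch of why brute-force classical simulation of such sampling is costly, the toy simulator below tracks all 2^n amplitudes of an n-qubit state (here, just a Hadamard gate on every qubit) and samples bitstrings from the resulting distribution. The function names and circuit choice are illustrative assumptions, not Google’s method:

```python
# Drastically simplified sketch: a brute-force statevector simulator
# must track 2**n complex amplitudes, doubling memory per added qubit.
# Real circuits like Sycamore's are far richer than this Hadamard layer.
import random

def hadamard_all(n: int) -> list[complex]:
    """Statevector after H on each of n qubits: uniform superposition."""
    dim = 2 ** n                       # memory cost doubles per qubit
    amp = 1 / (dim ** 0.5)
    return [amp] * dim

def sample(state: list[complex], shots: int) -> list[str]:
    """Draw bitstrings according to the Born-rule probabilities."""
    n = len(state).bit_length() - 1
    probs = [abs(a) ** 2 for a in state]
    idxs = random.choices(range(len(state)), weights=probs, k=shots)
    return [format(i, f"0{n}b") for i in idxs]

state = hadamard_all(5)                # 32 amplitudes; 53 qubits -> 2**53
print(sample(state, 8))
```

For a uniform superposition the sampled bitstrings are uniformly random; the hard part classically is not the sampling itself but holding and evolving the full statevector as n grows.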

This benchmarking task isn’t particularly useful beyond producing truly random numbers – it was a proof of concept. But in the future, the quantum chip may be useful in the fields of machine learning, materials science and chemistry, says the paper. For example, when trying to model a chemical reaction or visualise the ways a new molecule may connect to others, quantum computers can handle the vast amount of variables to create an accurate simulation.

“Google’s recent update on the achievement of quantum supremacy is a notable mile marker as we continue to advance the potential of quantum computing,” said Jim Clarke at Intel Labs in a statement.

CQT11: The challenge of developing post-classical applications with noisy quantum computers

Yet we are still at “mile one of this marathon”, Clarke said. This demonstration is a proof of concept, but it isn’t free of errors within the processor. Better and bigger processors will continue to be built and used to do more useful calculations.

Read more: Google’s quantum computing plans threatened by IBM curveball
At the same time, classical computing isn’t giving up the fight. Over the past few years, as quantum computing took steps towards supremacy, classical computing moved the goal posts as researchers showed it was able to simulate ever more complex systems. It is likely that this back-and-forth will continue.

“We expect that lower simulation costs than reported here will eventually be achieved, but we also expect they will be consistently outpaced by hardware improvements on larger quantum processors,” says the Google paper.

A month ago, news broke that Google had reportedly achieved “quantum supremacy”: it had gotten a quantum computer to run a calculation that would take a classical computer an unfeasibly long time. While the calculation itself—essentially, a very specific technique for outputting random numbers—is about as useful as the Wright brothers’ 12-second first flight, it would be a milestone of similar significance, marking the dawn of an entirely new era of computing.

But in a blog post published today, IBM disputes Google’s claim. The task that Google says might take the world’s fastest classical supercomputer 10,000 years can actually, says IBM, be done in just days.

As John Preskill, the Caltech physicist who coined the term “quantum supremacy,” wrote in an article for Quanta magazine, Google specifically chose a very narrow task that a quantum computer would be good at and a classical computer is bad at. “This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative,” he wrote.

Google’s research paper hasn’t been published, but a draft was leaked online last month. In it, researchers say they got a machine with 53 quantum bits, or qubits, to do the calculation in 200 seconds. They also estimated that it would take the world’s most powerful supercomputer, the Summit machine at Oak Ridge National Laboratory, 10,000 years to repeat it with equal “fidelity,” or the same level of uncertainty as the inherently uncertain quantum system.

The problem is that such simulations aren’t just a matter of porting the code from a quantum computer to a classical one. They grow exponentially harder the more qubits you’re trying to simulate. For that reason, there are a lot of different techniques for optimizing the code to arrive at a good enough equivalent.

And that’s where Google and IBM differ. The IBM researchers propose a method that they say would take just two and a half days on a classical machine “with far greater fidelity,” and that “with additional refinements” this could come down even further.

Quantum Computing and Quantum Supremacy at Google

The key difference? Hard drives. Simulating a quantum computer in a classical one requires storing vast amounts of data in memory during the process to represent the condition of the quantum computer at any given moment. The less memory you have available, the more you have to slice up the task into stages, and the longer it takes. Google’s method, IBM says, relied heavily on storing that data in RAM, while IBM’s “uses both RAM and hard drive space.” It also proposes using a slew of other classical optimization techniques, in both hardware and software, to speed up the computation. To be fair, IBM hasn't tested it in practice, so it's hard to know if it would work as proposed. (Google declined to comment.)
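A back-of-envelope estimate (an assumption-laden sketch, not IBM’s actual analysis) shows why memory dominates this debate: a full n-qubit statevector holds 2^n complex amplitudes, at 16 bytes each in double precision:

```python
# Rough memory estimate for storing a full n-qubit statevector.
# Assumes double-precision complex amplitudes (8 bytes real +
# 8 bytes imaginary); an illustrative model, not IBM's analysis.

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 53):
    pib = statevector_bytes(n) / 2 ** 50   # convert bytes to pebibytes
    print(f"{n} qubits: {pib:,.3f} PiB")
```

At 53 qubits this comes to 2^57 bytes, or 128 PiB, far beyond any machine’s RAM, which is why IBM’s proposal spills the statevector onto hard drives rather than holding it all in memory.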

So what’s at stake? Either a whole lot or not much, depending on how you look at it. As Preskill points out, the problem Google reportedly solved is of almost no practical consequence, and even as quantum computers get bigger, it will be a long time before they can solve any but the narrowest classes of problems. Ones that can crack modern codes will likely take decades to develop, at a minimum.

IAS Distinguished Lecture: Prof Leo Kouwenhoven

Moreover, even if IBM is right that Google hasn’t achieved it this time, the quantum supremacy threshold is surely not far off. The fact that simulations get exponentially harder as you add qubits means it may only take a slightly larger quantum machine to get to the point of being truly unbeatable at something.

Still, as Preskill notes, even limited quantum supremacy is “a pivotal step in the quest for practical quantum computers.” Whoever ultimately achieves it will, like the Wright brothers, get to claim a place in history.

Every major tech company is looking at quantum computers as the next big breakthrough in computing. Teams at Google,  Microsoft, Intel, IBM and various startups and academic labs are racing to become the first to achieve quantum supremacy — that is, the point where a quantum computer can run certain algorithms faster than a classical computer ever could.

Quantum Computing Germany Meetup v1.0

Today, Google said that it believes that Bristlecone, its latest quantum processor, can put it on a path to reach quantum supremacy in the future. The purpose of Bristlecone, Google says, is to provide its researchers with a testbed “for research into system error rates and scalability of our qubit technology, as well as applications in quantum simulation, optimization, and machine learning.”

One of the major issues that all quantum computers have to contend with is error rates. Quantum computers typically run at extremely low temperatures (we’re talking millikelvins here) and are shielded from the environment because today’s quantum bits are still highly unstable and any noise can lead to errors.

Because of this, the qubits in modern quantum processors (the quantum computing versions of traditional bits) aren’t really single qubits but often a combination of numerous physical qubits to help account for potential errors. Another limiting factor right now is that most of these systems can only preserve their state for under 100 microseconds.

The systems that Google previously demonstrated showed an error rate of one percent for readout, 0.1 percent for single-qubit and 0.6 percent for two-qubit gates.

Quantum computing and the entanglement frontier

Every Bristlecone chip features 72 qubits. The general assumption in the industry is that it will take 49 qubits to achieve quantum supremacy, but Google also cautions that a quantum computer isn’t just about qubits. “Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself,” the team writes today. “Getting this right requires careful systems engineering over several iterations.”

Google’s announcement today will put some new pressure on other teams that are also working on building functional quantum computers. What’s interesting about the current state of the industry is that everybody is taking different approaches.

Microsoft is currently a bit behind in that its team hasn’t actually produced a qubit yet, but once it does, its approach — which is very different from Google’s — could quickly lead to a 49-qubit machine. Microsoft is also working on a programming language for quantum computing. IBM currently has a 50-qubit machine in its labs and lets developers play with a cloud-based simulation of a quantum computer.

Technical quarrels between quantum computing experts rarely escape the field’s rarefied community. Late Monday, though, IBM’s quantum team picked a highly public fight with Google.

In a technical paper and blog post, IBM took aim at potentially history-making scientific results accidentally leaked from a collaboration between Google and NASA last month. That draft paper claimed Google had reached a milestone dubbed “quantum supremacy”—a kind of drag race in which a quantum computer proves able to do something a conventional computer can’t.

Programming a quantum computer with Cirq (QuantumCasts)

Monday, Big Blue’s quantum PhDs said Google’s claim of quantum supremacy was flawed. IBM said Google had essentially rigged the race by not tapping the full power of modern supercomputers. “This threshold has not been met,” IBM’s blog post says. Google declined to comment.

It will take time for the quantum research community to dig through IBM’s claim and any responses from Google. For now, Jonathan Dowling, a professor at Louisiana State University, says IBM appears to have some merit. “Google picked a problem they thought to be really hard on a classical machine, but IBM now has demonstrated that the problem is not as hard as Google thought it was,” he says.

Whoever is proved right in the end, claims of quantum supremacy are largely academic for now. The problem crunched to show supremacy doesn’t need to have immediate practical applications. It's a milestone suggestive of the field’s long-term dream: That quantum computers will unlock new power and profits by enabling progress in tricky areas such as battery chemistry or health care. IBM has promoted its own quantum research program differently, highlighting partnerships with quantum-curious companies playing with its prototype hardware, such as JP Morgan, which this summer claimed to have figured out how to run financial risk calculations on IBM quantum hardware.

Quantum Computing 2019 Update

The IBM-Google contretemps illustrates the paradoxical state of quantum computing. There has been a burst of progress in recent years, leading companies such as IBM, Google, Intel, and Microsoft to build large research teams. Google has claimed for years to be close to demonstrating quantum supremacy, a useful talking point as it competed with rivals to hire top experts and line up putative customers. Yet while quantum computers appear closer than ever, they remain far from practical use, and just how far isn’t easily determined.

The draft Google paper that appeared online last month described posing a statistical math problem to both the company’s prototype quantum processor, Sycamore, and the world’s fastest supercomputer, Summit, at Oak Ridge National Lab. The paper used the results to estimate that a top supercomputer would need approximately 10,000 years to match what Sycamore did in 200 seconds.

Classical simulation algorithms for quantum computational supremacy experiments

IBM, which developed Summit, says the supercomputer could have done that work in 2 ½ days, not millennia—and potentially even faster, given more time to finesse its implementation. That would still be slower than the time posted by Google’s Sycamore quantum chip, but the concept of quantum supremacy as originally conceived by Caltech professor John Preskill required the quantum challenger to do something that a classical computer could not do at all.

This is not the first time that Google’s rivals have questioned its quantum supremacy plans. In 2017, after the company said it was closing in on the milestone, IBM researchers published results that appeared to move the goalposts. Early in 2018, Google unveiled a new quantum chip called Bristlecone said to be ready to demonstrate supremacy. Soon, researchers from Chinese ecommerce company Alibaba, which has its own quantum computing program, released analysis claiming that the device could not do what Google said.

Google is expected to publish a peer-reviewed version of its leaked supremacy paper, based on the newer Sycamore chip, bringing its claim onto the scientific record. IBM’s paper released Monday is not yet peer reviewed either, but the company says it will be.

Did Google Just Achieve 'Quantum Supremacy'?

Jay Gambetta, one of IBM’s top quantum researchers and a coauthor on the paper, says he expects it to influence whether Google’s claims ultimately gain acceptance among technologists. Despite the provocative way IBM chose to air its technical concerns, he claims the company’s motivation is primarily to head off unhelpful expectations around the term “quantum supremacy,” not to antagonize Google. “Quantum computing is important and is going to change how computing is done,” Gambetta says. “Let’s focus on the road map without creating hype.”


Other physicists working on quantum computing agree that supremacy is not a top priority—but say IBM’s tussle with Google isn’t either.

“I don't much like these claims of quantum supremacy. What might be quantum supreme today could just be classical inferior tomorrow,” says Dowling of Louisiana State. “I am much more interested in what the machine can do for me on any particular problem.”

Chris Monroe, a University of Maryland professor and cofounder of quantum computing startup IonQ, agrees. His company is more interested in demonstrating practical uses for early quantum hardware than academic disputes between two tech giants, he says. “We’re not going to lose much sleep over this debate,” he says.

More Information: