• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    Consulting services for Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites, and adaptive websites.

19 November 2019

What is Azure Synapse Analytics (formerly SQL DW)?


What is Azure Synapse Analytics 

Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.

Vision Keynote with Satya Nadella | Microsoft Ignite 2019


Azure Synapse has four components:

  • SQL Analytics: Complete T-SQL based analytics – Generally Available
      • SQL pool (pay per DWU provisioned)
      • SQL on-demand (pay per TB processed) – (Preview)
  • Spark: Deeply integrated Apache Spark (Preview)
  • Data Integration: Hybrid data integration (Preview)
  • Studio: Unified user experience (Preview)

Note
To access the preview features of Azure Synapse, request access here. Microsoft will triage all requests and respond as soon as possible.

SQL Analytics and SQL pool in Azure Synapse

SQL Analytics refers to the enterprise data warehousing features that are generally available with Azure Synapse.

Azure Synapse Analytics - Next-gen Azure SQL Data Warehouse


SQL pool represents a collection of analytic resources that are provisioned when using SQL Analytics. The size of a SQL pool is determined by Data Warehousing Units (DWU).

Import big data with simple PolyBase T-SQL queries, and then use the power of MPP to run high-performance analytics. As you integrate and analyze, SQL Analytics will become the single version of truth your business can count on for faster and more robust insights.

Modern Data Warehouse overview | Azure SQL Data Warehouse


In a cloud data solution, data is ingested into big data stores from a variety of sources. Once in a big data store, Hadoop, Spark, and machine learning algorithms prepare and train the data. When the data is ready for complex analysis, SQL Analytics uses PolyBase to query the big data stores. PolyBase uses standard T-SQL queries to bring the data into SQL Analytics tables.
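As a rough sketch of this pattern (the external data source, file format, table, and column names below are hypothetical, and a database-scoped credential may be required for non-public storage), PolyBase first defines an external table over files in Azure Storage and then lands the data in a distributed SQL Analytics table with CREATE TABLE AS SELECT:

-- Hypothetical names; a minimal PolyBase sketch, not a drop-in script.
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH ( TYPE = HADOOP, LOCATION = 'wasbs://weblogs@contoso.blob.core.windows.net' );

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH ( FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) );

CREATE EXTERNAL TABLE dbo.WebLogsExternal
( DateId INT, UserId INT, Url NVARCHAR(1000) )
WITH ( LOCATION = '/2019/', DATA_SOURCE = AzureStorage, FILE_FORMAT = CsvFormat );

-- Land the external data in a hash-distributed, columnstore table for MPP analytics.
CREATE TABLE dbo.WebLogs
WITH ( DISTRIBUTION = HASH(UserId), CLUSTERED COLUMNSTORE INDEX )
AS SELECT * FROM dbo.WebLogsExternal;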

Azure data platform overview


SQL Analytics stores data in relational tables with columnar storage. This format significantly reduces the data storage costs, and improves query performance. Once data is stored, you can run analytics at massive scale. Compared to traditional database systems, analysis queries finish in seconds instead of minutes, or hours instead of days.

The analysis results can go to worldwide reporting databases or applications. Business analysts can then gain insights to make well-informed business decisions.

Azure Synapse Analytics (formerly SQL DW) architecture



On November fourth, Microsoft announced Azure Synapse Analytics, the next evolution of Azure SQL Data Warehouse. Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.

With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language. This can be done using either serverless on-demand queries for data exploration and ad hoc analysis or provisioned resources for your most demanding data warehousing needs. A single service for any workload.

In fact, it’s the first and only analytics system to have run all the TPC-H queries at petabyte-scale. For current SQL Data Warehouse customers, you can continue running your existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new preview capabilities when they become generally available. You can sign up to preview new features like Serverless on-demand query, Azure Synapse studio, and Apache Spark™ integration.

Building a modern data warehouse

Taking SQL beyond data warehousing

A cloud native, distributed SQL processing engine is at the foundation of Azure Synapse and is what enables the service to support the most demanding enterprise data warehousing workloads. This week at Ignite we introduced a number of exciting features to make data warehousing with Azure Synapse easier and allow organizations to use SQL for a broader set of analytics use cases.

Unlock powerful insights faster from all data
Azure Synapse deeply integrates with Power BI and Azure Machine Learning to drive insights for all users, from data scientists coding with statistics to the business user with Power BI. And to make all types of analytics possible, we’re announcing native and built-in prediction support, as well as runtime level improvements to how Azure Synapse handles streaming data, Parquet files, and PolyBase. Let’s dive into more detail:

With the native PREDICT statement, you can score machine learning models within your data warehouse—avoiding the need for large and complex data movement. The PREDICT function (available in preview) relies on open model framework and takes user data as input to generate predictions. Users can convert existing models trained in Azure Machine Learning, Apache Spark™, or other frameworks into an internal format representation without having to start from scratch, accelerating time to insight.
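As a hedged illustration only (the dbo.Models table, model name, and input table are hypothetical, and the model is assumed to have already been converted into the supported internal format described above), scoring with the native PREDICT function looks roughly like this:

-- Hypothetical model store and input table; a sketch of the PREDICT pattern.
SELECT d.CustomerId, p.Score
FROM PREDICT(MODEL = (SELECT model FROM dbo.Models WHERE model_name = 'CustomerChurn'),
             DATA = dbo.Customers AS d)
WITH (Score FLOAT) AS p;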

Azure SQL Database & Azure SQL Data Warehouse


We’ve enabled direct streaming ingestion support and the ability to execute analytical queries over streaming data. Capabilities such as joins across multiple streaming inputs, aggregations within one or more streaming inputs, transformation of semi-structured data, and multiple temporal windows are all supported directly in your data warehousing environment (available in preview). For streaming ingestion, customers can integrate with Event Hubs (including Event Hubs for Kafka) and IoT Hubs.

We’re also removing the barrier that inhibits securely and easily sharing data inside or outside your organization with Azure Data Share integration for sharing both data lake and data warehouse data.

Modern Data Warehouse Overview


By using new ParquetDirect technology, we are making interactive queries over the data lake a reality (in preview). It’s designed to access Parquet files with native support directly built into the engine. Through improved data scan rates, intelligent data caching and columnstore batch processing, we’ve improved PolyBase execution by over 13x.

Introducing the modern data warehouse solution pattern with Azure SQL Data Warehouse


Workload isolation
To support customers as they democratize their data warehouses, we are announcing new features for intelligent workload management. The new Workload Isolation functionality allows you to manage the execution of heterogeneous workloads while providing flexibility and control over data warehouse resources. This leads to improved execution predictability and enhances the ability to satisfy predefined SLAs.
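For example, a hedged sketch of carving out resources for a load workload with a workload group and a classifier (the names and percentages are illustrative placeholders, not recommendations):

-- Reserve resources for loads; values are placeholders.
CREATE WORKLOAD GROUP wgDataLoads
WITH ( MIN_PERCENTAGE_RESOURCE = 30
     , CAP_PERCENTAGE_RESOURCE = 60
     , REQUEST_MIN_RESOURCE_GRANT_PERCENT = 6 );

-- Route requests from a specific login into that group.
CREATE WORKLOAD CLASSIFIER wcLoadUser
WITH ( WORKLOAD_GROUP = 'wgDataLoads', MEMBERNAME = 'LoadUser' );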


COPY statement
Analyzing petabyte-scale data requires ingesting petabyte-scale data. To streamline the data ingestion process, we are introducing a simple and flexible COPY statement. With only one command, Azure Synapse now enables data to be seamlessly ingested into a data warehouse in a fast and secure manner.

This new COPY statement enables using a single T-SQL statement to load data, parse standard CSV files, and more.

COPY statement sample code:

COPY INTO dbo.[FactOnlineSales] FROM 'https://contoso.blob.core.windows.net/Sales/'
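A slightly fuller, hedged variant of the same statement (the CSV options and SAS token below are placeholders to adapt; other file types and credential forms are also supported):

COPY INTO dbo.FactOnlineSales
FROM 'https://contoso.blob.core.windows.net/Sales/'
WITH (
    FILE_TYPE = 'CSV',
    FIELDTERMINATOR = ',',
    FIRSTROW = 2,
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
);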

Safe keeping for data with unmatched security
Azure has the most advanced security and privacy features in the market. These features are built into the fabric of Azure Synapse, such as automated threat detection and always-on data encryption. And for fine-grained access control businesses can ensure data stays safe and private using column-level security, native row-level security, and dynamic data masking (now generally available) to automatically protect sensitive data in real time.

To further enhance security and privacy, we are introducing Azure Private Link. It provides a secure and scalable way to consume deployed resources from your own Azure Virtual Network (VNet). A secure connection is established using a consent-based call flow. Once established, all data that flows between Azure Synapse and service consumers is isolated from the internet and stays on the Microsoft network. There is no longer a need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.


SQL Analytics MPP architecture components

SQL Analytics leverages a scale-out architecture to distribute computational processing of data across multiple nodes. The unit of scale is an abstraction of compute power known as a data warehouse unit. Compute is separate from storage, which enables you to scale compute independently of the data in your system.

AI for Intelligent Cloud and Intelligent Edge: Discover, Deploy, and Manage with Azure ML Services


SQL Analytics uses a node-based architecture. Applications connect and issue T-SQL commands to a Control node, which is the single point of entry for SQL Analytics. The Control node runs the MPP engine which optimizes queries for parallel processing, and then passes operations to Compute nodes to do their work in parallel.

The Compute nodes store all user data in Azure Storage and run the parallel queries. The Data Movement Service (DMS) is a system-level internal service that moves data across the nodes as necessary to run queries in parallel and return accurate results.

With decoupled storage and compute, when using SQL Analytics one can:

  • Independently size compute power irrespective of your storage needs.
  • Grow or shrink compute power, within a SQL pool (data warehouse), without moving data.
  • Pause compute capacity while leaving data intact, so you only pay for storage.
  • Resume compute capacity during operational hours.
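As a concrete example of the second point, compute can be resized with a single T-SQL command issued against the master database; the database name and service objective here are placeholders:

-- Scale the SQL pool named ContosoDW to the DW1000c service level.
ALTER DATABASE ContosoDW MODIFY (SERVICE_OBJECTIVE = 'DW1000c');

Pausing and resuming the pool (the third and fourth points) is done through the Azure portal, PowerShell, the CLI, or the REST API rather than T-SQL.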


Data Warehousing And Big Data Analytics in Azure Basics Tutorial

Azure storage

SQL Analytics leverages Azure storage to keep your user data safe. Since your data is stored and managed by Azure storage, there is a separate charge for your storage consumption. The data itself is sharded into distributions to optimize the performance of the system. You can choose which sharding pattern to use to distribute the data when you define the table. These sharding patterns are supported:

  • Hash
  • Round Robin
  • Replicate

Control node

The Control node is the brain of the architecture. It is the front end that interacts with all applications and connections. The MPP engine runs on the Control node to optimize and coordinate parallel queries. When you submit a T-SQL query to SQL Analytics, the Control node transforms it into queries that run against each distribution in parallel.
Compute nodes

The Compute nodes provide the computational power. Distributions map to Compute nodes for processing. As you pay for more compute resources, SQL Analytics re-maps the distributions to the available Compute nodes. The number of compute nodes ranges from 1 to 60, and is determined by the service level for SQL Analytics.
Each Compute node has a node ID that is visible in system views. You can see the Compute node ID by looking for the node_id column in system views whose names begin with sys.pdw_nodes. For a list of these system views, see MPP system views.
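For instance, the nodes themselves can be listed from the sys.dm_pdw_nodes dynamic management view; this is a hedged sketch and the exact set of columns can vary by version:

SELECT pdw_node_id, type, name
FROM sys.dm_pdw_nodes;
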
Data Movement Service

Data Movement Service (DMS) is the data transport technology that coordinates data movement between the Compute nodes. Some queries require data movement to ensure the parallel queries return accurate results. When data movement is required, DMS ensures the right data gets to the right location.

Machine Learning and AI


Distributions
A distribution is the basic unit of storage and processing for parallel queries that run on distributed data. When SQL Analytics runs a query, the work is divided into 60 smaller queries that run in parallel.
Each of the 60 smaller queries runs on one of the data distributions. Each Compute node manages one or more of the 60 distributions. A SQL pool with maximum compute resources has one distribution per Compute node. A SQL pool with minimum compute resources has all the distributions on one compute node.


Hash-distributed tables

A hash distributed table can deliver the highest query performance for joins and aggregations on large tables.
To shard data into a hash-distributed table, SQL Analytics uses a hash function to deterministically assign each row to one distribution. In the table definition, one of the columns is designated as the distribution column. The hash function uses the values in the distribution column to assign each row to a distribution.
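A hedged sketch of such a table definition (the fact table and columns are hypothetical); the distribution column is named in the WITH clause:

CREATE TABLE dbo.FactOrders
(
    OrderId    BIGINT        NOT NULL,
    CustomerId INT           NOT NULL,
    OrderDate  DATE          NOT NULL,
    Amount     DECIMAL(18,2)
)
WITH ( DISTRIBUTION = HASH(CustomerId), CLUSTERED COLUMNSTORE INDEX );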

The following diagram illustrates how a full (non-distributed) table gets stored as a hash-distributed table.



Distributed table

  • Each row belongs to one distribution.
  • A deterministic hash algorithm assigns each row to one distribution.
  • The number of table rows per distribution varies, as shown by the different sizes of tables.
  • There are performance considerations for the selection of a distribution column, such as distinctness, data skew, and the types of queries that run on the system (a quick skew check is sketched below).
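One quick, hedged way to check how evenly rows landed across the 60 distributions for a given table (hypothetical name) is:

-- Reports rows and space per distribution; large differences indicate skew.
DBCC PDW_SHOWSPACEUSED("dbo.FactOrders");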



Round-robin distributed tables
A round-robin table is the simplest table to create and delivers fast performance when used as a staging table for loads.
A round-robin distributed table distributes data evenly across the table but without any further optimization. A distribution is first chosen at random and then buffers of rows are assigned to distributions sequentially. It is quick to load data into a round-robin table, but query performance can often be better with hash distributed tables. Joins on round-robin tables require reshuffling data and this takes additional time.
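A hedged sketch of a round-robin staging table (hypothetical names); a heap keeps loads fast before the data is moved into its final hash-distributed form:

CREATE TABLE dbo.StageOrders
(
    OrderId    BIGINT        NOT NULL,
    CustomerId INT           NOT NULL,
    OrderDate  DATE          NOT NULL,
    Amount     DECIMAL(18,2)
)
WITH ( DISTRIBUTION = ROUND_ROBIN, HEAP );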

Replicated Tables
A replicated table provides the fastest query performance for small tables.
A table that is replicated caches a full copy of the table on each compute node. Consequently, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables. Extra storage is required, and there is additional overhead incurred when writing data, which makes replicating large tables impractical.
The diagram below shows a replicated table which is cached on the first distribution on each compute node.
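A hedged sketch of a small dimension table defined as replicated (hypothetical names):

CREATE TABLE dbo.DimCurrency
(
    CurrencyKey  INT          NOT NULL,
    CurrencyCode CHAR(3)      NOT NULL,
    CurrencyName NVARCHAR(50)
)
WITH ( DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX );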

AI for an intelligent cloud and intelligent edge: Discover, deploy, and manage with Azure ML services

Compare price-performance of Azure Synapse Analytics and Google BigQuery

Azure Synapse Analytics (formerly Azure SQL Data Warehouse) outperforms Google BigQuery in all TPC-H and TPC-DS* benchmark queries. Azure Synapse Analytics consistently demonstrated better price-performance compared with BigQuery, costing up to 94 percent less for clusters running TPC-H* benchmark queries.


*Performance and price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in January 2019 for the TPC-H benchmark report and March 2019 for the TPC-DS benchmark report. Analytics in Azure is up to 14 times faster and costs 94 percent less, according to the TPC-H benchmark, and is up to 12 times faster and costs 73 percent less, according to the TPC-DS benchmark. Benchmark data is taken from recognized industry standards, TPC Benchmark™ H (TPC-H) and TPC Benchmark™ DS (TPC-DS), and is based on query execution performance testing of 66 queries for TPC-H and 309 queries for TPC-DS, conducted by GigaOm in January 2019 and March 2019, respectively; testing commissioned by Microsoft. Price-performance is calculated by GigaOm as the TPC-H/TPC-DS metric of cost of ownership divided by composite query. Prices are based on publicly available US pricing as of January 2019 for TPC-H queries and March 2019 for TPC-DS queries. Actual performance and prices may vary. Learn more about the GigaOm benchmark study

QSSUG: Azure Cognitive Services – The Rise of the Machines


Forrester interviewed four customers and surveyed 364 others on their use of Azure analytics with Power BI. Of those surveyed customers, 85 percent agreed or strongly agreed that well-integrated analytics databases and storage, a data management stack, and business intelligence tools were beneficial to their organization. Customers also reported a 21.9 percent average reduction in the overall cost of Microsoft analytics and BI offerings when compared to alternative analytics solutions.

Based on the companies interviewed and surveyed, Forrester projects that a Microsoft analytics and business intelligence (BI) solution could provide:

  • Benefits of $22.1 million over three years versus costs of $6 million, resulting in a net present value of $16.1 million and a projected return on investment of 271 percent.
  • A 25.7 percent reduction in total cost of ownership.
  • Better overall analytics system performance with improved data security, enhanced decision making, and democratized data access.

Modern Data Warehousing with BigQuery (Cloud Next '19)


Analytics in Azure is up to 14x faster and costs 94% less than other cloud providers. Why go anywhere else?

Julia White Corporate Vice President, Microsoft Azure
It’s true. With the volume and complexity of data rapidly increasing, performance and security are critical requirements for analytics. But not all analytics services are built equal. And not all cloud storage is built for analytics.

Only Azure provides the most comprehensive set of analytics services from data ingestion to storage to data warehousing to machine learning and BI. Each of these services has been finely tuned to provide industry-leading performance, security and ease of use, at unmatched value. In short, Azure has you covered.

Unparalleled price-performance

When it comes to analytics, price-performance is key. In July 2018, GigaOm published a study that showed that Azure SQL Data Warehouse was 67 percent faster and 23 percent cheaper than Amazon Web Services Redshift.

That was then. Today, we’re even better!

In the most recent study by GigaOm, they found that Azure SQL Data Warehouse is now outperforming the competition by up to a whopping 14 times. No one else has produced independent, industry-accepted benchmarks like these. Not AWS Redshift or Google BigQuery. And the best part? Azure is up to 94 percent cheaper.

This industry leading price-performance extends to the rest of our analytics stack. This includes Azure Data Lake Storage, our cloud data storage service, and Azure Databricks, our big data processing service. Customers like Newell Brands – worldwide marketer of consumer and commercial products such as Rubbermaid, Mr. Coffee and Oster – recently moved their workload to Azure and realized significant improvements.

“Azure Data Lake Storage will streamline our analytics process and deliver better end to end performance with lower cost.” 
– Danny Siegel, Vice President of Information Delivery Systems, Newell Brands
Secure cloud analytics

All the price-performance in the world means nothing without security. Make the comparison and you will see Azure is the most trusted cloud in the market. Azure has the most comprehensive set of compliance offerings, including more certifications than any other cloud vendor, combined with advanced identity governance and access management with Active Directory integration.

For analytics, we have developed additional capabilities to meet customers’ most stringent security requirements. Azure Data Lake Storage provides multi-layered security including POSIX compliant file and folder permissions and at-rest encryption. Similarly, Azure SQL Data Warehouse utilizes machine learning to provide the most comprehensive set of security capabilities across data protection, access control, authentication, network security, and automatic threat detection.

Insights for all

What’s the best complement to Azure Analytics’ unmatched price-performance and security? The answer is Microsoft Power BI.

Power BI’s ease of use enables everyone in your organization to benefit from our analytics stack. Employees can get their insights in seconds from all enterprise data stored in Azure. And without limitations on concurrency, Power BI can be used across teams to create the most beautiful visualizations that deliver powerful insights.

Leveraging Microsoft’s Common Data Model, Power BI users can easily access and analyze enterprise data using a common data schema without needing complex data transformation. Customers looking for petabyte-scale analytics can leverage Power BI Aggregations with Azure SQL Data Warehouse for rapid query. Better yet, Power BI users can easily apply sophisticated AI models built with Azure. Powerful insights easily accessible to all.

Customers like Heathrow Airport, one of the busiest airports in the world, are empowering their employees with powerful insights:

“With Power BI, we can very quickly connect to a wide range of data sources with very little effort and use this data to run Heathrow more smoothly than ever before. Every day, we experience a huge amount of variability in our business. With Azure, we’re getting to the point where we can anticipate passenger flow and stay ahead of disruption that causes stress for passengers and employees.”
– Stuart Birrell, Chief Information Officer, Heathrow Airport
Code-free modern data warehouse using Azure SQL DW and Data Factory | Azure Friday

Future-proof

We continue to focus on making Azure the best place for your data and analytics. Our priority is to meet your needs for today and tomorrow.

So, we are excited to make the following announcements:

  • General availability of Azure Data Lake Storage: The first cloud storage that combines the best of a hierarchical file system and blob storage.
  • General availability of Azure Data Explorer: A fast, fully managed service that simplifies ad hoc and interactive analysis over telemetry, time-series, and log data. This service, which powers other Azure services like Log Analytics, App Insights, and Time Series Insights, is useful for querying streaming data to identify trends, detect anomalies, and diagnose problems.
  • Preview of the new Mapping Data Flow capability in Azure Data Factory: Mapping Data Flow provides a visual, zero-code experience to help data engineers easily build data transformations. This complements Azure Data Factory’s code-first experience and enables data engineers of all skill levels to collaborate and build powerful hybrid data transformation pipelines.

Azure provides the most comprehensive platform for analytics. With these updates, Azure solidifies its leadership in analytics.


More Information:

https://azure.microsoft.com/en-us/blog/azure-sql-data-warehouse-is-now-azure-synapse-analytics/

https://azure.microsoft.com/en-us/services/synapse-analytics/compare

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-overview-what-is

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu

https://azure.microsoft.com/en-us/resources/forrester-tei-microsoft-azure-analytics-with-power-bi

https://azure.microsoft.com/mediahandler/files/resourcefiles/data-warehouse-in-the-cloud-benchmark/FINAL%20data-warehouse-cloud-benchmark.pdf

https://www.gartner.com/doc/reprints?id=1-3U1LC65&ct=170222&st=sb

https://clouddamcdnprodep.azureedge.net/gdc/gdcEbYaLj/original

https://clouddamcdnprodep.azureedge.net/gdc/gdcpLECbc/original

https://azure.microsoft.com/en-us/blog/analytics-in-azure-is-up-to-14x-faster-and-costs-94-less-than-other-cloud-providers-why-go-anywhere-else/

22 October 2019

Google Claims Quantum Supremacy - Not so Fast Says IBM, but are they Right?


What Google's Quantum Supremacy Claim Means for Quantum Computing

Leaked details about Google's quantum supremacy experiment stirred up a media frenzy about the next quantum computing milestone

The Limits of Quantum Computers





Google’s claim to have demonstrated quantum supremacy—one of the earliest and most hotly anticipated milestones on the long road toward practical quantum computing—was supposed to make its official debut in a prestigious science journal. Instead, an early leak of the research paper has sparked a frenzy of media coverage and some misinformed speculation about when quantum computers will be ready to crack the world’s computer security algorithms.

Google’s new Bristlecone processor brings it one step closer to quantum supremacy



The moment when quantum computing can seriously threaten to compromise the security of digital communications remains many years, if not decades, in the future. But the leaked draft of Google’s paper likely represents the first experimental proof of the long-held theoretical premise that quantum computers can outperform even the most powerful modern supercomputers on certain tasks, experts say. Such a demonstration of quantum supremacy is a long-awaited signpost showing researchers that they’re on the right path to the promised land of practical quantum computers.

“For those of us who work in quantum computing, the achievement of quantum supremacy is a huge and very welcome milestone,” says Scott Aaronson, a computer scientist and director of the Quantum Information Center at the University of Texas at Austin, who was not involved in Google’s research. “And it’s not a surprise—it’s something we all expected was coming in a matter of a couple of years at most.”

The Complexity of Quantum Sampling QIP 2018 Michael Bremner


What Is Quantum Computing? 

Quantum computing harnesses the rules of quantum physics that hold sway over some of the smallest particles in the universe in order to build devices very different from today’s “classical” computer chips used in smartphones and laptops. Instead of classical computing’s binary bits of information that can only exist in one of two basic states, a quantum computer relies on quantum bits (qubits) that can exist in many different possible states. It’s a bit like having a classical computing coin that can only go “heads” or “tails” versus a quantum computing marble that can roll around and take on many different positions relative to its “heads” or “tails” hemispheres.

Because each qubit can hold many different states of information, multiple qubits connected through quantum entanglement hold the promise of speedily performing complex computing operations that might take thousands or millions of years on modern supercomputers. To build such quantum computers, some research labs have been using lasers and electric fields to trap and manipulate atoms as individual qubits.

Quantum Computing and Quantum Supremacy


Other groups such as the Google AI Quantum Lab led by John Martinis at the University of California, Santa Barbara, have been experimenting with qubits made of loops of superconducting metal. It’s this approach that enabled Google and its research collaborators to demonstrate quantum supremacy based on a 54-qubit array laid out in a flat, rectangular arrangement—although one qubit turned out defective and reduced the number of working qubits to 53. (Google did not respond to a request for comment.)

“For the past year or two, we had a very good idea that it was going to be the Google group, because they were the ones who were really explicitly targeting this goal in all their work,” Aaronson says. “They are also on the forefront of building the hardware.”


D Wave Webinar: A Machine of a Different Kind, Quantum Computing, 2019


Google’s Quantum Supremacy Experiment
Google’s experiment tested whether the company’s quantum computing device, named Sycamore, could correctly produce samples from a random quantum circuit—the equivalent of verifying the results from the quantum version of a random number generator. In this case, the quantum circuit consisted of a certain random sequence of single- and two-qubit logical operations, with up to 20 such operations (known as “gates”) randomly strung together.

The Sycamore quantum computing device sampled the random quantum circuit one million times in just three minutes and 20 seconds. When the team simulated the same quantum circuit on classical computers, it found that even the Summit supercomputer that is currently ranked as the most powerful in the world would require approximately 10,000 years to perform the same task.

“There are many in the classical computer community, who don't understand quantum theory, who have claimed that quantum computers are not more powerful than classical computers,” says Jonathan Dowling, a professor in theoretical physics and member of the Quantum Science and Technologies Group at Louisiana State University in Baton Rouge. “This experiment pokes a stick into their eyes.”






“This is not the top of Mount Everest, but it’s certainly crossing a pretty big peak along the way.”
—Daniel Lidar, University of Southern California
In a twist that even Google probably didn’t see coming, a draft of the paper describing the company’s quantum supremacy experiment leaked early when someone—possibly a research collaborator at the NASA Ames Research Center—uploaded the paper to the NASA Technical Reports Server. It might have sat there unnoticed before being hastily removed, if not for Google’s own search engine algorithm, which plucked the paper from its obscure server and emailed it to Dowling and anyone else who had signed up for Google Scholar alerts related to quantum computing.

The random number generator experiment may seem like an arbitrary benchmark for quantum supremacy without much practical application. But Aaronson has recently proposed that such a random quantum circuit could become the basis of a certified randomness protocol that could prove very useful for certain cryptocurrencies and cryptographic protocols. Beyond this very specific application, he suggests that future quantum computing experiments could aim to perform a useful quantum simulation of complex systems such as those found in condensed matter physics.

Introduction to Quantum Computing


What’s Next for Quantum Computing?
Google’s apparent achievement doesn’t rule out the possibility of another research group developing a better classical computing algorithm that eventually solves the random number generator challenge faster than Google’s current quantum computing device. But even if that happens, quantum computing capabilities are expected to greatly outpace classical computing’s much more limited growth as time goes on.

“This horse race between classical computing and quantum computing is going to continue,” says Daniel Lidar, director of the Center for Quantum Information Science and Technology at the University of Southern California in Los Angeles. “Eventually though, because quantum computers that have sufficiently high fidelity components just scale better as far as we know—exponentially better for some problems—eventually it’s going to become impossible for classical computers to keep up.”

Google’s team has even coined a term to describe how quickly quantum computing could gain on classical computing: “Neven’s Law.” Unlike Moore’s Law that has predicted classical computing power will approximately double every two years—exponential growth—Neven’s Law describes how quantum computing seems to gain power far more rapidly through double exponential growth.
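A hedged back-of-the-envelope way to read this: the classical cost of simulating a quantum device grows roughly exponentially in its size $n$, and if hardware progress lets $n$ itself grow exponentially over time, the classical effort needed to keep up grows doubly exponentially. In rough notation (with illustrative constants $n_0$ for the starting size and $\tau$ for the hardware doubling period):

\[
\text{classical cost}(t) \approx 2^{\,n(t)}, \qquad n(t) \approx n_0 \, 2^{t/\tau} \;\Longrightarrow\; \text{classical cost}(t) \approx 2^{\,n_0 2^{t/\tau}}.
\]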

“If you’ve ever plotted a double exponential [on a graph], it looks like the line is zero and then you hit the corner of a box and you go straight up,” says Andrew Sornborger, a theoretical physicist who studies quantum computers at Los Alamos National Laboratory in New Mexico. “And so before and after, it’s not so much like an evolution, it’s more like an event—before you hit the corner and after you hit the corner.”

Quantum computing’s exponential growth advantage has the potential to transform certain areas of scientific research and real-world applications in the long run. For example, Sornborger anticipates being able to use future quantum computers to perform far more complex simulations that go well beyond anything that’s possible with today’s best supercomputers.

The Integration Algorithm: A quantum computer could integrate a function in less computational time than a classical computer.


Wanted: Quantum Error Correction
Another long-term expectation is that a practical, general-purpose quantum computer could someday crack the standard digital codes used to safeguard computer security and the Internet. That possibility triggered premature alarm bells from conspiracy theorists and at least one U.S. presidential candidate when news first broke about Google’s quantum supremacy experiment via the Financial Times. (The growing swirl of online speculation eventually prompted Junye Huang, a Ph.D. candidate at the National University of Singapore, to share a copy of the leaked Google paper on his Google Drive account.)

In fact, the U.S. government is already taking steps to prepare for the future possibility of practical quantum computing breaking modern cryptography standards. The U.S. National Institute of Standards and Technology has been overseeing a process that challenges cryptography researchers to develop and test quantum-resistant algorithms that can continue to keep global communications secure.
The moment when quantum computing can seriously threaten to compromise the security of digital communications remains many years, if not decades, in the future.
The apparent quantum supremacy achievement marks just the first of many steps necessary to develop practical quantum computers. The fragility of qubits makes it challenging to maintain specific quantum states over longer periods of time when performing computational operations. That means it’s far from easy to cobble together large arrays involving the thousands or even millions of qubits that will likely be necessary for practical, general-purpose quantum computing.

Quantum computing



Such huge qubit arrays will require error correction techniques that can detect and fix errors in the many individual qubits working together. A practical quantum computer will need to have full error correction and prove itself fault tolerant—immune to the errors in logical operations and qubit measurements—in order to truly unleash the power of quantum computing, Lidar says.

Many experts think the next big quantum computing milestone will be a successful demonstration of error correction in a quantum computing device that also achieves quantum supremacy. Google’s team is well-positioned to shoot for that goal given that its quantum computing architecture showcased in the latest experiment is built to accommodate “surface code” error correction. But it will almost certainly have plenty of company on the road ahead as many researchers look beyond quantum supremacy to the next milestones.

“You take one step at a time and you get to the top of Mount Everest,” Lidar says. “This is not the top of Mount Everest, but it’s certainly crossing a pretty big peak along the way.”

This could be the dawn of a new era in computing. Google has claimed that its quantum computer performed a calculation that would be practically impossible for even the best supercomputer – in other words, it has attained quantum supremacy.

If true, it is big news. Quantum computers have the potential to change the way we design new materials, work out logistics, build artificial intelligence and break encryption. That is why firms like Google, Intel and IBM – along with plenty of start-ups – have been racing to reach this crucial milestone.

The development at Google is, however, shrouded in intrigue. A paper containing details of the work was posted to a NASA server last week, before being quickly removed. Several media outlets reported on the rumours, but Google hasn’t commented on them.

Read more: Revealed: Google’s plan for quantum computer supremacy
A copy of the paper seen by New Scientist contains details of a quantum processor called Sycamore that contains 54 superconducting quantum bits, or qubits. It claims that Sycamore has achieved quantum supremacy. The paper identifies only one author: John Martinis at the University of California, Santa Barbara, who is known to have partnered with Google to build the hardware for a quantum computer.

“This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper says.

Google appears to have partnered with NASA to help test its quantum computer. In 2018, the two organisations made an agreement to do this, so the news isn’t entirely unexpected.

Making an impossible universe with IBM's quantum processor


The paper describes how Google’s quantum processor tackled a random sampling problem – that is, checking that a set of numbers has a truly random distribution. This is very difficult for a traditional computer when there are a lot of numbers involved.

But Sycamore does things differently. Although one of its qubits didn’t work, the remaining 53 were quantum entangled with one another and used to generate a set of binary digits and check their distribution was truly random. The paper calculates the task would have taken Summit, the world’s best supercomputer, 10,000 years – but Sycamore did it in 3 minutes and 20 seconds.

This benchmarking task isn’t particularly useful beyond producing truly random numbers – it was a proof of concept. But in the future, the quantum chip may be useful in the fields of machine learning, materials science and chemistry, says the paper. For example, when trying to model a chemical reaction or visualise the ways a new molecule may connect to others, quantum computers can handle the vast amount of variables to create an accurate simulation.

“Google’s recent update on the achievement of quantum supremacy is a notable mile marker as we continue to advance the potential of quantum computing,” said Jim Clarke at Intel Labs in a statement.

CQT11: The challenge of developing post-classical applications with noisy quantum computers


Yet we are still at “mile one of this marathon”, Clarke said. This demonstration is a proof of concept, but it isn’t free of errors within the processor. Better and bigger processors will continue to be built and used to do more useful calculations.

Read more: Google’s quantum computing plans threatened by IBM curveball
At the same time, classical computing isn’t giving up the fight. Over the past few years, as quantum computing took steps towards supremacy, classical computing moved the goal posts as researchers showed it was able to simulate ever more complex systems. It is likely that this back-and-forth will continue.

“We expect that lower simulation costs than reported here will eventually be achieved, but we also expect they will be consistently outpaced by hardware improvements on larger quantum processors,” says the Google paper.

A month ago, news broke that Google had reportedly achieved “quantum supremacy”: it had gotten a quantum computer to run a calculation that would take a classical computer an unfeasibly long time. While the calculation itself—essentially, a very specific technique for outputting random numbers—is about as useful as the Wright brothers’ 12-second first flight, it would be a milestone of similar significance, marking the dawn of an entirely new era of computing.

But in a blog post published today, IBM disputes Google’s claim. The task that Google says might take the world’s fastest classical supercomputer 10,000 years can actually, says IBM, be done in just days.




As John Preskill, the CalTech physicist who coined the term “quantum supremacy,” wrote in an article for Quanta magazine, Google specifically chose a very narrow task that a quantum computer would be good at and a classical computer is bad at. “This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative,” he wrote.

Google’s research paper hasn’t been published, but a draft was leaked online last month. In it, researchers say they got a machine with 53 quantum bits, or qubits, to do the calculation in 200 seconds. They also estimated that it would take the world’s most powerful supercomputer, the Summit machine at Oak Ridge National Laboratory, 10,000 years to repeat it with equal “fidelity,” or the same level of uncertainty as the inherently uncertain quantum system.

The problem is that such simulations aren’t just a matter of porting the code from a quantum computer to a classical one. They grow exponentially harder the more qubits you’re trying to simulate. For that reason, there are a lot of different techniques for optimizing the code to arrive at a good enough equivalent.

And that’s where Google and IBM differ. The IBM researchers propose a method that they say would take just two and a half days on a classical machine “with far greater fidelity,” and that “with additional refinements” this could come down even further.

Quantum Computing and Quantum Supremacy at Google



The key difference? Hard drives. Simulating a quantum computer in a classical one requires storing vast amounts of data in memory during the process to represent the condition of the quantum computer at any given moment. The less memory you have available, the more you have to slice up the task into stages, and the longer it takes. Google’s method, IBM says, relied heavily on storing that data in RAM, while IBM’s “uses both RAM and hard drive space.” It also proposes using a slew of other classical optimization techniques, in both hardware and software, to speed up the computation. To be fair, IBM hasn't tested it in practice, so it's hard to know if it would work as proposed. (Google declined to comment.)

So what’s at stake? Either a whole lot or not much, depending on how you look at it. As Preskill points out, the problem Google reportedly solved is of almost no practical consequence, and even as quantum computers get bigger, it will be a long time before they can solve any but the narrowest classes of problems. Ones that can crack modern codes will likely take decades to develop, at a minimum.

IAS Distinguished Lecture: Prof Leo Kouwenhoven


Moreover, even if IBM is right that Google hasn’t achieved it this time, the quantum supremacy threshold is surely not far off. The fact that simulations get exponentially harder as you add qubits means it may only take a slightly larger quantum machine to get to the point of being truly unbeatable at something.

Still, as Preskill notes, even limited quantum supremacy is “a pivotal step in the quest for practical quantum computers.” Whoever ultimately achieves it will, like the Wright brothers, get to claim a place in history.

Every major tech company is looking at quantum computers as the next big breakthrough in computing. Teams at Google,  Microsoft, Intel, IBM and various startups and academic labs are racing to become the first to achieve quantum supremacy — that is, the point where a quantum computer can run certain algorithms faster than a classical computer ever could.

Quantum Computing Germany Meetup v1.0


Today, Google said that it believes that Bristlecone, its latest quantum processor, can put it on a path to reach quantum supremacy in the future. The purpose of Bristlecone, Google says, is to provide its researchers with a testbed “for research into system error rates and scalability of our qubit technology, as well as applications in quantum simulation, optimization, and machine learning.”

One of the major issues that all quantum computers have to contend with is error rates. Quantum computers typically run at extremely low temperatures (we’re talking millikelvins here) and are shielded from the environment because today’s quantum bits are still highly unstable and any noise can lead to errors.

Because of this, the qubits in modern quantum processors (the quantum computing versions of traditional bits) aren’t really single qubits but often a combination of numerous bits to help account for potential errors. Another limiting factor right now is that most of these systems can only preserve their state for under 100 microseconds.

The systems that Google previously demonstrated showed an error rate of one percent for readout, 0.1 percent for single-qubit and 0.6 percent for two-qubit gates.

Quantum computing and the entanglement frontier




Every Bristlecone chip features 72 qubits. The general assumption in the industry is that it will take 49 qubits to achieve quantum supremacy, but Google also cautions that a quantum computer isn’t just about qubits. “Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself,” the team writes today. “Getting this right requires careful systems engineering over several iterations.”

Google’s announcement today will put some new pressure on other teams that are also working on building functional quantum computers. What’s interesting about the current state of the industry is that everybody is taking different approaches.

Microsoft is currently a bit behind in that its team hasn’t actually produced a qubit yet, but once it does, its approach — which is very different from Google’s — could quickly lead to a 49 qubit machine. Microsoft is also working on a programming language for quantum computing. IBM currently has a 50-qubit machine in its labs and lets developers play with a cloud-based simulation of a quantum computer.

Technical quarrels between quantum computing experts rarely escape the field’s rarified community. Late Monday, though, IBM’s quantum team picked a highly public fight with Google.

In a technical paper and blog post, IBM took aim at potentially history-making scientific results accidentally leaked from a collaboration between Google and NASA last month. That draft paper claimed Google had reached a milestone dubbed “quantum supremacy”—a kind of drag race in which a quantum computer proves able to do something a conventional computer can’t.

Programming a quantum computer with Cirq (QuantumCasts)


Monday, Big Blue’s quantum PhDs said Google’s claim of quantum supremacy was flawed. IBM said Google had essentially rigged the race by not tapping the full power of modern supercomputers. “This threshold has not been met,” IBM’s blog post says. Google declined to comment.

It will take time for the quantum research community to dig through IBM’s claim and any responses from Google. For now, Jonathan Dowling, a professor at Louisiana State University, says IBM appears to have some merit. “Google picked a problem they thought to be really hard on a classical machine, but IBM now has demonstrated that the problem is not as hard as Google thought it was,” he says.

Whoever is proved right in the end, claims of quantum supremacy are largely academic for now. The problem crunched to show supremacy doesn’t need to have immediate practical applications. It's a milestone suggestive of the field’s long-term dream: That quantum computers will unlock new power and profits by enabling progress in tricky areas such as battery chemistry or health care. IBM has promoted its own quantum research program differently, highlighting partnerships with quantum-curious companies playing with its prototype hardware, such as JP Morgan, which this summer claimed to have figured out how to run financial risk calculations on IBM quantum hardware.

Quantum Computing 2019 Update


The IBM-Google contretemps illustrates the paradoxical state of quantum computing. There has been a burst of progress in recent years, leading companies such as IBM, Google, Intel, and Microsoft to build large research teams. Google has claimed for years to be close to demonstrating quantum supremacy, a useful talking point as it competed with rivals to hire top experts and line up putative customers. Yet while quantum computers appear closer than ever, they remain far from practical use, and just how far isn’t easily determined.

The draft Google paper that appeared online last month described posing a statistical math problem to both the company’s prototype quantum processor, Sycamore, and the world’s fastest supercomputer, Summit, at Oak Ridge National Lab. The paper used the results to estimate that a top supercomputer would need approximately 10,000 years to match what Sycamore did in 200 seconds.

Classical simulation algorithms for quantum computational supremacy experiments


IBM, which developed Summit, says the supercomputer could have done that work in 2 ½ days, not millennia—and potentially even faster, given more time to finesse its implementation. That would still be slower than the time posted by Google’s Sycamore quantum chip, but the concept of quantum supremacy as originally conceived by Caltech professor John Preskill required the quantum challenger to do something that a classical computer could not do at all.

This is not the first time that Google’s rivals have questioned its quantum supremacy plans. In 2017, after the company said it was closing in on the milestone, IBM researchers published results that appeared to move the goalposts. Early in 2018, Google unveiled a new quantum chip called Bristlecone said to be ready to demonstrate supremacy. Soon, researchers from Chinese ecommerce company Alibaba, which has its own quantum computing program, released analysis claiming that the device could not do what Google said.

Google is expected to publish a peer-reviewed version of its leaked supremacy paper, based on the newer Sycamore chip, bringing its claim onto the scientific record. IBM’s paper released Monday is not yet peer reviewed either, but the company says it will be.

Did Google Just Achieve 'Quantum Supremacy'?


Jay Gambetta, one of IBM’s top quantum researchers and a coauthor on the paper, says he expects it to influence whether Google’s claims ultimately gain acceptance among technologists. Despite the provocative way IBM chose to air its technical concerns, he claims the company’s motivation is primarily to head off unhelpful expectations around the term “quantum supremacy,” not to antagonize Google. “Quantum computing is important and is going to change how computing is done,” Gambetta says. “Let’s focus on the road map without creating hype.”

CeBIT Australia 2016: Michael Bremner on big data and analytics commercialisation and quantum computing



Other physicists working on quantum computing agree that supremacy is not a top priority—but say IBM’s tussle with Google isn’t either.

“I don't much like these claims of quantum supremacy. What might be quantum supreme today could just be classical inferior tomorrow,” says Dowling of Louisiana State. “I am much more interested in what the machine can do for me on any particular problem.”

Chris Monroe, a University of Maryland professor and cofounder of quantum computing startup IonQ, agrees. His company is more interested in demonstrating practical uses for early quantum hardware than academic disputes between two tech giants, he says. “We’re not going to lose much sleep over this debate,” he says.

More Information:


https://towardsdatascience.com/google-has-cracked-quantum-supremacy-cd70c79a774b

https://spectrum.ieee.org/tech-talk/computing/hardware/how-googles-quantum-supremacy-plays-into-quantum-computings-long-game

https://ai.googleblog.com/2018/03/a-preview-of-bristlecone-googles-new.html

https://www.technologyreview.com/s/614604/quantum-supremacy-from-google-not-so-fast-says-ibm/

https://www.newscientist.com/article/2217347-google-claims-it-has-finally-reached-quantum-supremacy/

https://www.newscientist.com/article/mg23130894-000-revealed-googles-plan-for-quantum-computer-supremacy/

https://www.newscientist.com/article/2151032-googles-quantum-computing-plans-threatened-by-ibm-curveball/

https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf

https://www.wired.com/story/the-ongoing-battle-between-quantum-and-classical-computers/

23 September 2019

IBM and Oracle Join Forces to Beat AWS.




IBM and Oracle Join Forces


In early July, IBM and Red Hat officially closed their most significant acquisition of 2019: an important milestone combining the power and flexibility of Red Hat’s open hybrid portfolio with IBM’s technology and deep industry expertise.

IBM Oracle International Competency Center



The feedback from our clients and partners is clear. A recent IBM report found that 80 percent want solutions that support hybrid cloud, including containers and orchestration. Today IBM is announcing plans to bring Red Hat OpenShift and IBM Cloud Paks to the IBM Z and LinuxONE enterprise platforms*. Together these two platforms power about 30 billion transactions a day globally. Our goal is for you to harness the scalability and security of IBM Z and LinuxONE alongside the flexibility to run, build, manage and modernize cloud-native workloads on your choice of architecture.

Cloud Impact Assessment for Oracle



For more than 20 years, IBM Systems and Red Hat have worked to drive open source systems innovation, make Linux enterprise-grade, and help joint customers like MetOffice and Techcombank with mission-critical workloads to build, deploy and manage next-gen apps and protect data through advanced security.

Today, IBM supports Red Hat Enterprise Linux on IBM Power Systems, IBM Z and LinuxONE, as well as Red Hat OpenShift on IBM POWER. IBM will also support Red Hat OpenShift and Red Hat OpenShift Container Storage across IBM’s all-flash and software-defined storage portfolio.

The combination of Red Hat OpenShift with IBM Z and LinuxONE reflects our shared values: to provide a flexible, open, hybrid, multicloud and secured enterprise platform for mission-critical workloads.

Our goal for Red Hat OpenShift for IBM Z and LinuxONE will be to help clients enable greater agility and portability through integrated tooling and a feature-rich ecosystem for cloud-native development to:
  • Deliver containerized applications that can scale vertically and horizontally;
  • Accelerate deployment and orchestration of containers with Kubernetes;
  • Help IT to support rapid business growth;
  • Optimize workloads to take advantage of pervasive encryption; and
  • Increase container density, which can make systems management easier and should help reduce total cost of ownership.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC. “IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9% 5 year CAGR and is predicted to reach over $1.5B by 2022.”

How the Results of Summit and Sierra are Influencing Exascale


This is also exciting news for our business partners and ecosystems. This offering will provide an enterprise platform for hybrid multicloud that can help ISVs and others develop the next generation of applications. They will benefit from the flexibility of a microservices-based architecture and the ability to deploy containers on the infrastructure of their choice. For ISVs not currently running on Red Hat Enterprise Linux on IBM Z and LinuxONE enterprise platforms, this is an opportunity to bring their software to these two platforms, where the most critical workloads run.

For more information, please visit www.ibm.com/linuxone and www.ibm.com/redhat, or contact your local IBM sales representative.

[1] (IBM Sponsored Primary Research, MD&I Systems and Cloud NDB 2019)

[2] https://www-03.ibm.com/press/uk/en/pressrelease/52824.wss


New Oracle Exadata X8M PMEM and RoCE capabilities and benefits


IBM LinuxONE servers running Oracle Database 12c on Linux

Today, enterprises require a trusted IT infrastructure that is dynamic, scalable
and flexible enough to support both mission-critical work and the development
and deployment of new workloads. This infrastructure must help decision makers
to use their company’s most valuable asset—their data—with insight rather than
hindsight, and it must assist in using IT to gain a competitive edge.

IBM® LinuxONE™ is a family of systems designed for more-secure data serving.
Expect advanced performance, security, resiliency, availability and virtualization
for a high quality of service. Ideal for larger enterprises that are embracing digital
business, the dual-frame LinuxONE Emperor II™ offers massive scalability in a
high-volume transaction processing and large-scale consolidation platform.

Meet the new IBM z15


LinuxONE is exceptionally good for deploying Oracle data-serving workloads

The Emperor II delivers outstanding transaction processing and data serving
performance for excellent economies of scale and more-efficient use of critical
data. With up to 170 LinuxONE cores, up to 32 TB of memory, and simultaneous
multithreading (SMT) support, the Emperor II is ideally suited for consolidating
large-scale distributed environments and for new in-memory and Java™ workloads.

New IBM z15 Up & Running at IBM Systems Center Montpellier.


The IBM LinuxONE Rockhopper II™, the newest member of the LinuxONE
family, is a single-frame system in a 19-inch industry standard rack allowing it
to sit side-by-side with any other platform in a data center. The Rockhopper II
also supports SMT, up to 30 Linux cores and up to 8 TB of memory. This system
is ideal for any growing business that seeks to use LinuxONE technologies’
qualities of service, flexibility and performance.

IBM Data Privacy Passports on IBM z15


As environmental concerns raise the focus on energy consumption, the ASHRAE
A3 rated Emperor II and Rockhopper II systems promote energy efficiency.
Their design helps to dramatically reduce energy consumption and save floor
space by consolidating workloads into a simpler, more manageable and efficient
IT infrastructure.

Linux and IBM LinuxONE

A Linux infrastructure on LinuxONE provides an enterprise-grade Linux
environment. It combines the advantages of the LinuxONE hardware servers
and leading IBM z/VM® virtualization—along with the flexibility and open
standards of the Linux operating system.

IBM LinuxONE virtualization technology

During spikes in demand, the Emperor II and Rockhopper II systems can quickly
redistribute system resources and scale up, scale out, or both in a way that can
make the difference between flawless execution and costly, slow response times
and system crashes.

Inside the new IBM z15


You can further improve the virtualization management capabilities of Linux
and z/VM by using the intelligent visualization, simplified monitoring, and
unified management features of IBM Wave and IBM Dynamic Partition Manager.
These solutions are designed to help simplify everyday administrative and
configuration tasks and to help you transform your Linux environment to a
virtualized private cloud.


The enterprise-grade Linux infrastructure on Emperor II and Rockhopper II is
designed to bring unique business value in the areas of operational efficiency,
scalability, autonomic workload management, reliability, business continuance
and security. Linux on LinuxONE solutions can further benefit from the following
IBM technologies to enhance this infrastructure:

• High availability capabilities are provided by the IBM Spectrum Scale™
high-performance data and file management solution based on the IBM
General Parallel File System (GPFS™). The Spectrum Scale solution is a
cluster file system that provides access to storage and can deliver greater
speed, flexibility, cost efficiency and security through built-in encryption
and data protection.

• IBM GDPS® Virtual Appliance provides near-continuous availability and
disaster recovery by extending GDPS capabilities for Linux guests on z/VM
environments. It can help substantially reduce recovery time and recovery
point objectives, as well as the complexity associated with manual
disaster recovery.

Lecture 2: Introducing IBM Z Hardware & Operating Systems | z/OS Introduction



Oracle Database 12c and IBM LinuxONE

Oracle Database 12c has a major focus on cloud and enables customers
to make more efficient use of their IT resources. Oracle Database 12c has
a new multitenant architecture and includes several enhancements and new
features for:

• Consolidating multiple databases into multitenant containers
• Automatically optimizing data storage
• Providing continuous access with high availability features
• Securing enterprise data with a comprehensive defense-in-depth strategy
• Simplifying in-database analysis of big data

Introducing Oracle Gen 2 Exadata Cloud at Customer


Multitenant architecture
Oracle Multitenant delivers an architecture that simplifies consolidation and
delivers the high density of schema-based consolidation without requiring
changes to existing applications. This Oracle Database 12c option offers the
benefits of managing many databases as one, yet retains the isolation and
resource control of separate databases. In this architecture, a single multitenant
container database can host many “pluggable” databases; up to 4,096 pluggable
databases can run on a single container database. Each database consolidated
or “plugged in” to a multitenant container looks and feels to applications the
same as any other Oracle Database, and administrators can control the
prioritization of available resources between consolidated databases.
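
As a minimal sketch of how this looks in practice, the example below (written with the python-oracledb driver; every name, path and credential in it is illustrative rather than taken from this text) creates a pluggable database inside a container database, opens it, and then lists the PDBs plugged into the container:

  # Minimal sketch using the python-oracledb driver; all names, paths and
  # credentials below are illustrative, not taken from the source text.
  import oracledb

  # Connect to the root of the container database (CDB) as a user that holds
  # the CREATE PLUGGABLE DATABASE privilege.
  conn = oracledb.connect(user="system", password="***", dsn="dbhost:1521/CDB1")
  cur = conn.cursor()

  # Create and open a new pluggable database, cloned from the seed PDB.
  cur.execute("""
      CREATE PLUGGABLE DATABASE sales_pdb
        ADMIN USER pdb_admin IDENTIFIED BY "ChangeMe_1"
        FILE_NAME_CONVERT = ('/pdbseed/', '/sales_pdb/')
  """)
  cur.execute("ALTER PLUGGABLE DATABASE sales_pdb OPEN")

  # List every PDB plugged into this container and its open mode.
  cur.execute("SELECT name, open_mode FROM v$pdbs")
  for name, open_mode in cur:
      print(name, open_mode)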

Database In-Memory
Oracle Database In-Memory uses a new dual-format in-memory architecture
that allows customers to improve the performance of online transaction
processing while accelerating analytics and data warehousing applications.
Because the dual-format architecture maintains simultaneous row and column
formats in memory, existing applications run transparently with better
performance and without additional programming changes. New features such
as In-Memory Virtual Columns and In-Memory Expressions can further
improve performance.
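
As a brief, hedged illustration of how the column store is typically switched on (the table, schema and connection names here are hypothetical, and the instance is assumed to already have the INMEMORY_SIZE parameter configured):

  # Sketch: enabling the In-Memory column store for one table, then running an
  # analytic query against it. Object names and connection details are illustrative.
  import oracledb

  conn = oracledb.connect(user="app", password="***", dsn="dbhost:1521/sales_pdb")
  cur = conn.cursor()

  # Populate SALES into the dual-format (row + column) store with high priority.
  cur.execute("ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH")

  # Analytic queries can now be served from the column store while OLTP work
  # continues against the row format -- no application changes are required.
  cur.execute("SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id")
  print(cur.fetchall())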

High availability
Basic high availability architectures using redundant resources can prove costly
and fall short of availability and service level expectations due to technological
limitations, complex integration, and inability to offer availability through planned
maintenance. Oracle Database 12c goes beyond the limitations of basic high
availability and, combined with hardware features such as those provided by
IBM storage devices and servers, offers customers a solution that can be
deployed at minimal cost and that addresses the common causes of unplanned
and planned downtime.

What is RDMA over Converged Ethernet (RoCE)?


Reducing planned downtime
Planned downtime for essential maintenance such as hardware upgrades,
software upgrades and patching is standard for every IT operation. Oracle
Database 12c offers a number of solutions to help customers reduce the
amount of planned downtime required for maintenance activities, including
these features of Oracle Database 12c and other Oracle offerings:

• Hardware Maintenance and Migration Operations to Oracle Database 12c
infrastructure can be performed without taking users offline.
• Online Patching of database software can be applied to server nodes in a
‘rolling’ manner using Oracle Real Application Clusters. Users are simply
migrated from one server to another; the server is quiesced from the cluster,
patched, and then put back online.
• Rolling Database Upgrades using Oracle Data Guard or Oracle Active
Data Guard enables upgrading of a standby database, testing of the
upgraded environment and then switching users to the new environment,
without any downtime.
• Online Redefinition can reduce maintenance downtime by allowing changes
to a table structure while continuing to support an online production system
(see the sketch after this list).
• Edition Based Redefinition enables online application upgrades. With
edition-based redefinition, changes to program code can be made in the
privacy of a new edition within the database, separated from the current
production edition.
• Data Guard Far Sync provides zero data loss protection for a production
database by maintaining a synchronized standby database at any distance
from the primary location.
• Global Data Services provides inter-region and intra-region load balancing
across Active Data Guard and GoldenGate replicated databases. This
service effectively provides Real Application Clusters failover and load balancing
capabilities to Active Data Guard and GoldenGate distributed databases.
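
To make one of these options concrete, the sketch below illustrates Online Redefinition through the DBMS_REDEFINITION package; the schema, table and interim-table names are invented for illustration, and the step that copies dependent objects (indexes, constraints, triggers) is omitted for brevity:

  # Sketch: restructuring a table online with the DBMS_REDEFINITION package.
  # Schema, table and interim-table names are invented for illustration.
  import oracledb

  conn = oracledb.connect(user="system", password="***", dsn="dbhost:1521/sales_pdb")
  cur = conn.cursor()

  # ORDERS_INTERIM is assumed to be pre-created with the desired new structure.
  cur.execute("""
      BEGIN
        -- Verify the table can be redefined online (raises an error if it cannot).
        DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS');

        -- Start copying rows into the interim table; ORDERS stays fully available.
        DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');

        -- Swap the two tables; applications keep using the name ORDERS throughout.
        DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
      END;
  """)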

RSOCKETS - RDMA for Dummies

Simplifying Analysis of Big Data

Oracle Database 12c fully supports a wide range of Business Intelligence tools
that take advantage of optimizations, including advanced indexing operations,
OLAP aggregations, automatic star query transformations, partition pruning
and parallelized database operations.
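
As a small illustration of the kind of schema these optimizations target (every object name here is hypothetical), a range-partitioned fact table lets the optimizer prune partitions and parallelize the scan:

  # Sketch: a range-partitioned fact table that benefits from partition pruning
  # and parallelized queries; all object names are illustrative.
  import oracledb

  conn = oracledb.connect(user="app", password="***", dsn="dbhost:1521/sales_pdb")
  cur = conn.cursor()

  cur.execute("""
      CREATE TABLE sales_fact (
          sale_date  DATE,
          prod_id    NUMBER,
          amount     NUMBER
      )
      PARTITION BY RANGE (sale_date) (
          PARTITION p2018 VALUES LESS THAN (DATE '2019-01-01'),
          PARTITION p2019 VALUES LESS THAN (DATE '2020-01-01')
      )
  """)

  # The date predicate lets the optimizer prune to a single partition, and the
  # PARALLEL hint spreads the aggregation across parallel execution servers.
  cur.execute("""
      SELECT /*+ PARALLEL(sales_fact, 4) */ prod_id, SUM(amount)
      FROM sales_fact
      WHERE sale_date >= DATE '2019-01-01'
      GROUP BY prod_id
  """)
  print(cur.fetchall())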

Oracle also provides a comprehensive set of integration tools, so customers can
use their existing Oracle resources and skills to bring big data sources into their
data warehouse. This adds to the existing Oracle Database 12c feature set the
ability to better analyze data throughout the enterprise.

Oracle’s stated goal is to help lower total cost of ownership (TCO) by delivering
customer requested product features, minimizing customizations and providing
pre-built integration to other Oracle solutions. These Oracle Database benefits
further complement the IT infrastructure TCO savings gained by implementing
Oracle Database on a LinuxONE server.

The enterprise-grade Linux on LinuxONE solution is designed to add
value to Oracle Database solutions, including the new functions that were
introduced in Oracle Database 12c. Oracle Database on LinuxONE includes
the following benefits:

• Provides high levels of security with the industry’s highest EAL5+ and
virtualization ratings, and high quality of service
• Optimizes performance by deploying powerful database hardware engines
that are available on Emperor II and Rockhopper II systems
• Achieves greater flexibility through the LinuxONE workload management
capability by allowing the Oracle Database environment to dynamically
adjust to user demand
• Reduces TCO by using the specialized LinuxONE cores that run the Oracle
Database and management of the environment

OFI Overview 2019 Webinar




Sizing and capacity planning for Oracle Database 12c on IBM LinuxONE
By working together, IBM and Oracle have developed a capacity-estimation
capability to aid in designing an optimal configuration for each specific
Oracle Database 12c client environment. You can obtain a detailed sizing
estimate that is customized for your environment from the IBM Digital
Techline Center, which is accessible through your IBM or IBM Business Partner
representative. You can download a questionnaire to start the sizing process at
ibm.com/partnerworld/wps/servlet/ContentHandler/techline/FAQ00000750

The IBM and Oracle alliance

Since 1986, Oracle and IBM have been providing clients with compelling
joint solutions, combining Oracle’s technology and application software with
IBM’s complementary hardware, software and services solutions. More than
100,000 joint clients benefit from the strength and stability of the Oracle and
IBM alliance. Through this partnership, Oracle and IBM offer technology,
applications, services and hardware solutions that are designed to mitigate
risk, boost efficiency and lower total cost of ownership.

IBM is a Platinum level partner in the Oracle Partner Network, delivering
the proven combination of industry insight, extensive real-world Oracle
applications experience, deep technical skills and high performance servers
and storage to create a complete business solution with a defined return
on investment. From application selection, purchase and implementation
to upgrade and maintenance, we help organizations reduce the total cost
of ownership and the complexity of managing their current and future
applications environment while building a solid base for business growth.

A Taste of Open Fabrics Interfaces



For more information

For more information about joint solutions from IBM and Oracle,
please contact an IBM sales representative at 1-866-426-9989.

For more information about IBM LinuxONE, see ibm.com/LinuxONE

For more information about Oracle Database 12c, visit
oracle.com/us/corporate/features/database-12c/index.html


IBM Oracle International Competency Center overview
CesarCantua | 12 July 2012


The IBM Oracle International Competency Center (ICC) works with our technical sales teams and business partners to provide technical pre-sales support for all Oracle solutions. With locations across North America, including Foster City CA, Pleasanton CA, Redwood Shores CA and Denver CO, the center supports all Oracle solutions including: Oracle Database, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle Retail, PeopleSoft Enterprise, JD Edwards EnterpriseOne, JD Edwards World, and Siebel.

All ICC personnel work on-site at Oracle locations and include IBM hardware and software brand experts, technology managers, and solutions specialists. Working closely with the Advanced Technical Skills and Solutions Technical Sales teams, the ICC executes benchmarks and platform certifications and develops technical white papers and solution collateral.

The ICC is also responsible for the development and maintenance of the tools used worldwide to size Oracle applications on IBM hardware. Finally, the ICC offers individualized customer briefings tailored to a customer's unique requirements. These consultations demonstrate the close technology relationship between IBM and Oracle and help customers understand the hardware options and sizing implications of their Oracle solution implementation. The ICC-hosted customer briefings are a great tool to use with your prospects and clients. The briefings are tailored specifically around your client's area of interest. Don't miss the opportunity to demonstrate the depth of our business and technical relationship with Oracle!

Accelerating TensorFlow with RDMA for high-performance deep learning



For Oracle-related technical sales questions, contact: ibmoracle@us.ibm.com

IBM Oracle International Competency Center (ICC). 
IBM Oracle Technical Quick Reference Guide.  
IBM Oracle International Competency Center (ICC) Mission.  

IBM, Oracle Join Forces For Midmarket Foray

IBM and Oracle are teaming to develop "affordable and easy to deploy and maintain" midmarket solutions based on Oracle's enterprise applications. The deal expands the companies' relationship in the enterprise. More specifically, IBM and Oracle will tailor Oracle's JD Edwards EnterpriseOne and the Oracle E-Business Suite, bundling them with Big Blue's hardware, software and services, for midsize companies. The first solutions are aimed at midsize industrial manufacturers in the United States and midsize food and beverage companies in Europe. Solutions for midsize clients in the life sciences and high-tech industries are expected out by the end of the year.

"We see this collaboration with IBM as a monumental step in our effort to provide midsize companies with enterprise-level functionality that is affordable and easy to deploy and maintain," said Oracle SVP Tony Kender in a statement.ChannelWeb

Why rivals Microsoft and Oracle are teaming up to take on Amazon

We’re living in a new digital age, defined by always-on connectivity, empowered customers, and groundbreaking IT solutions. In this new world, a business can’t thrive by relying on the same tired processes and disjointed IT systems it has used until now.

5G Cellular D2D RDMA Clusters



At IBM, we’re dedicated to helping Oracle users upgrade their systems and processes to prepare for the age of Digital Transformation. In place of the complex multivendor platforms that are so common in business today, we offer streamlined, end-to-end solutions that include everything you need to optimize your Oracle applications. We offer our customers best-in-class solutions and services, deep experience across many different industries, and an intimate understanding of Oracle technology based on a decades-long partnership—all under the same roof.

Go to market quickly
We employ about 16,000 people who work directly with Oracle systems on a daily basis, across cloud and on-prem environments, making us one of the leading systems integrators in the world. This hands-on experience, combined with our wide breadth of systems and services offerings, allows us to ramp up new Oracle applications much quicker than our clients could working alone.

For example, we were able to help Epic Piping go from startup to enterprise-scale operation in a matter of months. With IBM Cloud Managed Services to support their Oracle JD Edwards Enterprise ERP solutions, Epic Piping gained a hugely scalable, fully integrated business platform that supports both organic growth and acquisitions. This new platform, along with application management services from IBM, played an instrumental role in helping Epic Piping achieve exponential growth. After starting with only four employees, the company topped out at over 900 just 18 months later.

Simplify processes to cut costs
Far too often, complexity forms a barrier that keeps businesses from fully capitalizing on the opportunities of Digital Transformation. When businesses run different sets of processes across different teams, it can create serious inefficiency, which in turn leads to higher costs.

This was the situation at Co-operative Group Limited when they came to IBM for help. The company wanted to move to a shared services model to increase the simplicity and efficiency of its HR function. However, after a period of rapid growth, including multiple acquisitions, Co-op’s HR policies had become so disjointed that it was simply not possible to get everyone using the shared services.

With the power of Oracle HCM Cloud solutions implemented by IBM Global Business Services, the company simplified and standardized its HR processes, increasing productivity and lowering HR costs. Co-op is also deploying IBM Watson solutions to collate and cleanse its old HR data. This will enable cognitive analytics, allowing the company to further optimize its HR services in the future.

Deep industry expertise
At IBM, our elite services professionals have a unique combination of Oracle experience and industry expertise, helping us drive even better results for users of Oracle’s industry-specific applications.

When Shop Direct, a multi-brand online retailer from the UK, wanted to introduce a new retail software platform based on Oracle applications, working with IBM to implement and manage those new applications was a natural choice. The company was already a long-time user of Oracle solutions, and ever since it first introduced Oracle E-Business Suite, it has been working with IBM to optimize those solutions. As a result, they already knew that we understood their business and that we were intimately familiar with the needs of retailers in general.

The Z15 Enterprise Platform


Moving fast is absolutely critical in retail. By deploying the new Oracle applications quickly, and creating a centralized platform for accessing and managing product data, we were able to help Shop Direct ensure leaner, faster operations. As a result, they can now respond quickly to changing customer demand.

Learn More

Visit the IBM-Oracle Alliance website to learn more about how we can help you maximize your Oracle investments.
https://www.ibm.com/blogs/insights-on-business/oracle-consulting/ibm-oracle-global-alliance-optimizing-applications-digital-transformation/

More Information:

https://www.ibm.com/it-infrastructure/z/news

https://www.ibm.com/blogs/systems/announcing-our-direction-for-red-hat-openshift-for-ibm-z-and-linuxone/

https://www.ibm.com/blogs/systems/topics/servers/mainframes/

https://www.informationweek.com/mobile/ibm-oracle-join-forces-for-midmarket-foray/d/d-id/1066798?piddl_msgorder=

https://www.ibm.com/blogs/insights-on-business/oracle-consulting/ibm-oracle-global-alliance-optimizing-applications-digital-transformation/

https://www.ibm.com/services/oracle

https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2947

https://www.ibm.com/developerworks/community/blogs/fd4076f7-7d3d-4080-a198-e62d7bb263e8/entry/international_competency_center?lang=en
