IBM BLU Acceleration
BLU Acceleration's performance and simplicity goals are accomplished by using multiple complementary technologies, including:
•The data is in a column store, meaning that I/O is performed only on those columns and values that satisfy a particular query.
•The column data is compressed with actionable compression, which preserves order so that the data can be used without decompression, resulting in huge storage and CPU savings and a significantly higher density of useful data held in memory.
•Parallel vector processing, with multi-core parallelism and single instruction, multiple data (SIMD) parallelism, provides improved performance and better utilization of available CPU resources.
•Data skipping avoids the unnecessary processing of irrelevant data, thereby further reducing the I/O that is required to complete a query.
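The column-store idea above can be illustrated with a toy sketch. This is plain Python, not DB2's implementation, and all names are hypothetical: because each column is held as its own array, a query that touches only `sales` never reads the bytes of the other columns.

```python
# Toy column store: each column is held (and would be stored on disk)
# as a separate array, so a scan touches only the columns it needs.
table_rows = [
    ("east", "2024-01-02", 100),
    ("west", "2024-01-03", 250),
    ("east", "2024-01-04", 175),
]

# Row layout -> column layout ("shredding" the rows into columns).
columns = {
    "region": [r[0] for r in table_rows],
    "date":   [r[1] for r in table_rows],
    "sales":  [r[2] for r in table_rows],
}

def total_sales(cols):
    # Only the 'sales' column is read; 'region' and 'date' cost no I/O.
    return sum(cols["sales"])

print(total_sales(columns))  # 525
```

In a row store, the same aggregate would have to read every byte of every row; the column layout is what lets I/O scale with the columns a query actually references.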
DB2 BLU Acceleration and more
These and other technologies combine to provide an in-memory, CPU-optimized, and I/O-optimized solution that is greater than the sum of its parts.
BLU Acceleration is fully integrated into DB2 10.5, so that much of how you leverage DB2 in your analytics environment today still applies when you adopt BLU Acceleration. The simplicity of BLU Acceleration changes how you implement and manage a BLU-accelerated environment. Gone are the days of having to define secondary indexes or aggregates, or having to make SQL or schema changes to achieve adequate performance.
What's new in IBM DB2 BLU?
Four key capabilities make BLU Acceleration a next generation solution for in-memory computing:
1. BLU Acceleration does not require the entire dataset to fit in memory, yet still processes it at lightning-fast speeds.
Instead, BLU Acceleration uses a series of patented algorithms that nimbly handle in-memory data processing. This includes the ability to anticipate and “prefetch” data just before it’s needed and to automatically adapt to keep necessary data in or close to the CPU. Add further CPU acceleration techniques, and you get highly efficient in-memory computing at lightning speed.
2. BLU Acceleration works on compressed data, saving time and money.
Why waste time and CPU resources on decompressing data, analyzing it and recompressing it? Instead of all these extra steps, BLU Acceleration preserves the order of data and performs a broad range of operations—including joins and predicate evaluations—on compressed data without the need for decompression. This is another next-generation technique to speed processing, skip resource-intensive steps and add agility.
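One way to make this concrete is an order-preserving dictionary sketch in plain Python (a hedged illustration of the general technique, not DB2's actual encoding; all names are hypothetical). Because codes are assigned in sort order, a range predicate can be evaluated directly on the compressed codes:

```python
# Order-preserving dictionary compression: values are replaced by small
# integer codes assigned in sort order, so range predicates can be
# evaluated on the codes themselves, with no decompression.
values = ["cherry", "apple", "banana", "cherry", "apple", "date"]

# Build the dictionary so that code order matches value order.
dictionary = {v: code for code, v in enumerate(sorted(set(values)))}
encoded = [dictionary[v] for v in values]          # e.g. "cherry" -> 2

def rows_where_less_than(encoded_col, dictionary, literal):
    # Encode the predicate's literal once, then compare codes directly.
    bound = dictionary[literal]
    return [i for i, code in enumerate(encoded_col) if code < bound]

print(rows_where_less_than(encoded, dictionary, "cherry"))  # [1, 2, 4]
```

The same idea extends to equality predicates and joins: two compressed columns that share a dictionary can be matched code-to-code, which is why preserving order during compression saves the decompress/recompress round trip described above.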
3. BLU Acceleration intelligently skips processing of data it doesn’t need to get the answers you want.
With a massive data set, chances are good that you don’t need all of the data to answer a particular query. BLU Acceleration employs a series of metadata management techniques to automatically determine which data would not qualify for analysis within a particular query, enabling large chunks of data to be skipped. This results in more agile computing, including storage savings and system hardware efficiency. What’s more, this metadata is kept updated on a real-time basis so that data changes are continually reflected in the analytics. Less data to analyze in the first place means faster, simpler and more agile in-memory computing. We call this data skipping.
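A minimal sketch of the data-skipping idea, assuming per-block min/max metadata (plain Python, not DB2's synopsis implementation; block size and names are hypothetical): any block whose value range cannot satisfy the predicate is never scanned.

```python
# Data skipping sketch: keep min/max metadata per block of rows and
# skip any block whose value range cannot satisfy the predicate.
BLOCK = 4
data = [3, 7, 2, 5,   40, 42, 41, 44,   9, 8, 6, 7]

blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
synopsis = [(min(b), max(b)) for b in blocks]   # maintained as data changes

def count_greater_than(threshold):
    scanned = matched = 0
    for (lo, hi), block in zip(synopsis, blocks):
        if hi <= threshold:          # whole block is irrelevant: skip it
            continue
        scanned += len(block)
        matched += sum(1 for v in block if v > threshold)
    return matched, scanned

matched, scanned = count_greater_than(30)
print(matched, scanned)  # 4 matches found while scanning only 4 of 12 values
```

The metadata is tiny compared to the data it describes, which is why keeping it current in real time is cheap relative to the scans it avoids.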
4. BLU Acceleration is simple to use.
As your business users demand more analytics faster, you need in-memory computing that keeps the pace. BLU Acceleration delivers optimal performance out of the box—no need for indexes, tuning, or time-consuming configuration efforts. You simply convert your row-based data to columns and run your queries. Because BLU Acceleration is seamlessly integrated with DB2, you can manage both row-based and column-based data from a single proven system, thus reducing complexity. This helps free the technical team to deliver value to the business – less routine maintenance and more innovation.
Simplicity in DB2 10.5 with BLU Acceleration
Fast and simple in-memory computing
Fast answers
DB2 with BLU Acceleration includes six advances for fast in-memory computing:
•In-the-moment business answers from within the transaction environment, new with DB2 10.5 “Cancun Release”, utilizes BLU Shadow Tables to automatically maintain a column-based version of the row-based operational data. Analytic queries are seamlessly routed to these column-organized BLU Shadow Tables that are ideal for fast analytic processing.
•Next-generation in-memory computing delivers the benefits of in-memory columnar processing without the limitations or cost of in-memory only systems that require all data to be stored in system memory to achieve breakthrough performance. BLU Acceleration dynamically optimizes movement of data from storage to system memory to CPU memory (cache). This patented IBM innovation enables BLU Acceleration to maintain in-memory performance even when active data sets are larger than system memory.
•Actionable compression preserves the order of the data, enabling compressed data in BLU Acceleration tables to be used without decompression. A broad range of operations like predicates and joins are completed on compressed data. The most frequent values are encoded with fewer bits to optimize the compression.
•CPU acceleration is designed to process a huge volume of data simultaneously by multiplying the power of the CPU. Multi-core processing, SIMD processor support and parallel data processing are all used to deeply exploit the CPU and process data with less system latency and fewer bottlenecks.
•Data skipping eliminates processing of irrelevant and duplicate data. This is accomplished by examining small sections of data to determine whether they contain information that is relevant to the analytics problem at hand. Deciding on these “hot” portions of data in more granular sections means that less irrelevant data is being processed in the first place.
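The CPU acceleration point above rests on partitioning work so that many values are processed at once. As a hedged sketch (plain Python; threads merely stand in for the separate cores and SIMD lanes a real engine would use, and all names are hypothetical), a column aggregate can be split into chunks, reduced independently, and combined:

```python
# Parallel aggregation sketch: split a column into chunks, reduce each
# chunk independently, then combine the partial results. In a real engine
# the chunks would run on separate cores and use SIMD instructions; here
# threads simply stand in for that hardware parallelism.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(column, workers=4):
    size = max(1, -(-len(column) // workers))            # ceiling division
    chunks = [column[i:i + size] for i in range(0, len(column), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))                # combine partials

column = list(range(10_000))
print(parallel_sum(column))  # 49995000
```

Because each chunk is reduced with no coordination until the final combine, adding cores scales the scan with few bottlenecks, which is the effect the bullet describes.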
•Oracle SQL compatibility streamlines and reduces risk in moving data from Oracle database to DB2 with BLU Acceleration. This leverages existing skills and investments, while taking advantage of the speed and simplicity of BLU Acceleration to deliver fast business insights.
IBM believes that in-memory computing should be easy on IT resources:
•Load and go set-up allows you to start deriving value from your data in a couple of simple steps. Simply create the table, load the data and go. It’s fast out of the gate – no tuning, no tweaking required. This means you can more quickly satisfy business needs even as they change and evolve.
•One administration environment for analytics or transactional data helps ease management. BLU Acceleration is built seamlessly into DB2 10.5 for Linux, UNIX and Windows, a proven enterprise-class database. A single set of enterprise-class administration functions for either row- or column-organized data reduces complexity, while a series of automation capabilities help free IT talent for higher value projects.
IBM Accelerating Analytics with BLU
Flexible multi-platform deployment for Linux on Intel, zLinux, AIX on Power and Windows makes the most of IT resources whether you are using existing hardware or the latest technology. This is the only in-memory computing technology to deploy on the cloud or on multiple platforms, offering greater flexibility in meeting the need for business answers.
IBM DB2 10.5 with BLU Acceleration vs Oracle Exadata
DB2 and BLU Acceleration on Cloud Tech Talk
BLU Acceleration: Delivering Speed of Thought Analytics
Big data poses big challenges for accessing and analyzing information. BLU Acceleration from IBM delivers speed of thought analytics that help you make better decisions faster. See BLU Acceleration’s innovative dynamic in-memory processing, actionable compression, parallel vector processing and data skipping. Learn how to get started using your existing infrastructure and skills.
New in-memory capabilities help you capitalize on business answers even more easily
Technology never stands still and BLU Acceleration is no exception! This product has been enhanced in key areas so you can:
•Gain access to the fast answers BLU Acceleration delivers on Windows and zLinux to support a broader range of organizations, as well as data mart consolidation on these new platforms
•Protect data at rest while saving administration time with native application-transparent data encryption
•Deliver in-the-moment business answers from within the transaction environment
•Leverage Oracle skills with SQL compatibility to enable simple, low-risk migration from Oracle database to DB2 with BLU Acceleration
•Reduce risk and improve performance of SAP environments with significant enhancements to SAP Business Warehouse support
IBM DB2 11 SQL Improvements
Take advantage of faster query processing and better data reliability by using BLU Acceleration on the POWER8 processor
Big Data Webcast on BLU Acceleration
Best practices for DB2 pureScale performance and monitoring
Let's Get Hands-on: 60 Minutes in the Cloud—Predictive Analytics Made Easy
IBM dashDB - Keeping data warehouse infrastructure out of your way
DB2 Tech Talk: Deep Dive BLU Acceleration Super Easy In-memory Analytics
Join DB2 expert Sam Lightstone for an in-depth discussion of the all-new BLU Acceleration features in DB2 10.5 for Linux, UNIX and Windows. BLU Acceleration in-memory computing is designed to deliver results from data-intensive analytic workloads with speed and precision that is termed "speed of thought" analytics.
In this Tech Talk, Sam will explain the details of this ground-breaking technology such as:
•Dynamic in-memory analytics that do not require all of the data to fit in memory in order to perform analytics processing
•Parallel vector processing, driving spectacular CPU exploitation
DB2 Tech Talk: Introduction and Technical Tour of DB2 with BLU Acceleration
Join Distinguished Engineer and DB2 expert Berni Schiefer and host Rick Swagerman for a technical tour of the all new DB2 10.5 with BLU Acceleration in-memory technology. You will learn about new features such as:
•BLU Acceleration, for “Speed of Thought” analytics
Designed to handle data-intensive analytics workloads, BLU Acceleration extends the capabilities of traditional in-memory systems by providing in-memory performance even when the data set size exceeds the size of the memory. Learn about RAM data-loading capabilities, plus “data skipping,” parallel data analysis, actionable compression for analysis without decompressing data, and more.
• New DB2 pureScale capabilities that enable online rolling maintenance updates and capacity growth with no planned downtime, plus new HADR integration capabilities to help ensure always-available transactions.
• SQL and Oracle Database compatibility refinements in DB2 10.5, helping to ensure fast, easy moves to DB2 as well as increased flexibility for DB2 applications.
• Enhancements to NoSQL technologies that are now business-ready in DB2 10.5. Although not part of the DB2 10.5 announcement, we will fill you in on other NoSQL technology introduction plans as well.
• New packaging editions of DB2 that handle either OLTP or data warehousing needs.
• DB2 tools advances that support these new functions.
Join us for this Tech Talk to find out about these exciting enhancements and how they can help you deliver the data analytics your organization needs while providing tools to keep your OLTP systems in top shape.
A deeper dive into dashDB - know more in a dash
dashDB is a newly announced data warehouse as a service deployed in the cloud that leverages technologies like BLU Acceleration, in-database analytics and Cloudant to allow you to focus more on the business and less on the business of IT. In this DB2 Tech Talk you will learn a little more about IBM’s cloud initiatives and the value proposition around dashDB, as well as:
•dashDB’s architecture and use cases
•pricing and offerings as a service
•competitive differentiations and customer feedback
DB2 with BLU Acceleration on Power Systems
Best practices Optimizing analytic workloads using DB2 10.5 with BLU Acceleration