• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM on Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to the OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    Consulting services related to Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

19 December 2017

IBM Big Data Platform

What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.

Enterprise Data Warehouse Optimization: 7 Keys to Success

Big data spans three dimensions: Volume, Velocity and Variety.

Volume: Enterprises are awash with ever-growing data of all types, easily amassing terabytes—even petabytes—of information.

Turn 12 terabytes of Tweets created each day into improved product sentiment analysis
Convert 350 billion annual meter readings to better predict power consumption

Velocity: Sometimes 2 minutes is too late. For time-sensitive processes such as catching fraud, big data must be used as it streams into your enterprise in order to maximize its value.

Scrutinize 5 million trade events created each day to identify potential fraud
Analyze 500 million daily call detail records in real-time to predict customer churn faster

Variety: Big data is any type of data - structured and unstructured data such as text, sensor data, audio, video, click streams, log files and more. New insights are found when analyzing these data types together.
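The "variety" point can be made concrete with a toy sketch: joining structured purchase records to unstructured review text yields an insight (sales volume alongside sentiment) that neither data type gives on its own. Everything here, the data and the crude keyword scorer, is invented for illustration; real sentiment analysis would use an NLP library or a platform service.

```python
# Toy illustration (not an IBM API): joining structured purchase records
# with unstructured review text to surface a combined insight.

transactions = [
    {"product": "router", "units": 120},
    {"product": "modem", "units": 45},
]

reviews = [
    {"product": "router", "text": "great speed, terrible setup"},
    {"product": "modem", "text": "works fine, good value"},
]

NEGATIVE = {"terrible", "bad", "broken"}

def sentiment(text):
    """Crude keyword score: +1 per word, -2 per negative word."""
    words = text.replace(",", " ").split()
    return sum(-2 if w in NEGATIVE else 1 for w in words)

# Join the two data types on product and rank by units sold.
insight = []
for t in transactions:
    review = next(r for r in reviews if r["product"] == t["product"])
    insight.append((t["product"], t["units"], sentiment(review["text"])))

insight.sort(key=lambda row: row[1], reverse=True)
print(insight)
```

The point of the sketch is the join itself: only by correlating the two sources do you see that the best-selling product has the weaker sentiment.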

Overview - IBM Big Data Platform

Monitor hundreds of live video feeds from surveillance cameras to target points of interest
Exploit the 80% data growth in images, video and documents to improve customer satisfaction

Big data is more than simply a matter of size; it is an opportunity to find insights in new and emerging types of data and content, to make your business more agile, and to answer questions that were previously considered beyond your reach. Until now, there was no practical way to harvest this opportunity. Today, IBM’s platform for big data uses state of the art technologies including patented advanced analytics to open the door to a world of possibilities.

IBM big data platform

Data Science Experience: Build SQL queries with Apache Spark

Do you have a big data strategy? IBM does. We’d like to share our know-how with you to help your enterprise solve its big data challenges.

IBM is unique in having developed an enterprise class big data platform that allows you to address the full spectrum of big data business challenges.

The platform blends traditional technologies that are well suited for structured, repeatable tasks with complementary new technologies that address speed and flexibility and are ideal for ad hoc data exploration, discovery, and unstructured analysis.
IBM’s integrated big data platform has four core capabilities: Hadoop-based analytics, stream computing, data warehousing, and information integration and governance.

Fig. 1 - IBM big data platform

The core capabilities are:

Hadoop-based analytics: Processes and analyzes any data type across commodity server clusters.
Stream Computing: Drives continuous analysis of massive volumes of streaming data with sub-millisecond response times.
Data Warehousing: Delivers deep operational insight with advanced in-database analytics.
Information Integration and Governance: Allows you to understand, cleanse, transform, govern and deliver trusted information to your critical business initiatives.
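As a rough illustration of the stream-computing idea (plain Python, not IBM InfoSphere Streams), the fraud example can be sketched as a per-account sliding window evaluated as each trade event arrives; the window size and threshold below are arbitrary assumptions.

```python
from collections import deque

# Minimal sliding-window sketch of stream computing: flag an account whose
# recent trades exceed a threshold, evaluated as each event arrives rather
# than in a batch after the fact.

WINDOW = 5          # keep the last 5 trade amounts per account (assumed)
THRESHOLD = 10_000  # flag if the windowed total exceeds this (assumed)

windows = {}        # account -> deque of recent trade amounts
alerts = []

def on_trade(account, amount):
    w = windows.setdefault(account, deque(maxlen=WINDOW))
    w.append(amount)
    if sum(w) > THRESHOLD:
        alerts.append(account)

for event in [("acct1", 3_000), ("acct1", 4_000), ("acct2", 500), ("acct1", 4_500)]:
    on_trade(*event)

print(alerts)  # acct1's window now totals 11,500
```

A real stream platform distributes exactly this kind of windowed operator across a cluster and sustains it at millions of events per day.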

Delight Clients with Data Science on the IBM Integrated Analytics System

Supporting Platform Services:

Visualization & Discovery: Helps end users explore large, complex data sets.
Application Development: Streamlines the process of developing big data applications.
Systems Management: Monitors and manages big data systems for secure and optimized performance.
Accelerators: Speeds time to value with analytical and industry-specific modules.

IBM DB2 analytics accelerator on IBM integrated analytics system technical overview

How Big Data and Predictive Analytics are revolutionizing AML and Financial Crime Detection

Big data in action

What types of business problems can a big data platform help you address? There are multiple uses for big data in every industry – from analyzing larger volumes of data than was previously possible to drive more precise answers, to analyzing data in motion to capture opportunities that were previously lost. A big data platform will enable your organization to tackle complex problems that previously could not be solved.

Big data = Big Return on Investment (ROI)

While there is a lot of buzz about big data in the market, it isn’t hype. Plenty of customers are seeing tangible ROI using IBM solutions to address their big data challenges:

Healthcare: 20% decrease in patient mortality by analyzing streaming patient data
Telco: 92% decrease in processing time by analyzing networking and call data
Utilities: 99% improved accuracy in placing power generation resources by analyzing 2.8 petabytes of untapped data

IBM’s big data platform is helping enterprises across all industries. IBM understands the business challenges and dynamics of your industry and we can help you make the most of all your information.

The Analytic Platform behind IBM’s Watson Data Platform - Big Data

When companies can analyze ALL of their available data, rather than a subset, they gain a powerful advantage over their competition. IBM has the technology and the expertise to apply big data solutions in a way that addresses your specific business problems and delivers rapid return on investment.

The data stored in the cloud environment is organized into repositories. These repositories may be hosted on different data platforms (such as a database server, Hadoop, or a NoSQL data platform) that are tuned to support the types of analytics workload that is accessing the data.

What’s new in predictive analytics: IBM SPSS and IBM decision optimization

The data that is stored in the repositories may come from legacy, new, and streaming sources, enterprise applications, enterprise data, cleansed and reference data, as well as output from streaming analytics.

Breaching the 100TB Mark with SQL Over Hadoop

Types of data repositories include:

  • Catalog: Results from discovery and IT data curation create a consolidated view of information that is reflected in a catalog. The introduction of big data increases the need for catalogs that describe what data is stored, its classification, ownership, and related information governance definitions. From this catalog, you can control the usage of the data.
  • Data virtualization: An agile approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data.
  • Landing, exploration, and archive: Allows for large datasets to be stored, explored, and augmented using a wide variety of tools since massive and unstructured datasets may mean that it is no longer feasible to design the data set before entering any data. Data may be used for archival purposes with improved availability and resiliency thanks to multiple copies distributed across commodity storage.

SparkR Best Practices for R Data Scientists
  • Deep analytics and modeling: The application of statistical models to yield information from large data sets comprised of both unstructured and semi-structured elements. Deep analysis involves precisely targeted and complex queries with results measured in petabytes and exabytes. Requirements for real-time or near-real-time responses are becoming more common.
  • Interactive analysis and reporting: Tools to answer business and operations questions over Internet-scale data sets. Tools also use popular spreadsheet interfaces for self-service data access and visualization. APIs implemented by data repositories allow output to be efficiently consumed by applications.
  • Data warehousing: Populates relational databases that are designed for building a correlated view of business operation. A data warehouse usually contains historical and summary data derived from transaction data but can also integrate data from other sources. Warehouses typically store subject-oriented, non-volatile, time-series data used for corporate decision-making. Workloads are query intensive, accessing millions of records to facilitate scans, joins, and aggregations. Query throughput and response times are generally a priority.
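The warehouse workload described above, query-intensive scans, joins, and aggregations over subject-oriented data, looks like this in miniature. SQLite stands in purely for illustration; a real warehouse would be DB2, PureData, or similar, and the tables and figures are invented.

```python
import sqlite3

# Miniature decision-support workload: a fact table (sales) joined to a
# dimension table (regions), then aggregated, the shape of query a
# warehouse is tuned for.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (day TEXT, region_id INTEGER, amount REAL);
    CREATE TABLE regions (region_id INTEGER, name TEXT);
    INSERT INTO sales VALUES
        ('2017-12-01', 1, 100.0), ('2017-12-01', 2, 250.0),
        ('2017-12-02', 1, 300.0);
    INSERT INTO regions VALUES (1, 'EMEA'), (2, 'AMER');
""")

# Typical warehouse query: scan the fact table, join the dimension,
# aggregate, and rank.
rows = conn.execute("""
    SELECT r.name, SUM(s.amount)
    FROM sales s JOIN regions r ON s.region_id = r.region_id
    GROUP BY r.name ORDER BY 2 DESC
""").fetchall()

print(rows)
```

At warehouse scale the same query shape runs over millions of rows, which is why query throughput and response time dominate the design.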

IBM Power leading Cognitive Systems

IBM offers a wide variety of offerings for consideration in building data repositories:
  • InfoSphere Information Governance Catalog maintains a repository to support the catalog of the data lake. This repository can be accessed through APIs and can be used to understand and analyze the types of data stored in the other data repositories.
  • IBM InfoSphere Federation Server creates consolidated information views of your data to support key business processes and decisions.
  • IBM BigInsights for Apache Hadoop delivers key capabilities to accelerate the time to value for a data science team, which includes business analysts, data architects, and data scientists.
  • IBM PureData™ System for Analytics, powered by Netezza technology, is changing the game for data warehouse appliances by unlocking data's true potential. The new IBM PureData System for Analytics is an integral part of a logical data warehouse.
  • IBM Analytics for Apache Spark is a fully-managed Spark service that can help simplify advanced analytics and speed development.
  • IBM BLU Acceleration® is a revolutionary, simple-to-use, in-memory technology that is designed for high-performance analytics and data-intensive reporting.
  • IBM PureData System for Operational Analytics is an expert integrated data system optimized specifically for the demands of an operational analytics workload. A complete solution for operational analytics, the system provides both the simplicity of an appliance and the flexibility of a custom solution.

IBM Big Data Analytics Concepts and Use Cases

Bluemix offers a wide variety of services for data repositories:

  • BigInsights for Apache Hadoop provisions enterprise-scale, multi-node big data clusters on the IBM SoftLayer cloud. Once provisioned, these clusters can be managed and accessed from this same service.

Big Data: Introducing BigInsights, IBM's Hadoop- and Spark-based analytical platform
  • Cloudant® NoSQL Database is a NoSQL Database as a Service (DBaaS). It's built from the ground up to scale globally, run non-stop, and handle a wide variety of data types like JSON, full-text, and geospatial. Cloudant NoSQL DB is an operational data store optimized to handle concurrent reads and writes and provide high availability and data durability.
  • dashDB™ stores relational data, including special types such as geospatial data, which you can then analyze with SQL or advanced built-in analytics such as predictive analytics, data mining, analytics with R, and geospatial analytics. You can leverage in-memory database technology with both columnar and row-based tables. The dashDB web console handles common data management tasks, such as loading data, and analytics tasks such as running queries and R scripts.

IBM BigInsights: Smart Analytics for Big Data

IBM product support for big data and analytics solutions in the cloud

Now that we've reviewed the component model for a big data and analytics solution in the cloud, let's look at how IBM products can be used to implement one. In previous sections, we highlighted IBM's end-to-end solution for deploying a big data and analytics solution in the cloud.
The figure below shows how IBM products map to specific components in the reference architecture.

Figure 5. IBM product mapping

ML, AI and IBM Watson - 101 for Business

IBM product support for data lakes using cloud architecture capabilities

The following images show how IBM products can be used to implement a data lake solution. In previous sections, we highlighted IBM's end-to-end solution for deploying data lake solutions using cloud computing.

Benefits of Transferring Real-Time Data to Hadoop at Scale

Mapping on-premises and SoftLayer products to specific capabilities

Figure 7 shows how IBM products can be used to run a data lake in the cloud.

Figure 7. IBM product mapping for a data lake using cloud computing

What is Big Data University?

Big Data Scotland 2017

Big Data Scotland is an annual data analytics conference held in Scotland. Run by DIGIT in association with The Data Lab, it is free for delegates to attend. The conference is geared towards senior technologists and business leaders and aims to provide a unique forum for knowledge exchange, discussion and cross-pollination.

The programme will explore the evolution of data analytics; looking at key tools and techniques and how these can be applied to deliver practical insight and value. Presentations will span a wide array of topics from Data Wrangling and Visualisation to AI, Chatbots and Industry 4.0.


More Information:
27 November 2017

Oracle Introduces Autonomous Database Cloud, Robotic Security

Oracle Introduces Autonomous Data Warehouse Cloud that has Demonstrated Performance of 10x Faster at Half the Cost of Amazon

The World’s #1 Database Is Now the World’s First Self-Driving Database.   

Oracle Sets New Standard with World’s First Autonomous Database

Journey to Autonomous Database

Oracle is revolutionizing how data is managed with the introduction of the world’s first "self-driving" database. This ground-breaking Oracle Database technology automates management to deliver unprecedented availability, performance, and security—at a significantly lower cost.

Powered by Oracle Database 18c, the next generation of the industry-leading database, Oracle Autonomous Database Cloud offers total automation based on machine learning and eliminates human labor, human error, and manual tuning.

Get unmatched reliability and performance at half the cost.

  • No Human Labor: Database automatically upgrades, patches, and tunes itself while running; automates security updates with no downtime window required.
  • No Human Error: SLA guarantees 99.995% reliability and availability, which minimizes costly planned and unplanned downtime to less than 30 minutes a year.
  • No Manual Performance Tuning: Database consumes less compute and storage because of machine learning and automatic compression. Combined with lower manual admin costs, Oracle offers even bigger cost savings.
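The SLA figures above are easy to sanity-check: 99.995% availability leaves 0.005% of the year as permissible downtime, which works out to just under 30 minutes.

```python
# Sanity check of the SLA arithmetic: 99.995% availability allows 0.005%
# downtime, which over a 365-day year is just under the 30-minute figure.

availability = 0.99995
minutes_per_year = 365 * 24 * 60        # 525,600
allowed_downtime = (1 - availability) * minutes_per_year

print(round(allowed_downtime, 1))       # ~26.3 minutes per year
```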

At Oracle OpenWorld 2017, Oracle Chairman of the Board and CTO Larry Ellison unveiled his vision for the world’s first autonomous database cloud.

Oracle OpenWorld 2017 Review (31st October 2017 - 250 slides)

Powered by Oracle Database 18c, the next generation of the industry-leading database, Oracle Autonomous Database Cloud uses ground-breaking machine learning to enable automation that eliminates human labor, human error and manual tuning, to enable unprecedented availability, high performance and security at a much lower cost.

Oracle database cloud architecture | Video tutorial

“This is the most important thing we’ve done in a long, long time,” said Ellison. “The automation does everything. We can guarantee availability of 99.995 percent, less than 30 minutes of planned or unplanned downtime.”

The Oracle Autonomous Database Cloud eliminates the human labor associated with tuning, patching, updating and maintaining the database and includes the following capabilities:

Self-Driving: Provides continuous adaptive performance tuning based on machine learning. Automatically upgrades and patches itself while running. Automatically applies security updates while running to protect against cyberattacks.
Self-Scaling: Instantly resizes compute and storage without downtime. Cost savings are multiplied because Oracle Autonomous Database Cloud consumes less compute and storage than Amazon, with lower manual administration costs.
Self-Repairing: Provides automated protection from downtime. SLA guarantees 99.995 percent reliability and availability, which reduces costly planned and unplanned downtime to less than 30 minutes per year.

The Oracle Autonomous Database Cloud handles many different workload styles, including transactions, mixed workloads, data warehouses, graph analytics, departmental applications, document stores and IoT. The first Autonomous Database Cloud offering, for data warehouse workloads, is planned to be available in calendar year 2017.

Oracle Autonomous Data Warehouse Cloud

Oracle Autonomous Data Warehouse Cloud is a next-generation cloud service built on the self-driving Oracle Autonomous Database technology using machine learning to deliver unprecedented performance, reliability and ease of deployment for data warehouses. As an autonomous cloud service, it eliminates error-prone manual management tasks and frees up DBA resources, which can now be applied to implementing more strategic business projects.

“Every organization is trying to leverage the overwhelming amount of data generated in our digital economy,” said Carl Olofson, research vice president, IDC. “With a history of established leadership in the database software market segment, it is no surprise that Oracle is pioneering a next-generation data management platform. Oracle Autonomous Data Warehouse Cloud is designed to deliver industry-leading database technology performance with unmatched flexibility, enterprise scale and simplicity. The intent is to ensure that businesses get more value from their data and modernize how data is managed.”

Highlights of the Oracle Autonomous Data Warehouse Cloud include:

Simplicity: Unlike traditional cloud services with complex, manual configurations that require a database expert to specify data distribution keys and sort keys, build indexes, reorganize data or adjust compression, Oracle Autonomous Data Warehouse Cloud is a simple “load and go” service. Users specify tables, load data and then run their workloads in a matter of seconds—no manual tuning is needed.
Industry-Leading Performance: Unlike traditional cloud services, which use generic compute shapes for database cloud services, Oracle Autonomous Data Warehouse Cloud is built on the high-performance Oracle Exadata platform. Performance is further enhanced by fully-integrated machine learning algorithms which drive automatic caching, adaptive indexing and advanced compression.
Instant Elasticity: Oracle Autonomous Data Warehouse Cloud allocates new data warehouses of any size in seconds and scales compute and storage resources independently of one another with no downtime. Elasticity enables customers to pay for exactly the resources that the database workloads require as they grow and shrink.

Oracle Database 18c

Oracle OpenWorld 2017: Keynote by Larry Ellison

Oracle Autonomous Database Cloud is powered by the next generation of the world’s #1 database, Oracle Database 18c. Oracle Database 18c delivers breakthrough automation capabilities, as well as greatly enhanced OLTP, analytics and consolidation technologies.

If everything Oracle CTO and co-founder Larry Ellison said the evening of Oct. 1 is true, then the company's board of directors, investors and stockholders had better have a meeting and find out whether Oracle will actually be able to make a profit from this new-fangled cloud-service business.

Highly Automated IT

Ellison spent a good portion of his opening keynote at Oracle OpenWorld 2017 demonstrating how "cheap" Oracle's in-cloud workload processing is versus Amazon Web Services' RDS (Relational Database Service). He explained in a series of demos that because Oracle's cloud service is anywhere from 6 to 15 times faster than AWS at processing the same workload, Oracle is thus 6 to 15 times "cheaper" than AWS.

Case in point: For the same "market research" workload, Ellison pitted an 8-node Oracle Autonomous Data Warehouse Cloud instance against a similar 8-node Amazon Redshift (DS2.xlarge) cluster. The same eight queries were fed to both cloud services.

Partner Webcast – Data Management Platform for Innovation

Timers were started. Oracle claimed AWS's processors took 244 seconds to do the job, costing the user 27 cents' worth of computing time. Oracle then claimed its own service took a mere 38 seconds, costing the user 2 cents' worth of cloud time. This is going to be the new normal for the super-fast new DB, Ellison contended.
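Those demo figures imply the "6 to 15 times" claims directly: the elapsed times give the speed ratio, and the quoted costs give the price ratio.

```python
# Working through the demo figures quoted above: elapsed time gives the
# speed ratio, cost gives the price ratio.

aws_seconds, aws_cents = 244, 27
oracle_seconds, oracle_cents = 38, 2

speedup = aws_seconds / oracle_seconds      # ~6.4x faster
cost_ratio = aws_cents / oracle_cents       # 13.5x cheaper

print(round(speedup, 1), cost_ratio)
```

Both ratios fall inside the "6 to 15 times" range Ellison claimed for this particular benchmark.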

Larry Ellison introduced not only the aforementioned "world's first autonomous database cloud," but also an as-yet unnamed automated security product, which he said he would detail later in the week. He also used some of his time onstage to skewer Equifax following the security breach it suffered earlier this year in which more than 140 million people had their personal credit information compromised. We'll get to that in a minute.

Oracle Technology Monthly Oktober 2017

Larry Ellison said both the Autonomous Database Cloud and the security system use machine learning for automation to eliminate human labor, human error and manual tuning, and to enable availability, high performance and security at a much lower cost than competitors that include AWS.

“These systems are highly, highly automated--this means we do everything we can to avoid human intervention," Larry Ellison said. "This is the most important thing we've done in a long, long time. The automation does everything.

Security is 'Robotic,' 'Autonomous'

"They're robotic; they're autonomous. In security, it's our computers versus their (hackers') computers. It's cyber warfare. We have to have a lot better computer systems, a lot more automation if we're going to defend our data."

Ellison said the automated security system would scan the entire system 24/7, know immediately when an intruder gets into it, and would be able to stop and isolate the intruder faster than any human can do it. He didn't mention that there are already systems out there that do the same thing, such as Vectra Networks, Vera and others.

MOUG17 Keynote: What's New from Oracle Database Development

On the DB side, Ellison said the database cloud eliminates human labor that touches tuning, patching, updating and maintaining the database. The company listed the following capabilities:

Self-Driving: Provides continuous adaptive performance tuning based on machine learning. Automatically upgrades and patches itself while running. Automatically applies security updates while running to protect against cyberattacks.
Self-Scaling: Instantly resizes compute and storage without downtime. Cost savings are multiplied because Oracle Autonomous Database Cloud consumes less compute and storage than Amazon, with lower manual administration costs.

Self-Repairing: Provides automated protection from downtime. SLA guarantees 99.995 percent reliability and availability, which reduces costly planned and unplanned downtime to less than 30 minutes per year.
Oracle said the database cloud is designed to handle a high number of different workloads, including transactions, mixed workloads, data warehouses, graph analytics, departmental applications, document stores and IoT.

The first Autonomous Database Cloud offering, for data warehouse workloads, is planned to be available in calendar year 2017, Ellison said.

Details on Oracle's Autonomous Data Warehouse Cloud

Oracle Autonomous Data Warehouse Cloud ostensibly eliminates error-prone manual management tasks and frees up DBA resources, which can now be applied to implementing more strategic business projects.

Key features, according to Oracle, include:

  • Simplicity: Unlike traditional cloud services with complex, manual configurations that require a database expert to specify data distribution keys and sort keys, build indexes, reorganize data or adjust compression, Oracle Autonomous Data Warehouse Cloud is a simple “load and go” service. Users specify tables, load data and then run their workloads in a matter of seconds—no manual tuning is needed.
  • Performance: Unlike conventional cloud services, which use generic compute shapes for database cloud services, Oracle Autonomous Data Warehouse Cloud is built on the high-performance Oracle Exadata platform. Performance is further enhanced by fully-integrated machine learning algorithms which drive automatic caching, adaptive indexing and advanced compression.
  • Elasticity: Oracle Autonomous Data Warehouse Cloud allocates new data warehouses of any size in seconds and scales compute and storage resources independently of one another with no downtime. Elasticity enables customers to pay for exactly the resources that the database workloads require as they grow and shrink.
  • Oracle Database 18c: Oracle Autonomous Database Cloud is powered by the company's latest database, Oracle Database 18c, which offers new automation capabilities in addition to enhanced OLTP, analytics and consolidation technologies.

Ellison on Equifax, Security

"You've got to know (about a breach) during the reconnaissance phase of a cyber attack," Ellison said, "when someone is nosing around in your computer system--trying to steal a password, trying to steal someone's identity. As they come in and start looking around, you'd better detect that that's happening."

Ellison chastised Equifax for not patching its system in time.

"I know it's a shock, but there was a patch available for Equifax, but somebody didn't apply it. I saw where the CEO lost his job--which doesn't bother me now, I'm not a CEO. That's a risky job those guys have," Ellison said with a slight laugh. "But no, I'd lose my job, too. It's a clean sweep (with a breach like Equifax's); directors aren't safe, nobody's safe when something like that happens."

This is going to get a lot worse before it gets better, Ellison said.

"People are going to get better at stealing data; we have to get better at protecting it," he said.

The Oracle Autonomous Cloud will become available on-premises or in the Oracle public or private clouds for data warehousing production workloads in December. It will become available for other specific workloads in June 2018.

Databases Are Moving to Cloud, Are You?

With Oracle CTO Larry Ellison's keynote at Oracle OpenWorld 2017 (#oow17) about the world's first autonomous database, 18c, I received hundreds of messages: some worried ('Is the DBA career over?'), while others were ready to prepare for now and the future ('What should Oracle DBAs do to prepare for the cloud?').

MOUG17: DBA 2.0 is Dead; Long Live DBA 3.0 (How to Prepare!)

First of all, as a DBA you have nothing to worry about. There will still be a role for DBAs, but it will shift away from routine tasks like install, patch, and upgrade toward more innovative tasks like architecture design, deployment, security, integration, and migration to the cloud.

Databases are already moving to the cloud, and over the next few years more and more databases will move there (mainly in the PaaS space; if you are not familiar with SaaS, PaaS & IaaS, then check here).

In my view, every change brings new opportunity, and here is your chance to learn about this new role of Oracle Cloud DBA to stay ahead in your professional career, earn more, and enjoy what you do.

Role of Oracle Cloud DBA

One of the most common questions asked in our private Facebook group dedicated to Oracle Cloud, and by those who join our Oracle Database Cloud Administration (Cloud DBA) training, is about the role of the Cloud DBA.

Oracle Database Cloud: DBCS architecture for DBAs

Looking at so many requests, I created a video on how the role of the DBA changes with the cloud, and how roles and responsibilities change as you upgrade yourself from DBA to Cloud DBA.

Cloud DBA: Role of DBA in Cloud

As shown in the video, these are the tasks you will perform as an Oracle Cloud DBA:

  • Design and specification of the database, i.e. CPU, memory, disk space, future growth, high availability, and disaster recovery (yes, even in the cloud you have to consider HA and DR).
  • Creating and configuring an Oracle database is very simple in the cloud, with the click of a button or with a REST API and JSON, but someone needs to invoke those scripts or clicks (the Cloud DBA performs this task).
  • As an Oracle Cloud DBA, you need to learn new tools for start/stop, i.e. DBaaSCLI (Database as a Service Command Line Interface) or the Database Service Console.
  • You still have to patch, but using new tools like DBaaSCLI with DBPATCHM, or RACCLI.
  • You still have to do backup and recovery, using new tools and the Oracle Storage Service (OSS).
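The REST-API-and-JSON provisioning mentioned in the list above can be sketched as follows. Note that the endpoint path and field names here are hypothetical stand-ins, not the actual Oracle Cloud REST API; always consult the service's REST documentation for the real contract.

```python
import json

# Illustrative only: the endpoint and field names below are hypothetical
# stand-ins for a DBaaS provisioning call, not the real Oracle Cloud API.

def build_create_instance_request(name, shape, storage_gb):
    """Assemble the JSON body a Cloud DBA might POST to provision a database."""
    body = {
        "serviceName": name,
        "shape": shape,              # a CPU/memory shape identifier (assumed)
        "usableStorage": storage_gb,
        "backupDestination": "OSS",  # Oracle Storage Service, per the list above
    }
    return "POST", "/paas/service/dbcs/instances", json.dumps(body)

method, path, payload = build_create_instance_request("HRDB", "oc4", 256)
print(method, path)
```

The point is the shift in the DBA's work: instead of running installers, the Cloud DBA scripts and reviews declarative requests like this one.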

Patching an Oracle Database Cloud Service

  • You will still be learning about migration to the cloud (Lift & Shift), and this is where you can expect a lot of work migrating existing on-premises databases to the cloud.

Oracle Multitenant: Isolation and Agility with Economies of Scale

  • You will still be learning about disaster recovery (Data Guard) in the cloud and setting up disaster recovery (DR) on the cloud for data center failover (yes, this is not available out of the box, and you as Cloud DBA will have to set it up).

Oracle RAC on Oracle Database Cloud Bare Metal Services

  • You will still be configuring RAC on the cloud, or deploying a maximum availability architecture. Check my video blog on Setting Up RAC Database in Oracle Cloud.
  • You will still be configuring OEM CC 13c Hybrid Cloud Management to manage both on-premises and cloud databases, or using Oracle Management Cloud (OMC).

Oracle Bare Metal Cloud Services overview

Experience of being an Oracle Bare Metal Cloud DBA by Satyendra Pasalapudi

Pluggable Databases on Oracle Cloud

More Information:
24 October 2017

SQL Server 2017 on Linux

SQL Server 2017 on Linux

Microsoft has heard from you, our customers, that your data estate gets bigger, more complicated, and more diverse every year. You need solutions that work across platforms, whether on-premises or in the cloud, and that meet your data workloads where they are. Embracing this choice, earlier today we announced the general availability of SQL Server 2017 on Linux, Windows, and Docker on October 2, 2017.

Today, Microsoft and Red Hat are delivering on choice by announcing the availability of Microsoft SQL Server 2017 on Red Hat Enterprise Linux, the world’s leading enterprise Linux platform. As Microsoft’s reference Linux platform for SQL Server, Red Hat Enterprise Linux extends the enterprise database and analytics capabilities of SQL Server by delivering it on the industry-leading platform for performance, security features, stability, reliability, and manageability.

Customers will be able to bring the performance and security features of SQL Server to Linux workloads. SQL Server 2017 on Red Hat Enterprise Linux delivers mission-critical OLTP database capabilities and enterprise data warehousing with in-memory technology across workloads. SQL Server 2017 embraces developers by delivering choice in language and platform, with container support that seamlessly facilitates DevOps scenarios. The new release of SQL Server delivers all of this, built-in. And, it runs wherever you want, whether in your datacenter, in Azure virtual machines, or in containers running on Red Hat OpenShift Container Platform!

Also, from October 2, 2017, through June 30, 2018, we are launching a SQL Server on Red Hat Enterprise Linux offer to help with upgrades and migrations. This offer provides up to 30% off SQL Server 2017 through an annual subscription. When customers purchase a new Red Hat Enterprise Linux subscription to support their SQL Server, they will be eligible for another 30% off their Red Hat Enterprise Linux subscription price.

In addition to discounts on SQL Server and Red Hat Enterprise Linux, all of this is backed by integrated support from Microsoft and Red Hat.

Bootcamp 2017 - SQL Server on Linux

SQL Server 2017 is generally available for purchase and download! The new release is available right now for evaluation or purchase through the Microsoft Store, and will be available to Volume Licensing customers later today. Customers now have the flexibility, for the first time ever, to run industry-leading SQL Server on their choice of Linux, Docker Enterprise Edition-certified containers and, of course, Windows Server. It’s a stride forward for our modern and hybrid data platform across on-premises and cloud.

Everything you need to know about SQL Server 2017

In the 18 months since announcing our intent to bring SQL Server to Linux, we’ve been focused on making SQL Server perform and scale to the industry-leading levels customers expect from SQL Server, making SQL Server feel familiar yet native to Linux, and ensuring compatibility between SQL Server on Windows and Linux. With all the enterprise database features you rely on, from Active Directory authentication, to encryption, to Always On availability groups, to record-breaking performance, SQL Server is at parity on Windows and Linux. We have also brought SQL Server Integration Services to Linux so that you can perform data integration just like on Windows. SQL Server 2017 supports Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu.

There are a number of new features for SQL Server that we think make this the best release ever. Here are just a few:

  • Container support seamlessly facilitates your development and DevOps scenarios by enabling you to quickly spin up SQL Server containers and get rid of them when you are finished. SQL Server supports Docker Enterprise Edition, Kubernetes and OpenShift container platforms.
  • AI with R and Python analytics enables you to build intelligent apps using scalable, GPU-accelerated, parallelized R and now Python analytics running in the database.
  • Graph data analysis will enable customers to use graph data storage and query language extensions for graph-native query syntax in order to discover new kinds of relationships in highly interconnected data.
  • Adaptive Query Processing is a new family of features in SQL Server that bring intelligence to database performance. For example, Adaptive Memory Grants in SQL Server track and learn from how much memory is used by a given query to right-size memory grants.
  • Automatic Plan Correction ensures continuous performance by finding and fixing performance regressions.
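To illustrate the graph support mentioned above, the sketch below (table and column names are hypothetical) creates node and edge tables and queries them with the new MATCH syntax:

```sql
-- Hypothetical example: people and friendships as graph node/edge tables
CREATE TABLE Person (ID INT PRIMARY KEY, name NVARCHAR(100)) AS NODE;
CREATE TABLE friendOf AS EDGE;

-- Find the people Alice is directly connected to via friendOf edges
SELECT p2.name
FROM Person AS p1, friendOf AS f, Person AS p2
WHERE MATCH(p1-(f)->p2)
  AND p1.name = N'Alice';
```

The same MATCH pattern extends to multi-hop traversals, which is where graph storage pays off over self-joins on relational tables.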

Above and beyond these top-line features, there are more enhancements that you haven’t heard as much about, but we hope will truly delight you:

  • Resumable online index rebuild lets you stop and start index maintenance. This gives you the ability to optimize index performance by re-indexing more frequently – without having to wait for a long maintenance window. It also means you can pick up right where you left off in the event of a disruption to database service.
  • LOB compression in columnstore indexes. Previously, it was difficult to include data which contained LOBs in a columnstore index due to size. Now those LOBs can be compressed, making LOBs easier to work with and broadening the applicability of the columnstore feature.
  • Clusterless availability groups enable you to scale out reads by building an Always On availability group without having to use an underlying cluster.
  • Continued improvement to key performance features such as columnstore, in-memory OLTP, and the query optimizer to drive new record-setting performance. We’ll share some even more exciting perf and scale numbers soon!
  • Native scoring in T-SQL lets you score operational data using advanced analytics in near real-time because you don’t have to load the Machine Learning libraries to access your model.
  • SQL Server Integration Services (SSIS) scale-out enables you to speed package execution performance by distributing execution to multiple machines. These packages are executed in parallel, in a scale-out mode.
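As a sketch of native scoring (the model and table names here are hypothetical), a serialized model stored in a table can be applied to operational data with the T-SQL PREDICT function, without loading the machine learning libraries:

```sql
-- Fetch a previously trained, serialized model (hypothetical storage table)
DECLARE @model VARBINARY(MAX) =
    (SELECT model FROM dbo.Models WHERE model_name = 'loan_risk');

-- Score rows in near real time inside the database engine
SELECT d.loan_id, p.Score
FROM PREDICT(MODEL = @model, DATA = dbo.Loans AS d)
WITH (Score FLOAT) AS p;
```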

What’s new in SQL Server 2017

Many enhancements were made to SQL Server Analysis Services including:
  • Modern “get data” experience with a number of new connectors like Oracle, MySQL, Sybase, Teradata, and more to come. New transformations enable mashing up of the data being ingested into tabular models.
  • Object-level security for tables and columns.
  • Detail rows and ragged hierarchies support, enabling additional drill-down capabilities for your tabular models.
Enhancements were made to SQL Server Reporting Services as well, including:
  • Lightweight installer with zero impact on your SQL Server databases or other SQL Server features.
  • REST API for programmatic access to reports, KPIs, data sources, and more.
  • Report comments, enabling users to engage in discussion about reports.

In addition to the ability to upgrade existing SQL Server to 2017, there are a few more benefits to renewing your software assurance:

  • Machine Learning Server for Hadoop, formerly R Server, brings R and Python based, scalable analytics to Hadoop and Spark environments, and it is now available to SQL Server Enterprise edition customers as a Software Assurance benefit.
  • SQL Server Enterprise Edition Software Assurance benefits also enable you to run Power BI Report Server. Power BI Report Server enables self-service BI and enterprise reporting, all in one solution by allowing you to manage your SQL Server Reporting Services (SSRS) reports alongside your Power BI reports. Power BI Report Server is also included with the purchase of Power BI Premium.
  • Lastly, but importantly, we are also modernizing how we service SQL Server. Please see our release management blog for all the details on what to expect for servicing SQL Server 2017 and beyond.

Microsoft will continue to invest in SQL Server 2017 and its cloud-first development model, to ensure that the pace of innovation stays fast.

SQL Server 2017 sets the standard when it comes to speed and performance. Building on the incredible work of SQL Server 2016 (see the blog series It Just Runs Faster), SQL Server 2017 is fast: built-in, simple, and online. Maybe you caught my presentation at Microsoft Ignite where I demonstrated 1 million transactions per minute on my laptop using the popular tool HammerDB¹, simply by installing SQL Server out of the box with no configuration changes (with the HammerDB client and SQL Server on the same machine!).

SQL Server 2017 on Linux Introduction

Consider for a minute all the built-in capabilities that power the speed of SQL Server. From a SQLOS scheduling engine that minimizes OS context switches to read-ahead scanning to automatic scaling as you add NUMA and CPUs. And we parallelize everything! From queries to indexes to statistics to backups to recovery to background threads like LogWriter. We partition and parallelize our engine to scale from your laptop to the biggest servers in the world.

Like the enhancements described in It Just Runs Faster for SQL Server 2016, we are always looking to tune our engine for speed, based on customer experiences. Take, for example, indirect checkpoint, which is designed to provide a more predictable recovery time for a database. We boosted the scalability of this feature based on customer feedback. We also made scalability improvements for parallel scanning and consistency check performance. No knobs required. Just built in for speed.

One of the coolest performance aspects of built-in speed is online operations. We know you need to perform maintenance tasks beyond just running queries while keeping your application up and running, so we support online backups, consistency checks, and index rebuilds. SQL Server 2017 enhances this functionality with resumable online index rebuilds, allowing you to pause an index rebuild and resume it at any time (even after a failure).
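A resumable rebuild can be sketched like this (index and table names are hypothetical):

```sql
-- Start an online, resumable rebuild capped at 60 minutes of run time
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause it during peak hours...
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders PAUSE;

-- ...and pick up right where it left off later
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders RESUME;
```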

Microsoft SQL Server 2017 Deep Dive

SQL Server 2017 is faster than you think. It was designed from the beginning to run fast on popular Linux distributions such as Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu, whether on your server or in a Docker container. Don’t believe it? Check out our world-record 1TB TPC-H benchmark result (non-clustered) for SQL Server on Red Hat Enterprise Linux. Even though this is our first release on Linux, we know how to configure and run SQL Server on Linux for maximum speed. Read our best practices guide for performance settings on Linux in our documentation. We know it performs well because our customers tell us. Read the amazing story of dv01 and how SQL Server on Linux exceeded their performance expectations as they migrated from PostgreSQL.

SQL Server 2017 Deep Dive - @Ignite 2017

One of the key technologies to achieve a result like this is columnstore indexes. This is one of the most powerful features of SQL Server for high-speed analytic queries and large databases. Columnstore indexes boost performance by organizing data by column rather than by row as traditional indexes do, compressing data to reduce memory and disk footprint, filtering scans automatically through rowgroup elimination, and processing queries in batches. SQL Server runs at warp speed for data warehouses, and columnstore is the fuel. At Microsoft Ignite, I demonstrated how columnstore indexes can make Power BI with DirectQuery against SQL Server faster, handling the self-service, ad-hoc nature of Power BI queries.
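Creating a columnstore index takes a single statement. The sketch below, against a hypothetical fact table, builds a clustered columnstore index so that data is compressed by column, rowgroups can be eliminated during scans, and aggregations run in batch mode:

```sql
-- Convert a hypothetical fact table to clustered columnstore storage
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- A typical analytic query that benefits from batch-mode processing
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey;
```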

Microsoft Ignite 2017 - SQL Server on Kubernetes, Swarm, and Open Shift

SQL Server also excels at transaction processing, the heart of many top enterprise workloads. Got RAM? Not only does columnstore use in-memory technologies to achieve speed, but our In-Memory OLTP feature focuses on optimized access to memory-optimized tables. This feature is named In-Memory OLTP, but it can be so much more: ETL staging tables, IoT workloads, memory-optimized table types (no more tempdb!), and “caching” tables. One of our customers was able to achieve a throughput of 1.2M batch requests/sec using SCHEMA_ONLY memory-optimized tables. To really boost transaction processing, also consider using SQL Server’s support for Persistent Memory (NVDIMM-N) and our optimization for transaction log performance (get ready for WRITELOG waits = 0!). SQL Server 2017 supports any Persistent Memory technology supported on Windows Server 2016 and later releases.
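A SCHEMA_ONLY memory-optimized table, the kind behind the throughput number above, can be sketched as follows (the table name is hypothetical, and the database must already contain a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Non-durable "caching" table: the schema survives restarts, the data does not
CREATE TABLE dbo.SessionCache
(
    SessionID INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    Payload   NVARCHAR(4000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Because nothing is written to the transaction log for SCHEMA_ONLY tables, inserts and deletes avoid log waits entirely, which is what makes them attractive for staging and caching scenarios.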

Many customers I talk to have great performance when they first deploy SQL Server and their application; keeping SQL Server fast and tuned is more of the challenge. SQL Server 2017 comes with features to keep you fast and tuned automatically and adaptively. Our query processing engine has all types of capabilities to build query plans that maximize the performance of your queries. We have created a new feature family in SQL Server 2017 to make it smarter, called Adaptive Query Processing. Imagine running a query that is not quite the speed you expect because of an insufficient memory grant (a thorn in the side of many SQL Server users, as it can lead to a spill to tempdb). With Adaptive Query Processing, future executions of this query will receive a corrected memory grant that avoids the spill, all without requiring a recompilation of the query plan. Adaptive Query Processing handles other scenarios as well, such as adaptive joins and interleaved execution of table-valued functions.
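The Adaptive Query Processing family requires no query changes; it is enabled simply by running a database under the SQL Server 2017 compatibility level:

```sql
-- Compatibility level 140 enables the Adaptive Query Processing features
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;
```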

Choosing technologies for a big data solution in the cloud

Another way to stay tuned is the amazing feature we added in SQL Server 2016 called Query Store. Query Store provides built-in capabilities to track and analyze query performance, all stored in your database. For SQL Server 2017, we made tuning adjustments to Query Store to make it more efficient, based on learnings from our Azure SQL Database service, where Query Store is enabled for millions of databases. We added wait statistics, so now you have an end-to-end picture of query performance. Perhaps the most compelling enhancement in SQL Server 2017, though, is Automatic Tuning. Parameter sniffing got you down? Automatic Tuning uses Query Store to detect query plan regressions and automatically forces a previous plan that used to run fast. What I love about this feature is that even if you don’t have it turned on, you can see the recommendations it has generated about plan regressions. You can then either manually force plans that you feel have regressed or turn on the feature to have SQL Server do it for you.
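Turning on automatic plan correction, and peeking at the recommendations it produces even while it is off, looks like this:

```sql
-- Let SQL Server force the last known good plan when it detects a regression
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Inspect detected plan regressions and the suggested fixes
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```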

Introduction to PolyBase

SQL Server 2017 is the fastest database everywhere you need it, whether on your laptop, in your private cloud, or in our Azure public cloud infrastructure. Whether it is running on Linux, Windows, or Docker containers, we have the speed to power any workload your application needs.

As I mentioned above, back in April we announced our world record TPC-H 1TB data warehousing workload (non-clustered) for SQL Server 2017 running on an HPE ProLiant DL380 Gen9 using Red Hat Enterprise Linux².

Perhaps you missed the announcement in June 2017 of a new world record TPC-E benchmark result³ on SQL Server 2017 on Windows Server 2016, running on a Lenovo ThinkSystem SR650, continuing to demonstrate our leadership in database performance. This benchmark, running on a two-socket system using Intel’s Xeon Scalable Processors, set a new standard for price/performance, becoming the first TPC-E benchmark result ever to come in under $100/tpsE.

We continued to show our proven speed for analytics by announcing in July 2017 a new TPC-H 10TB (non-clustered) world record benchmark result⁴ of 1,336,109 QphH on Windows Server 2016, using a Lenovo ThinkSystem SR950 system with 6TB of RAM and 224 logical CPUs.

While benchmarks can show the true speed of SQL Server, we believe it can perform well with your workload and maximize the computing power of your server. Perhaps you caught the session at Ignite where my colleague Travis Wright showed how we can scan a 180 billion row table (from a 30TB database) in our labs in under 20 seconds, driving 480 CPUs to 100% capacity. And if you don’t believe SQL Server is deployed in some of the biggest installations and servers in the world, I recently polled some of our field engineers, the SQL Customer Advisory Team, and MVPs, asking them for their largest SQL Server deployments. Over 30 people responded, and the average footprint of these installations was 3TB+ of RAM on machines with 128 physical cores. Keep in mind that SQL Server can theoretically scale to 24TB of RAM on Windows and 64TB on Linux, and it supports the maximum CPUs of those systems (64 sockets with unlimited cores on Windows and 5120 logical CPUs on Linux). Look for more practical and fun demonstrations of the speed of SQL Server in the future.

Microsoft cloud big data strategy

It could be that you are consolidating your deployments and want to run SQL Server in an Azure virtual machine, but are not sure the capacity is there for your performance needs. Consider that Azure virtual machines now include the M-Series, which supports up to 128 vCPUs, 2TB of RAM, and 64 data disks with a capacity of 160,000 IOPS. It could be that you want to scale out your read workload with availability group secondary replicas but don’t want to invest in failover clustering. SQL Server 2017 introduces read-scale availability groups without clustering, supported on both Windows and Linux. Two other very nice performance features new to SQL Server 2017 are SSIS Scale Out, for those with data-loading needs, and native scoring, which integrates machine learning algorithms into the SQL Server engine for maximum performance.
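A read-scale availability group without an underlying cluster is declared with CLUSTER_TYPE = NONE. The sketch below (server, database, and endpoint names are all hypothetical) defines a primary plus one read-only secondary:

```sql
-- Create a clusterless availability group for read scale-out
CREATE AVAILABILITY GROUP [ReadScaleAG]
WITH (CLUSTER_TYPE = NONE)
FOR DATABASE [SalesDB]
REPLICA ON
    N'node1' WITH (ENDPOINT_URL = N'tcp://node1:5022',
                   AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = MANUAL,
                   SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
    N'node2' WITH (ENDPOINT_URL = N'tcp://node2:5022',
                   AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                   FAILOVER_MODE = MANUAL,
                   SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
```

Because there is no cluster, there is no automatic failover; this configuration targets read scale-out rather than high availability.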

Microsoft Technologies for Data Science 201612

SQL Server 2017 brings to the database market a unique combination of features and speed: a database engine that is fast out of the box, built with the power to scale, and even faster when taking advantage of technologies like columnstore indexes and In-Memory OLTP; an engine that provides automation and adapts to keep you fast and tuned; and the fastest database everywhere you need it.

Machine learning services with SQL Server 2017

More Information:

26 September 2017

Oracle Sparc M8 and Oracle Advanced Analytics

Oracle SPARC M8 released with 32 cores, 256 threads, and 5.0 GHz clock speed

Oracle announced its eighth generation SPARC platform, delivering new levels of security capabilities, performance, and availability for critical customer workloads. Powered by the new SPARC M8 microprocessor, new Oracle systems and IaaS deliver a modern enterprise platform, including proven Software in Silicon with new v2 advancements, enabling customers to cost-effectively deploy their most critical business applications and scale-out application environments with extreme performance both on-premises and in Oracle Cloud.

Oracle’s Advanced Analytics & Machine Learning 12.2c New Features & Road Map; Bigger, Better, Faster, More!

SPARC M8 processor-based systems, including the Oracle SuperCluster M8 engineered systems and SPARC T8 and M8 servers, are designed to seamlessly integrate with existing infrastructures and include fully integrated virtualization and management for private cloud. All existing commercial and custom applications will run on SPARC M8 systems unchanged, with new levels of performance, security capabilities, and availability. The SPARC M8 processor with Software in Silicon v2 extends the industry's first Silicon Secured Memory, which provides always-on hardware-based memory protection for advanced intrusion protection, as well as end-to-end encryption and Data Analytics Accelerators (DAX) with open APIs for breakthrough performance and efficiency running database analytics and Java streams processing. The Oracle Cloud SPARC Dedicated Compute service will also be updated with the SPARC M8 processor.

Spark SQL: Another 16x Faster After Tungsten: Spark Summit East talk by Brad Carlile

"Oracle has long been a pioneer in engineering software and hardware together to secure high-performance infrastructure for any workload of any size," said Edward Screven, chief corporate architect, Oracle. "SPARC was already the fastest, most secure processor in the world for running Oracle Database and Java. SPARC M8 extends that lead even further."

The SPARC M8 processor offers security enhancements delivering 2x faster encryption and 2x faster hashing than x86 and 2x faster than SPARC M7 microprocessors. The SPARC M8 processor's unique design also provides always-on security by default and built-in protection of in-memory data structures from hacks and programming errors.

SPARC M8's silicon innovation provides new levels of performance and efficiency across all workloads, including: 
  • Database: Engineered to run Oracle Database faster than any other microprocessor, SPARC M8 delivers 2x faster OLTP performance per core than x86 and 1.4x faster than M7 microprocessors, as well as up to 7x faster database analytics than x86.
  • Java: SPARC M8 delivers 2x better Java performance than x86 and 1.3x better than M7 microprocessors. DAX v2 delivers 8x more efficient Java streams processing, improving overall application performance.
  • In-Memory Analytics: The innovative new processor delivers 7x more queries per minute (QPM) per core than x86 for database analytics.

Oracle is committed to delivering the latest in SPARC and Solaris technologies and servers to its global customers. Oracle's long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034.

The Oracle SPARC M8 is available in:

  • Oracle SPARC M8
  • Oracle SPARC T8-1 server
  • Oracle SPARC T8-2 server
  • Oracle SPARC T8-4 server
  • Oracle SPARC M8-8 server
  • Oracle SuperCluster M8 engineered system

More information in the Oracle SPARC M8 Launch Webcast: http://www.oracle.com/us/corporate/events/next-gen-secure-infrastructure-platform/index.html

About Oracle 

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.

Big data analytics using oracle advanced analytics and big data sql

The Oracle SPARC M8 is now out and is a monster of a chip. Each SPARC M8 processor supports up to 32 cores and 64MB of L3 cache. Each core can handle 8 threads, for up to 256 threads per processor. Compare this to the AMD EPYC 7601, the world’s only 32-core x86 processor as of this writing, which handles 64 threads and also has 64MB of L3 cache. The SPARC M8 cores can also clock up to 5.0GHz, faster than current high-core-count x86 server chip designs from Intel and AMD. That is quite astounding given the SPARC M8 is still using 20nm process technology.

Beyond the simple core specs, there is much more going on. Oracle has specific accelerators for cryptography, Java performance, database performance, and more. For example, there are 32 on-chip Data Analytics Accelerator (DAX) engines. DAX engines offload query processing and perform real-time data decompression. Oracle’s software business for the Oracle Database line is still strong, and these capabilities are what is often referred to as “SQL in Silicon.” Oracle claims that Oracle Database 12c is up to 7 times faster using M8 with DAX than competing CPUs. That is a big deal for software licensing costs. Another interesting capability is the inline decompression feature, which allows decompression of data stored in memory with no claimed performance penalty.

Oracle SPARC M8 Processor Key Specifications

Here are the key specs for the new Oracle SPARC CPUs:

  • 32 SPARC V9 cores, maximum frequency: 5.0 GHz
  • Up to 256 hardware threads per processor; each core supports up to 8 threads
  • Total of 64 MB L3 cache per processor, 16-way set-associative and inclusive of all inner caches
  • 128 KB L2 data cache per core; 256 KB L2 instruction cache shared among four cores
  • 32 KB L1 instruction cache and 16 KB L1 data cache per core
  • Quad-issue, out-of-order integer execution pipelines, one floating-point unit, and integrated cryptographic stream processing per core
  • Sophisticated branch predictor and hardware data prefetcher
  • 32 second-generation DAX engines; 8 DAX units per processor with four pipelines per DAX unit
  • Encryption instruction accelerators in each core with direct support for 16 industry-standard cryptographic algorithms plus random-number generation: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, MD5, RSA, SHA-1, SHA-3, SHA-224, SHA-256, SHA-384, and SHA-512
  • 20 nm process technology
  • Open Oracle Solaris APIs available for software developers to leverage the Silicon Secured Memory and DAX technologies in the SPARC M8 processor
  • Oracle Solaris supported until at least 2034

In the official Oracle SPARC M8 release, Oracle has a note that is a clear nod to the organizational changes we mentioned in a recent Oracle server release:

Oracle is committed to delivering the latest in SPARC and Solaris technologies and servers to its global customers. Oracle’s long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034.

Oracle is clearly hearing from its customers about the mass layoffs of Solaris engineering teams.

New Oracle SPARC M8 Systems

Five new SPARC V9 systems are available from Oracle today:

  • Oracle SPARC T8-1 server
  • Oracle SPARC T8-2 server
  • Oracle SPARC T8-4 server
  • Oracle SPARC M8-8 server
  • Oracle SuperCluster M8 engineered system

The Evolution and Future of Analytics

We live in a world where things around us are ever changing.

Measurement metrics are becoming just-in-time and predictive, and need a lot of augmented intelligence; at the same time, we are developing more complex behavioral analytics when it comes to buying patterns.

This new type of analytics can give us insight into how the customer feels and what he or she experiences.

Oracle's Machine Learning & Advanced Analytics 12.2 & Oracle Data Miner 4.2 New Features

Thus, the availability of smart information will emerge.

In the future, you may walk into a store and find one or all of the below, which can be built as solutions:

a) A robot welcoming you and taking over to interact with you using connected back end and analytics.

b) Natural language or human analytics that can automatically read your mood to ultimately improve customer satisfaction.

c) Historical data about you as a customer to help up-sell or cross-sell products based on your interests.

d) Automatic analysis about what you're doing to bring near real-time context of data; this will enable the retailer to build a mobile based intuitive presence or no billing architecture.

e) A personal assistant model to better serve you as a customer, empowering retailers to provide solutions to unsure customers.

f) In-product or “things” analytics to provide information about the product, making things intelligent through RFID, intelligent tagging, sensors, etc.

g) Discounts/coupons based on mixing historical buying patterns; post purchase analytics.

h) Interactive dashboards that make augmented decisions about a few areas based on reviews; this would take expert reviews, phone calls, product management and more into account.

i) A store platform of grammar, syntax, semantics and data science grammar to create recurring patterns, challenges and build new solutions which are continuous in nature.

Based on the above, let's dive into different types of analytics available on the market. We'll look at how they will blend and intersect to develop more augmented applications for the future.

Insights into Real-world Data Management Challenges

1) Historical Analytics

This is the traditional analytics of business intelligence focused on analyzing stored data and reporting. We would build repositories and create analyses and dashboards for historical data. Solutions would include Oracle Business Intelligence.

2) Current Analytics 

Here the analytics is measurement over current process. For example, we would measure the effectiveness of a process as it happens (business activity monitoring) using a stream that processes arriving data and analyzes it in real-time.

3) Enterprise Performance Management

Here the objective is to focus on projections/what-if analysis with the current data and make projections for the future. An example would be a Hyperion or an EPM based solution which could help derive and plan reporting as projections. EPM today is also available as a cloud service.

4) Predictive Analytics

With the Big Data market growing, and with unstructured data adding the parameters of velocity, variety, and volume, the data world is moving on to more predictive analytics with a blended mix of data. There is one world of data in Hadoop and another in the classical data warehouse world. We can mix and match the two and do Big Data analytics.

Predictive analytics is more like compass-style decision making driven by data analysis patterns. Oracle has an end-to-end Big Data solution spanning the data warehouse, Hadoop, and analytics that can help develop predictive solutions.
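Inside Oracle Database, a trained data mining model can be applied directly in SQL. The sketch below (the model and table names are hypothetical) uses the built-in PREDICTION functions that Oracle Advanced Analytics exposes:

```sql
-- Score customers with a hypothetical, previously trained churn model
SELECT cust_id,
       PREDICTION(churn_model USING *)             AS predicted_churn,
       PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
FROM   customers;
```

Because scoring runs as an ordinary SQL function, predictions can be joined, filtered, and embedded in applications without moving data out of the database.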

MSE 2017 - Customer Voted Session: Rocketing Your Knowledge Management Success with Analytics

5) Prescriptive Analytics

To extend predictive analytics, we also develop systems that make decisions once we have a prediction; i.e., sending emails and connecting systems as patterns are detected. This is the basis of building more heuristic systems that make decisions about detected patterns.

6) Machine Analytics 

Every device and machine is going to generate data. Machine analytics is a blended form of data analytics in which machine data is embedded into standard sources to enhance and improve the overall data pattern. Oracle provides IoT Cloud Service as a solution to connect, analyze, and integrate data from various machines and enrich applications like ERP, CRM, and more.

Oracle Analytics and Big Data: Unleash the Value

7) AI Based Analytics

AI, or deep learning, is the next-generation analytics pattern, where we can train systems or any entity to think and then embed the analytics pattern in the solution.

8) IORT / Robotics Analytics

With robots/bots and personal assistants complementing solutions, many patterns of thinking and execution are distributed across multiple systems. IoRT, or robotics analytics, is a new branch that will focus on how we can analyze patterns from semi-thinking devices.

9) Data Science as a Service

A new branch where the analysis goes deeper in terms of algorithms and storage and is also more domain-driven. Even though data science is just one branch of analytics, you will see a lot of analytics development here. Data scientists who specialize in identifying patterns will go a long way toward building patterns that are more replicable.

10) Integrated Analytics

In the future, we can form an integrated view of all of the above. This could be one IDE in which you derive patterns based on business need and use case. Today we have a fragmented set of tools to manage analytics, and it will slowly get integrated into one view.

Oracle has solutions at different levels; most of them are also available as a cloud service (Software as a Service, Platform as a Service).

MSE 2017 - Advanced Analytics for Developers

It's imperative to build the right mix of solutions for the right problem and integrate these solutions.

  • Historical perspective --> Business Intelligence
  • Current processing --> Streaming (event processing) and Business Activity Monitoring
  • Enterprise performance management --> Hyperion
  • Heterogeneous sources of data and large-scale analysis --> Big Data solution
  • Predictive and prescriptive analytics --> R language and Advanced Analytics
  • Machine-related --> IoT solutions and Cloud Service

Oracle Architectural Elements Enabling R for Big Data

Oracle University provides competency solutions for all the above and empowers you with skill development and well-respected certifications that validate your expertise:

  • Big Data Analytics training
  • BI Data Analytics training
  • Hyperion training
  • Cloud PAAS Platform for Analytics and BI training

More Information:

Oracle Visual Analytics