11 June 2013

SQL Server 2014: A Closer Look

Microsoft SQL Server 2014 was announced by Quentin Clark during the Microsoft TechEd 2013 keynote. Designed and developed with our cloud-first principles in mind, SQL Server 2014 builds on the momentum of SQL Server 2012, released just 14 months ago. We are excited to share a closer look at some of the capabilities included in SQL Server 2014 that will help you unlock real-time insights with mission critical and cloud performance.

SQL Server 2014 helps organizations by delivering:

  • Mission Critical Performance across all database workloads, with in-memory capabilities for online transaction processing (OLTP), data warehousing and business intelligence built in, as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and the cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel





Mission Critical Performance with SQL Server 2014

SQL Server 2014 delivers new in-memory capabilities built into the core database for OLTP and data warehousing, which complement existing in-memory data warehousing and business intelligence capabilities for a comprehensive in-memory database solution. In addition to in-memory, there are new capabilities to improve the performance and scalability for your mission critical applications.

In-Memory Built-In

  • New In-Memory OLTP – built into the core SQL Server database and uniquely flexible to work with traditional SQL Server tables, allowing you to improve the performance of your database applications without having to refresh your existing hardware. We are seeing customers such as Edgenet and bwin achieve significant performance gains to scale and accelerate their business.
  • Enhanced In-Memory ColumnStore for Data Warehousing – now updatable, with even faster query speeds and greater data compression for more real-time analytics support.
  • New buffer pool extension support for non-volatile memory such as solid-state drives (SSDs) – increase performance by extending the SQL Server buffer pool to SSDs for faster paging (see the T-SQL sketch after this list).
  • Enhanced Query Processing – speeds up all SQL Server queries regardless of workload.
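
As a rough illustration of the columnstore and buffer pool extension items above, the T-SQL below shows the general shape of each statement; the table name, file path and size are hypothetical placeholders, and the exact syntax may differ in the released product.

    -- Illustrative sketch only; object names and paths are placeholders.
    -- Create an updatable clustered columnstore index on a warehouse fact table.
    CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
        ON dbo.FactSales;

    -- Extend the buffer pool onto an SSD volume for faster paging.
    ALTER SERVER CONFIGURATION
        SET BUFFER POOL EXTENSION ON
        (FILENAME = N'F:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 50 GB);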

Enhanced Availability, Security and Scalability

  • Enhanced AlwaysOn – Built upon the significant capabilities introduced with SQL Server 2012, AlwaysOn now delivers mission critical availability with up to 8 readable secondaries and no downtime during online indexing operations.
  • Greater scalability of compute, networking and storage with Windows Server 2012 R2 –

- Increased scale – Continue to benefit from support for up to 640 logical processors and 4 TB of memory in a physical environment, and up to 64 virtual processors and 1 TB of memory per VM.
- Network Virtualization – Abstracts the networking layer so that you can easily migrate SQL Server from one datacenter to another.
- Storage Virtualization with Storage Spaces – Create pools of storage and storage tiers, allowing your hot data to use premium storage and your cold data to use standard storage, improving resilience, performance and predictability.

  • Enhanced Resource Governance – Resource Governor already helps you deliver scalability and predictable performance; in SQL Server 2014, new capabilities allow you to manage IO, in addition to compute and memory, for even more predictable performance (see the T-SQL sketch after this list).
  • Enhanced Separation of Duties – Achieve greater compliance with new capabilities for creating roles and sub-roles. For example, a database administrator can now manage the data without seeing sensitive data or personally identifiable information.
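
As a rough sketch of the new IO governance, the statements below cap the IO that a hypothetical reporting workload can issue per volume; the pool and workload group names are made up, and a classifier function would still be needed to route sessions into the group.

    -- Illustrative sketch; pool and group names are hypothetical.
    CREATE RESOURCE POOL ReportingPool
        WITH (MAX_CPU_PERCENT = 40,
              MIN_IOPS_PER_VOLUME = 20,     -- new IO controls in SQL Server 2014
              MAX_IOPS_PER_VOLUME = 200);

    CREATE WORKLOAD GROUP ReportingGroup
        USING ReportingPool;

    ALTER RESOURCE GOVERNOR RECONFIGURE;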

Platform for Hybrid Cloud

SQL Server 2014 creates a strong platform for hybrid cloud where cloud scale can be leveraged to extend the scalability and availability of on-premises database applications as well as reduce costs.

Simplified Cloud Backup and Disaster Recovery

  • Backup to Azure Storage – Reduce costs and achieve greater data protection by backing up your on-premises databases to Azure Storage at an instance level. Optimize your backup policy with intelligence built into SQL Server that monitors and tracks backup usage patterns to provide optimal cloud backup. Backups can be automatic or manual, and in case of an on-premises failure, a backup can be restored to a Windows Azure Virtual Machine (see the T-SQL sketch after this list).
  • AlwaysOn integration with Windows Azure Infrastructure Services – Benefit from Microsoft’s global data centers by deploying a Windows Azure Virtual Machine as an AlwaysOn secondary for cost-effective global data protection. Increase performance and scale reporting for your global business units by running reporting off the readable secondaries in Windows Azure. Run backups on the secondaries in Windows Azure to increase data protection and performance.
  • SSMS Wizard for deploying AlwaysOn secondaries in Windows Azure – Easily deploy an AlwaysOn secondary to a Windows Azure Virtual Machine with a point-and-click experience within SQL Server Management Studio (SSMS).
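
As an illustrative sketch of backing up to Azure Storage, the statements below back up a hypothetical database to a blob container using a SQL Server credential; the storage account, container, database and credential names are all placeholders.

    -- Illustrative sketch; account, container and database names are placeholders.
    CREATE CREDENTIAL AzureBackupCredential
        WITH IDENTITY = 'mystorageaccount',          -- storage account name
             SECRET   = '<storage-account-access-key>';

    BACKUP DATABASE SalesDb
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak'
        WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 5;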

Easy Migration of On-Premises SQL Servers to Windows Azure Virtual Machines




  • SSMS Migration Wizard for Windows Azure Infrastructure Services – Easily migrate an on-premises SQL Server database to a Windows Azure Virtual Machine with a point-and-click experience in SSMS. The newly deployed database application can be managed through SSMS or System Center 2012 R2.
     

    Faster Insights from Any Data

    SQL Server 2014 is at the heart of our modern data platform, which delivers a comprehensive BI solution that simplifies access to all data types, big and small, with additional solutions like HDInsight, Microsoft’s 100% Apache-compatible Hadoop distribution, and project code name “Data Explorer”, which simplifies access to internal or external data. New data platform capabilities like PolyBase, included in Microsoft Parallel Data Warehouse, allow you to integrate queries across relational and non-relational data using your existing SQL Server skills.

    With SQL Server 2014, you can accelerate insights with new in-memory capabilities that deliver faster performance across workloads. You can continue to refine and manage data using Data Quality Services and Analysis Services in SQL Server, and finally analyze the data and unlock insights with powerful BI tools built into Excel and SharePoint.

    Learn more and sign up for the Preview

    SQL Server 2014 brings to market many new exciting capabilities that will deliver tremendous value to customers. SQL Server 2014 can help you unlock real-time insights with mission critical and cloud performance along with one of the most comprehensive BI solutions in the marketplace today.



    SQL Server 2014: Unlocking Real-Time Insights

    Today at TechEd North America we announced a wave of products and services that will help customers embrace the “enterprise cloud era.” The next version of our data platform – SQL Server 2014 – is a key part of the day’s news. Designed and developed with our cloud-first principles in mind, it delivers built-in in-memory capabilities, new hybrid cloud scenarios and enables even faster data insights.

    The journey to SQL Server 2014

    It’s been a mere 14 months since we shipped SQL Server 2012 and in that time we’ve advanced our data platform portfolio with a preview of Windows Azure HDInsight, customer successes on Windows Azure SQL Database and releases of PowerPivot and Power View for Office 2013, project codename “Data Explorer” in Excel, Parallel Data Warehouse 2.0 and so much more. It’s been an incredibly busy time building a modern data platform that empowers businesses to unlock real-time insights from big data.



    Among all these enhancements, SQL Server 2014 has been special to me. It started a few years ago when a group of SQL Server database engineers from my team collaborated with Microsoft Research on a project to explore how to take advantage of changes in database design. The power of modern chipsets, the availability and scale of memory and the advent of solid state drives are just a few of the underlying system architectural changes that challenged our assumptions about design. We nurtured this project, in parallel to our work developing in-memory analytics capabilities in Excel, SQL Server Analysis Services and the RDBMS in-memory columnstore index, because we had an intuition that the need for real-time analytics in businesses was growing. We’d seen early success with StreamInsight, and this new project, codenamed “Hekaton,” was focused on extreme scale and performance for online transaction processing (OLTP) – in short, managing business activity in real time.

    In-memory in the box enables breakthrough performance

    Today, we’re delivering Hekaton’s in-memory OLTP in the box with SQL Server 2014. For our customers, “in the box” means they don’t need to buy specialized hardware or software and can migrate existing applications to benefit from performance gains. Early adopters are already experiencing significant gains from being able to run their applications at the speed of business. Edgenet, for example, has a SaaS solution for large-scale retailers that can now manage inventory updates in near real time – not the daily batch model of the past. The move from “we need to check the stock on shelves to see if what the system shows is correct” to “yes, we have one in stock and we know it’s still there” improves the end customer experience – and helps their margins.

    As you can see from the Edgenet scenario, SQL Server 2014 is helping businesses manage their data in near real time. The ability to interact with your data and the system supporting business activities is truly transformative. SQL Server 2014 builds on the in-memory analytics and data warehousing features already in SQL Server 2012 to help customers experience real-time business value with a familiar platform and built-in technology.

    Delivering mission critical capabilities through new hybrid scenarios

    In this era of big data, applications and data are going to depend on one another like never before. As a result, mission-critical performance and scale have to be part of every application and solution, not just the select few.

    SQL Server 2014 includes comprehensive high-availability technologies that now extend seamlessly into Windows Azure, making the highest service level agreements possible for every application while also reducing CAPEX and OPEX for mission-critical applications. Simplified cloud backup, cloud disaster recovery and easy migration to Windows Azure Virtual Machines deliver new, easy-to-use, out-of-the-box hybrid capabilities.

    We’ve also improved the AlwaysOn features of the RDBMS with support for new scenarios, scale of deployment and ease of adoption. We continue to make major investments in our in-memory columnstore for performance and now compression, and this is deeply married to our business intelligence servers and Excel tools for faster business insights.

    Unlocking real-time insights

    Our big data strategy to unlock real-time insights continues with SQL Server 2014. We are embracing the role of data – it dramatically changes how business happens. Real-time data integration, new and large data sets, data signals from outside LOB systems, evolving analytics techniques and more fluid visualization and collaboration experiences are significant components of that change. Another foundational component of this is embracing cloud computing: nearly infinite scale, dramatically lowered cost for compute and storage and data exchange between businesses. Data changes everything and across the data platform, we continue to democratize technology to bring new business value to our customers.

    This is an exciting day for me – the SQL Server 2014 release will mean a lot to our customers and will bring new scenarios and possibilities to fruition. Our industry is evolving rapidly and together we will change how business works with real-time insights, powerful analytics and a complete data platform.


    Project code name “GeoFlow” preview for Excel

    Code name “GeoFlow” makes it possible for users to plot geographic and temporal data visually, analyze that data in three dimensions (3D), and create interactive tours to share with others (Figure 4). With code name “GeoFlow,” users can:

    • Map data by plotting more than one million rows of data from an Excel workbook, including the Excel Data Model or PowerPivot, in three-dimensional maps in Microsoft Bing. Choose from columns, heat maps, and bubble visualizations.
    • Discover new insights by seeing data in geographic space and seeing time-stamped data change over time. Annotate or compare data in a few clicks.
    • Share stories by capturing scenes and building cinematic, guided tours that can be shared broadly to engage audiences in more interesting ways.




    Unlocking insights within geospatial data such as ticket sales is possible with code name “GeoFlow”


    SQL Server and Oracle Database compared (comparison chart)

    Breakthrough performance with in-memory technologies


    In a blog post earlier this year on “The coming database in-memory tipping point”, I mentioned that Microsoft was working on several in-memory database technologies. At the SQL PASS conference this week, Microsoft unveiled a new in-memory database capability, code named “Hekaton”[1], which is slated to be released with the next major version of SQL Server. Hekaton dramatically improves the throughput and latency of SQL Server’s transaction processing (TP) capabilities. Hekaton is designed to meet the requirements of the most demanding TP applications, and we have worked closely with a number of companies to prove these gains. Hekaton’s technology adoption partners include financial services companies, online gaming companies and others that have extremely demanding TP requirements. What is most impressive about Hekaton is that it achieves breakthrough improvement in TP capabilities without requiring a separate data management product or a new programming model. It’s still SQL Server!

    As I mentioned in the “tipping point” post, much of the energy around in-memory data management systems thus far has been around columnar storage and analytical workloads. As the previous blog post mentions, Microsoft already ships this form of technology in our xVelocity analytics engine and xVelocity columnstore index. The xVelocity columnstore index will be updated in SQL Server 2012 Parallel Data Warehouse (PDW v2) to support updatable clustered columnar indexes. Hekaton, in contrast, is a row-based technology squarely focused on transaction processing (TP) workloads. Note that these two approaches are not mutually exclusive: Hekaton, combined with SQL Server’s existing xVelocity columnstore index and xVelocity analytics engine, makes for a powerful pairing.

    The fact that Hekaton and the xVelocity columnstore index are built into SQL Server, rather than shipped as a separate data engine, is a conscious design choice. Other vendors are either introducing separate in-memory optimized caches or building a unification layer over a set of technologies and introducing it as a completely new product. This adds complexity, forcing customers to deploy and manage a completely new product or, worse yet, to manage both a “memory-optimized” product for the hot data and a “storage-optimized” product for the application data that is not cost-effective to keep primarily in memory.

    Hekaton is designed around four architectural principles:

    1) Optimize for main memory data access: Storage-optimized engines (such as the current OLTP engine in SQL Server today) will retain hot data in a main memory buffer pool based upon access frequency. The data access and modification capabilities, however, are built around the viewpoint that data may be paged in or paged out to disk at any point. This perspective necessitates layers of indirection in buffer pools, extra code for sophisticated storage allocation and defragmentation, and logging of every minute operation that could affect storage. With Hekaton you place tables used in the extreme TP portion of an application in memory-optimized main memory structures. The remaining application tables, such as reference data details or historical data, are left in traditional storage optimized structures. This approach lets you memory-optimize hotspots without having to manage multiple data engines.

    Hekaton’s main memory structures do away with the overhead and indirection of the storage optimized view while still providing the full ACID properties expected of a database system. For example, durability in Hekaton is achieved by streamlined logging and checkpointing that uses only efficient sequential IO.

    2) Accelerate business logic processing: Given that the free ride on CPU clock rate is over, Hekaton must be more efficient in how it utilizes each core. Today, SQL Server compiles queries and stored procedures into a set of data structures that are evaluated by an interpreter in the query processor. With Hekaton, queries and procedural logic in T-SQL stored procedures are compiled directly into machine code with aggressive optimizations applied at compilation time. This allows the stored procedure to be executed at the speed of native code.

    3) Provide frictionless scale-up: It’s common to find 16 to 32 logical cores even on a 2-socket server nowadays. Storage-optimized engines rely on a variety of mechanisms such as locks and latches to provide concurrency control. These mechanisms often have significant contention issues when scaling up with more cores. Hekaton implements a highly scalable concurrency control mechanism and uses a series of lock-free data structures to eliminate traditional locks and latches while guaranteeing the correct transactional semantics that ensure data consistency.

    4) Built-in to SQL Server: As I mentioned earlier, Hekaton is a new capability of SQL Server. This lays the foundation for a powerful customer scenario which has been proven out by our customer testing. Many existing TP systems have certain transactions or algorithms which benefit from Hekaton’s extreme TP capabilities – for example, the matching algorithm in financial trading, resource assignment or scheduling in manufacturing, or matchmaking in gaming scenarios. Hekaton enables optimizing these aspects of a TP system for in-memory processing while the cooler data and processing continue to be handled by the rest of SQL Server.

    To make it easy to get started, we’ve built an analysis tool that you can run to identify the hot tables and stored procedures in an existing transactional database application. As a first step you can migrate hot tables to Hekaton as in-memory tables. Doing this simply requires the following T-SQL statements[2]:
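
    A minimal sketch of such a statement, using a hypothetical ShoppingCart table and assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup (per footnote [2], the preview syntax may change):

    -- Illustrative sketch only; the table and its columns are hypothetical.
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT       NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);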

    While Hekaton’s memory-optimized tables must fully fit into main memory, the database as a whole need not. These in-memory tables can be used in queries just like any regular table, while providing optimized, contention-free data operations.

    After migrating to optimized in-memory storage, stored procedures operating on these tables can be transformed into natively compiled stored procedures, dramatically increasing the processing speed of in-database logic. This conversion is, again, done through T-SQL, as shown below:
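
    A minimal sketch of a natively compiled procedure against the hypothetical table above (again, the preview syntax may change):

    -- Illustrative sketch; the procedure name and logic are hypothetical.
    CREATE PROCEDURE dbo.usp_AddCartItem
        @CartId INT,
        @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Runs as compiled machine code rather than through the interpreter.
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSUTCDATETIME());
    END;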

    What can you expect for a performance gain from Hekaton? Customer testing has demonstrated throughput gains of 5X to 50X on the same hardware, delivering extreme TP performance on mid-range servers. The actual speedup depends on multiple factors, such as how much data processing can be migrated into Hekaton and directly sped up, and how much cross-transaction contention is removed as a result of the speedup and other Hekaton optimizations such as lock-free data structures.

    Hekaton is now in private technology preview with a small set of customers. Keep following our product blogs for updates and a future public technology preview.


    Copyrights:

    Dave Campbell
    Technical Fellow
    Microsoft SQL Server




    [1] Hekaton is from the Greek word ἑκατόν for “hundred”. Our design goal for the original Hekaton proof-of-concept prototype was to achieve a 100x speedup for certain TP operations.

    [2] The syntax for these operations will likely change. The examples demonstrate how easy it will be to take advantage of Hekaton’s capabilities.







     For more information please contact me at:

     Drs. Albert Spijkers
     DBA Consulting
     web:      http://www.dbaconsulting.nl
     blog:     DBA Consulting blog
     profile:  DBA Consulting profile
     Facebook: DBA Consulting on Facebook
     email:    info@dbaconsulting.nl
