IBM Consulting

DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

Oracle Consulting

For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

Novell and RedHat Consulting

For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions. And of course also for the great RedHat products such as RedHat Enterprise Server, JBoss middleware, and BI on RedHat.

Microsoft Consulting

For all Microsoft related consulting services.

Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

Welcome to DBA Consulting Blog. The specialist for IBM, Oracle, Novell, RedHat, Citrix and Microsoft.

DBA Consulting is a consultancy services specialist that can help you with OS-related support, migration, and installation, as well as BI implementations from IBM (such as Cognos 10), Microsoft BI (SQL Server 2008 and 2012) and the CMS systems related to it (such as Microsoft SharePoint and Drupal), and Oracle OBIEE 11gR1. We focus on quality and service; customer wishes, quality of service, and customer satisfaction are central. A focus on cost savings and avoiding vendor lock-in is central to our business values.


Monday, August 10, 2015

Microsoft Announced New Container Technologies for the Next Generation Cloud



Windows-based containers: Modern app development with enterprise-grade control 





On October 15th, 2014, Microsoft announced that it will deliver new container technologies in the upcoming wave of Windows Server releases. In addition, a new partnership between Microsoft Corp. and Docker Inc. will bring Windows Server support to Docker tools. MS Open Tech will contribute to this partnership, and will build upon our existing support for Linux-hosted containers on Microsoft Azure.



As part of this announcement, MS Open Tech is contributing code to the Docker client that supports the provisioning of multi-container Docker applications on Azure. This code removes the need for our cross-platform CLI to bootstrap the Docker host. In other words, we have taken a simple process and made it even simpler. A demonstration of this new capability will be a part of Docker’s Global Hack Day as well as the Microsoft TechEd Europe conference. For more information on other aspects of this partnership, see the Azure blog.





Docker is an open source engine that automates the deployment of any application as a portable, self-sufficient container that can run almost anywhere. This partnership will enable the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider. This level of interoperability is what we at MS Open Tech strive to deliver through contributions to open source projects such as Docker.

Docker and Microsoft: How Azure is Bringing the World of Windows and Linux Together 




What are containers?

A container is an isolated, resource-controlled, and portable operating environment.

Basically, a container is an isolated place where an application can run without affecting the rest of the system and without the system affecting the application. Containers are the next evolution in virtualization.



If you were inside a container, it would look very much like you were inside a physical computer or a virtual machine. And, to Docker, a Windows Server Container looks like any other container.

Containers for Developers

When you containerize an app, only the app and the components needed to run the app are combined into an "image". Containers are then created from this image as you need them. You can also use an image as a baseline to create another image, making image creation even faster. Multiple containers can share the same image, which means containers start very quickly and use fewer resources. For example, you can use containers to spin up light-weight and portable app components – or ‘micro-services’ – for distributed apps and quickly scale each service separately.




Windows Containers: What, Why and How 




Because a container has everything it needs to run your application, containers are very portable and can run on any machine that is running Windows Server 2016. You can create and test containers locally, then deploy that same container image to your company's private cloud, public cloud or service provider. The natural agility of containers supports modern app development patterns in large-scale, virtualized and cloud environments.





- With containers, developers can build an app in any language. These apps are completely portable and can run anywhere - laptop, desktop, server, private cloud, public cloud or service provider - without any code changes.

- Containers help developers build and ship higher-quality applications, faster.

Containers for IT Professionals

- IT Professionals can use containers to provide standardized environments for their development, QA, and production teams. They no longer have to worry about complex installation and configuration steps. By using containers, systems administrators abstract away differences in OS installations and underlying infrastructure.

- Containers help admins create an infrastructure that is simpler to update and maintain.

What else do I get?

- Containers and the container ecosystem provide agility, productivity, and freedom-of-choice in building, deploying, and managing modern apps.

- When combined with Docker, Visual Studio, and Azure, containers become an important part of a robust ecosystem. Read more about the Windows Server Container ecosystem.






Docker containers simplify the development of software applications that consist of micro-services. Each service then operates as an isolated execution unit on the host. Common use cases for Docker include:

    Automating the packaging and deployment of applications
    Creation of lightweight, private PaaS environments
    Automated testing and continuous integration/deployment
    Deploying and scaling web apps, databases and backend services

Docker’s container technology aims to drive developer productivity and agility. Containers do not include a full operating system; consequently, rapid development and scaling of container-based applications is possible through very quick boot and restart operations. Furthermore, highly efficient creation of modified container images, by virtue of only capturing the differences between the original and new containers, enables improved management and distribution of containerized applications; the resulting images are both small and highly portable across almost any platform.





This partnership brings the .NET and Windows Server ecosystem together with Docker's expertise and open source community to deliver uniform container functionality across Linux and Windows Server containers.

In June, MS Open Tech announced the availability of Docker Engine on Microsoft Azure, to coincide with the 1.0 release of the Docker tools. That work provided the ability to create Azure virtual machines with the Docker Engine already installed. The resulting virtual machines become hosts for Docker containers; the standard Docker tooling then provides management of containers on those hosts. Our goal with this project is to make it as simple as possible to get started with Docker on Azure. Since June, we have continued to work with the Docker community to make things even simpler.



Last October, Microsoft and Docker, Inc. jointly announced plans to bring containers to developers across the Docker and Windows ecosystems via Windows Server Containers, available in the next version of Windows Server. We will be unveiling the first live demonstration in a few weeks, starting at the BUILD conference. Today, we are taking containerization one step further by expanding the scenarios and workloads developers can address with containers:

• Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization
• Nano Server, a minimal footprint installation of Windows Server that is highly optimized for the cloud, and ideal for containers.

First-of-Their-Kind Hyper-V Containers

Leveraging our deep virtualization experience, Microsoft will now offer containers with a new level of isolation previously reserved only for fully dedicated physical or virtual machines, while maintaining an agile and efficient experience with full Docker cross-platform integration. Through this new first-of-its-kind offering, Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host.



While Hyper-V containers offer an additional deployment option between Windows Server Containers and the Hyper-V virtual machine, you will be able to deploy them using the same development, programming and management tools you would use for Windows Server Containers. In addition, applications developed for Windows Server Containers can be deployed as a Hyper-V Container without modification, providing greater flexibility for operators who need to choose degrees of density, agility, and isolation in a multi-platform, multi-application environment.


Microsoft Containers in the Docker Ecosystem

Windows Server Containers




Docker plays an important part in enabling the container ecosystem across Linux, Windows Server and the forthcoming Hyper-V Containers. We have been working closely with the Docker community to leverage and extend container innovations in Windows Server and Microsoft Azure, including submitting the development of the Docker engine for Windows Server Containers as an open contribution to the Docker repository on GitHub. In addition, we’ve made it easier to deploy the latest Docker engine using Azure extensions to setup a Docker host on Azure Linux VMs and to deploy a Docker-managed VM directly from the Azure Marketplace. Finally, we’ve added integration for Swarm, Machine and Compose into Azure and Hyper-V.

“Microsoft has been a great partner and contributor to the Docker project since our joint announcement in October of 2014,” said Nick Stinemates, Head of Business Development and Technical Alliances. “They have made a number of enhancements to improve the developer experience for Docker on Azure, while making contributions to all aspects of the Docker platform including Docker orchestration tools and Docker Client on Windows. Microsoft has also demonstrated its leadership within the community by providing compelling new content like dockerized .NET for Linux. At the same time, they’ve been working to extend the benefits of Docker containers, application portability to any infrastructure and an accelerated development process, to its Windows developer community.”

Introducing Nano Server: The Nucleus of Modern Apps and Cloud

Nano Server: The Future of Windows Server Starts Now 





The operating system has evolved dramatically with the move to the cloud. Many customers today need their OS for the primary purpose of powering born-in-the-cloud applications. Leveraging our years of experience building and running hyper-scale datacenters, Microsoft is uniquely positioned to provide a purpose-built OS to power modern apps and containers.





The result is Nano Server, a minimal footprint installation option of Windows Server that is highly optimized for the cloud, including containers. Nano Server provides just the components you need – nothing else, meaning smaller server images, which reduces deployment times, decreases network bandwidth consumption, and improves uptime and security. This small footprint makes Nano Server an ideal complement for Windows Server Containers and Hyper-V Containers, as well as other cloud-optimized scenarios. A preview will be available in the coming weeks, and you can read more about the technology on the Windows Server blog. http://blogs.technet.com/b/windowsserver/archive/2015/04/08/microsoft-announces-nano-server-for-modern-apps-and-cloud.aspx?WT.mc_id=Blog_ServerCloud_Announce_TTD

Containers are bringing speed and scale to the next level in today’s cloud-first world. Microsoft is uniquely positioned to propel more organizations forward into the next era of containerization, by offering flexibility and choice through Windows Server containers, Linux containers, and Hyper-V containers both in the cloud and on-premises. Today’s announcements are just the beginning of what’s to come, as we continue to fuel both the growth of containers in the industry, and new levels of application innovation for all developers.


More Information:  


https://msopentech.com/blog/2014/10/15/docker-containers-coming-microsoft-linux-server-near/

http://blogs.technet.com/b/server-cloud/archive/2015/04/08/microsoft-announces-new-container-technologies-for-the-next-generation-cloud.aspx

http://blogs.technet.com/b/windowsserver/archive/2015/04/08/microsoft-announces-nano-server-for-modern-apps-and-cloud.aspx?WT.mc_id=Blog_ServerCloud_Announce_TTD

https://azure.microsoft.com/blog/2015/06/23/container-apps-now-available-in-the-azure-marketplace/

http://blogs.msdn.com/b/msgulfcommunity/archive/2015/06/21/what-is-windows-server-containers-and-hyper-v-containers.aspx

http://www.microsoftvirtualacademy.com/liveevents/what-s-new-in-windows-server-2016-preview-jump-start

http://blogs.technet.com/b/nanoserver/archive/2015/05/12/welcome-to-the-new-nano-server-blog.aspx

https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/about_overview

https://channel9.msdn.com/Series/Windows-10-Update-for-ITPRO

https://channel9.msdn.com/Series/A-Developers-Guide-to-Windows-10

https://channel9.msdn.com/Series/Getting-Started-with-Windows-10-for-IT-Professionals?sort=recent#tab_sortBy_recent



Monday, July 13, 2015

Immutable Service Containers



Immutable Service Containers and Oracle Solaris Studio 12.4

Economics of Oracle Solaris


Solaris Immutable Service Containers




While the need for security and integrity is well recognized, it is less often well implemented. Security assessments and industry reports regularly show how sporadic and inconsistent security configurations become for organizations both large and small. Published recommended security practices and settings remain unused in many environments, and existing, once-secured deployments suffer from atrophy due to neglect.

Why is this? There is no one answer. Some organizations are simply unaware of the security recommendations, tools, and techniques available to them. Others lack the necessary skill and experience to implement the guidance and maintain secured configurations. It is not uncommon for these organizations to feel overwhelmed by the sheer number of recommendations, settings and options. Still others may feel that security is not an issue in their environment. The list goes on and on, yet the need for security and integrity has never been more important.

Interestingly, the evolution and convergence of technology is cultivating new ideas and solutions to help organizations better protect their services and data. One such idea is being demonstrated by the Immutable Service Container (ISC) project. Immutable Service Containers are an architectural deployment pattern used to describe a platform for highly secure service delivery. Building upon concepts and functionality enabled by operating systems, hypervisors, virtualization, and networking, ISCs provide a secured container into which a service or set of services is deployed. Each ISC embodies at its core the key principles inherent in the Sun Systemic Security framework including: self-preservation, defense in depth, least privilege, compartmentalization and proportionality. Further, ISC design borrows from Cloud Computing principles such as service abstraction, micro-virtualization, automation, and "fail in place".

Designing service delivery platforms using the Immutable Service Container model yields a number of significant security benefits:


For application owners:

  • ISCs help to protect applications and services from tampering
  • ISCs provide a consistent set of security interfaces and resources for applications and services to use


For system administrators:

  • ISCs isolate services from one another to avoid contamination
  • ISCs separate service delivery from security enforcement/monitoring
  • ISCs can be (mostly) pre-configured by security experts


For IT managers:

  • ISC creation can be automated, pre-integrating security functionality and making them faster and easier to build and deploy
  • ISCs leverage industry accepted security practices making them easier to audit and support


It is expected that Immutable Service Containers will form the most basic architectural building block for more complex, highly dynamic and autonomic architectures. The goal of the ISC project is to more fully describe the architecture and attributes of ISCs, their inherent benefits, their construction as well as to document practical examples using various software applications.

While the notion of ISCs is not based upon any one product or technology, an instantiation has been recently developed using OpenSolaris 2009.06. This instantiation offers a pre-integrated configuration leveraging OpenSolaris security recommended practices and settings. With ISCs, you are not starting from a blank slate, but rather you can now build upon the security expertise of others. Let's look at the OpenSolaris-based ISC more closely.

In an ISC configuration, the global zone is treated as a system controller and exposed services are deployed (only) into their own non-global zones. From a networking perspective, however, the entire environment is viewed as a single entity (one IP address) where the global zone acts as a security monitoring and arbitration point for all of the services running in non-global zones.

As a foundation, this highly optimized environment is pre-configured with:


non-executable stack
encrypted swap space (w/ephemeral key)
encrypted scratch space (w/ephemeral key)
security hardened operating system (global and non-global zones)

Further, the default OpenSolaris ISC uses:


Non-Global Zone. Exposed services are deployed in a non-global zone. There they can take advantage of the core security benefits enabled by OpenSolaris non-global zones such as restricted access to the kernel, memory, devices, etc. For more information on non-global zone security capabilities, see the Sun BluePrint titled "Understanding the Security Capabilities of Solaris Zones Software". Using a fresh ISC, you can simply install your service into the provided non-global zone as you normally would.
Further, in the ISC model, each non-global zone has its own encrypted scratch space (w/its own ephemeral key), its own persistent storage location, as well as a pre-configured auditing and networking configuration that matches that of the global zone. You do not need to use the encrypted scratch space or persistent storage, but it is there if you want to take advantage of it. Obviously, additional resource controls (CPU, memory, etc.) can be added as necessary. These are not pre-configured due to the variability of service payloads.

Solaris Auditing. A default audit policy is implemented in the global zone and all non-global zones that tracks login and logout events, administrative events as well as all commands (and command line arguments) executed on the system. The audit configuration and audit trail are kept in the global zone where they cannot be accessed by any of the non-global zones. The audit trail is also pre-configured to be delivered by SYSLOG (by default this information is captured in /var/log/auditlog).

Private Virtual Network. A private virtual network is configured by default for all of the non-global zones. This network isolates each non-global zone to its own virtual NIC. By default, the global and non-global zones can freely initiate external communications, although this can be restricted if needed. A non-global zone is not permitted to accept connections, by default. Non-global zone services can be exposed through the global zone IP address by adjusting the IP Filter and IP NAT policies (below).

Solaris IP NAT. Each non-global zone is pre-configured to have a private address assigned to its virtual NIC. To allow the non-global zone to communicate with external systems and networks, an IP NAT policy is implemented. Outgoing connections are masked using the IP address of the global zone. Incoming connections are redirected based upon the port used to communicate. Beyond simple hardening of the non-global zone (a state which can be altered from within the non-global zone itself), this mechanism ensures that the global zone can control which services are exposed by the non-global zone and on which ports.

Solaris IP Filter. A default packet filtering policy is implemented in the global zone allowing only DHCP (for the exposed network interface) and SSH (to the global zone). Additional rules are available (but disabled) to allow access to non-global zones on an as-needed basis. Further, rules are implemented to deny external access to any non-global zone that has changed its pre-assigned (private) IP address. Packet filtering is pre-configured to log packets to SYSLOG (by default this information is captured in /var/log/ipflog).

So what does all of this really mean? Using the ISC model, you can deploy your services in a micro-virtualized environment that offers protection against kernel-based root kits (and some forms of user-land root kits), offers flexible file system immutability (based upon read-only file systems mounted into the non-global zone), can take advantage of process least privilege and resource controls, and is operated in a hardened environment where there is a packet filtering, NAT and auditing policy that is effectively out of the reach of the deployed service. This means that should a service be compromised in a non-global zone, it will not be able to impact the integrity or validity of the auditing, packet filtering, and NAT configuration or logs. While you may not be able to stop every form of attack, having reliable audit trails can significantly help to determine the extent of the breach and facilitate recovery.

The following diagram puts all of the pieces together:



Solaris 11 – Immutable Zones

Immutable zones are read-only zones, but still contain “whole root” file systems.  The immutable zone can be configured as a completely read-only zone or it can be partially read-only.  The immutable zone is controlled by a mandatory write access control (MWAC) kernel policy.  This MWAC policy enforces the zone’s root file system write privilege through a zonecfg file-mac-profile property. The policy is enabled at zone boot.

By default, a zone’s file-mac-profile property is not set in a non-global zone. The default policy for a non-global zone is to have a writable root file system. In a Solaris read-only zone, the file-mac-profile property is used to configure a read-only zone root. A read-only root restricts access to the run-time environment from inside the zone. Through the zonecfg utility, the file-mac-profile property can be set to one of the following values: none, strict, fixed-configuration, or flexible-configuration.



- See more at: http://unixed.com/blog/2013/02/the-solaris-11-immutable-zone/









Oracle Virtualization  



Oracle Software in Silicon Cloud

Speed and Simplify Your Business  (https://www.oracle.com/corporate/features/software-in-silicon/index.html)

Accelerate application performance and significantly improve security.



Oracle’s revolutionary Software in Silicon technology hardwires key software processes directly onto the processor.

Because accelerated functions are run on special engines on the processor's silicon, yet kept separate from its cores, the technology speeds up performance of an application, and implements data security in hardware, while retaining the overall functionality of the processor.

At Oracle OpenWorld 2014 John Fowler announced Oracle Software in Silicon Cloud, which provides early access to revolutionary Software in Silicon technology that dramatically improves reliability and security, and accelerates application performance.





Introducing Oracle Solaris Studio 12.4


Oracle Solaris Studio 12.4 - Technical Mini-Casts


Oracle Solaris OpenStack


Network Virtualization Using Crossbow Technology



More Information:

http://docs.oracle.com/cd/E23824_01/html/821-1460/glhep.html


http://www.c0t0d0s0.org/archives/5033-Immutable-Service-Containers.html


http://www.unixarena.com/


http://www.unixarena.com/2013/06/solaris-11-new-features-capabilities-of.html


http://unixed.com/blog/2013/02/the-solaris-11-immutable-zone/


https://blogs.oracle.com/gbrunett/entry/new_opensolaris_immutable_service_containers


https://kenai.com/projects/isc/pages/OpenSolaris/revisions/2


http://securitywannabe.com/blog/2009/07/02/what-is-an-immutable-service-container.html


http://c59951.r51.cf2.rackcdn.com/5028-128-galvin.pdf


http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/32358


http://learnings-on-solaris.blogspot.nl/2014/01/immutable-zones.html


https://blogs.oracle.com/solaris/entry/solaris_11_3_bloglist


https://blogs.oracle.com/markusflierl/entry/release_of_solaris_11_3


https://www.oracle.com/corporate/features/software-in-silicon/index.html


Tuesday, June 16, 2015

SQL Server 2016





SQL Server 2016 Evolution and Azure Data Warehouse

At this year’s inaugural Ignite conference, held in Chicago, Microsoft announced that the next release of SQL Server, previously referred to as SQL Server vNext, will officially be SQL Server 2016. There’s no doubt that SQL Server has been on a fast-track release program, and the upcoming SQL Server 2016 release will come just two short years after the SQL Server 2014 release. For business-critical enterprise software this is a torrid release cycle that many businesses will have trouble keeping up with, but Microsoft fully intends to make the SQL Server 2016 release worth getting. You can find out more about the upcoming SQL Server 2016 features at the SQL Server 2016 Preview page http://www.microsoft.com/en-us/server-cloud/products/sql-server-2016/ , and the SQL Server Blog http://blogs.technet.com/b/dataplatforminsider/archive/2015/05/04/sql-server-2016-public-preview-coming-this-summer.aspx . You might also check out the Ignite session SQL Server Evolution on this blog.

Get an early look at the next Microsoft data platform

The first public preview of SQL Server 2016 is now available for download. It is the biggest leap forward in Microsoft's data platform history with real-time operational analytics, rich visualizations on mobile devices, built-in advanced analytics, new advanced security technology, and new hybrid cloud scenarios.



SQL Server 2016 delivers breakthrough mission-critical capabilities with in-memory performance and operational analytics built-in. Comprehensive security features like new Always Encrypted technology help protect your data at rest and in motion, and a world-class high availability and disaster recovery solution adds new enhancements to AlwaysOn technology.

Organizations will gain deeper insights into all of their data with new capabilities that go beyond business intelligence to perform advanced analytics directly within their database and present rich visualizations for business insights on any device.

You can also gain the benefits of hyper-scale cloud with new hybrid scenarios enabled by new Stretch Database technology that lets you dynamically stretch your warm and cold transactional data to Microsoft Azure in a secured way so your data is always at hand for queries, no matter the size. In addition, SQL Server 2016 delivers a complete database platform for hybrid cloud, enabling you to easily build, deploy and manage solutions that span on-premises and cloud.


BENEFITS


  •  Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases and real-time operational analytics
  • New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes
  • Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application changes
  • Built-in advanced analytics provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database
  • Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android
  • Simplify management of relational and non-relational data by querying both with T-SQL using PolyBase
  • Faster hybrid backups, high availability and disaster recovery scenarios to back up and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure









Here are eight great features to look for in SQL Server 2016.

1. Always Encrypted



Always Encrypted is designed to protect data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data, and the encryption keys can reside with the application. Encryption and decryption of data happen transparently, in the client driver used by the application. This means the data stored in SQL Server will be encrypted, which can shield it from DBAs and administrators, but it also has implications for ad hoc queries, reporting and exporting the data.
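To make this concrete, here is a rough T-SQL sketch of what an Always Encrypted column declaration looks like. The table name, key name (CEK1), and the choice of encryption types are assumptions for illustration; it presumes a column master key and column encryption key have already been created, and the actual encryption work happens in the client driver.

```sql
-- Illustrative only: table and key names are hypothetical.
CREATE TABLE dbo.Customers
(
    CustomerID  INT IDENTITY(1,1) PRIMARY KEY,
    FullName    NVARCHAR(100) NOT NULL,
    -- Deterministic encryption still allows equality lookups and joins.
    SSN         CHAR(11) COLLATE Latin1_General_BIN2
                ENCRYPTED WITH (
                    COLUMN_ENCRYPTION_KEY = CEK1,
                    ENCRYPTION_TYPE = DETERMINISTIC,
                    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    -- Randomized encryption is stronger but not searchable.
    Salary      MONEY
                ENCRYPTED WITH (
                    COLUMN_ENCRYPTION_KEY = CEK1,
                    ENCRYPTION_TYPE = RANDOMIZED,
                    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```

Clients that connect with Column Encryption Setting=Enabled in their connection string see plaintext; anything querying the server directly sees only ciphertext, which is exactly the ad hoc query and reporting consideration mentioned above.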

2. Stretch Database



The idea behind this feature is certainly interesting. The upcoming Stretch Database feature will allow you to dynamically stretch your on-premises database to Azure. This would enable your frequently accessed, hot data to stay on-premises and your infrequently accessed, cold data to be moved to the cloud. This could enable you to take advantage of low-cost Azure storage and still have high-performance applications. However, this is one trick where Microsoft really needs to get the partitioning right to keep your queries from straying into the cloud and killing your performance.
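For a feel of the moving parts, the hedged sketch below shows the general shape of enabling Stretch in the preview builds. Option names changed between CTPs and the final release, so treat the table name and the exact option values as placeholders and check the current documentation.

```sql
-- Enable the instance-level option (documented as 'remote data archive').
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- After linking the database to an Azure SQL Database server (done through
-- the wizard, or ALTER DATABASE ... SET REMOTE_DATA_ARCHIVE in later builds),
-- mark a table so its cold rows migrate to Azure.
ALTER TABLE dbo.OrderHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```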

3. Real-time Operational Analytics



This feature uses the dynamic duo of SQL Server’s in-memory technologies: it combines In-Memory OLTP with the in-memory columnstore for real-time operational analytics. Its purpose is to tune your system for optimal transactional performance as well as to increase workload concurrency. This sounds like a great combination, and running analytics directly against operational data is something a lot of customers have asked about for a long time, but you will certainly need to have the memory to take advantage of it.
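One way this shows up in practice is the ability to put an updateable columnstore index on a busy OLTP table, so analytic queries scan column segments while transactions keep using the rowstore indexes. A minimal sketch with made-up table and column names:

```sql
-- Updateable nonclustered columnstore index on a disk-based OLTP table.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
    ON dbo.Orders (OrderDate, CustomerID, ProductID, Quantity, Amount);

-- Reporting queries like this can now be answered from the columnstore
-- without an ETL hop to a separate warehouse.
SELECT OrderDate, SUM(Amount) AS Revenue
FROM dbo.Orders
GROUP BY OrderDate;
```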

4. PolyBase into SQL Server



Big Data continues to grow in strategic importance, but unless you had SQL Server Parallel Data Warehouse (PDW), connecting SQL Server to Big Data, and Hadoop in particular, was limited and difficult. In previous releases, PDW was the only version of SQL Server that came with PolyBase – a technology that bridges SQL Server and Hadoop by enabling you to construct and run SQL queries over Hadoop data stores, eliminating the need to understand HDFS or MapReduce. SQL Server 2016 promises to bring the PolyBase technology mainstream into the primary SQL Server SKUs (probably the Enterprise edition).
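Conceptually, PolyBase exposes Hadoop data as external tables that ordinary T-SQL can query and join. The sketch below is illustrative only; the Hadoop location, file layout, and table definitions are placeholders.

```sql
-- Point SQL Server at a Hadoop cluster (location is a placeholder).
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Expose an HDFS directory as a queryable external table.
CREATE EXTERNAL TABLE dbo.WebClicks
(
    ClickTime DATETIME2,
    Url       NVARCHAR(400),
    UserId    INT
)
WITH (LOCATION = '/data/clicks/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFormat);

-- Join Hadoop data with relational data in plain T-SQL.
SELECT TOP (10) u.FullName, COUNT(*) AS Clicks
FROM dbo.WebClicks AS c
JOIN dbo.Users AS u ON u.UserId = c.UserId
GROUP BY u.FullName
ORDER BY Clicks DESC;
```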

5. Native JSON Support



JSON (JavaScript Object Notation) is a standardized data exchange format that is currently not supported natively by SQL Server. To perform JSON imports and exports you need to hand-code complex T-SQL, SQLCLR or JavaScript. SQL Server 2016 promises to simplify this by incorporating JSON support directly into SQL Server, much like XML. SQL Server 2016 will natively parse and store JSON as relational data and will support exporting relational data to JSON.
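The two directions look roughly like this; the table and column names are invented for illustration.

```sql
-- Relational rows out as JSON text.
SELECT CustomerID, FullName, City
FROM dbo.Customers
FOR JSON AUTO;

-- JSON text in as a rowset via OPENJSON.
DECLARE @json NVARCHAR(MAX) =
    N'[{"CustomerID":1,"FullName":"Contoso","City":"Seattle"}]';

SELECT *
FROM OPENJSON(@json)
WITH (CustomerID INT, FullName NVARCHAR(100), City NVARCHAR(100));
```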

6. Enhancements to AlwaysOn



SQL Server 2016 will also continue to advance high availability and disaster recovery with several enhancements to AlwaysOn. The upcoming SQL Server 2016 release will enhance AlwaysOn with the ability to have up to three synchronous replicas. Additionally, it will include DTC (Distributed Transaction Coordinator) support as well as support for round-robin load balancing of the secondary replicas. There will also be support for automatic failover based on database health.
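The round-robin load balancing, for instance, is expressed by grouping secondaries inside the read-only routing list. The availability group and server names below are hypothetical; the nested-parentheses form is the one documented for SQL Server 2016.

```sql
-- Read-intent requests routed through the listener alternate between SQL2
-- and SQL3, falling back to SQL1 if neither is available.
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQL1'
WITH (PRIMARY_ROLE (
        READ_ONLY_ROUTING_LIST = (('SQL2', 'SQL3'), 'SQL1')));
```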


7. Enhanced In-Memory OLTP



First introduced with SQL Server 2014, In-Memory OLTP will continue to mature in SQL Server 2016. Microsoft will enhance In-Memory OLTP by extending the functionality to more applications while also enhancing concurrency. This means they will be expanding the T-SQL surface area, increasing the total amount of memory supported into the terabyte range as well as supporting a greater number of parallel CPUs.
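For reference, a memory-optimized table is still declared much as it was in SQL Server 2014; what 2016 grows is the T-SQL surface area and scale around it. This sketch assumes the database already has a memory-optimized filegroup, and all names are made up.

```sql
-- Durable memory-optimized table with a hash primary key and a range index.
CREATE TABLE dbo.ShoppingCart
(
    CartID     INT NOT NULL
               PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID INT NOT NULL,
    CreatedAt  DATETIME2 NOT NULL,
    INDEX IX_Customer NONCLUSTERED (CustomerID)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```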

8. Revamped SQL Server Data Tools



Another welcome change in SQL Server 2016 is the reconsolidation of SQL Server Data Tools (SSDT). As Microsoft worked to supplant the popular and useful Business Intelligence Development Studio (BIDS) with SQL Server Data Tools, they wound up confusing almost everyone by creating not one but two versions of SQL Server Data Tools, both of which needed to be downloaded separately from installing SQL Server itself. With the SQL Server 2016 release Microsoft has indicated that they intend to reconsolidate SQL Server Data Tools.

SQL Server Evolution 2016 Part 1








SQL Server 2016 Evolution Part 2








Microsoft Azure SQL Data Warehouse Overview 



Azure SQL Data Warehouse: Deep Dive 



More Information:

https://www.microsoft.com/en-us/server-cloud/products/sql-server-2016/

http://blogs.technet.com/b/dataplatforminsider/archive/2015/05/04/sql-server-2016-public-preview-coming-this-summer.aspx

https://msdn.microsoft.com/en-us/library/hh231622%28v=sql.130%29.aspx

http://azure.microsoft.com/en-gb/campaigns/sql-data-warehouse/

http://www.microsoft.com/en-us/server-cloud/products/sql-server-2016/?WT.srch=1&WT.mc_id=SEM_BING_USEvergreenSearch_Cloud_Cloud|Bing|SEM|DI|SQL%20Server|Brand|US_MSFT

http://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016

https://www.petri.com/windows-server/windows-server-2016

https://connect.microsoft.com/SQLServer

https://connect.microsoft.com/SQLServer/Feedback

http://devconnections.com/

http://sqlmag.com/

https://www.petri.com/

http://sqlmag.com/sql-server/sql-select-8-great-new-features-sql-server-2016#slide-0-field_images-24161

https://msdn.microsoft.com/en-us/library/dn935011%28v=sql.130%29.aspx

http://www.infoq.com/news/2015/06/SQL-Server-Stretch

Saturday, May 9, 2015

IBM BLU Accelerators for IBM DB2 10.5 and 11


IBM BLU Accelerators

Introduction

IBM DB2 with BLU: The In-memory Database for Power Systems



BLU Acceleration is a new collection of technologies for analytic queries, introduced in DB2 for Linux, UNIX, and Windows Version 10.5 (DB2 10.5). At its heart, BLU Acceleration is about providing faster answers to more questions and analyzing more data at a lower cost. DB2 with BLU Acceleration is about providing order-of-magnitude benefits in performance, storage savings, and time to value.
These goals are accomplished by using multiple complementary technologies, including:

The data is in a column store, meaning that I/O is performed only on those columns and values that satisfy a particular query.

The column data is compressed with actionable compression, which preserves order so that the data can be used without decompression, resulting in huge storage and CPU savings and a significantly higher density of useful data held in memory.

Parallel vector processing, with multi-core parallelism and single instruction, multiple data (SIMD) parallelism, provides improved performance and better utilization of available CPU resources.

Data skipping avoids the unnecessary processing of irrelevant data, thereby further reducing the I/O that is required to complete a query.



DB2 BLU Acceleration and more



These and other technologies combine to provide an in-memory, CPU-optimized, and I/O-optimized solution that is greater than the sum of its parts.
BLU Acceleration is fully integrated into DB2 10.5, so that much of how you leverage DB2 in your analytics environment today still applies when you adopt BLU Acceleration. The simplicity of BLU Acceleration changes how you implement and manage a BLU-accelerated environment. Gone are the days of having to define secondary indexes or aggregates, or having to make SQL or schema changes to achieve adequate performance.

What's new in IBM DB2 BLU?  




Four key capabilities make BLU Acceleration a next generation solution for in-memory computing:

1. BLU Acceleration does not require the entire dataset to fit in memory while still processing at lightning-fast speeds.
Instead, BLU Acceleration uses a series of patented algorithms that nimbly handle in-memory data processing. This includes the ability to anticipate and “prefetch” data just before it’s needed and to automatically adapt to keep necessary data in or close to the CPU. Add some additional CPU acceleration techniques, and you get highly efficient in-memory computing at lightning-speed.
2. BLU Acceleration works on compressed data, saving time and money.
Why waste time and CPU resources on decompressing data, analyzing it and recompressing it? Instead of all these extra steps, BLU Acceleration preserves the order of data and performs a broad range of operations—including joins and predicate evaluations—on compressed data without the need for decompression. This is another next-generation technique to speed processing, skip resource-intensive steps and add agility.
3. BLU Acceleration intelligently skips processing of data it doesn’t need to get the answers you want.
With a massive data set, chances are good that you don’t need all of the data to answer a particular query. BLU Acceleration employs a series of metadata management techniques to automatically determine which data would not qualify for analysis within a particular query, enabling large chunks of data to be skipped. This results in more agile computing, including storage savings and system hardware efficiency. What’s more, this metadata is kept updated on a real-time basis so that data changes are continually reflected in the analytics. Less data to analyze in the first place means faster, simpler and more agile in-memory computing. We call this data skipping.



4. BLU Acceleration is simple to use.
As your business users demand more analytics faster, you need in-memory computing that keeps pace. BLU Acceleration delivers optimal performance out of the box – no need for indexes, tuning, or time-consuming configuration efforts. You simply convert your row-based data to columns and run your queries. Because BLU Acceleration is seamlessly integrated with DB2, you can manage both row-based and column-based data from a single proven system, thus reducing complexity. This helps free the technical team to deliver value to the business – less routine maintenance and more innovation.
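As a minimal sketch of that create-load-query flow: the table, column, and file names below are invented, and it assumes the instance has been prepared for analytics (for example by setting the DB2_WORKLOAD=ANALYTICS registry variable, which makes column organization the default).

```sql
-- Create a column-organized table; no secondary indexes or aggregates needed.
CREATE TABLE sales
(
    sale_date  DATE,
    store_id   INTEGER,
    product_id INTEGER,
    quantity   INTEGER,
    amount     DECIMAL(12,2)
) ORGANIZE BY COLUMN;

-- Load the data (CLP command; the file name is a placeholder):
--   LOAD FROM sales.csv OF DEL INSERT INTO sales
-- ...and go: analytic queries run straight against the compressed column store.
SELECT store_id, SUM(amount) AS revenue
FROM sales
GROUP BY store_id;
```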



Simplicity in DB2 10.5 with BLU Acceleration




Fast and simple in-memory computing

Fast answers

DB2 with BLU Acceleration includes six advances for fast in-memory computing:

•In-the-moment business answers from within the transaction environment, new with the DB2 10.5 “Cancun Release”, use BLU Shadow Tables to automatically maintain a column-organized version of the row-based operational data. Analytic queries are seamlessly routed to these column-organized BLU Shadow Tables, which are ideal for fast analytic processing (a DDL sketch follows this list).

•Next-generation in-memory computing  delivers the benefits of in-memory columnar processing without the limitations or cost of in-memory only systems that require all data to be stored in system memory to achieve breakthrough performance. BLU Acceleration dynamically optimizes movement of data from storage to system memory to CPU memory (cache).  This patented IBM innovation enables BLU Acceleration to maintain in-memory performance even when active data sets are larger than system memory.

•Actionable compression preserves the order of the data, enabling compressed data in BLU Acceleration tables to be used without decompression. A broad range of operations like predicates and joins are completed on compressed data. The most frequent values are encoded with fewer bits to optimize the compression.

•CPU acceleration is designed to process a huge volume of data simultaneously by multiplying the power of the CPU. Multi-core processing, SIMD processor support and parallel data processing are all used to deeply exploit the CPU and process data with less system latency and fewer bottlenecks.

•Data skipping eliminates processing of irrelevant and duplicate data. This is accomplished by examining small sections of data to determine if it contains information that is relevant to the analytics problem at hand. Deciding on these “hot” portions of data in more granular sections means that less irrelevant data is being processed in the first place.

•Oracle SQL compatibility streamlines and reduces risk in moving data from Oracle Database to DB2 with BLU Acceleration. This leverages existing skills and investments, while taking advantage of the speed and simplicity of BLU Acceleration to deliver fast business insights.
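The shadow-table sketch referenced in the first bullet above looks roughly like this. It assumes the DB2 10.5 “Cancun Release”, a row-organized base table named sales, and that replication between the two tables is maintained by the usual change-data-capture tooling; all names are placeholders.

```sql
-- Column-organized shadow of a row-organized OLTP table (illustrative DDL).
CREATE TABLE sales_shadow AS (SELECT * FROM sales)
    DATA INITIALLY DEFERRED
    REFRESH DEFERRED
    ENABLE QUERY OPTIMIZATION
    MAINTAINED BY REPLICATION
    ORGANIZE BY COLUMN;

-- Analytic queries against "sales" can then be routed to the shadow table by
-- the optimizer, while transactions continue to hit the row-organized table.
```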

Simply delivered

IBM believes that in-memory computing should be easy on IT resources:

•Load and go set-up allows you to start deriving value from your data in a couple simple steps.  Simply create the table, load the data and go. It’s fast out of the gate – no tuning, no tweaking required.  This means you can more quickly satisfy business needs even as they change and evolve.

•One administration environment for analytics or transactional data helps ease management. BLU Acceleration is built seamlessly into DB2 10.5 for Linux, UNIX and Windows, a proven enterprise-class database. A single set of enterprise-class administration functions for either row- or column-organized data reduces complexity, while a series of automation capabilities help free IT talent for higher value projects.

IBM Accelerating Analytics with BLU



Flexible multi-platform deployment for Linux on Intel, zLinux, AIX on Power and Windows  makes the most of IT resources whether you are using existing hardware or the latest technology.  This is the only in-memory computing technology to deploy on the cloud or on multiple platforms, offering greater flexibility in meeting the need for business answers.


IBM DB2 10.5 with BLU Acceleration vs Oracle Exadata



DB2 and BLU Acceleration on Cloud Tech Talk



BLU Acceleration: Delivering Speed of Thought Analytics 

Big data poses big challenges for accessing and analyzing information. BLU Acceleration from IBM delivers speed of thought analytics that help you make better decisions faster. See BLU Acceleration’s innovative dynamic in-memory processing, actionable compression, parallel vector processing and data skipping. Learn how to get started using your existing infrastructure and skills.





New in-memory capabilities help you capitalize on business answers even more easily

Technology never stands still and BLU Acceleration is no exception! This product has been enhanced in key areas so you can:

•Gain access to the fast answers BLU Acceleration delivers on Windows and zLinux to support a broader range of organizations, as well as data mart consolidation on these new platforms

•Protect data at rest while saving administration time with native application-transparent data encryption

•Deliver in the moment business answers from within the transaction environment

•Leverage Oracle skills with SQL compatibility to enable simple, low-risk migration from Oracle database to DB2 with BLU Acceleration

•Reduce risk and improve performance of SAP environments with significant enhancements to SAP Business Warehouse support

IBM DB2 11 SQL Improvements




Take advantage of faster query processing and better data reliability by using BLU Acceleration on the POWER8 processor



Bigdata Webcast on Blu acceleration



Best practices for DB2 pureScale performance and monitoring



Let's Get Hands-on: 60 Minutes in the Cloud—Predictive Analytics Made Easy



IBM dashDB - Keeping data warehouse infrastructure out of your way




IBM DB2 with BLU Acceleration & Cognos BI - A great combo!




More Information:

DB2 Tech Talk: Deep Dive BLU Acceleration Super Easy In-memory Analytics


Join DB2 expert Sam Lightstone for an in-depth discussion of the all-new BLU Acceleration features in DB2 10.5 for Linux, UNIX and Windows. BLU Acceleration in-memory computing is designed to deliver results from data-intensive analytic workloads with speed and precision that is termed "speed of thought" analytics.

In this Tech Talk, Sam will explain the details of this ground-breaking technology such as:

•Dynamic in-memory analytics that do not require all of the data to fit in memory in order to perform analytics processing

•Parallel vector processing, driving spectacular CPU exploitation


https://www.brighttalk.com/webcast/7637/74621






DB2 Tech Talk: Introduction and Technical Tour of DB2 with BLU Acceleration


Join Distinguished Engineer and DB2 expert Berni Schiefer and host Rick Swagerman for a technical tour of the all new DB2 10.5 with BLU Acceleration in-memory technology. You will learn about new features such as:

•BLU Acceleration, for “Speed of Thought” analytics
Designed to handle data-intensive analytics workloads, BLU Acceleration extends the capabilities of traditional in-memory systems by providing in-memory performance even when the data set size exceeds the size of the memory. Learn about RAM data loading capabilities, plus “data skipping”; parallel data analysis; actionable compression for analysis without decompressing data and more.

• New DB2 pureScale capabilities that enable online rolling maintenance updates and capacity growth with no planned downtime, plus new integrated HADR capabilities to help ensure always-available transactions.

• SQL and Oracle Database compatibility refinements in DB2 10.5, helping to ensure fast, easy moves to DB2 as well as increased flexibility for DB2 applications.

• Enhancements to NoSQL technologies that are now business-ready in DB2 10.5. Although not part of the DB2 10.5 announcement, we will fill you in on other NoSQL technology introduction plans as well.

• New packaging editions of DB2 that handle either OLTP or data warehousing needs.

• DB2 tools advances that support these new functions.

Join us for this Tech Talk to find out about these exciting enhancements and how they can help you deliver the data analytics your organization needs while providing tools to keep your OLTP systems in top shape.

https://www.brighttalk.com/webcast/7637/71677







A deeper dive into dashDB - know more in a dash

dashDB is a newly announced data warehouse as a service deployed in the cloud that leverages technologies like BLU Acceleration, in-database analytics and Cloudant to allow you to focus more on the business and less on the business of IT. In this DB2 Tech Talk you will learn a little more about IBM’s cloud initiatives and the value proposition around dashDB as well as...
-dashDB’s architecture and use cases
-pricing and offerings as a service
-competitive differentiations and customer feedback

https://www.brighttalk.com/webcast/7637/140371






http://ibmdatamanagement.co/2013/08/19/how-blu-acceleration-really-works/

http://researcher.watson.ibm.com/researcher/files/us-ipandis/vldb13db2blu.pdf

DB2 with BLU Acceleration on Power Systems

Best practices Optimizing analytic workloads using DB2 10.5 with BLU Acceleration

http://www.ibmbluhub.com/get-technical/blu-whatsnew-cancun/

http://www.ibmbluhub.com

http://www.ibmbluhub.com/why-blu-acceleration/

http://www.ibmbigdatahub.com/topic/624

http://www.ibm.com/developerworks/data/library/techarticle/dm-1309db2bluaccel/

http://www.vldb.org/2014/

http://www.ibmbigdatahub.com/whitepaper/ibm-db2-blu-acceleration-ibm-power-systems-how-it-compares

http://www-01.ibm.com/software/data/db2/linux-unix-windows/db2-blu-acceleration/

http://ibmdatamag.com/2013/06/5-steps-for-migrating-data-to-ibm-db2-with-blu-acceleration/


Monday, April 13, 2015

Oracle Database Appliance X5-2




Oracle Database Appliance X5-2 Introduction



Oracle Server X5-2L is the ideal 2U platform for databases and enterprise storage solutions. Supporting the standard and enterprise editions of Oracle Database, this server delivers best-in-class database reliability in single-node configurations. With support for up to four high-bandwidth NVM Express (NVMe) flash drives, Oracle Database can be accelerated using Database Smart Flash Cache, a feature of Oracle Database. Optimized for compute, memory, I/O, and storage density simultaneously, Oracle Server X5-2L delivers extreme storage capacity at lower cost when combined with Oracle Solaris and ZFS file system compression. Each server comes with built-in, proactive fault detection, and advanced diagnostics, along with firmware that is already optimized for Oracle software, to deliver extreme reliability.










Introduction

Oracle Server X5-2, Oracle’s latest two-socket server, is the newest addition to the family of Oracle's x86 servers that are purpose-built to be best for running Oracle software. The new Oracle Server X5-2 1U system is optimal for running Oracle Database in a clustered configuration with Oracle Real Application Clusters (Oracle RAC) and other clustered database solutions, as well as enterprise applications in virtualized environments.


Oracle Big Data Appliance X5-2




Explaining Big Data: The Big Data Life Cycle | Oracle




Product Overview

Oracle Server X5-2 supports up to two Intel® Xeon® E5-2600 v3 processors. Each Intel Xeon processor provides up to 18 cores, with a core frequency of up to 2.6 GHz, and up to 45 MB of L3 cache. The server has 24 dual inline memory module (DIMM) slots and, when fully populated with twenty-four 32 GB DDR4-2133 DIMMs, provides 768 GB of memory. Memory bandwidth increases to 2,133 MT/sec per channel compared to 1,600 MT/sec in the previous generation.

In addition, Oracle Server X5-2 has four PCIe Gen3 slots (2 x16, 2 x8 lanes), four 10GBase-T ports, six USB ports, and eight 2.5-inch drive bays providing 9.6 TB of hard disk drive (HDD) storage or 3.2 TB of solid state drive (SSD) storage. An optional DVD drive is supported to allow local access for operating system installation.

The SSD drives used in Oracle Server X5-2 are SAS-3 drives with a bandwidth of 12 Gb/sec, providing double the performance of the previous generation. Oracle Server X5-2 can also be configured with up to four NVM Express (NVMe) drives from Oracle for a total of 6.4 TB of high-performance, high-endurance PCIe flash.


Best for Oracle Software

Oracle Server X5-2 systems are ideal x86 platforms for running Oracle software. Only Oracle provides customers with an optimized hardware and software stack that comes complete with a choice of OS, virtualization software, and cloud management tools – all at no extra charge. Oracle's optimized hardware and software stack has enabled a 10x performance gain in its engineered systems and has delivered world-record benchmark results. Oracle's comprehensive, open standards-based x86 systems provide the best platform on which to run Oracle software with enhanced reliability for data center environments.

In today’s connected world, vast amounts of unstructured data flow into an enterprise, creating an immediate business need to extract queryable structured data from this slew of information. Online transaction processing (OLTP) is a technology that historically has been used for traditional enterprise applications such as enterprise resource planning (ERP) and human capital management (HCM). Now OLTP finds itself in a unique position to accelerate business intelligence and analytics. As such, this places greater demands on the database, I/O, and main memory requirements in data centers. Oracle Database is designed to take advantage of hardware features such as high-core-count central processing units (CPUs), non-uniform memory access (NUMA) memory architectures, and tiered storage of data that enhance system performance.

Benefits include increased transaction throughput and improved application response
times, which reduce the overall cost per transaction.


Oracle Server X5-2, NVM Express and Oracle Database Smart Flash Cache

Oracle Database utilizes a feature called Database Smart Flash Cache. This feature is available on Oracle Linux and Oracle Solaris and allows customers to increase the effective size of the Oracle Database buffer cache without adding more main memory to the system. For transaction-based workloads, Oracle Database blocks are normally loaded into a dedicated shared memory area in main memory called the system global area (SGA). Database Smart Flash Cache allows the database buffer cache to be expanded beyond the SGA in main memory to a second-level cache on flash memory.
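Wiring the flash cache up is a small configuration change. The device path and size below are placeholders, and the feature applies on Oracle Linux and Oracle Solaris as noted above.

```sql
-- Point Database Smart Flash Cache at an NVMe device (illustrative values).
ALTER SYSTEM SET db_flash_cache_file = '/dev/nvme0n1' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 400G SCOPE = SPFILE;
-- Restart the instance; blocks aged out of the SGA buffer cache are then
-- kept in the flash cache instead of being discarded.
```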

Oracle Server X5-2 introduces a new flash technology called NVM Express that provides a high-bandwidth, low-latency PCI Express (PCIe) interface to large amounts of flash within the system. Oracle Database with Database Smart Flash Cache and Oracle Solaris ZFS are specifically engineered to take advantage of this low-latency, high-bandwidth interface to flash in Oracle Server X5-2. Oracle Solaris and Oracle Linux are co-engineered with Oracle Server X5-2 to function in enterprise-class workloads by enabling hot-pluggable capabilities.

Traditional SSDs with a SAS/SATA interface are a popular method of adding flash to a server, and these take advantage of legacy storage controller and disk cage infrastructure. NVM Express is an entirely new end-to-end design that eliminates the performance bottlenecks of using conventional storage interfaces. The new NVMe flash drives in Oracle Server X5-2 provide a high-bandwidth, low-latency flash implementation that vastly improves OLTP transaction times.

Figure 1 illustrates a block diagram of a traditional SAS-3 SSD connected to a server. The server PCIe root complex is connected to a PCIe/SAS controller that translates PCIe to SAS protocol to allow the server to read and write the SAS-3 SSD. As NVMe SSDs already use the PCIe protocol, there is no need for the PCIe/SAS controller translation, as shown in Figure 2.





Oracle’s NVMe drives have a much lower latency and higher bandwidth than standard SAS-3 drives because the drive connects directly to four lanes of PCIe Gen3 with an aggregate bandwidth of 32 Gb/sec, as opposed to 12 Gb/sec for a traditional SAS-3 SSD.

Oracle Server X5-2 can be configured with up to four NVMe small form factor (SFF) SSDs that support up to 6.4 TB of flash storage. As flash technologies are temperature sensitive, most high-performance flash drives will throttle down their I/O speeds as temperatures rise in order to protect the flash from damage. Oracle's NVMe SSDs, on the other hand, include multiple temperature sensors that are monitored by Oracle Server X5-2's Oracle Integrated Lights Out Manager (Oracle ILOM) service processor (SP) to ensure the drive maintains optimum operating temperature.

Oracle ILOM modulates the fan speed to ensure sufficient cooling for maximum system performance at all times. The benefit is that the system consistently operates at maximum performance across the full operating temperature range of the server, independent of system configuration.




Oracle Database Appliance X5-2



Exadata X5-2: Extreme Flash and Elastic Configurations



Next Generation of X5 Engineered Systems





See what the advantages of engineered systems are in this short video tutorial:

Migrate a 1TB Datawarehouse in 20 Minutes (Part 1)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 2)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 3)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 4)








More Information:

https://www.oracle.com/engineered-systems/database-appliance/index.html

https://www.oracle.com/servers/x86/x5-2/index.html

https://www.oracle.com/servers/x86/x5-2l/index.html

http://www.oracle.com/technetwork/server-storage/sun-x86/documentation/x5-2-system-architecture-2328157.pdf

http://www.exadata-certification.com/search/label/X5-2%20Oracle%20Exadata%20Machine

https://www.oracle.com/engineered-systems/database-appliance/resources.html

https://www.oracle.com/big-data/index.html
