• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support and migrations, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware and BI on Red Hat.

  • Microsoft Consulting

    Consulting services for Windows Server 2012 onwards, Windows client versions 7 and higher, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites and adaptive websites.

17 April 2016

SUSE Enterprise Linux 12 and Docker Containers

Propel your enterprise to the next level of productivity and competitiveness with SUSE Linux Enterprise 12 Service Pack 1.

Service Pack 1 further adds to SUSE Linux Enterprise making it the most interoperable platform for mission-critical computing across physical, virtual and cloud environments.

SUSE Linux Enterprise 12 Install and overview | The Advanced Foundation for Enterprise Computing  

Solutions based on SUSE Linux Enterprise 12 Service Pack 1 (SP1) feature unique Docker and hardware support along with new and updated capabilities so you can:
  • Achieve SLAs for application uptime
  • Run highly efficient data center development and operations
  • Bring innovative solutions to market faster
Docker in SUSE Linux Enterprise Server


Increase Uptime

SUSE Linux Enterprise: Downtime isn't an Option

Minimize planned and unplanned downtime and maximize service availability. Take advantage of our rugged reliability, high availability and live kernel patching to meet service-level agreements and keep your business running. Learn more about how SUSE helps you achieve 99.999% availability and move towards zero downtime.
  • SUSE Linux Enterprise runs reliably on a wide variety of hardware platforms, helping you prevent hardware-related downtime
  • Maximize service availability with high availability clustering, geo clustering and live kernel patching
  • Minimize human mistakes with a wide range of tools and services including system rollback of SUSE Linux Enterprise service packs

SUSE Linux Enterprise Server 12 Zero Downtime


Improve Operational Efficiency

Boost your efficiency by simplifying systems management and by ensuring high levels of resource utilization.
  • Stay ahead with container technologies. Take advantage of SUSE’s enterprise-ready Docker solution. See how the ecosystem of SUSE applications creates additional value for your business, so you can focus on building the apps.
  • Save time and resources with JeOS (Just enough Operating System), a lightweight Linux OS that needs fewer resources than the full OS but provides the same enterprise-grade performance and availability.
  • Reduce IT maintenance workload with easy-to-use management tools such as YaST/AutoYaST (single system management), Wicked (network management), and HAWK (cluster resource management).
  • Maximize your efficiency with virtualization technologies of Xen and KVM.

The Evolution of Linux Containers and Integration of Docker with SLES 12 

Accelerate Innovation

Harness the power of the newest CPUs on the market. Get fast, timely access to abundant open source and partner innovations. Reduce time to value through SUSE-certified quality and ease of integration.
  • Get the benefits of the latest open source innovation sooner by updating with modules
  • Get partner innovation quickly through SUSE SolidDriver Program
  • Reduce time to value through SUSE certifications for hardware and applications



Welcome Docker to SUSE Linux Enterprise Server

Lightweight virtualization is a hot topic these days. Also called “operating system-level virtualization,” it allows you to run multiple applications or systems on one host without a hypervisor. The advantage is obvious: eliminating the hypervisor, the layer between the host hardware and the operating system and its applications, allows much more efficient use of resources. That, in turn, reduces the virtualization overhead while still allowing separation and isolation of multiple tasks on one host. As a result, lightweight virtualization is very appealing in environments where resource use is critical, such as the server hosting or outsourcing business.



One specific example of operating system-level virtualization is Linux Containers, sometimes called “LXC” for short. We introduced Linux Containers to SUSE customers and users in February 2012 as part of SUSE Linux Enterprise Server 11 SP2. Linux Containers employ kernel techniques such as Control Groups (cgroups), which isolate and control CPU, memory, network and block I/O resources, and namespaces, which isolate the process view of the operating system, including users, processes and file systems. That provides advantages similar to those of “regular” virtualization technologies such as KVM or Xen, but with much smaller I/O overhead, storage savings and the ability to apply dynamic parameter changes without rebooting the system. The Linux Containers infrastructure is supported in SUSE Linux Enterprise 11 and will remain supported in SUSE Linux Enterprise 12.

Full system roll-back and systemd in SUSE 

Now we are taking the next step to further enhance our virtualization strategy and introduce you to Docker. Docker is built on top of Linux Containers with the aim of providing an easy way to deploy and manage applications. It packages the application, including its dependencies, in a container, which then runs like a virtual machine. Such packaging allows for application portability between various hosts, not only across one data center but also to the cloud. And starting with SUSE Linux Enterprise Server 12, we plan to make Docker available to our customers so they can start using it to build and run their containers.

SUSE Linux Enterprise Live Patching Roadmap: Live Kernel Patching using kGraft   

This is another step in enhancing the SUSE virtualization story, building on top of what we have already done with Linux Containers. Leveraging the SUSE ecosystem, Docker and Linux Containers are not only a great way to build, deploy and manage applications; the idea also plugs nicely into tools like Open Build Service and Kiwi for easy and powerful image building, or SUSE Studio, which already offers a similar concept for virtual machines. Docker easily supports rapid prototyping and a fast deployment process; combined with Open Build Service, it is thus a great tool for developers aiming to support various platforms with a unified tool chain. This is critical for the future because those platforms extend readily to clouds: public, private and hybrid. Combining Linux Containers, Docker, SUSE’s development and deployment infrastructures, and SUSE Cloud, our OpenStack-based cloud infrastructure offering, brings flexibility in application deployment to a completely new level.

SUSE Linux Enterprise High Availability Roadmap: Secure your Data and Service from Local to Geo 


Introducing Docker follows the SUSE philosophy by offering choice in the virtualization space, allowing for flexibility, performance and simplicity for Linux in data centers and the cloud.

Securing Your System: Hardening and Tweaking SUSE Linux Enterprise Server 12

More Information:

SUSE Embedded Offers a Medical Device Operating System 

15 March 2016

Oracle Big Data in the Enterprise

Oracle Enterprise Big Data

Oracle is the first vendor to offer a complete and integrated solution to address the full spectrum of enterprise big data requirements. Oracle’s big data strategy is centered on the idea that you can evolve your current enterprise data architecture to incorporate big data and deliver business value. By evolving your current enterprise architecture, you can leverage the proven reliability, flexibility and performance of your Oracle systems to address your big data requirements.
Defining Big Data

Big data typically refers to the following types of data:

  • Traditional enterprise data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
  • Machine-generated/sensor data – includes Call Detail Records (“CDRs”), weblogs, smart meters, manufacturing sensors, equipment logs (often referred to as digital exhaust) and trading systems data.
  • Social data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, there are four key characteristics that define big data:

  • Volume. Machine-generated data is produced in much larger quantities than non-traditional data. For instance, a single jet engine can generate 10TB of data in 30 minutes. With more than 25,000 airline flights per day, the daily volume of just this single data source runs into the Petabytes. Smart meters and heavy industrial equipment like oil refineries and drilling rigs generate similar data volumes, compounding the problem.
  • Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day).
  • Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. As new services are added, new sensors deployed, or new marketing campaigns executed, new data types are needed to capture the resultant information.
  • Value. The economic value of different data varies significantly. Typically there is good information hidden amongst a larger body of non-traditional data; the challenge is identifying what is valuable and then transforming and extracting that data for analysis.

Big Data gets Real time with Oracle Fast Data




To make the most of big data, enterprises must evolve their IT infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and integrate them with the pre-existing enterprise data to be analyzed.

Building a Big Data Platform

As with data warehousing, web stores or any IT platform, an infrastructure for big data has unique requirements. In considering all the components of a big data platform, it is important to remember that the end goal is to easily integrate your big data with your enterprise data to allow you to conduct deep analytics on the combined data set.



Infrastructure Requirements

The requirements in a big data infrastructure span data acquisition, data organization and data analysis.

Acquire Big Data

The acquisition phase is one of the major changes in infrastructure from the days before big data. Because big data refers to data streams of higher velocity and higher variety, the infrastructure required to support the acquisition of big data must deliver low, predictable latency in both capturing data and in executing short, simple queries; be able to handle very high transaction volumes, often in a distributed environment; and support flexible, dynamic data structures.



NoSQL databases are frequently used to acquire and store big data. They are well suited for dynamic data structures and are highly scalable. The data stored in a NoSQL database is typically of a high variety because the systems are intended to simply capture all data without categorizing and parsing the data into a fixed schema.

For example, NoSQL databases are often used to collect and store social media data. While customer-facing applications frequently change, underlying storage structures are kept simple. Instead of designing a schema with relationships between entities, these simple structures often just contain a major key to identify the data point and a content container holding the relevant data (such as a customer ID and a customer profile). This simple and dynamic structure allows changes to take place without costly reorganizations at the storage layer (such as adding new fields to the customer profile).
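
To make that shape concrete, here is a minimal sketch in SQL of the key-value pattern just described (the table and column names are hypothetical, and a real NoSQL store exposes a put/get API rather than DDL; the point is the stored shape, not the interface):

    -- One major key identifies the data point; one opaque content container
    -- holds the relevant data. Adding new fields to the profile changes the
    -- payload, not the storage structure.
    CREATE TABLE customer_profiles (
        customer_id  VARCHAR(64) PRIMARY KEY,  -- major key, e.g. a customer ID
        profile_doc  CLOB                      -- content container, e.g. the profile
    );

Because lookups address the major key only, reads stay fast and schema churn never forces a reorganization at the storage layer.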

Organize Big Data

In classical data warehousing terms, organizing data is called data integration. Because there is such a high volume of big data, there is a tendency to organize data at its initial destination location, thus saving both time and money by not moving around large volumes of data. The infrastructure required for organizing big data must be able to process and manipulate data in the original storage location; support very high throughput (often in batch) to deal with large data processing steps; and handle a large variety of data formats, from unstructured to structured.



Hadoop is a new technology that allows large data volumes to be organized and processed while keeping the data on the original data storage cluster. For example, the Hadoop Distributed File System (HDFS) can serve as the long-term storage system for web logs. These web logs are turned into browsing behavior (sessions) by running MapReduce programs on the cluster and generating aggregated results on the same cluster. These aggregated results are then loaded into a relational DBMS.

Analyze Big Data

Since data is not always moved during the organization phase, the analysis may also be done in a distributed environment, where some data will stay where it was originally stored and be transparently accessed from a data warehouse. The infrastructure required for analyzing big data must be able to support deeper analytics such as statistical analysis and data mining, on a wider variety of data types stored in diverse systems; scale to extreme data volumes; deliver faster response times driven by changes in behavior; and automate decisions based on analytical models.

Big Data Discovery: Unlock the Potential in a Big Data Reservoir





Most importantly, the infrastructure must be able to integrate analysis on the combination of big data and traditional enterprise data. New insight comes not just from analyzing new data, but from analyzing it within the context of the old to provide new perspectives on old problems.



For example, analyzing inventory data from a smart vending machine in combination with the events calendar for the venue in which the vending machine is located will dictate the optimal product mix and replenishment schedule for the vending machine.

Statistics and Predictive Analytics in Oracle Database and Hadoop



Solution Spectrum

Many new technologies have emerged to address the IT infrastructure requirements outlined above. At last count, there were over 120 open source key-value databases for acquiring and storing big data, while Hadoop has emerged as the primary system for organizing big data, and relational databases maintain their footprint as data warehouses while expanding their reach into less structured data sets. These new systems have created a divided solutions spectrum comprising:

  • Not Only SQL (NoSQL) solutions: developer-centric specialized systems
  • SQL solutions: the world typically equated with the manageability, security and trusted nature of relational database management systems (RDBMS)

NoSQL systems are designed to capture all data without categorizing and parsing it upon entry into the system, and therefore the data is highly varied. SQL systems, on the other hand, typically place data in well-defined structures and impose metadata on the data captured to ensure consistency and validate data types.



Distributed file systems and transaction (key-value) stores are primarily used to capture data and are generally in line with the requirements discussed earlier in this paper. To interpret and distill information from the data in these solutions, a programming paradigm called MapReduce is used. MapReduce programs are custom-written programs that run in parallel on the distributed data nodes.

The key-value stores or NoSQL databases are the OLTP databases of the big data world; they are optimized for very fast data capture and simple query patterns. NoSQL databases are able to provide very fast performance because the captured data is quickly stored with a single identifying key rather than being interpreted and cast into a schema. By doing so, NoSQL databases can rapidly store large numbers of transactions.

However, due to the changing nature of the data in the NoSQL database, any data organization effort requires programming to interpret the storage logic used. This, combined with the lack of support for complex query patterns, makes it difficult for end users to distill value out of data in a NoSQL database.

To get the most from NoSQL solutions and turn them from specialized, developer-centric solutions into solutions for the enterprise, they must be combined with SQL solutions into a single proven infrastructure that meets the manageability and security requirements of today’s enterprises.

Oracle’s Big Data Solution

Oracle is the first vendor to offer a complete and integrated solution to address the full spectrum of enterprise big data requirements. Oracle’s big data strategy is centered on the idea that you can extend your current enterprise information architecture to incorporate big data.

Oracle big data appliance and solutions





New big data technologies, such as Hadoop and Oracle NoSQL database, run alongside your Oracle data warehouse to deliver business value and address your big data requirements.

Ask the Oracle Experts Big Data Analytics with Oracle Advanced Analytics



Oracle Big Data Appliance

Oracle Big Data Appliance is an engineered system that combines optimized hardware with a comprehensive big data software stack to deliver a complete, easy-to-deploy solution for acquiring and organizing big data.

Oracle Big Data Appliance comes in a full rack configuration with 18 Sun servers for a total storage capacity of 648TB. Every server in the rack has 2 CPUs, each with 8 cores, for a total of 288 cores per full rack. Each server has 64GB of memory for a total of 1152GB of memory per full rack.

Oracle big data appliance and solutions




Oracle Big Data Appliance includes a combination of open source software and specialized software developed by Oracle to address enterprise big data requirements.

The Oracle Big Data Appliance software includes:

  • Full distribution of Cloudera’s Distribution including Apache Hadoop (CDH4)
  • Oracle Big Data Appliance Plug-In for Enterprise Manager
  • Cloudera Manager to administer all aspects of Cloudera CDH
  • Oracle distribution of the statistical package R
  • Oracle NoSQL Database Community Edition
  • Oracle Enterprise Linux operating system and Oracle Java VM

Big Data: Myths and Realities



Oracle NoSQL Database

Oracle NoSQL Database is a distributed, highly scalable, key-value database based on Oracle Berkeley DB. It delivers a general-purpose, enterprise-class key-value store, adding an intelligent driver on top of distributed Berkeley DB. This intelligent driver keeps track of the underlying storage topology, shards the data and knows where data can be placed with the lowest latency. Unlike competitive solutions, Oracle NoSQL Database is easy to install, configure and manage, supports a broad set of workloads, and delivers enterprise-class reliability backed by enterprise-class Oracle support.



The primary use cases for Oracle NoSQL Database are low-latency data capture and fast querying of that data, typically by key lookup. Oracle NoSQL Database comes with an easy-to-use Java API and a management framework. The product is available in both an open source community edition and a priced enterprise edition for large distributed data centers. The community edition is installed as part of the Big Data Appliance integrated software.

Oracle Big Data Connectors

While Oracle Big Data Appliance makes it easy for organizations to acquire and organize new types of data, Oracle Big Data Connectors tightly integrate the big data environment with Oracle Exadata and Oracle Database, so that you can analyze all of your data together with extreme performance.

The Oracle Big Data Connectors consist of four components:

1. Oracle Loader for Hadoop

Oracle Loader for Hadoop (OLH) enables users to use Hadoop MapReduce processing to create optimized data sets for efficient loading and analysis in Oracle Database 11g. Unlike other Hadoop loaders, it generates Oracle internal formats to load data faster and use fewer database system resources. OLH is added as the last step in the MapReduce transformations as a separate map – partition – reduce step. This last step uses the CPUs in the Hadoop cluster to format the data into Oracle’s internal database formats, allowing for lower CPU utilization and higher data ingest rates on the Oracle Database platform. Once loaded, the data is permanently available in the database, providing very fast access for general database users leveraging SQL or business intelligence tools.


2. Oracle SQL Connector for Hadoop Distributed File System

Oracle SQL Connector for Hadoop Distributed File System (HDFS) is a high speed connector for accessing data on HDFS directly from Oracle Database. Oracle SQL Connector for HDFS gives users the flexibility of querying data from HDFS at any time, as needed by their application.

Oracle Big Data SQL - Create Value with Data




It allows the creation of an external table in Oracle Database, enabling direct SQL access to data stored in HDFS. The data stored in HDFS can then be queried via SQL, joined with data stored in Oracle Database, or loaded into Oracle Database. Access to the data on HDFS is optimized for fast data movement and parallelized, with automatic load balancing. Data on HDFS can be in delimited files or in Oracle Data Pump files created by Oracle Loader for Hadoop.
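
As a hedged sketch of what such a definition looks like (all names here are hypothetical, and in practice the connector’s ExternalTable tool generates the DDL and location files for you):

    -- External table over delimited files in HDFS, Oracle SQL Connector for
    -- HDFS style. The preprocessor streams HDFS content to the access driver.
    CREATE TABLE weblogs_ext (
        log_time  TIMESTAMP,
        user_id   VARCHAR2(64),
        url       VARCHAR2(2000)
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY weblog_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR osch_bin_path:'hdfs_stream'
            FIELDS TERMINATED BY ','
        )
        LOCATION ('osch-weblogs-1.xml')   -- connector-generated location file
    )
    REJECT LIMIT UNLIMITED;

    -- HDFS data can now be queried and joined like any other table:
    SELECT c.cust_name, COUNT(*)
    FROM   weblogs_ext w
    JOIN   customers   c ON c.cust_id = w.user_id
    GROUP  BY c.cust_name;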

3. Oracle Data Integrator Application Adapter for Hadoop

Hortonworks Oracle Big Data Integration




Oracle Data Integrator Application Adapter for Hadoop simplifies data integration between Hadoop and Oracle Database through Oracle Data Integrator’s easy-to-use interface. Once the data is accessible in the database, end users can use SQL and Oracle BI Enterprise Edition to access it.

Hortonworks and Oracle Data Transformation and Acquisition Techniques to Handle Petabytes of Data




Enterprises that are already using a Hadoop solution, and don’t need an integrated offering like Oracle Big Data Appliance, can integrate data from HDFS using Big Data Connectors as a stand-alone software solution.

4. Oracle R Connector for Hadoop

Oracle R Connector for Hadoop is an R package that provides transparent access to Hadoop and to data stored in HDFS.



R Connector for Hadoop provides users of the open-source statistical environment R with the ability to analyze data stored in HDFS, and to scalably run R models against large volumes of data leveraging MapReduce processing, without requiring R users to learn yet another API or language. End users can leverage over 3500 open source R packages to analyze data stored in HDFS, while administrators do not need to learn R to schedule R MapReduce models in production environments.

Big Data Discovery: Unlock the Potential in a Big Data Reservoir




R Connector for Hadoop can optionally be used together with the Oracle Advanced Analytics Option for Oracle Database. The Oracle Advanced Analytics Option enables R users to work transparently with database-resident data without having to learn SQL or database concepts, with R computations executing directly in-database.

Oracle Data Integration 




In-Database Analytics

Once data has been loaded from Oracle Big Data Appliance into Oracle Database or Oracle Exadata, end users can use one of the following easy-to-use tools for in-database, advanced analytics; a small SQL scoring example follows the list:


  • Oracle R Enterprise – Oracle’s version of the widely used Project R statistical environment enables statisticians to use R on very large data sets without any modification to the end-user experience. Examples of R usage include predicting airline delays at a particular airport and the submission of clinical trial analyses and results.
  • In-Database Data Mining – the ability to create complex models and deploy these on very large data volumes to drive predictive analytics. End-users can leverage the results of these predictive models in their BI tools without the need to know how to build the models. For example, regression models can be used to predict customer age based on purchasing behavior and demographic data.
  • In-Database Text Mining – the ability to mine text from micro blogs, CRM system comment fields and review sites combining Oracle Text and Oracle Data Mining. An example of text mining is sentiment analysis based on comments. Sentiment analysis tries to show how customers feel about certain companies, products or activities.
  • In-Database Graph Analysis – the ability to create graphs and connections between various data points and data sets. Graph analysis creates, for example, networks of relationships that determine the value of a customer’s circle of friends. When looking at customer churn, customer value is then based on the value of the customer’s network, rather than on just the value of the individual customer.
  • In-Database Spatial – the ability to add a spatial dimension to data and show data plotted on a map. This ability enables end users to understand geospatial relationships and trends much more efficiently. For example, spatial data can visualize a network of people and their geographical proximity. Customers who are in close proximity can readily influence each other’s purchasing behavior, an opportunity which can be easily missed if spatial visualization is left out.
  • In-Database MapReduce – the ability to write procedural logic and seamlessly leverage Oracle Database parallel execution. In-database MapReduce allows data scientists to create high-performance routines with complex logic. In-database MapReduce can be exposed via SQL. Examples of leveraging in-database MapReduce are sessionization of weblogs or organization of Call Detail Records (CDRs).
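
To make the data mining bullet concrete, here is a hedged sketch using Oracle’s SQL scoring functions PREDICTION and PREDICTION_PROBABILITY (the model churn_svm and the customers table are hypothetical; the model itself would be trained beforehand with Oracle Data Mining):

    -- Score every customer against a previously trained classification model.
    SELECT cust_id,
           PREDICTION(churn_svm USING *)             AS likely_to_churn,
           PREDICTION_PROBABILITY(churn_svm USING *) AS churn_probability
    FROM   customers
    ORDER  BY churn_probability DESC;

Because scoring is just SQL, any BI tool can consume the results without knowing how the model was built.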

Getting Started with Oracle Big Data Discovery v1.0



Every one of the analytical components in Oracle Database is valuable. Combining these components creates even more value to the business. Leveraging SQL or a BI Tool to expose the results of these analytics to end users gives an organization an edge over others who do not leverage the full potential of analytics in Oracle Database.

Big Data Analytics using Oracle Advanced Analytics 12c and Big Data SQL



Connections between Oracle Big Data Appliance and Oracle Exadata are via InfiniBand, enabling high-speed data transfer for batch or query workloads. Oracle Exadata provides outstanding performance in hosting data warehouses and transaction processing databases.

Oracle Active Data Guard with Axxana Phoenix for Disaster Recovery Zero Transactions Loss



Now that the data is in mass-consumption format, Oracle Exalytics can be used to deliver the wealth of information to the business analyst. Oracle Exalytics is an engineered system providing speed-of-thought data access for the business community. It is optimized to run Oracle Business Intelligence Enterprise Edition with in-memory aggregation capabilities built into the system.

Oracle Big Data Appliance, in conjunction with Oracle Exadata Database Machine and the new Oracle Exalytics Business Intelligence Machine, delivers everything customers need to acquire, organize, analyze and maximize the value of Big Data within their enterprise.

Conclusion

Analyzing new and diverse digital data streams can reveal new sources of economic value, provide fresh insights into customer behavior and identify market trends early on. But this influx of new data creates challenges for IT departments. To derive real business value from big data, you need the right tools to capture and organize a wide variety of data types from different sources, and to be able to easily analyze it within the context of all your enterprise data. By using the Oracle Big Data Appliance and Oracle Big Data Connectors in conjunction with Oracle Exadata, enterprises can acquire, organize and analyze all their enterprise data – including structured and unstructured – to make the most informed decisions.

More Information:

Oracle Enterprise Big Data  https://www.oracle.com/big-data/index.html

Oracle Big Data SQL    http://www.oracle.com/us/products/database/big-data-sql/overview/index.html

Oracle NoSQL Database
http://www.oracle.com/us/products/database/nosql/overview/index.html
http://www.oracle.com/technetwork/database/database-technologies/nosqldb/overview/index.html

Big Data for Data Scientists  https://www.oracle.com/big-data/roles/data-scientist.html

Oracle: Big Data for the Enterprise   http://www.oracle.com/us/products/database/big-data-for-enterprise-519135.pdf

Oracle Big Data Appliance  https://www.oracle.com/engineered-systems/big-data-appliance/index.html

Oracle Big Data Solutions  https://www.oracle.com/big-data/solutions/index.html

Big Data in the Cloud   https://cloud.oracle.com/bigdata

Oracle Big Data Lite Virtual Machine  http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html

Oracle Big Data Discovery  https://cloud.oracle.com/bigdatadiscovery

Oracle Data Integrator https://blogs.oracle.com/dataintegration/entry/announcing_oracle_data_integrator_for

Oracle GoldenGate for Big Data https://blogs.oracle.com/dataintegration/entry/oracle_goldengate_for_big_data

Demystifying Big Data for Oracle Professionals  http://arup.blogspot.nl/2013/06/demystifying-big-data-for-oracle.html

How to Implement a Big Data System http://www.oracle.com/technetwork/articles/bigdata/implementing-bigdata-1502704.html

Oracle Active Data Guard with Axxana Phoenix for Disaster Recovery Zero Transactions Loss http://www.axxana.com/resources/videos/

09 February 2016

Red Hat Enterprise Linux 7.2 deployment of container-based applications


Red Hat Drives Networking, Linux Container Innovation in Latest Version of Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7.2 boosts network performance and delivers additional enhancements to support the development and deployment of container-based applications

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 7.2, the latest release of Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 7.2 continues Red Hat's goal of redefining the enterprise operating system by providing a trusted path towards the future of information technology without compromising the needs of the modern enterprise.





New features and capabilities focus on security, networking, and system administration, along with a continued emphasis on enterprise-ready tooling for the development and deployment of Linux container-based applications. In addition, Red Hat Enterprise Linux 7.2 includes compatibility with the new Red Hat Insights, an add-on operational analytics offering designed to increase IT efficiency and reduce downtime through the proactive identification of known risks and technical issues.



Red Hat Enterprise Linux 7 Atomic Host & Containers




 
Security
Retaining Red Hat's commitment to security, including meeting the needs of financial, government, and military customers, Red Hat Enterprise Linux 7.2 continues to provide new security capabilities and features. True security requires both a secure foundation and secure configuration of systems. OpenSCAP is an implementation of the Security Content Automation Protocol that analyzes a system for security compliance. The new OpenSCAP Anaconda plug-in allows use of SCAP based security and configuration analysis during the installation process, ensuring a secure starting point for system deployment.

Container security: Do containers actually contain? Should you care? - 2015 Red Hat Summit



A critical part of secure distributed systems is being able to trust the address resolution performed by DNS servers. DNSSEC extends DNS to provide a secure chain of trust for address resolution. The Red Hat Identity Management system (IdM) now supports DNSSEC for DNS zones.

Beyond Containers: Agility and Security in Docker Delivery




Networking
Networking performance in Red Hat Enterprise Linux 7.2 has been significantly improved, with throughput doubled in many network function virtualization (NFV) and software-defined networking (SDN) use cases. Other enhancements to the kernel networking subsystem include:

  • Tuning of the network kernel stack to dramatically improve packet processing time, enabling Red Hat Enterprise Linux 7.2 to perform at physical line rates in advanced (virtual and containerized) workloads.
  • Inclusion of the Data Plane Development Kit (DPDK), which makes it possible to rapidly develop low-latency, high-throughput custom applications capable of direct packet processing in user space for NFV and other use cases. Prior to this enhancement, systems were limited to running only one type of application (DPDK-enabled or traditional-network-enabled). Enhancements in Red Hat Enterprise Linux 7.2, specifically the introduction of a new bifurcated driver, now allow both types of applications to be hosted on the same system, thus consolidating physical hardware.
  • The addition of Data Center TCP (DCTCP), a feature for solving TCP congestion problems in data centers that works smoothly across Windows- and Red Hat Enterprise Linux-based hosts to maximize throughput and efficiency.



Linux Containers
Red Hat Enterprise Linux 7.2 features many improvements to the underlying container support infrastructure. Updates are included for the docker engine, Kubernetes, Cockpit and the Atomic command. In addition, Red Hat Enterprise Linux Atomic Host 7.2, the latest version of Red Hat's container workload-optimized host platform, is available with most Red Hat Enterprise Linux 7.2 subscriptions.



Super privileged containers - 2015 Red Hat Summit



Also available today is the beta of the Red Hat Container Development Kit 2, a collection of images, tools, and documentation to help application developers simplify the creation of container-based applications that are certified for deployment on Red Hat container hosts, including Red Hat Enterprise Linux 7.2, Red Hat Enterprise Linux Atomic Host 7.2 and OpenShift Enterprise 3.

System Administration
As managing the modern datacenter at scale becomes increasingly complex, Red Hat Enterprise Linux 7.2 includes new and improved tools to deliver a more streamlined system administration experience. Highlighting these updates is the inclusion of Relax-and-Recover, a system archiving tool that enables administrators to create local backups in ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations.



Red Hat Insights
Red Hat Enterprise Linux 7.2 is compatible with Red Hat Insights, an operational analytics service designed for the proactive management of Red Hat Enterprise Linux environments. Available for up to 10 Red Hat Enterprise Linux 7 systems at no additional cost, the offering is designed to help customers detect technical issues before they impact business operations by analyzing infrastructure assets and identifying key risks and vulnerabilities through continuous monitoring and analysis. Red Hat Insights provides resolution steps to help IT managers and administrators respond to these issues and potentially prevent future problems.



Red Hat Enterprise Linux Server for ARM 7.2 Development Preview
Red Hat is also making available Red Hat Enterprise Linux Server for ARM 7.2 Development Preview, which was first made available to partners and their customers in June 2015. This Development Preview enables new partner hardware and additional features for the ARM architecture.



Process-driven application development using Red Hat JBoss BPM Suite - 2015 Red Hat Summit



Supporting Quote
Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat
“With the launch of Red Hat Enterprise Linux 7 in June 2014, Red Hat redefined the enterprise open source operating system. Red Hat Enterprise Linux 7.2 continues this effort, delivering new capabilities for containerized application deployments and significant networking enhancements while retaining our focus on delivering a stable, reliable and more secure platform for the most critical of business applications.”


More Information:

http://www.certdepot.net/rhel7-get-started-linux-containers/

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-atomic-host/version-7/getting-started-guide/

https://access.redhat.com/documentation/en/red-hat-enterprise-linux-atomic-host/version-7/getting-started-with-containers/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/sect-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Linux_Containers_with_Docker_Format-Using_Docker.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.0_Release_Notes-Linux_Containers_with_Docker_Format.html

http://rhelblog.redhat.com/tag/containers/

http://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/openstack-at-cisco/linux-containers-white-paper-cisco-red-hat.pdf

http://blog.globalknowledge.com/technology/the-importance-of-app-isolation-containers-and-docker-in-red-hat-enterprise-linux-7/

http://www.unixarena.com/category/redhat-linux/linux-kvm/page/2

https://www.sdxcentral.com/products/red-hat-enterprise-linux-atomic-host/

https://www.sdxcentral.com/flow/containers/

http://research.redhat.com/category/projects/project-updates/

19 January 2016

SQL Server 2016 for real time operational analytics





SQL Server, an industry leader, now packs an even bigger punch

With the upcoming release of SQL Server 2016, our best SQL Server release in history, and the recent availability of the Cortana Analytics Suite, Microsoft is offering unmatched innovation across on-premises and the cloud to help you turn data into intelligent action.

What's New in SQL Server 2016



In the recent Gartner Magic Quadrant for Operational Database Management Systems Microsoft is positioned as a leader, highest in execution and furthest in vision. SQL Server 2016 builds on this leadership, and will come packed with powerful built-in features. As the least vulnerable database for six years in a row, SQL Server 2016 offers security that no other database can match. It also has the data warehouse with the highest price-performance, and offers end-to-end mobile BI solutions on any device at a fraction of the cost of other vendors. It provides tools to go beyond BI with in-database Advanced Analytics, integrating the R language and scalable analytics functions from our recent acquisition of Revolution Analytics.



Microsoft’s cloud-first product development model means that new features get hardened at scale in the cloud, delivering proven on-premises experience. In addition, we offer a consistent experience across on-premises and cloud with common development and management tools and common T-SQL.

Security with Always Encrypted

The Always Encrypted feature in SQL Server 2016 CTP 3.0, an industry first, is based on technology from Microsoft Research and helps protect data at rest and in motion. Using Always Encrypted, SQL Server can perform operations on encrypted data and, best of all, the encryption key resides with the application in the customer’s trusted environment. It offers unparalleled security.
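
As a hedged sketch of how a column is declared for Always Encrypted (the table and key names are hypothetical; the column master key and column encryption key are provisioned beforehand, typically through the SSMS wizard mentioned below):

    -- The SSN column is encrypted client-side by the driver; the server only
    -- ever sees ciphertext. DETERMINISTIC encryption allows equality lookups,
    -- which is why the character column uses a BIN2 collation.
    CREATE TABLE dbo.Patients (
        PatientId INT IDENTITY(1,1) PRIMARY KEY,
        SSN CHAR(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH (
                COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                ENCRYPTION_TYPE = DETERMINISTIC,
                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            ),
        LastName NVARCHAR(50) NULL
    );

Randomized encryption is stronger but not searchable; deterministic encryption trades some protection for the ability to use the column in point lookups and joins.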

One example of a customer that’s already benefitting from this new feature is Financial Fabric, an ISV that offers a service called DataHub to hedge funds. The service enables a hedge fund to collect data ranging from transactions to accounting and portfolio positions from multiple parties such as prime brokers and fund administrators, store it all in one central location, and make it available via reports and dashboards.

“Data protection is fundamental to the financial services industry and our stakeholders, but it can cause challenges with data driven business models,” said Subhra Bose, CEO, Financial Fabric. “Always Encrypted enables the storage and processing of sensitive data within and outside of business boundaries, without compromising data privacy in both on-premises and cloud databases. At Financial Fabric we are providing DataHub services with “Privacy by Design” for our client’s data, thanks to Always Encrypted in SQL Server 2016. We see this as a huge competitive advantage because this technology enables data science in Financial Services and gives us the tools to ensure we are compliant with jurisdictional regulations.”


Always Encrypted updates in CTP3 include the following; please see the SSMS team blog for additional detail.

  • Encrypting columns and key management made easy with new UI in SSMS
  • Encrypt Columns Wizard
  • Key management/rotation workflows
  • Azure Key Vault support
  • Integration with hardware security modules (.NET 4.6.1) and Azure Key Vault

Mission Critical Performance

With an expanded surface area, you can use the high-performance In-Memory OLTP technology in SQL Server with a significantly greater number of applications. We are excited to introduce the unique capability to combine in-memory analytics (columnstore) with In-Memory OLTP and the traditional relational store in the same database to achieve real-time operational analytics; a hedged sketch of the combination follows. We have also made significant performance and scale improvements across all components in the SQL Server core engine.
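
The sketch below (table name hypothetical) shows a memory-optimized OLTP table that also carries a clustered columnstore index, so analytic queries run against live transactional data without a separate ETL step:

    -- Requires a database with a memory-optimized filegroup.
    CREATE TABLE dbo.SalesOrders (
        OrderId   BIGINT    NOT NULL PRIMARY KEY NONCLUSTERED,
        OrderDate DATETIME2 NOT NULL,
        Amount    MONEY     NOT NULL,
        INDEX ix_cs CLUSTERED COLUMNSTORE  -- in-memory analytics over the same rows
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);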

Mission Critical features in SQL Server 2016 


Insights on All Your Data

You’ll find significant improvements in both SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) that help deliver business insights faster and improve productivity for BI developers and analysts. The enhanced DirectQuery enables high-performing access to external data sources like SQL Server Columnstore. This capability enhances the use of SSAS as a semantic model over your data for consistency across reporting and analysis without storing the data in Analysis Services.

SQL Server 2016



SQL Server Reporting Services 2016 offers a modernized experience for paginated reports and updated tools as well as new capabilities to more easily design stunning documents. To get more from your investments in SSRS and to provide easy access to on-premises reports to everyone in your organization, you can now pin paginated reports items to the Power BI dashboard. In coming months, we will add new Mobile BI capabilities to Reporting Services, allowing you to create responsive, interactive BI reports optimized for mobile devices.

PolyBase, available today with the Analytics Platform System, is now built into SQL Server, expanding the power to extract value from unstructured and structured data using your existing T-SQL skills. PolyBase improvements in CTP 3.0 include better performance and the ability to scale out across PolyBase nodes using other SQL Server instances. See also: PolyBase in APS – Yet another SQL over Hadoop solution.



PolyBase in CTP3 includes the following new capabilities; a hedged T-SQL sketch follows the list:

  • Improved PolyBase query performance with scale-out computation on external data (PolyBase scale-out groups)
  • Improved PolyBase query performance with faster data movement from HDFS to SQL Server and between PolyBase Engine and SQL Server
  • Support for exporting data to external data source via INSERT INTO EXTERNAL TABLE SELECT FROM TABLE
  • Support for push-down computation to Hadoop for string operations (compare, LIKE)
  • Support for ALTER EXTERNAL DATA SOURCE statement
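
As a hedged T-SQL sketch of the workflow (all names are hypothetical; the external data source and file format are one-time setup, after which HDFS files are queryable like tables):

    -- One-time plumbing: where the Hadoop cluster lives and how files are parsed.
    CREATE EXTERNAL DATA SOURCE MyHadoop
    WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT,
          FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

    -- HDFS directory exposed as a table; joins with local tables just work.
    CREATE EXTERNAL TABLE dbo.ClickStream (
        ClickTime DATETIME2,
        Url       NVARCHAR(400)
    )
    WITH (LOCATION = '/clickstream/2016/',
          DATA_SOURCE = MyHadoop,
          FILE_FORMAT = CsvFormat);

    -- The new export path noted in the list above, in the same idiom
    -- (may require enabling the 'allow polybase export' configuration option):
    INSERT INTO dbo.ClickStream
    SELECT ClickTime, Url FROM dbo.ClickStream_Archive;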

SQL Server 2016 CTP Technical Deep Dive

PolyBase with Freedom of Choice

Openness

One important key differentiator of PolyBase compared to existing competitive approaches is ‘openness’. We do not force users to decide on a single solution, as some Hadoop providers do. With PolyBase, you have the freedom to use an HDInsight region as part of your APS appliance, to query an external Hadoop cluster connected to APS, or to leverage Azure services from your APS appliance (such as HDInsight on Azure).

To achieve this openness, PolyBase offers these three building blocks.


Advanced Analytics

Advanced Analytics (RRE integration)

With this release, we are very excited to announce the public availability of SQL Server R Services in SQL Server 2016, an advanced analytics capability that supports enterprise-scale data science, significantly reducing the friction of adopting machine learning in your business. SQL Server R Services is all about helping customers embrace the highly popular open source R language in their business. R is the most popular programming language for advanced analytics.

SQL Server 2016 Business Intelligence




You can use it to analyze data, uncover patterns and trends and build predictive models. It offers an incredibly rich set of packages and a vibrant and fast-growing developer community. At the same time, embracing R in an enterprise setting presents certain challenges, especially as the volume of data rises and with the switch from modeling to production environments. Microsoft SQL Server R Services with in-database analytics helps customers embrace this technology by supporting several scenarios. Two of the key scenarios are:

One: Data Exploration and Predictive Modeling with R over SQL Server data

The data scientist can choose to analyze the data in-database, or to pull data from SQL Server and analyze it on the client machine (or a separate server). Analyzing data in-database has the advantages of performance and speed, removing the need to move data around and leveraging the strong compute resources of the SQL Server machine. The RevoScaleR package and its APIs contain a set of common functions and algorithms designed for performance and scale, overcoming R’s limitations of single-threaded execution and memory-bound datasets.

Two: Operationalizing your R code using T-SQL

For SQL Server 2016 CTP3, Microsoft supports ad hoc execution of R scripts via a new system stored procedure. This stored procedure supports pushing data from a single SELECT statement and multiple input parameters to the R side, and returns a single data frame as output from the R side.
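
In CTP3 this surfaces as the sp_execute_external_script system procedure. A minimal hedged sketch against a hypothetical table:

    -- Push the result of one SELECT to R, compute there, return one data frame.
    EXEC sp_execute_external_script
        @language     = N'R',
        @script       = N'OutputDataSet <- data.frame(avg_amount = mean(InputDataSet$Amount));',
        @input_data_1 = N'SELECT Amount FROM dbo.SalesOrders;'
    WITH RESULT SETS ((avg_amount FLOAT));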

SQL Server 2016 R Services




PASS 2015 Keynote: Accelerating your Business with a Modern Data Strategy




Transactional replication from SQL Server to Azure SQL DB is new in CTP3.

Now you can set up Azure SQL DB as a subscriber of transactional replication, allowing you to migrate data from a SQL Server instance on-premises or in IaaS to Azure SQL Database without downtime. The replication is one-way in this release, and works with SQL Server 2016, SQL Server 2014 and SQL Server 2012. This is the same transactional replication technology you have been using for many years on-premises. As you configure a subscriber (from SSMS or by script), instead of entering an instance name, you enter the name of your Azure SQL DB along with the associated login and password. A snapshot (as in a replication snapshot) is used to initialize the subscription, and subsequent data changes are replicated to your Azure SQL DB in the same transactionally consistent way you are used to. A transactional publication can deliver changes to subscribers in Azure SQL DB and/or on-premises/Azure VM. There is no replication service hosted in Azure for this; everything is driven from on-premises distribution agents. To use this feature, you set it up the way you do to replicate on-premises: install the replication components, configure the distributor and the publisher, and create the publication, the articles and the subscriptions; a hedged sketch of the subscriber step follows. In this case, one of the subscriptions will be your Azure SQL DB.
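
For the script route, a hedged sketch of that subscriber step on the publisher (names are hypothetical; the distributor, publication and articles are assumed to be configured already, exactly as for an on-premises subscriber):

    -- Point a push subscription at the Azure SQL DB server and database.
    EXEC sp_addsubscription
        @publication       = N'SalesPub',
        @subscriber        = N'myserver.database.windows.net',
        @destination_db    = N'SalesAzureDb',
        @subscription_type = N'push';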

In-Memory improvements in this release:

In-Memory OLTP:
  • FOREIGN KEY constraints
  • CHECK constraints
  • UNIQUE constraints
  • DML triggers (AFTER only)
  • EXECUTE AS CALLER
  • Inline table-valued functions
  • Security built-ins and increased math function support

Real-time Operational Analytics:
  • Support for in-memory tables
  • Existing nonclustered columnstore indexes (NCCI) are updateable without requiring an index rebuild
  • Parallel index build of nonclustered columnstore indexes (NCCI)
  • Performance improvements (INSERT, string pushdown, bypassing the delete buffer when processing deleted rows)

In-Memory Analytics:
  • You can upgrade databases with a nonclustered columnstore index and have it updateable without requiring a rebuild of the index
  • General performance improvements for analytics queries with columnstore indexes, especially those involving aggregates and string predicates
  • Improved supportability with DMVs and XEvents

SQL Unplugged Episode 12

New Hybrid Scenario using Stretch Database

Stretch Database enables stretching a single database between on-premises and Azure. This enables customers to take advantage of the cloud economics of lower-cost compute and storage without being forced into an all-or-nothing database move. Stretch Database is transparent to your application, and the trickle of data to Azure can be paused and restarted without downtime. You can use Always Encrypted with Stretch Database to extend data in a more secure manner for greater peace of mind. A hedged sketch of the scripted route follows.
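
A hedged sketch of enabling Stretch by script (the table name is hypothetical; the wizard normally drives this, and preview builds used slightly different syntax than shown here, which follows the released product):

    -- Allow the instance to use Stretch, then migrate a cold-data table outward.
    EXEC sp_configure 'remote data archive', 1;
    RECONFIGURE;

    ALTER TABLE dbo.OrderHistory
        SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));

The database itself must first be linked to an Azure server (a step the wizard performs), after which migration can be paused and resumed per table without downtime.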

StretchDB - Stretch tables to Azure SQL DB with SQL Server 2016



Stretch Database updates in this release:

Engine Update

  • Create/Drop index support
  • AlwaysEncrypted support
  • Improved credential management for remote Stretch database stores
  • Improved performance for joins between stretched tables
  • New external data source integration

Expert summit SQL Server 2016



SSMS Wizard updates

  • Database- and table-level fly-out menu options were updated to reflect new Stretch functionality
  • Stretch monitoring functionality added to allow users to monitor current migration status, including the ability to pause the migration at the table level
  • XEvent support for diagnostics session support in the monitor
  • Updated and simplified Stretch wizard flow to reduce the number of steps required to enable or reconfigure Stretch
  • Help icon links updated to point to new MSDN content focusing specifically on wizard topics
  • Added functionality that allows users to pause or disable migration at the table level
  • Added the ability to Stretch individual tables
  • Added database-scoped credential support for AlwaysOn
  • Ability to enable Stretch on the server using the wizard
  • Updated table-level validation error/warning messaging
  • The ability to Stretch to a new or an existing SQL Azure server
  • Updated SSMS Object Explorer Stretch Database icons
  • SMO model for Stretch status query and updates

SQL Server 2016 Reporting Services




Built-in JSON support improvements in this release include:

OPENJSON – a table-valued function that parses JSON text and returns a rowset view of the JSON. By default, OPENJSON returns the properties of the object or the elements of the array being parsed. An advanced version of OPENJSON with a defined schema allows the user to define the schema of the resulting rowset, along with mapping rules that define where the values returned in the rowset can be found in the parsed JSON text. It enables developers to easily parse JSON text and import it into relational tables.

JSON_VALUE – a scalar function that returns a value from JSON at the specified path. It can be used in any query, view or computed column. It can also be used to define indexes on properties of JSON text stored in table columns.

ISJSON – a function that validates that JSON text is properly formatted. It can be used to define check constraints on columns that contain JSON text. It is not supported in check constraints defined on in-memory tables.

JSON_QUERY – a scalar function that returns a fragment of the JSON text. Unlike JSON_VALUE, which returns scalar values, JSON_QUERY returns complex objects (i.e. JSON arrays and objects).
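
A small hedged example tying the four functions together (the JSON literal is made up):

    DECLARE @json NVARCHAR(MAX) =
        N'{"id": 17, "name": "Anna", "tags": ["vip", "eu"]}';

    SELECT ISJSON(@json)               AS is_valid,       -- 1
           JSON_VALUE(@json, '$.name') AS customer_name,  -- Anna
           JSON_QUERY(@json, '$.tags') AS tags_fragment;  -- ["vip", "eu"]

    -- OPENJSON with an explicit schema turns the JSON into a rowset.
    SELECT *
    FROM OPENJSON(@json)
         WITH (id INT '$.id', name NVARCHAR(50) '$.name');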

Azure Data Lake Store and Analytics Service available in preview today

Last month we announced a new and expanded Azure Data Lake that makes big data processing and analytics simpler and more accessible. Azure Data Lake includes the Azure Data Lake Store, a single repository where you can easily capture data of any size, type and speed; Azure Data Lake Analytics, a new service built on Apache YARN that dynamically scales so you can focus on your business goals, not on distributed infrastructure; and Azure HDInsight, our fully managed Apache Hadoop cluster service. Azure Data Lake is an important part of the Cortana Analytics Suite and a key component of Microsoft’s big data and advanced analytics portfolio.

The Azure Data Lake service includes U-SQL, a language that unifies the benefits of SQL with the expressive power of user code. U-SQL’s scalable distributed query capability enables you to efficiently analyze data in the store and across SQL Servers in Azure, Azure SQL Database and Azure SQL Data Warehouse. Customers can use Azure Data Lake tools for Visual Studio, which simplifies authoring, debugging and optimization and provides an integrated development environment for analytics.

ASOS.com, the UK's largest independent online fashion and beauty retailer, has been using Azure Data Lake to improve customer experience on their website. “At ASOS we are committed to putting the customer first. As a global fashion destination for 20-somethings we need to stay abreast of customer behaviour on our site, enabling us to optimize their shopping experience across all platforms of ASOS.com and wherever they are in the world. Microsoft Azure Data Lake Analytics assists in processing large amounts of unstructured clickstream data to track and optimize their experience. We have been able to get productive immediately using U-SQL because it was easy to use, extend and view and monitor the jobs all within Visual Studio” said Rob Henwood, Enterprise Architect at ASOS.com.

Azure SQL Database In-Memory OLTP and Operational Analytics

Today, we are releasing our next generation in-memory technologies to Azure with the public preview of In-Memory OLTP and real-time Operational Analytics in Azure SQL Database. In-Memory OLTP in the Azure SQL Database preview includes the expanded surface area available in SQL Server 2016, enabling more applications to benefit from higher performance. By bringing this technology to the cloud, customers will be able to take advantage of in-memory OLTP and Operational Analytics in a fully managed database-as-a-service with 99.99% SLA.

Foundation Session: Microsoft Business Intelligence



Temporal support improvements in this release include the following; a hedged T-SQL sketch follows the list:

  • Support for using temporal system-versioning with In-Memory OLTP
  • Combining disk-based tables for cost-effective storage of history data with memory-optimized tables for storing the latest (actual) data
  • Super-fast DML and current-data querying supported from natively compiled code
  • Temporal querying supported from interop mode
  • An internal in-memory table created to minimally impact the performance of DML operations
  • A background process that flushes data from the internal in-memory table to the permanent disk-based history table
  • Direct ALTER for system-versioned temporal tables, enabling table schema modification without introducing a maintenance window
  • Support for adding/altering/dropping columns while SYSTEM_VERSIONING is ON
  • Support for ADD/DROP HIDDEN for period columns while SYSTEM_VERSIONING is ON
  • Support for the temporal querying clause FOR SYSTEM_TIME ALL, which enables users to query the entire data history easily without specifying period boundaries
  • Optimized CONTAINED IN implementation with minimized locking on the current table. If your main case is analysis of historical data only, use CONTAINED IN.
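
A hedged sketch of a system-versioned table and the new FOR SYSTEM_TIME ALL clause from the list above (names hypothetical):

    CREATE TABLE dbo.Accounts (
        AccountId INT PRIMARY KEY,
        Balance   MONEY NOT NULL,
        SysStart  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        SysEnd    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountsHistory));

    -- Every version of the row, current and historical, without period bounds:
    SELECT * FROM dbo.Accounts FOR SYSTEM_TIME ALL WHERE AccountId = 1;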

Combined with the releases earlier this month of Always Encrypted, Transparent Data Encryption, support for Azure Active Directory, Row-Level security, Dynamic Data Masking and Threat Detection, Azure SQL Database provides unparalleled data security in the cloud with fast performance. As part of our intelligent capabilities, SQL Database also has built-in advisors to help customers get started quickly with in-memory OLTP to optimize performance.

It’s never been easier to capture, transform, mash-up, analyze and visualize any data, of any size, at any scale, in its native format using familiar tools, languages and frameworks in a trusted environment, both on-premises and in the cloud.

In Summary:

SQL Server 2016 has many new features. Some are enhancements to existing features, while others are entirely new. In this article I explored only some of the new functionality in SQL Server 2016. When moving to SQL Server 2016, you should exploit those new features that provide value to your SQL Server environment.

BENEFITS

- Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases and real-time operational analytics

- New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes

- Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application changes

- Built-in advanced analytics provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database

- Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android

- Simplify management of relational and non-relational data by querying both with T-SQL using PolyBase

- Faster hybrid backups, high availability and disaster recovery scenarios to back up and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure

Where You Can Get Additional Information

Below are some additional resources that you can use to find out more information about SQL Server 2016.

SQL Server 2016 Early Access Web Site: https://www.microsoft.com/en/server-cloud/products/sql-server-2016/

SQL Server 2016 data sheet: http://download.microsoft.com/download/F/D/3/FD33C34D-3B65-4DA9-8A9F-0B456656DE3B/SQL_Server_2016_datasheet.pdf

http://blogs.technet.com/b/dataplatforminsider/archive/2014/06/02/polybase-in-aps-yet-another-sql-over-hadoop-solution.aspx

SQL Server 2016 release notes: https://msdn.microsoft.com/en-US/library/dn876712.aspx

What’s new in SQL Server, September Update: https://msdn.microsoft.com/en-US/library/bb500435.aspx

http://www.licensepartners.nl/microsoft-licenties-kopen/microsoft-sql-server-licenties-cals/sql-server-2014?gclid=CNqSquWHs8oCFda4Gwodrz4Jsg

http://blogs.technet.com/b/dataplatforminsider/archive/2015/11/30/sql-server-2016-community-technology-preview-3-1-is-available.aspx

https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016

http://blogs.technet.com/b/dataplatforminsider/archive/2015/10/28/sql-server-2016-everything-built-in.aspx

http://blogs.technet.com/b/dataplatforminsider/archive/2015/10/28/sql-server-2016-community-technology-preview-3-0-is-available.aspx

http://blogs.msdn.com/b/analysisservices/archive/2012/11/26/poster-download-microsoft-business-intelligence-at-a-glance.aspx

http://blogs.sqlsentry.com/team-posts/latest-builds-sql-server-2016/

http://www.sqlsentry.com

http://sqlwithmanoj.com/2015/09/15/stretch-you-on-premise-databasetable-to-azure-sql-database-with-stretchdb-sql-server-2016/