IBM Consulting

DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

Oracle Consulting

For Oracle related consulting and Database work and support and Migration call DBA Consulting.

Novell and Red Hat Consulting

For all Novell SUSE Linux and SAP-on-SUSE-Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

Microsoft Consulting

For all Microsoft related consulting services.

Citrix Consulting

Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

Welcome to the DBA Consulting Blog: the specialist for IBM, Oracle, Novell, Red Hat, Citrix and Microsoft.

DBA Consulting is a consultancy services specialist that can help you with OS-related support, migration, and installation, as well as BI implementations from IBM (such as Cognos 10) and Microsoft (such as SQL Server 2008 and 2012), the CMS systems related to them (such as Microsoft SharePoint and Drupal), and Oracle OBIEE 11gR1. We focus on quality of service: customer wishes and customer satisfaction are central, as are cost savings and avoiding vendor lock-in.


Saturday, May 9, 2015

IBM BLU Accelerators for IBM DB2 10.5 and 11


IBM BLU Accelerators

Introduction

IBM DB2 with BLU: The In-memory Database for Power Systems



BLU Acceleration is a new collection of technologies for analytic queries, introduced in DB2 for Linux, UNIX, and Windows Version 10.5 (DB2 10.5). At its heart, BLU Acceleration is about providing faster answers to more questions and analyzing more data at a lower cost. DB2 with BLU Acceleration delivers order-of-magnitude benefits in performance, storage savings, and time to value.
These goals are accomplished by using multiple complementary technologies, including the following (a brief SQL sketch follows the list):

The data is in a column store, meaning that I/O is performed only on those columns and values that satisfy a particular query.

The column data is compressed with actionable compression, which preserves order so that the data can be used without decompression, resulting in huge storage and CPU savings and a significantly higher density of useful data held in memory.

Parallel vector processing, with multi-core parallelism and single instruction, multiple data (SIMD) parallelism, provides improved performance and better utilization of available CPU resources.

Data skipping avoids the unnecessary processing of irrelevant data, thereby further reducing the I/O that is required to complete a query.
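
To make these ideas concrete, here is a minimal SQL sketch; the table and column names are hypothetical, not from IBM's documentation:

    -- With the registry setting DB2_WORKLOAD=ANALYTICS, new tables are
    -- column-organized by default; ORGANIZE BY COLUMN makes it explicit.
    CREATE TABLE sales_fact (
        sale_date  DATE         NOT NULL,
        store_id   INTEGER      NOT NULL,
        product_id INTEGER      NOT NULL,
        amount     DECIMAL(9,2) NOT NULL
    ) ORGANIZE BY COLUMN;

    -- This query reads only the three referenced columns, evaluates the
    -- predicate on compressed values where possible, and skips data via
    -- the automatically maintained synopsis metadata.
    SELECT store_id, SUM(amount) AS revenue
    FROM sales_fact
    WHERE sale_date >= '2015-01-01'
    GROUP BY store_id;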



DB2 BLU Acceleration and more



These and other technologies combine to provide an in-memory, CPU-optimized, and I/O-optimized solution that is greater than the sum of its parts.
BLU Acceleration is fully integrated into DB2 10.5, so that much of how you leverage DB2 in your analytics environment today still applies when you adopt BLU Acceleration. The simplicity of BLU Acceleration changes how you implement and manage a BLU-accelerated environment. Gone are the days of having to define secondary indexes or aggregates, or having to make SQL or schema changes to achieve adequate performance.
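
As a rough illustration of that simplicity, the whole "load and go" flow reduces to a handful of command line processor steps. This is a hedged sketch (database, table, and file names are hypothetical), not IBM's prescribed procedure:

    # Set analytics defaults once per instance, before creating the database:
    db2set DB2_WORKLOAD=ANALYTICS
    db2stop
    db2start
    # Create, load, and go; no index or aggregate design is needed:
    db2 "CREATE DATABASE SALESDB"
    db2 "CONNECT TO SALESDB"
    db2 "LOAD FROM sales.csv OF DEL REPLACE INTO sales_fact"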

What's new in IBM DB2 BLU?  




Four key capabilities make BLU Acceleration a next generation solution for in-memory computing:

1. BLU Acceleration does not require the entire dataset to fit in memory, yet still processes it at lightning-fast speeds.
Instead, BLU Acceleration uses a series of patented algorithms that nimbly handle in-memory data processing. This includes the ability to anticipate and “prefetch” data just before it’s needed and to automatically adapt to keep necessary data in or close to the CPU. Add some additional CPU acceleration techniques, and you get highly efficient in-memory computing at lightning speed.
2. BLU Acceleration works on compressed data, saving time and money.
Why waste time and CPU resources on decompressing data, analyzing it and recompressing it? Instead of all these extra steps, BLU Acceleration preserves the order of data and performs a broad range of operations—including joins and predicate evaluations—on compressed data without the need for decompression. This is another next-generation technique to speed processing, skip resource-intensive steps and add agility.
3. BLU Acceleration intelligently skips processing of data it doesn’t need to get the answers you want.
With a massive data set, chances are good that you don’t need all of the data to answer a particular query. BLU Acceleration employs a series of metadata management techniques to automatically determine which data does not qualify for a particular query, enabling large chunks of data to be skipped. This results in more agile computing, including storage savings and system hardware efficiency. What’s more, this metadata is kept up to date in real time, so data changes are continually reflected in the analytics. Less data to analyze in the first place means faster, simpler and more agile in-memory computing. We call this data skipping.



4. BLU Acceleration is simple to use.
As your business users demand more analytics faster, you need in-memory computing that keeps pace. BLU Acceleration delivers optimal performance out of the box: no indexes, tuning, or time-consuming configuration efforts are needed. You simply convert your row-based data to columns and run your queries (see the sketch below). Because BLU Acceleration is seamlessly integrated with DB2, you can manage both row-based and column-based data from a single proven system, thus reducing complexity. This helps free the technical team to deliver value to the business: less routine maintenance and more innovation.
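
Converting an existing row-organized table is similarly hands-off. As a hedged sketch (the database, schema, and table names are hypothetical), DB2 10.5 ships a db2convert utility that wraps the online ADMIN_MOVE_TABLE procedure, so the source table remains available during conversion:

    # Convert one row-organized table to column organization, online:
    db2convert -d SALESDB -z DBO -t SALES_FACT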



Simplicity in DB2 10.5 with BLU Acceleration




Fast and simple in-memory computing

Fast answers

DB2 with BLU Acceleration includes six advances for fast in-memory computing:

•In-the-moment business answers from within the transaction environment, new with the DB2 10.5 “Cancun Release”, use BLU Shadow Tables to automatically maintain a column-organized version of the row-based operational data. Analytic queries are seamlessly routed to these column-organized BLU Shadow Tables, which are ideal for fast analytic processing (a sketch follows this list).

•Next-generation in-memory computing  delivers the benefits of in-memory columnar processing without the limitations or cost of in-memory only systems that require all data to be stored in system memory to achieve breakthrough performance. BLU Acceleration dynamically optimizes movement of data from storage to system memory to CPU memory (cache).  This patented IBM innovation enables BLU Acceleration to maintain in-memory performance even when active data sets are larger than system memory.

•Actionable compression preserves the order of the data, enabling compressed data in BLU Acceleration tables to be used without decompression. A broad range of operations like predicates and joins are completed on compressed data. The most frequent values are encoded with fewer bits to optimize the compression.

•CPU acceleration is designed to process a huge volume of data simultaneously by multiplying the power of the CPU. Multi-core processing, SIMD processor support and parallel data processing are all used to deeply exploit the CPU and process data with less system latency and fewer bottlenecks.

•Data skipping eliminates processing of irrelevant and duplicate data. This is accomplished by examining small sections of data to determine whether they contain information relevant to the analytics problem at hand. Identifying these “hot” portions of data at a granular level means that less irrelevant data is processed in the first place.

•Oracle SQL compatibility streamlines and reduces risk in moving data from Oracle Database to DB2 with BLU Acceleration. This leverages existing skills and investments, while taking advantage of the speed and simplicity of BLU Acceleration to deliver fast business insights.
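
A hedged sketch of what a BLU Shadow Table definition looks like (the table names are hypothetical; keeping the shadow synchronized additionally requires IBM InfoSphere Change Data Capture replication):

    -- Column-organized shadow of a row-organized OLTP table:
    CREATE TABLE sales_fact_shadow AS
        (SELECT * FROM sales_fact)
        DATA INITIALLY DEFERRED
        REFRESH DEFERRED
        ENABLE QUERY OPTIMIZATION
        MAINTAINED BY REPLICATION
        ORGANIZE BY COLUMN;
    -- The optimizer can then route eligible analytic queries against
    -- SALES_FACT to the shadow table automatically.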

Simply delivered

IBM believes that in-memory computing should be easy on IT resources:

•Load and go set-up allows you to start deriving value from your data in a couple of simple steps. Simply create the table, load the data and go. It’s fast out of the gate – no tuning, no tweaking required. This means you can more quickly satisfy business needs even as they change and evolve.

•One administration environment for analytics or transactional data helps ease management. BLU Acceleration is built seamlessly into DB2 10.5 for Linux, UNIX and Windows, a proven enterprise-class database. A single set of enterprise-class administration functions for either row- or column-organized data reduces complexity, while a series of automation capabilities help free IT talent for higher value projects.

IBM Accelerating Analytics with BLU



Flexible multi-platform deployment for Linux on Intel, zLinux, AIX on Power and Windows makes the most of IT resources, whether you are using existing hardware or the latest technology. This is the only in-memory computing technology to deploy on the cloud or on multiple platforms, offering greater flexibility in meeting the need for business answers.


IBM DB2 10.5 with BLU Acceleration vs Oracle Exadata



DB2 and BLU Acceleration on Cloud Tech Talk



BLU Acceleration: Delivering Speed of Thought Analytics 

Big data poses big challenges for accessing and analyzing information. BLU Acceleration from IBM delivers speed of thought analytics that help you make better decisions faster. See BLU Acceleration’s innovative dynamic in-memory processing, actionable compression, parallel vector processing and data skipping. Learn how to get started using your existing infrastructure and skills.





New in-memory capabilities help you capitalize on business answers even more easily

Technology never stands still and BLU Acceleration is no exception! This product has been enhanced in key areas so you can:

•Gain access to the fast answers BLU Acceleration delivers on Windows and zLinux to support a broader range of organizations, as well as data mart consolidation on these new platforms

•Protect data at rest while saving administration time with native, application-transparent data encryption (see the sketch after this list)

•Deliver in the moment business answers from within the transaction environment

•Leverage Oracle skills with SQL compatibility to enable simple, low-risk migration from Oracle Database to DB2 with BLU Acceleration (also sketched after this list)

•Reduce risk and improve performance of SAP environments with significant enhancements to SAP Business Warehouse support
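
Hedged sketches of the encryption and Oracle-compatibility items above (the keystore path and database name are hypothetical; check the DB2 10.5 documentation for the exact options):

    # Native encryption: point the instance at a keystore, then create
    # an encrypted database (AES is the default cipher):
    db2 "UPDATE DBM CFG USING KEYSTORE_TYPE PKCS12 KEYSTORE_LOCATION /secure/db2keystore.p12"
    db2 "CREATE DATABASE SALESDB ENCRYPT"

    # Oracle SQL compatibility: set before the database is created, then
    # restart the instance; PL/SQL and Oracle data types such as NUMBER
    # and VARCHAR2 are then accepted:
    db2set DB2_COMPATIBILITY_VECTOR=ORA
    db2stop
    db2start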

IBM DB2 11 SQL Improvements




Take advantage of faster query processing and better data reliability by using BLU Acceleration on the POWER8 processor



Big Data Webcast on BLU Acceleration



Best practices for DB2 pureScale performance and monitoring



Let's Get Hands-on: 60 Minutes in the Cloud—Predictive Analytics Made Easy



IBM dashDB - Keeping data warehouse infrastructure out of your way




IBM DB2 with BLU Acceleration & Cognos BI - A great combo!




More Information:

DB2 Tech Talk: Deep Dive BLU Acceleration Super Easy In-memory Analytics


Join DB2 expert Sam Lightstone for an in-depth discussion of the all-new BLU Acceleration features in DB2 10.5 for Linux, UNIX and Windows. BLU Acceleration in-memory computing is designed to deliver results from data-intensive analytic workloads with speed and precision that is termed "speed of thought" analytics.

In this Tech Talk, Sam will explain the details of this ground-breaking technology such as:

•Dynamic in-memory analytics that do not require all of the data to fit in memory in order to perform analytics processing

•Parallel vector processing, driving spectacular CPU exploitation


https://www.brighttalk.com/webcast/7637/74621






DB2 Tech Talk: Introduction and Technical Tour of DB2 with BLU Acceleration


Join Distinguished Engineer and DB2 expert Berni Schiefer and host Rick Swagerman for a technical tour of the all new DB2 10.5 with BLU Acceleration in-memory technology. You will learn about new features such as:

•BLU Acceleration, for “Speed of Thought” analytics
Designed to handle data-intensive analytics workloads, BLU Acceleration extends the capabilities of traditional in-memory systems by providing in-memory performance even when the data set size exceeds the size of the memory. Learn about RAM data loading capabilities, plus “data skipping”, parallel data analysis, actionable compression for analysis without decompressing data, and more.

• New DB2 pureScale capabilities that enable online rolling maintenance updates and capacity growth with no planned downtime, plus newly integrated HADR capabilities to help ensure always-available transactions.

• SQL and Oracle Database compatibility refinements in DB2 10.5, helping to ensure fast, easy moves to DB2 as well as increased flexibility for DB2 applications.

• Enhancements to NoSQL technologies that are now business-ready in DB2 10.5. Although not part of the DB2 10.5 announcement, we will fill you in on other NoSQL technology introduction plans as well.

• New packaging editions of DB2 that handle either OLTP or data warehousing needs.

• DB2 tools advances that support these new functions.

Join us for this Tech Talk to find out about these exciting enhancements and how they can help you deliver the data analytics your organization needs while providing tools to keep your OLTP systems in top shape.

https://www.brighttalk.com/webcast/7637/71677







A deeper dive into dashDB - know more in a dash

dashDB is a newly announced data warehouse as a service, deployed in the cloud, that leverages technologies like BLU Acceleration, in-database analytics and Cloudant to let you focus more on the business and less on the business of IT. In this DB2 Tech Talk you will learn a little more about IBM’s cloud initiatives and the value proposition around dashDB, as well as:
-dashDB’s architecture and use cases
-pricing and offerings as a service
-competitive differentiation and customer feedback

https://www.brighttalk.com/webcast/7637/140371






http://ibmdatamanagement.co/2013/08/19/how-blu-acceleration-really-works/

http://researcher.watson.ibm.com/researcher/files/us-ipandis/vldb13db2blu.pdf

DB2 with BLU Acceleration on Power Systems

Best Practices: Optimizing analytic workloads using DB2 10.5 with BLU Acceleration

http://www.ibmbluhub.com/get-technical/blu-whatsnew-cancun/

http://www.ibmbluhub.com

http://www.ibmbluhub.com/why-blu-acceleration/

http://www.ibmbigdatahub.com/topic/624

http://www.ibm.com/developerworks/data/library/techarticle/dm-1309db2bluaccel/

http://www.vldb.org/2014/

http://www.ibmbigdatahub.com/whitepaper/ibm-db2-blu-acceleration-ibm-power-systems-how-it-compares

http://www-01.ibm.com/software/data/db2/linux-unix-windows/db2-blu-acceleration/

http://ibmdatamag.com/2013/06/5-steps-for-migrating-data-to-ibm-db2-with-blu-acceleration/


Monday, April 13, 2015

Oracle Database Appliance X5-2




Oracle Database Appliance X5-2 Introduction



Oracle Server X5-2L is the ideal 2U platform for databases and enterprise storage solutions. Supporting the Standard and Enterprise Editions of Oracle Database, this server delivers best-in-class database reliability in single-node configurations. With support for up to four high-bandwidth NVM Express (NVMe) flash drives, Oracle Database can be accelerated using Database Smart Flash Cache, a feature of Oracle Database. Optimized for compute, memory, I/O, and storage density simultaneously, Oracle Server X5-2L delivers extreme storage capacity at lower cost when combined with Oracle Solaris and ZFS file system compression. Each server comes with built-in proactive fault detection and advanced diagnostics, along with firmware that is already optimized for Oracle software, to deliver extreme reliability.










Introduction

Oracle Server X5-2, Oracle’s latest two-socket server, is the newest addition to the family of Oracle's x86 servers that are purpose-built to be best for running Oracle software. The new Oracle Server X5-2 1U system is optimal for running Oracle Database in a clustered configuration with Oracle Real Application Clusters (Oracle RAC) and other clustered database solutions, as well as enterprise applications in virtualized environments.


Oracle Big Data Appliance X5-2




Explaining Big Data: The Big Data Life Cycle | Oracle




Product Overview

Oracle Server X5-2 supports up to two Intel® Xeon® E5-2600 v3 processors. Each processor provides up to 18 cores, with a core frequency of up to 2.6 GHz, and up to 45 MB of L3 cache. The server has 24 dual inline memory module (DIMM) slots; when fully populated with twenty-four 32 GB DDR4-2133 DIMMs, it provides 768 GB of memory. Memory bandwidth increases to 2,133 MT/sec per channel, compared to 1,600 MT/sec in the previous generation.

In addition, Oracle Server X5-2 has four PCIe Gen3 slots (two x16, two x8 lanes), four 10GBase-T ports, six USB ports, and eight 2.5-inch drive bays providing 9.6 TB of hard disk drive (HDD) storage or 3.2 TB of solid state drive (SSD) storage. An optional DVD drive is supported to allow local access for operating system installation.

The SSD drives used in Oracle Server X5-2 are SAS-3 drives with a bandwidth of 12 Gb/sec, providing double the performance of the previous generation. Oracle Server X5-2 can also be configured with up to four NVM Express (NVMe) drives from Oracle for a total of 6.4 TB of high-performance, high-endurance PCIe flash.


Best for Oracle Software

Oracle Server X5-2 systems are ideal x86 platforms for running Oracle software. Only Oracle provides customers with an optimized hardware and software stack that comes complete with a choice of OS, virtualization software, and cloud management tools—all at no extra charge. Oracle's optimized hardware and software stack has enabled a 10x performance gain in its engineered systems and has delivered world-record benchmark results. Oracle's comprehensive, open standards-based x86 systems provide the best platform on which to run Oracle software with enhanced reliability for data center environments.

In today’s connected world, vast amounts of unstructured data flow into an enterprise, creating an immediate business need to extract queryable, structured data from this flood of information. Online transaction processing (OLTP) is a technology that historically has been used for traditional enterprise applications such as enterprise resource planning (ERP) and human capital management (HCM). Now OLTP finds itself in a unique position to accelerate business intelligence and analytics. As such, this places greater demands on the database, I/O, and main memory requirements in data centers. Oracle Database is designed to take advantage of hardware features that enhance system performance, such as high-core-count central processing units (CPUs), non-uniform memory access (NUMA) memory architectures, and tiered storage of data.

Benefits include increased transaction throughput and improved application response times, which reduce the overall cost per transaction.


Oracle Server X5-2, NVM Express, and Oracle Database Smart Flash Cache

Oracle Database utilizes a feature called Database Smart Flash Cache. This feature is available on Oracle Linux and Oracle Solaris and allows customers to increase the effective size of the Oracle Database buffer cache without adding more main memory to the system. For transaction-based workloads, Oracle Database blocks are normally loaded into a dedicated shared memory area in main memory called the system global area (SGA). Database Smart Flash Cache allows the database buffer cache to be expanded beyond the SGA in main memory to a second-level cache on flash memory.
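
As a hedged illustration (the device path and size are hypothetical; DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE are the documented Oracle initialization parameters), enabling Database Smart Flash Cache on Oracle Linux amounts to pointing the cache at flash storage, sizing it at a multiple of the SGA, and restarting the instance:

    -- Back the second-level buffer cache with an NVMe device:
    ALTER SYSTEM SET db_flash_cache_file = '/dev/nvme0n1' SCOPE = SPFILE;
    ALTER SYSTEM SET db_flash_cache_size = 800G SCOPE = SPFILE;
    -- After restart, clean blocks aged out of the SGA buffer cache are
    -- kept in flash instead of being discarded.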

Oracle Server X5-2 introduces support for NVM Express, a new flash technology that provides a high-bandwidth, low-latency PCI Express (PCIe) interface to large amounts of flash within the system. Oracle Database with Database Smart Flash Cache and Oracle Solaris ZFS are specifically engineered to take advantage of this low-latency, high-bandwidth interface to flash in Oracle Server X5-2. Oracle Solaris and Oracle Linux are co-engineered with Oracle Server X5-2 to function in enterprise-class workloads by enabling hot-pluggable capabilities.

Traditional SSDs with a SAS/SATA interface are a popular method of adding flash to a server, and these take advantage of legacy storage controller and disk cage infrastructure. NVM Express is an entirely new end-to-end design that eliminates the performance bottlenecks of using conventional storage interfaces. The new NVMe flash drives in Oracle Server X5-2 provide a high-bandwidth, low-latency flash implementation that vastly improves OLTP transaction times.

Figure 1 illustrates a block diagram of a traditional SAS-3 SSD connected to a server. The server PCIe root complex is connected to a PCIe/SAS controller that translates PCIe to the SAS protocol to allow the server to read and write the SAS-3 SSD. As NVMe SSDs already use the PCIe protocol, there is no need for the PCIe/SAS controller translation, as shown in Figure 2.





Oracle’s NVMe drives have much lower latency and higher bandwidth than standard SAS-3 drives because each drive connects directly to four lanes of PCIe Gen3, giving an aggregate bandwidth of 32 Gb/sec (four lanes at roughly 8 Gb/sec each), as opposed to 12 Gb/sec for a traditional SAS-3 SSD.

Oracle Server X5-2 can be configured with up to four NVMe small form factor (SFF) SSDs that support up to 6.4 TB of flash storage. As flash technologies are temperature sensitive, most high-performance flash drives will throttle down their I/O speeds as temperatures rise in order to protect the flash from damage. Oracle's NVMe SSDs, on the other hand, include multiple temperature sensors that are monitored by Oracle Server X5-2's Oracle Integrated Lights Out Manager (Oracle ILOM) service processor (SP) to ensure the drive maintains its optimum operating temperature.

Oracle ILOM modulates the fan speed to ensure sufficient cooling for maximum system performance at all times. The benefit is that the system consistently operates at maximum performance across the full operating temperature range of the server, independent of system configuration.




Oracle Database Appliance X5-2



Exadata X5-2: Extreme Flash and Elastic Configurations



Next Generation of X5 Engineered Systems





See what the advantages of engineered systems are with this short video tutorial series:

Migrate a 1TB Datawarehouse in 20 Minutes (Part 1)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 2)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 3)



Migrate a 1TB Datawarehouse in 20 Minutes (Part 4)








More Information:

https://www.oracle.com/engineered-systems/database-appliance/index.html

https://www.oracle.com/servers/x86/x5-2/index.html

https://www.oracle.com/servers/x86/x5-2l/index.html

http://www.oracle.com/technetwork/server-storage/sun-x86/documentation/x5-2-system-architecture-2328157.pdf

http://www.exadata-certification.com/search/label/X5-2%20Oracle%20Exadata%20Machine

https://www.oracle.com/engineered-systems/database-appliance/resources.html

https://www.oracle.com/big-data/index.html

Monday, March 23, 2015

Microsoft Server 2012 R2 and Hyper-V and Virtual Machine Converter


The Microsoft Virtual Machine Converter


What is Microsoft Hyper-V?



Microsoft could not ignore the virtualization trend. It introduced Hyper-V as a virtualization platform in 2008 and has continued to release new Hyper-V versions with new Windows Server versions. So far there are four versions, shipped with Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2.

Hyper-V Overview




Introduction to Hyper V




Since Hyper-V’s debut, it has always been a Windows Server feature that could be installed whenever a server administrator decided to do so. It is also available as a separate product called Microsoft Hyper-V Server. Basically, Microsoft Hyper-V Server is a standalone, cut-down version of Windows Server from which Microsoft removed everything irrelevant to virtualization, including many services and the graphical user interface (GUI), to make the server as small as possible. Without the bells and whistles, the server requires less maintenance time and is less vulnerable because, for example, fewer components mean less patching.



What's New in Hyper-V 2012 R2


Designing Hyper V the Right Way




Hyper-V is a hybrid hypervisor: it is installed from within the OS (via the Windows Add Roles wizard). During installation, however, it redesigns the OS architecture and inserts itself as a layer directly on the physical hardware (see the architecture diagram below).

Windows Server Hypervisor Architecture



Microsoft Virtual Machine Converter

The Microsoft Virtual Machine Converter provides a Microsoft-supported, freely available, stand-alone solution for converting VMware-based virtual machines and virtual disks to Hyper-V-based virtual machines and virtual hard disks (VHDs)—including conversion from VMware to Hyper-V on Windows Server 2012. Because MVMC has a fully scriptable command-line interface (CLI), it integrates especially well with data center automation workflows such as those authored and run within Microsoft System Center 2012 - Orchestrator. It can also be invoked through Windows PowerShell.

MVMC simplifies low-cost, point-and-click migration of Windows 7, Windows Vista, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 R2 with SP2, and Windows Server 2003 with SP2 guest operating systems from VMware to Hyper-V.

Migrating to Hyper-V Using the Microsoft Virtual Machine Converter Tool 




In More Detail

The Microsoft Virtual Machine Converter:


  • Provides a quick, low-risk option for VMware customers to evaluate Hyper-V.
  • Converts VMware virtual machines to Hyper-V virtual machines: the conversion preserves the entire configuration, such as memory, virtual processors, and other machine settings, from the initial source. The tool also adds virtual NICs to the deployed virtual machine on Hyper-V.
  • Supports a clean migration to Hyper-V with uninstallation of VMware tools on the source virtual machine.
  • Provides a wizard-driven GUI, making it simple to perform virtual machine conversion.
  •  Installs integration services for Windows 2003 guests that are converted to Hyper-V virtual machines.
  • Supports conversion of virtual machines from VMware vSphere 4.1 and 5.0 hosts, including those hosted on a vSphere cluster, to Hyper-V. The tool also supports migration of virtual machines to a Hyper-V host that is part of a failover cluster.
    Note   MVMC also supports conversion of virtual machines from VMware vSphere 4.0 if the host is managed by vCenter 4.1 or vCenter 5.0. You have to connect to vCenter 4.1 or 5.0 through MVMC to convert virtual machines on vSphere 4.0.

  • Supports offline conversions of VMware-based virtual hard disks (VMDK) to a Hyper-V-based virtual hard disk file format (.vhd file).
  • Includes a fully scriptable command-line interface (CLI) for performing machine conversion and offline disk conversion, integrating with data center automation workflows such as those authored and executed within System Center 2012 - Orchestrator. The command line can also be invoked through Windows PowerShell.
This stand-alone tool has a low footprint, is easy to install, and is supported by Microsoft.

MVMC converts virtual machines and disks from VMware hosts to Hyper-V hosts and Windows Azure, and can also convert physical computers and disks to Hyper-V hosts. The download page contains only setup files and a list of Windows PowerShell cmdlets that are related to MVMC. For a detailed document, see Microsoft Virtual Machine Converter 3.0 (http://technet.microsoft.com/en-us/library/dn873998.aspx) on Microsoft TechNet. Download: http://www.microsoft.com/en-us/download/details.aspx?id=42497

Data Deduplication in Virtualized Environments




Scaling MVMC: The Migration Automation Toolkit

The Migration Automation Toolkit (MAT) provides a powerful and scalable way for customers and partners to convert multiple VMware virtual machines simultaneously. The MAT is a series of PowerShell scripts that take the powerful conversion engine of MVMC and scale the solution to handle multiple migrations simultaneously, in an automated fashion. The MAT can be run from a single coordinator machine, known as the ‘Control Server’, which orchestrates the multiple ongoing conversions. For even greater scale you can use multiple ‘Helper Nodes’, each running MVMC and managed by the ‘Control Server’. The ‘Helper Nodes’ can themselves run within VMs.

As the Migration Automation Toolkit is built using PowerShell, it can be modified and improved by the community. The download comes complete with documentation and guidance on how to use the MAT, along with the pre-requisites required for it to function correctly.


Microsoft Virtual Machine Converter Plug-in for VMware vSphere Client

The MVMC Plug-in for VMware vSphere Client:

  • Extends the vSphere Client context menu to make it easier to convert the VMware-based virtual machine to a Hyper-V-based virtual machine.
  • Is built upon the Microsoft Virtual Machine Converter.
Note This plug-in extends vSphere Client to facilitate conversions from a virtual machine context menu and without changing configurations on the VMware host. This plug-in cannot be used with the MVMC Automation Toolkit.

History of Hyper-V features up to Windows Server 2012 R2


Best Practices for Hyper-V Backups by Greg Shields


Backup strategies for Hyper-V



Veeam Availability Suite v8 features 




More Information:

Microsoft:

https://technet.microsoft.com/en-us/library/hh967435.aspx

https://technet.microsoft.com/en-us/library/hh831531.aspx

https://technet.microsoft.com/en-us/library/hh831410(v=ws.11)

https://technet.microsoft.com/en-us/library/hh833684.aspx

http://blogs.technet.com/b/uspartner_ts2team/

http://blogs.technet.com/b/chrisavis/archive/2013/08/14/vmware-or-microsoft-simplified-microsoft-hyper-v-server-2012-host-patching-greater-security-and-more-uptime.aspx

http://www.virtualizationsquared.com/learn/

http://www.microsoftvirtualacademy.com/training-courses/introduction-to-hyper-v-jump-start

http://www.microsoft.com/en-us/download/details.aspx?id=40732

http://www.thomasmaurer.ch/2013/10/windows-server-2012-r2-hyper-v-component-architecture-poster-and-hyper-v-mini-posters/

Veeam:

http://hyperv.veeam.com/what-is-hyper-v-technology/

http://go.veeam.com/ten-hyper-v-things-to-know.html

https://www.youtube.com/user/YouVeeam/featured

http://hyperv.veeam.com/

http://www.veeam.com/blog/

http://www.veeam.com/webinars.html




Monday, February 23, 2015

Red Hat Enterprise Linux 7 Now Supports SAP HANA






The world’s leading enterprise Linux platform


Red Hat® Enterprise Linux® gives you the tools you need to modernize your infrastructure, boost efficiency through standardization and virtualization, and ultimately prepare your datacenter for an open, hybrid cloud IT architecture. Red Hat Enterprise Linux provides the stability to take on today’s challenges and the flexibility to adapt to tomorrow’s demands.





Build your infrastructure on a platform without boundaries


Red Hat Enterprise Linux Roadmap










Modernize

Modernize your IT infrastructure with Red Hat Enterprise Linux to lower total cost of ownership (TCO) and improve IT efficiency.




Standardize

Standardize on Red Hat Enterprise Linux to achieve greater uptime and deploy new systems faster.




Virtualize

Gain flexibility by virtualizing your datacenter, letting you respond rapidly to changing business demands.

5 Things You Thought You Knew about Linux Virtualization




Realize

Build your infrastructure with technology that lets you allocate compute, networking, and storage resources to applications when needed.




Red Hat Enterprise Linux 

The foundation for next-generation architectures, with support for all major hardware platforms and thousands of commercial and custom applications.




RedHat Enterprise Linux New Networking Features and Tools





RedHat Enterprise Linux Next Generation Firewall





RedHat Middleware Integration Products Roadmap






Red Hat Satellite

The easiest way to manage Red Hat Enterprise Linux, keeping your systems running efficiently, properly secured, and compliant with various standards.



Red Hat Enterprise Linux OpenStack Platform

Secure, scalable platform for building public and private clouds.
















Moving from monolithic apps to microservices

The industry is moving beyond self-contained, isolated, and monolithic apps. New workloads will be part of a connected application fabric—flexibly woven together to serve particular business needs, yet easily torn apart and recomposed.


What are containers?

Linux® containers keep applications and their runtime components together by combining lightweight application isolation with an image-based deployment method. Containers introduce autonomy for applications by packaging apps with the libraries and other binaries on which they depend. This avoids conflicts between apps that otherwise rely on key components of the underlying host operating system. Containers do not contain an operating system (OS) kernel, which makes them faster and more agile than virtual machines; however, it also means that all containers on a host must use the same kernel.





Standardize the components, reap the benefits

Shipping companies can easily exchange containers—whether they transport by boat, rail, or truck—because the container dimensions comply with international standards. Similar standards need to be established for software containers and their applications.



Advancing containers the open source way

There are still many questions to be answered before containers can be considered enterprise-ready. We're working to advance both container technology and the ecosystem that supports it to make it ready for the enterprise, as we did with Linux.



Today, Red Hat is concentrating on:


  • Container portability with deployment across physical hardware, hypervisors, private clouds, and public clouds
  • An integrated application delivery platform that spans from app container to deployment target—all built on open standards
  • Trusted access to digitally signed container images that are safe to use and have been verified to work on certified container hosts


The microservices approach...dictates that instead of having one giant code base that all developers touch...there are numerous smaller code bases managed by small and agile teams...
This is better for continuous delivery as small units are easier to manage, test and deploy.



The future is open

In this 22-minute keynote, Red Hat President of Products and Technologies Paul Cormier talks about the importance of open source, Linux, and containers in the IT industry.




Cloud is here. Deploy it today. Or prepare for tomorrow.   

Infrastructure
Lay a flexible foundation for your present and future enterprise.
 
Platform-as-a-Service
Make developers and IT operations more agile and flexible.
 
Infrastructure-as-a-Service
Efficient IT gives you time to innovate.
 
Application development and integration
What you have works—now make it work together.




RedHat Enterprise Linux for SAP

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced that Red Hat Enterprise Linux for the SAP HANA® platform is now available for customers to deploy across the open hybrid cloud, including via public cloud providers certified by Red Hat. The addition of new cloud provider partners through the Red Hat Certified Cloud Provider program, as well as new SAP-certified hardware available from provider Hitachi Data Systems, provides customers of Red Hat Enterprise Linux for SAP HANA with an extensive choice of deployment options for big data applications, from new hardware configurations to the ability to leverage public, private and hybrid cloud services.

 


By extending Red Hat Enterprise Linux for SAP HANA to the public cloud, we are providing a stable, secure and reliable platform for deployments of SAP HANA across the breadth of the open hybrid cloud.
Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat



Announced in June 2014 as the foundation of an enhanced collaboration between Red Hat and SAP, Red Hat Enterprise Linux for SAP HANA offers an open, scalable, integrated and highly available platform featuring the industry-leading reliability, quality and stability of Red Hat Enterprise Linux. Red Hat Enterprise Linux for SAP HANA helps organizations make smarter, faster decisions; accelerate business processes; and enable consistency of operations across the business through standardization on the Red Hat platform, which powers mission-critical systems in more than 90 percent of the global Fortune 500.


Public Cloud Availability

With today’s announcement, customers of Red Hat Enterprise Linux for SAP HANA now have the ability to harness the power of the public cloud through Red Hat certified cloud partners, including Virtustream and Secure-24. Enterprises will be able to leverage the stability, security and reliability of the world’s leading enterprise Linux platform upon which to run their public cloud-based big data workloads. These newly-available public cloud choices deliver on-demand analytical processing power while helping to reduce the overall cost of extreme data workloads, all backed by the support of Red Hat Enterprise Linux.



New Hardware Configurations

Building on the solution’s launch momentum, Hitachi Data Systems (HDS) has now joined a growing list of industry giants offering Red Hat Enterprise Linux for SAP HANA on SAP-certified hardware appliances. Hitachi Data Systems offers customers enterprise-class resiliency and scalability through Hitachi Unified Compute Platform (UCP) for the SAP HANA platform, delivering flexibility and investment protection with compute and storage that scales along with customer needs. Hitachi Data Systems supports customers’ mission-critical applications with one platform that enables future data growth, minimizes risk, and achieves a return on investment quickly.



Demonstrating the continued collaboration around SAP’s in-memory platform using Red Hat Enterprise Linux for SAP HANA and highlighting the dedication of Red Hat’s partners to Red Hat Enterprise Linux as a foundation for big data applications, launch partners Dell, Fujitsu, and IBM now offer both scale-out as well as scale-up solutions. Dell, Fujitsu, IBM, and NEC also provide solutions for SAP Business Suite powered by SAP HANA.


Red Hat® Enterprise Linux® for SAP HANA® brings the reliability, scalability, and performance of the world’s leading enterprise Linux platform to SAP HANA, the in-memory database management system that improves business performance and fuels innovation.


With the support of a strong partner ecosystem in our corner, Red Hat provides extensive choice of deployment options for big data applications, from new hardware configurations to the ability to leverage public, private and hybrid cloud services.
The availability of new certifications and deployment options enables customers to employ SAP HANA® across the open hybrid cloud, including via public cloud providers certified by Red Hat.


The new cloud-based offerings, combined with additional certified providers and configurations of hardware for SAP HANA as well as the scale-up and scale-out certifications of our launch partners, deliver a set of solutions to help meet the growing enterprise demand for big data applications, all standardized on the world’s leading enterprise Linux platform.

Read the Red Hat press announcement
See the complete list of SAP-certified hardware appliances from Red Hat partners
Learn more about Red Hat Enterprise Linux for SAP HANA


More information:

http://www.dbaconsulting.nl


http://www.redhat.com/en


https://rhn.redhat.com/errata/RHSA-2014-1976.html


Thursday, January 29, 2015

IBM New Mainframe System z13 and z-Linux

IBM's new mainframe provides unprecedented new capabilities, including: 





z13 is the first system able to process 2.5 billion transactions a day - the equivalent of 100 Cyber Mondays every day of the year. z13 transactions are persistent, protected and auditable from end to end, adding assurance as mobile transactions grow -- estimated to reach 40 trillion mobile transactions per day by 2025. [1]



z13 is the first system to make real-time encryption of all mobile transactions practical at any scale. z13 speeds real-time encryption of mobile transactions to help protect transaction data and ensure response times consistent with a positive customer experience. The system includes 500 new patents, including cryptographic encryption technologies that enable more security features for mobile-initiated transactions.



z13 is the first mainframe system with embedded analytics providing real-time insights on all transactions. This capability can help guarantee the ability of the client to run real-time fraud detection on 100 percent of their business transactions [2]
The rapid growth of mobile applications has created consumers who expect mobile transactions to be fast and seamless – regardless of which mobile payment platform, retailer, or financial organization is providing the service. As a result, businesses are being forced to evaluate whether their IT infrastructures can support mobile applications that meet and exceed these consumer expectations -- or face the potential of losing clients to competing businesses.

“Every time a consumer makes a purchase or hits refresh on a smartphone, it can create a cascade of events on the back end of the computing environment. The z13 is designed to handle billions of transactions for the mobile economy. Only the IBM mainframe can put the power of the world's most secure datacenters in the palm of your hand," said Tom Rosamilia, senior vice president, IBM Systems. "Consumers expect fast, easy and secure mobile transactions. The implication for business is the creation of a secure, high performance infrastructure with sophisticated analytics."




z13 Helps with Secure, Trusted Mobile Transactions

As mobile adoption grows, consumers are driving exponentially larger numbers of mobile transactions. Each of these mobile transactions triggers a cascade of events across computing systems. These events include comparisons to past purchases, data encryption and decryption, bank-to-bank reconciliations, and customer loyalty discounts. This cascade of events causes a so-called “starburst effect” – where a single transaction can trigger as few as four or as many as 100 additional system interactions.

Consequently, the starburst effect can create security vulnerabilities at each interaction point. In fact, 71 percent of CIOs and IT managers surveyed by IBM indicated that security is their most significant mobile enterprise challenge. [3] With data and transactions under constant threat from multiple points of attack, consumers want to know that their mobile transactions are as secure as financial data held by banks.



When combined with IBM MobileFirst solutions, the z13 delivers enhanced performance, availability, analytics and security that will drive optimal mobile user experiences.  IBM MobileFirst Platform enables organizations to deliver better, more secure apps. IBM MobileFirst Protect delivers seamless security and end-to-end management of clients' infrastructure and all its devices, apps, content and transactions.






Providing Insight with Every Transaction


The z13 features the world’s fastest microprocessor, 2X faster than the most common server processors, 300 percent more memory, 100 percent more bandwidth and vector processing analytics to speed mobile transactions. As a result, the z13 transaction engine is capable of analyzing transactions in “real time” and will be able to help prevent fraud as it is occurring, allowing financial institutions to halt the transaction before the consumer is impacted. IBM has designed the z13 to integrate real-time scoring and guarantees this capability as a feature of the system. This scoring can be used for fraud detection on 100 percent of a client's business transactions. [4]

In addition to assistance with fraud prevention, businesses looking to enhance their customer loyalty programs will be able to use new z13 capabilities to add more personalization by gaining a real-time view of a client's purchasing habits to offer up-sell and cross-sell promotions before they leave the store -- and in some cases before they even enter.
Businesses today don’t have the ability to analyze 100 percent of a consumer's transactions. With the z13, businesses will be able to use IBM's predictive analytics modeling technology, SPSS, to personalize the transaction as it occurs.
The z13 includes new support for Hadoop, enabling unstructured data to be analyzed in the system. Other analytics advances include faster query acceleration through DB2 BLU for Linux, which provides an in-memory database; enhancements to the IBM DB2 Analytics Accelerator; and vastly improved performance for mathematically intense analytics workloads.

The z13 is the ideal private or hybrid cloud architecture, legendary for its ability to scale and reliably and securely handle multiple workloads. In a scale-out model, it is capable of running up to 8,000 virtual servers -- more than 50 virtual servers per core, helping to lower software, energy and facilities costs.
The z13 lowers the cost of running a cloud. For comparable environments, it is estimated that a z Systems cloud on a z13 will have a 32 percent lower total cost of ownership over three years than an x86 cloud, and a 60 percent lower total cost of ownership over three years than a public cloud. [5] Additionally, the z13 is based on open standards, fully supporting Linux and OpenStack.

As part of today’s announcement, IBM will also unveil a preview of new z/OS software that delivers advanced analytic and data serving capabilities.  When available, this new operating system will expand the ability of z13 to process in-memory analytics and provide analysis on mobile transactions, helping clients to further extend mainframe enterprise applications to the mobile user.

This launch complements IBM's ongoing investments to help clients drive mobile innovation across the enterprise. IBM’s 6,000 mobile specialists have been at the forefront of mobile enterprise innovation. IBM has secured more than 4,300 patents in mobile, social and security, which have been incorporated into IBM MobileFirst solutions that enable enterprise clients to streamline and accelerate mobile adoption, helping enterprises embrace mobile from the ground up.
IBM Global Financing offerings for the new model z13 include customized Fair Market Value leases with payment deferrals for credit qualified customers that want to upgrade from older models to z13, convert an owned z system to leasing while upgrading, or acquire a net new z13.



More information on http://thenewsmarket.com/The-z13-the-Most-Powerful-and-Secure-System-Ever-Built
For more information on z13 and the IBM z Systems portfolio, visit http://ibm.com/systems/z
For additional content and shareable assets visit Mainframe Insights http://www.mainframeinsights.com

We can bring the analytics to the data

Analytics is all about getting more value out of our data. Given that almost 55% of the world’s commercial transactions run through z Systems and upwards of 70% of all commercial data originates there, it’s easy to see why running transactional analytics within the same system makes sense.



That makes more sense than the cost and time of moving data off-system, managing multiple copies, protecting data across the network, and not knowing whether insights are correct. By using z Systems' capabilities, tools and innovation to keep and manage our data on a single platform that supports the full analytics lifecycle, we slash storage costs, reduce complexity and focus on operational efficiency, improving sales and customer satisfaction.

In-transaction analytics = competitive advantage

An integrated transaction analysis platform like z Systems merges systems of record and systems of engagement, delivering right-time insights at the point of impact. We can instantly analyze 100% of our transactions for revenue boosts from cross-sells or up-sells—and by personalizing communication and offers through engagement analytics, we can lift annual customer value by as much as 7.6%.



In-transaction fraud prevention helps keep our business safe while protecting our clients and their information. And we can reduce ETL costs by up to 10 million over four years by keeping our data on a single platform. We gain customer advantage and trust, save money and move at the speed of business. It’s all win.

Why analyze just a fraction of our transactions?

The key pieces of insight are already trapped inside our data like needles in a haystack. But if we analyze only a small percentage of our business transactions, that potential value remains largely untapped, and we lose an extraordinary opportunity.



If we analyze more types of data together in context, we get greater insights, make better decisions and create more targeted messaging. Looking from the customer perspective, the more highly relevant and contextual offers we present to them, the more brand loyalty and business we build.

Integration improves our data literacy

Integration on z Systems helps our Big Data & Analytics solutions to work at optimal levels, giving us insight into our customers, markets, products, regulations, competitors, suppliers, employees and more. And with a flexible analytic structure, we can get more knowledgeable in finding, manipulating and interpreting data.

We know right now where everyone and everything is. All of which improves our decision-making and helps us create new revenue streams and build better business outcomes.





IBM z Systems analytics solutions
New! IBM InfoSphere z Systems Connector for Hadoop
Bring the power of Hadoop to IBM z Systems' legendary qualities of service.


IBM InfoSphere BigInsights for Linux on z Systems 
Put the power of Hadoop to use on the industry’s most reliable, scalable and available high performance platform.



IBM DB2 Analytics Accelerator 
Delivers extremely fast performance to data-intensive and complex DB2 queries for data warehousing, business intelligence, and analytic workloads.



IBM Signature Solution - Next Best Action on z Systems 
Know and serve each customer as a unique individual on a massive scale with this complete solution.


IBM Capacity Management Analytics
Transform how you manage your resources, ensuring that you are able to meet your business demands.


Enterprise Linux Server for Analytics
Gain insights into all aspects of your Linux-based business.



IBM z Analytics System 9700
An end-to-end z/OS environment for the rapid deployment of enterprise analytics across your enterprise.


IBM z Analytics System 9710
Integrate insights into your enterprise at an entry-level price.


IBM Cognos TM1
A complete planning, budgeting and analytic environment for critical company-wide financial performance activities.



For More Information:
















Oracle ODI link: http://www.ateam-oracle.com/data-integration/di-odi/
