Analytics (10)

Business transformation and agility are critical in today’s fast-moving world. Clients demand insights faster, more often, and with greater accuracy, based on up-to-the-second transactional data that is frequently housed in mainframe systems. The dependency on technology, and the rate at which information is created and consumed, continues to grow exponentially. Applications have to adapt rapidly to dynamic digital business models and environments. All of this drives the need for flexible IT infrastructures built on a rock-solid foundation that delivers consistent dependability, performance, and security in the face of rapid change.

Our z Systems portfolio is a perfect match for these needs. It is well recognized that mainframe hardware and software technology deliver unmatched levels of quality, reliability, security, and scalability. On top of this, z/OS clients have historically moved their systems forward conservatively, which has further cemented the mainframe’s reputation for rock-solid stability, essential for the most demanding business-critical workloads. A corollary is that the mainframe is often perceived as a stagnant platform that cannot move quickly and cannot support the agile or DevOps needs of modern applications. Nothing, of course, could be further from the truth. However, challenges do remain.

DB2 for z/OS, the mainframe’s flagship relational database product, is changing to a continuous delivery (CD) model to help further address these challenges. With CD, DB2 will deliver new features to the market faster, and in increments that will be much easier for customers to consume. Let’s take a closer look.

DB2 12 for z/OS is the latest release of DB2 and is currently in “beta” testing, or ESP (Early Support Program) testing as we call it. DB2 12 will deliver many new features for mobile, cloud, and analytics applications, while also bringing many new innovations to market to improve performance, availability, and security.

New DB2 versions have historically arrived about every three years, and our customers have grown comfortable with this cadence. However, upgrading to a new DB2 release can be a major effort. In the past, we have introduced innovations such as online rolling version upgrades (for data sharing) and APPLCOMPAT. These features allow IT groups to upgrade in a more streamlined way without taking outages or involving application groups. Nonetheless, a DB2 version upgrade can still be a cumbersome project. As a result, coupled with the conservative nature of mainframe environments, some customers don’t implement new DB2 versions until several years after the product reaches GA. With our traditional three-year delivery cycle, along with the version upgrade delays, it can be five to six years or more between the time we complete the development of a new feature and the point at which that feature actually becomes available on a live system. In today’s fast-changing world, this is no longer sufficient.

With CD, we will deliver new features continuously, as they become ready, on future DB2 releases. This will allow application developers and DBAs to access important new features more quickly without having to wait five to six years or more for the next DB2 version.

How will this be done so that it’s consumable and non-disruptive for customer environments?  Our approach to CD must meet or exceed the quality and stability requirements that the z Systems customer base demands, while making the delivery of new features consumable. Customers will receive defect fixes and new features in the same stream. They will be able to apply their DB2 maintenance upgrades just like they always have, including the ability to roll in maintenance upgrades across their data sharing groups while keeping the databases continuously available. In fact, this should become much easier because there will be fewer APARs with ++HOLD actions caused by toleration APARs (the new “function level” concept will ensure that necessary maintenance is applied across the DB2 group, therefore removing the need for many of these existing ++HOLD actions).

What is different is that the new maintenance will contain new features that are initially dormant. The customer can choose when to activate the new features via a new system activation command. APPLCOMPAT controls will be provided to ensure that applications remain stable and to allow for controlled exposure to the newly activated features. We will provide easy to access documentation on which new features are included in which function levels. We will work closely with ISVs to ensure that upcoming features are effectively communicated ahead of time so that the overall DB2 ecosystem remains stable as DB2 changes are incorporated.
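To make the function-level and APPLCOMPAT interplay concrete, here is an illustrative sketch of how activation is expected to work (the command and bind-option names reflect the announced DB2 12 design, and the package name COLL1.PKG1 is a made-up example; exact syntax may differ in the shipped product):

```
-DISPLAY GROUP
   (reports the currently activated function level for the DB2 group)

-ACTIVATE FUNCTION LEVEL (V12R1M501)
   (activates the dormant new features once all members are at the
    required code level)

REBIND PACKAGE(COLL1.PKG1) APPLCOMPAT(V12R1M501)
   (opts a package in to the new SQL behavior; packages bound at a
    lower APPLCOMPAT level keep their existing behavior)
```

The key point of the design is that applying maintenance and activating features are decoupled: the new code arrives in the normal maintenance stream, but nothing changes for applications until the function level is activated and the package is rebound at the matching APPLCOMPAT level.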

We see the road ahead for DB2 for z/OS as an exciting and rewarding journey for both IBM and its customers. Agile, quality-focused development will allow us to continuously deliver robust, production-ready features to DB2 users much more rapidly than we could in the past, thereby enhancing the vitality of the DB2 product and greatly easing the task of DB2 upgrades for customers.

For more information, register for our live webcast with Q&A, which we will host on 27th September 2016 at 11am EST; the webcast will also be available on replay.

Read more…

IDUG is pleased to offer these complimentary workshops to help you squeeze the most educational value out of your conference.

Sunday, Nov 13th

Certification Preparation Courses:

Pre-Certification Workshop: IBM DB2 11 DBA for z/OS & DB2 10.1 Fundamentals (Exam 610 & Exam 312)  
Pre-Certification Workshop: DB2 10.1 DBA for LUW (Exam 611) and DB2 10.5 DBA for LUW Upgrade (Exam 311)

Thursday, Nov 17th

Read more…

IBM DB2 for z/OS in the age of cloud computing


Reliability, availability, security and mobility




• Highly virtualized server that supports mixed workloads

• Self-service capabilities in private, membership or hybrid cloud environments

• Division of support responsibilities

• Customized implementations to suit your business

• Platform foundation services for cloud use cases


Download this paper and learn:

What is cloud computing, and what is driving this market trend?


What is data as a service, and what are the drivers?


Why DB2 for z/OS?


Click here to download the full paper – over 700 downloads of this top-performing asset.

Direct link to the whitepaper without registration: DB2%20for%20zOS%20Age%20of%20the%20Cloud%20IMW14820USEN.pdf

Read more…

There are several use cases where data extracted from live data streams such as Twitter needs to be persisted into external databases. In this example, you will learn how to filter incoming live Twitter data and write relevant subsets of it into IBM DB2. The sample program works against all flavors of IBM databases, i.e. DB2 for z/OS, DB2 distributed, dashDB and SQLDB.

We will use Spark Streaming to receive live data streams from Twitter and filter the tweets by a keyword. We will then extract the Twitter user names associated with the matching tweets and insert them into DB2. These user names can serve many applications – for example, a more comprehensive analysis of whether these Twitter users are account holders of the bank, by performing joins with other tables such as a customer table.

1) For a background on Spark Streaming, refer to

2) We will use the TwitterUtils class provided by Spark Streaming. TwitterUtils uses Twitter4J under the covers, which is a Java library for the Twitter API.

3) Create a table in DB2 called TWITTERUSERS using -
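The DDL snippet did not survive in this post. Here is a minimal sketch (the column name and length are assumptions – adjust them to your environment):

```sql
-- Illustrative DDL; USERNAME and its length are assumed, not from the original post
CREATE TABLE TWITTERUSERS
  (USERNAME VARCHAR(128) NOT NULL);
```

A single VARCHAR column is enough to hold the Twitter screen names that the program inserts.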


4) Create a new Scala class in Eclipse with contents available at this link. Change the database and Twitter credentials to your own (as shown in Step 7).

5) Make sure the Project Build Path contains the jars db2jcc.jar (DB2 JDBC driver), spark-assembly-1.3.1_IBM_1-hadoop2.6.0.jar and spark-examples-1.3.1_IBM_1-hadoop2.6.0.jar, as shown below -


6) Lines 12 to 15 load the DB2 driver class, establish a connection to the database, and prepare an INSERT statement that is used to insert Twitter user names into DB2.

7) Lines 17 to 24 set the system properties for consumerKey, consumerSecret, accessToken and accessTokenSecret, which will be used by the Twitter4J library to generate OAuth credentials. You do this by configuring the consumer key/secret pair and the access token/secret pair in your account at this link – detailed instructions on how to generate the two pairs are contained at

8) Lines 26 and 27 create a local StreamingContext with 16 threads and a batch interval of 2 seconds. The StreamingContext is the entry point for all streaming functionality.

9) Using the StreamingContext created above, Line 30 creates a DStream object called stream. A DStream is the basic abstraction in Spark Streaming and is a continuous stream of RDDs containing objects of type twitter4j.Status. A filter is also specified (“Paris”), which selects only those tweets that contain the keyword “Paris”.

10) In Line 31, a map operation on stream maps each Status object to its user name, creating a new DStream called users.

11) Line 32 returns a new DStream called recentUsers in which user names are aggregated over a 60-second window.

12) Lines 34 to 41 iterate over each RDD in the DStream recentUsers to report the number of users every 60 seconds and insert those users into the database table TWITTERUSERS through JDBC.

13) Line 44 starts the real processing and awaits termination.
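Since the linked listing may not load for everyone, here is a condensed sketch of the program the steps above describe. Treat it as illustrative: the object name, placeholder credentials, and minor API details are assumptions layered on the Spark 1.3-era TwitterUtils API used in this post.

```scala
import java.sql.DriverManager
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils

object TwitterToDb2 {
  def main(args: Array[String]): Unit = {
    // Lines 12-15: load the DB2 JDBC driver and prepare the INSERT
    Class.forName("com.ibm.db2.jcc.DB2Driver")
    val conn = DriverManager.getConnection(
      "jdbc:db2://<host>:<port>/<dbname>", "<user>", "<password>")
    val insertStmt = conn.prepareStatement(
      "INSERT INTO TWITTERUSERS (USERNAME) VALUES (?)")

    // Lines 17-24: OAuth credentials picked up by Twitter4J
    System.setProperty("twitter4j.oauth.consumerKey", "<consumerKey>")
    System.setProperty("twitter4j.oauth.consumerSecret", "<consumerSecret>")
    System.setProperty("twitter4j.oauth.accessToken", "<accessToken>")
    System.setProperty("twitter4j.oauth.accessTokenSecret", "<accessTokenSecret>")

    // Lines 26-27: local StreamingContext, 16 threads, 2-second batches
    val conf = new SparkConf().setMaster("local[16]").setAppName("TwitterToDb2")
    val ssc = new StreamingContext(conf, Seconds(2))

    // Lines 30-32: filter tweets on "Paris", map to user names, 60-second window
    val stream = TwitterUtils.createStream(ssc, None, Seq("Paris"))
    val users = stream.map(status => status.getUser.getScreenName)
    val recentUsers = users.window(Seconds(60))

    // Lines 34-41: count the users in each window and insert them via JDBC
    // (collect() brings the names to the driver, where the connection lives)
    recentUsers.foreachRDD { rdd =>
      val names = rdd.collect()
      println(s"Users in the last 60 seconds: ${names.length}")
      names.foreach { name =>
        insertStmt.setString(1, name)
        insertStmt.executeUpdate()
      }
    }

    // Line 44: start processing
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that the JDBC connection is used only on the driver (after collect()); a connection object cannot be serialized out to Spark executors.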

14) The following screenshot shows a snippet of console output when the program is run. Of course, you can change the filter to any keyword in line 29.


15) You can also run SELECT * FROM TWITTERUSERS on your database to confirm that the Twitter users are being inserted.

The simple Twitter program above can be extended to more complicated use cases: using Spark Streaming to analyze social media data more effectively, persisting subsets of social media data into databases, and joining social media data with relational data to derive additional business insights.

You can reach us for questions (Pallavi or Param

Read more…

We are seeing a trend of DB2 data being accessed by modern distributed applications written with new APIs and frameworks. JavaScript has become extremely popular for Web application development, and its adoption was revolutionized by Node.js, which makes it possible to run JavaScript on the server side. There is increasing interest among developers in writing analytics applications in Node.js that need to access DB2 data (both z/OS and distributed). Modern DB2 provides a Node.js driver that makes connectivity straightforward. Below are step-by-step instructions for a basic end-to-end Node.js application on Windows accessing data from DB2 for z/OS and DB2 distributed -

1) Install Node and its companion NPM. NPM is a tool to manage Node modules. Download the installer from

2) Note that the DB2 Node.js driver does not yet support Node 4 on Windows. Node 4 support is already available for Mac and Linux, and Windows support is coming very soon.

3) Install a 64-bit version of Node since DB2 Node.js driver does not support 32-bit.

4) Run the installer (in my case node-v0.12.7-x64.msi). You should see a screen like Screenshot 1.

Screenshot 1

5) Follow the instructions on license and folder choice until you reach the screen for the features you want installed. The default selection is recommended; click Next to start the install (Screenshot 2).

Screenshot 2

6) Verify that the installation is complete by opening the command prompt and executing node -v and npm -v as shown in Screenshot 3.

Screenshot 3

7) You can write a simple JavaScript program to test the installation. Create a file called Confirmation.js with the contents console.log('You have successfully installed Node and NPM.');

8) Navigate to the folder in which you created the file and run the application using the command node Confirmation.js. The output looks like Screenshot 4.

Screenshot 4

9) Now install the DB2 Node.js driver using the following command from the Windows command line: npm install ibm_db. (For Node.js 4+, the installation command is different, as follows:

npm install git+

10) Under the covers, the npm command downloads the node-ibm_db package from GitHub and includes the DB2 ODBC/CLI driver to provide connectivity to the DB2 backend. You should see the following output (Screenshot 5).

Screenshot 5

11) Copy the following simple DB2 access program into a file called DB2Test.js and change the database credentials to your own -

var ibmdb = require('ibm_db');

ibmdb.open("DRIVER={DB2};DATABASE=<dbname>;HOSTNAME=<myhost>;UID=db2user;PWD=password;PORT=<dbport>;PROTOCOL=TCPIP", function (err, conn) {

if (err) return console.log(err);

conn.query('select 1 from sysibm.sysdummy1', function (err, data) {

if (err) console.log(err);

else console.log(data);

conn.close(function () {

console.log('Connection closed.');

});

});

});

12) Run the following command from the Windows command line to execute the program: node DB2Test.js. You should see Screenshot 6, containing the output of the SQL SELECT 1 FROM SYSIBM.SYSDUMMY1. Your simple Node application can now access DB2.

Screenshot 6

13) To connect to DB2 for z/OS, modify the connection URL, database name, port, user name and password to your DB2 for z/OS credentials.
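The connection string keeps the same shape as in Step 11; only the values change. A sketch (every value below is a placeholder, not a real host or port):

```javascript
// Hypothetical DB2 for z/OS connection string; all values are placeholders
var zosCn = "DRIVER={DB2};DATABASE=<location_name>;HOSTNAME=<zos_host>;" +
            "UID=<user>;PWD=<password>;PORT=<db2_port>;PROTOCOL=TCPIP";

// The string carries seven semicolon-separated connection properties
console.log(zosCn.split(";").length + " connection properties");
```

For z/OS, the DATABASE value is typically the DB2 location name rather than a database name.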

14) DB2 for z/OS access requires DB2 Connect license entitlement. In most production DB2 for z/OS systems with DB2 Connect Unlimited Edition licensing, server-side license activation will already have been done, in which case you don't need to do anything about licensing. If you get a license error on executing the program, server-side activation may not have been done; in that case, copy the DB2 Connect ODBC client-side license file into the ibm_db/installer/clidriver/license folder.

15) Also make sure that the DB2 for z/OS server you are testing against has the CLI packages already bound (this would have been done as part of DB2 Connect setup on the DB2 for z/OS server).

16) Run the program with DB2 for z/OS credentials and you should observe output similar to Step 12.

17) Attached is a Node.js program (NodeDb2zosSelect.js) that fetches rows from the DB2 for z/OS Employee table in the sample database (DSN8A10.EMP). To run the same program against DB2 distributed, make sure not only to change the database credentials, but also to change the table name in the SELECT SQL to EMPLOYEE. In both DB2 for z/OS and DB2 distributed, you should see output as shown in Screenshot 7.

Screenshot 7
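The only SQL difference between the two runs is the table reference – illustratively:

```sql
-- DB2 for z/OS: sample database table
SELECT * FROM DSN8A10.EMP;

-- DB2 distributed: SAMPLE database table
SELECT * FROM EMPLOYEE;
```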

Continue enjoying your Node.js test drive with DB2!

Read more…


Big Data technologies represent an opportunity to derive new insight from data on System z and to modernize the entire System z infrastructure regarding the following areas:

  • Enabling true mixed Hybrid Transaction/Analytical Processing (HTAP) workloads and delivering a single workload-optimized system with Operational Data Store (ODS) capabilities that integrates operations and business critical analytics into one streamlined system, e.g. by using IBM DB2 Analytics Accelerator (IDAA).
  • Complementing existing 'traditional' analytical capabilities with Big Data analytics, e.g. by making use of text analytics and Natural Language Processing (NLP) as part of IBM InfoSphere BigInsights.
  • Modernizing the zEnterprise Data Warehouse (zEDW) landscape by significantly reducing the number of traditional repositories, such as a landing zone, staging area, System of Record, data marts, cubes, e.g. by leveraging IDAA.
  • Exploring and visualizing big data and deriving business-outcome-oriented insight prior to complex transformation, e.g. through IBM Watson Explorer.
  • Simplifying the often complex and expensive information supply chain, e.g. by using IDAA and through ELT on Hadoop with IBM InfoSphere DataStage Balanced Optimization for Hadoop.
  • Integrating Hadoop data repositories as part of a System z centric data reservoir, e.g. by leveraging IBM InfoSphere Information Governance Catalog.


This blog serves to exchange ideas and to better understand the above opportunities and zEnterprise modernization scenarios in the context of real customer requirements and use cases. It will include architecture patterns and deployment models derived from real client engagements. The intent is to describe the required integration capabilities at the intersection of the key products mentioned above. Furthermore, the objective is to articulate existing capabilities in DB2 for z/OS, IDAA, InfoSphere BigInsights and related products, but also to understand gaps that need to be addressed.

This blog should facilitate a discussion on modernizing the System z infrastructure as it relates to its monetization aspects and quantifiable business value. Deliverables should be descriptions of client use case scenarios, opportunities for new insight with data on System z, integration and capability gaps, and emerging System z reference architectures.

Kind Regards,

Read more…


IBM will make DB2 12 for z/OS available to a select group of clients in a closed Early Support Program (ESP) on March 4, 2016.

The demands of the mobile economy combined with the explosive growth of data present unique opportunities and challenges for companies wanting to take advantage of their mission-critical resources.

Built on the proven availability, security, and scalability of DB2 11 for z/OS and the IBM z Systems™ platform, DB2 12 gives you the capabilities needed to meet the demands of mobile workloads and increased mission-critical data. It delivers world-class analytics and online transaction processing (OLTP) performance.

DB2 for z/OS delivers innovations in these key areas:

  • Scalable, low-cost, enterprise OLTP and analytics
  • Easy access, easy scale, and easy application development for the mobile economy
  • In-memory performance improvements
  • Easy access to your enterprise systems of record

Read More

Read more…

For those of you considering whether to weave business analytics more tightly into the fabric of your call center operations, read my blog: Business Analytics in the Call Center, an interesting opportunity!

Read more…

I'll tell you why... The more insight we seek, the more intimate we become with the entities in our business and the data that relates to them. Today's analytics delves deep into the data in our everyday lives - what we purchased, what car we own, who we are related to, how much we earn, where and how we spend it, our health status, marital status, how many children we have, what we like and what we don't like - the list goes on.

So come listen to me talk with my co-presenter Rebecca Wormleighton - details below.

Webcast, June 12, 2013, 12:00 PM EDT: Why is Information Governance So Important for Modern Analytics?

The data that feeds your analytics solutions can include everything from customer details to financial records to employee data. This data getting into the wrong hands, either internally or externally, can have a major impact on an organization's success and can cost many millions of dollars, which brings information governance and analytics to the forefront for many organizations.

Register for this teleconference to learn how to:
*Reduce business risks and costs
*Deliver the business insights your users need to drive optimal business performance
*Decrease the opportunity for critical data to be exposed and put at risk

Join this teleconference and learn how the combination of IBM information governance offerings and analytic solutions on the zEnterprise platform can help you enhance information integrity, availability and quality - and download the complimentary white papers.

Read more…

Next Best Action

Recently, I was meeting with a DB2 for z/OS client, and the topic of Next Best Action (NBA) came up.  

My client's challenges are that although they consider the "lifetime value" of the customer in their marketing messaging and fraud detection algorithms:

(1) Marketing messages are both segment based (or shotgun blast) and often ill-timed for the customer based on their value and behavioral lifecycle.  Customers receive multiple outbound messages per month. 

(2) In many cases, automated interactions with the customer for upsell, cross-sell, and fraud detection are based on scoring input data that is aged one or more days and does not represent the current state of the customer.

(3) Customer interactions are initiated by separate organizations with limited or no coordination or cross-channel understanding of the customer.

Unfortunately, the result of these poorly targeted and ill-timed interactions with customers is that the customers feel like they are treated with little consideration of their history with my client.  One wrong or ill-timed interaction is all it takes to destroy many years of relationship building with a customer and send their lifetime value into a death spiral.  

NBA is all about taking the right action with the specific customer via the right channel at the right time based on a cross-channel view of their behaviors and value.  In short, it is mass automation of the one-to-one relationship with the customer.  It leverages a combination of automated rules-based decision making, mathematical optimization models, in-transaction and batch scoring, as well as integrated campaign management.

For more details on NBA, here are several resources to get you started:

5 Things To Know About Making the Next Best Action with Your Customer

The IBM® Smarter Analytics Signature Solution - next best action solution 

IBM Redguide publication: Smarter Analytics: Driving Customer Interactions with the IBM Next Best Action Solution


Read more…