
In-Database AI Client Experiences with Db2 for z/OS + Demo 

Don't miss this live webcast on 2 November 2021 at 11:00 AM EST.
Tom Ramey will highlight some of the key challenges facing Db2 for z/OS clients and explain how AI is a breakthrough technology that, when applied to Db2 for z/OS performance management and resiliency, can have a major impact. Tom will be joined by Benny Van Straten from Rabobank and Tom Beavin from IBM. Tom Beavin will share Db2 AI use cases and host a live Db2 AI demo. Rabobank is a Dutch multinational banking and financial services company; Benny will share first-hand experiences and lessons learned around Db2 AI for z/OS and the power of in-database AI.

What will you learn by attending this webcast?

  • Hear first-hand client feedback and experiences
  • Learn how Db2 AI improves SQL performance using machine learning based on unique patterns found when executing queries in a production environment
  • Learn how Db2 AI automatically detects SQL access path performance regressions and automatically restores performance to its optimal level
  • Learn how Db2 AI automatically stabilizes dynamic queries with their optimal access path, reducing prepare overhead


Tom Ramey IBM WW Director, Data and AI on IBM Z

Benny van Straten IT Specialist/DB2 Rabobank

Akiko Hoshikawa IBM Distinguished Engineer

Tom Beavin IBM Db2 AI for z/OS Development Machine Learning and Optimization


Read more…

This ninth article in the series on DB2 Locking for Application Developers provides information about coding your application not only for data integrity, which is the principal focus of this series of articles, but also for performance and concurrency, taking into account the transaction isolation levels in effect at run time. Background information about DB2 for z/OS locking semantics and mechanisms, transaction isolation levels, data anomalies and more is covered in previous articles. This article concentrates on coding techniques, mostly for combining read-only cursors with searched update statements, that protect against data anomalies, most specifically the lost update anomaly.

Let’s start with a restatement of why this is important. In DB2 for z/OS, the recommended programming technique for reading rows via a cursor and then updating some or all of those rows is to specify the FOR UPDATE clause on the cursor declaration and use positioned updates – UPDATE WHERE CURRENT OF. This has the advantage that, when you read a row, DB2 takes a U lock on the row or page. This allows concurrent readers with an S lock, but any concurrent transactions requesting U or X locks will have to wait until the U lock is released. When the transaction issues UPDATE WHERE CURRENT OF <cursor-name>, DB2 attempts to promote the U lock to an X lock. This ensures that no other transaction can have updated the row between the SELECT, which protects the row with a U lock, and the UPDATE.
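The recommended pattern described above can be sketched in embedded SQL. The table, column, and host-variable names below are hypothetical, purely for illustration:

```sql
-- Hedged sketch of a FOR UPDATE cursor with positioned updates.
-- ACCOUNT, its columns, and the :hv-* host variables are invented names.
DECLARE C1 CURSOR FOR
  SELECT ACCT_ID, BALANCE
    FROM ACCOUNT
   WHERE BRANCH_ID = :hv-branch
   FOR UPDATE OF BALANCE;

OPEN C1;

FETCH C1 INTO :hv-acct-id, :hv-balance;  -- DB2 takes a U lock on the row/page

UPDATE ACCOUNT                           -- DB2 promotes the U lock to an X lock
   SET BALANCE = :hv-new-balance
 WHERE CURRENT OF C1;

CLOSE C1;
```

Because the U lock is held from the FETCH onward, no other transaction can update the row between the read and the positioned update.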

However, it’s not always possible to use FOR UPDATE cursors. Find out why. Click here to continue reading. 


Read more…

This is the last article in the series by Gareth Copplestone-Jones on locking for developers, and is a wrap-up of miscellaneous points arising from previous articles.


Row level locking

The first item under discussion is row-level locking. I mentioned previously that the design default lock size should be page-level, with row-level locking used only where justified. This is to reduce the CPU and elapsed time overhead typically incurred by row-level locking, and especially to avoid the data sharing overhead involved. The DB2 for z/OS documentation has further information about locking in a data sharing environment, but for the purposes of this article it’s important to stress that row-level locking in a data sharing environment can, and typically does, introduce significant overheads.
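In Db2 for z/OS the lock size is a table space attribute, so the page-versus-row decision is made in the DDL. A minimal sketch, with hypothetical database and table space names:

```sql
-- Hypothetical names; LOCKSIZE is set on the table space, not the table.
CREATE TABLESPACE TSORDERS IN DBSALES
  LOCKSIZE PAGE;              -- design default: page-level locking

ALTER TABLESPACE DBSALES.TSORDERS
  LOCKSIZE ROW;               -- only where row-level concurrency is justified
```

LOCKSIZE ANY is also available, letting DB2 choose, but the articles argue for an explicit page-level default.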

Continue reading the final instalment in Gareth Copplestone-Jones’ series of blogs on DB2 for z/OS Locking for Application Developers.


Read more…

GO Application and Db2 for z/OS REST services

A few days after I posted my first GO article (Quick Start in accessing Db2 for z/OS from a GO application), customers started to ask me for alternatives, since they don't want to use the CLI driver. Another option is to use the Db2 native REST services. In the following article, I show how to do this step by step.

Read more…

Welcome to part eight of this blog series on DB2 Locking for Application Developers, which is about considerations for coding your application not only for data integrity (the main focus of this series of articles), but also for performance and concurrency, taking into account the transaction isolation levels in effect at run time.

This article continues the discussion in part seven about read-only cursors running at a transaction isolation level of cursor stability (CS), supplemented by the fairly common practice of combining read-only cursors with searched updates. I’ll explain later in the article the perils of access-dependent cursors and why you should avoid using them. The article concludes with a brief discussion of the concept of optimistic locking, also called optimistic concurrency control.
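As a taste of the optimistic locking idea the article concludes with: read the row without holding a long-lived lock, remember a change indicator, and make the later searched update conditional on it. A hedged sketch with hypothetical table, column, and host-variable names (a plain timestamp column is used here as the change indicator):

```sql
-- Optimistic locking sketch: no lock is held between the read and the update.
SELECT BALANCE, UPDATE_TS
  INTO :hv-balance, :hv-old-ts
  FROM ACCOUNT
 WHERE ACCT_ID = :hv-acct-id
  WITH CS;

-- ... application logic runs here, holding no locks ...

UPDATE ACCOUNT
   SET BALANCE   = :hv-new-balance,
       UPDATE_TS = CURRENT TIMESTAMP
 WHERE ACCT_ID   = :hv-acct-id
   AND UPDATE_TS = :hv-old-ts;   -- updates 0 rows if another transaction
                                 -- changed the row, avoiding a lost update
```

If the UPDATE reports zero rows, the application knows the row changed underneath it and can re-read and retry.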

Visit the Triton Consulting tech blog to continue reading. 


Read more…

Modernizing Applications by using APIs

This blog is the last part of a multi-part series published by Aymeric Affouard, Guillaume Arnould, Khadija Souissi and Leif Pedersen. Here are the links to the previous entries:

Part 1: Is there a future for Analytics on IBM Z?

Part 2: IBM Db2 Analytics Accelerator

Part 3: Machine Learning on z/OS

Part 4: Operational Decision Manager

Last month, my Internet provider changed my connection from copper to fiber. The diameter of the cable is still the same, but the copper connection uses a pair of copper wires twisted together, whereas the fiber connection uses a single tiny fiber surrounded by Kevlar fibers to strengthen the cable. The magic is that light at two different wavelengths is used for the outbound and the inbound communication. This new fiber connection is 30 times faster for downloads and 300 times faster for uploads than my previous ADSL connection.

My phone and my web browser are much happier. They can send more HTTP requests to the outside world, and my user experience is noticeably faster and more fun.


The question is: what are those HTTP requests doing? What are those devices trying to achieve?

They just want to communicate with somewhere to exchange information. Is there a common way to do this? Yes, of course: on the Internet the main language is HTTP. Every internet address begins with http://blablabla, or the secure version of it with an ‘s’, https://blablabla. This standard is used by mobile devices and web browsers, what we call the SoE, aka Systems of Engagement. But the use of HTTP is broader than just the SoE. SoE developers have already imposed this language for talking to data servers, also called SoR, aka Systems of Record, and to analytics servers, called SoI, aka Systems of Insight. They do that because it’s the language they already speak: it is easier for them to reuse something already known than to try to understand and integrate the variety of different protocols used by those SoRs. They don’t want to spend time making their applications talk those different protocols. That is how REST APIs came to life as the standard between the different systems: SoE, SoR and SoI. There is a lot of good documentation on the Internet about what a REST API is, but here is a simple definition: HTTP + JSON. The communication relies on the HTTP protocol, and if you need to carry data, it takes the form of a JSON message, much less expensive than XML.


Bang! Isn’t it simpler? Everyone talks the same language: a common, well-established one.


Maybe we can push the idea further. Inside the SoR, the tendency is to explode monolithic applications into smaller pieces that are easier to manage and to develop by different teams: microservices. What about the communication between those microservices? Can it be REST APIs? Well, the answer to that question is not obvious. Let’s go back to HTTP. We saw that this protocol is easy to speak for virtually every programming language.


But does ‘easy’ also mean ‘efficient’?

The answer is no, and here are some drawbacks of HTTP:

  • HTTP is synchronous: after sending the request, you wait for the answer, doing nothing in the meantime.
  • HTTP sits at the top of the OSI model, at layer 7, so machines have to process all seven layers.

So, REST APIs have a cost, and it’s up to developers to find a good balance between the time spent processing business logic and the time spent processing communication between the different business logic blocks. Memory-to-memory communication is far faster than network communication like REST APIs. As proof, IBM introduced Shared Memory Communications (SMC-R and SMC-D) on the mainframe to reduce data transmission network overhead when applications communicate through TCP/IP.

REST APIs are also stateless, which means we can’t execute two REST requests under the same unit of work. For two or more REST API calls, it’s goodbye to two-phase commit, well known to mainframe people.


Indeed, let’s go back to the mainframe, where business logic is processed in the form of a transaction. The mainframe is well known for processing transactions quickly, let’s say 5-20 microseconds, at large scale, let’s say 1,500 transactions per second. If transactions are exposed to the outside world as REST APIs, what is the cost of this web translation? Again, it depends, and it is up to each one to choose where to put the cursor, and to choose the technology accordingly. IBM’s strategic REST API gateway for mainframe applications is z/OS Connect Enterprise Edition. The cost of this gateway for web translation, if well used, is a few microseconds added to the cost of the initial transaction.


What do I mean by ‘well used’?

The role of z/OS Connect EE is to expose mainframe assets. But those assets are secured inside the machine, and z/OS Connect EE must be secured the same way, with encryption, authentication, authorization, and perhaps auditing for every request. Each of these security pieces has a cost, which needs to be taken into account when designing the solution.

For example, try to avoid using the plain OAuth 2.0 open standard for authentication, where for every request the mainframe needs to make an extra call to the OAuth 2.0 introspection endpoint to validate the token. Enhance OAuth 2.0 with OpenID Connect, where the token can be consumed directly by the mainframe.

Another example is encryption, when we add the ‘s’ to HTTP. HTTPS has two phases: the first is the handshake, where the two servers agree on the encryption key, and the second is data encryption, where data is sent encrypted with that key. The cost of a secure handshake can hardly be under half a second, 500 ms, even on a mainframe with hardware cryptographic accelerators. This communication time is huge compared to the business processing time: 500 ms compared to 5 ms. The handshake is like the creation of a secure tunnel between the two servers. Once the tunnel is established, keep using it for a while; do not recreate it for every request.



This is quite a long story, starting from the new fiber connection and ending with the use of REST APIs to access the mainframe. This is the path my HTTP requests take from the phone in my home to a mainframe somewhere when I order items from my favorite retailer (running on a mainframe) from my sofa. I want those requests to be quick but also secure: to be sure it’s me, not a hacker somewhere. All of this using secure and well-designed REST APIs, but not too many, otherwise my user experience will be changed to a wait experience.

Read more…

The value of data on the balance sheet

DB2 holds so much data. Data makes us smarter and gives us insight and business advantage, so it has monetary value, right? I pondered for a long time whether data should be an asset on the balance sheet, and blogged about it here.

Essentially, we can turn data into meaningful insights, particularly big data, as there is so much more to consider. Mike Ferguson, independent analyst and consultant, talks about this in his upcoming webcast on June 24 at 11:00 AM Eastern, highlighting a summary of his great white paper on the same topic.

Join us by registering here>>>

Let me know your thoughts.

Read more…

The IBM System z platform is known for its scalability and its unmatched security. Nonetheless, we still need to monitor the who, what, why, when, where and how of protecting information. Big data will drive increased compliance requirements as the range of data sources expands to support decision making. All of these are subject to audit, compliance, regulation and more. Taking a proactive approach can help keep incidents from becoming headline breaches. Read this paper and learn the full capabilities of the InfoSphere Guardium portfolio for IBM System z, and why there is no excuse for data breaches any longer.

I think you will enjoy this great new white paper from Ernie Mancill of IBM, our resident DB2 for z/OS security expert.

Read more…

Next Best Action

Recently, I was meeting with a DB2 for z/OS client, and the topic of Next Best Action (NBA) came up.  

My client's challenges are that, although they consider the "lifetime value" of the customer in their marketing messaging and fraud detection algorithms:

(1) Marketing messages are segment-based ("shotgun blast") and often ill-timed for the customer based on their value and behavioral lifecycle. Customers receive multiple outbound messages per month.

(2) In many cases, automated interactions with the customer for upsell, cross-sell, and fraud detection are based on scoring input data that is one or more days old and does not represent the current state of the customer.

(3) Customer interactions are initiated by separate organizations with little to no coordination or cross-channel understanding of the customer.

Unfortunately, the result of these poorly targeted and ill-timed interactions is that customers feel they are treated with little consideration of their history with my client. One wrong or ill-timed interaction is all it takes to destroy many years of relationship building with a customer and send their lifetime value into a death spiral.

NBA is all about taking the right action with the specific customer via the right channel at the right time based on a cross-channel view of their behaviors and value.  In short, it is mass automation of the one-to-one relationship with the customer.  It leverages a combination of automated rules-based decision making, mathematical optimization models, in-transaction and batch scoring, as well as integrated campaign management.

For more details on NBA, here are several resources to get you started:

5 Things To Know About Making the Next Best Action with Your Customer

The IBM® Smarter Analytics Signature Solution - next best action solution 

IBM Redguide publication: Smarter Analytics: Driving Customer Interactions with the IBM Next Best Action Solution


Read more…