
IBM Db2 12 for z/OS Function Level Activation and Management

This technical paper follows on from an IBM Gold Consultants survey of business strategies for handling CI/CD in Db2 for z/OS, and in particular for advancing Function Levels.

Subject: Advancing Db2 12 for z/OS Function Levels safely.

Target audience: Executive-level decision makers and Db2 for z/OS system administrators.

Db2 12 for z/OS represents a significant change in the way new features and functions are delivered, with the introduction of continuous delivery: new features are introduced in the maintenance stream, and the activation of those new features is user-scheduled.

Db2 12 for z/OS has built on the capability introduced in Db2 11 to separate system migration from application migration using Application Compatibility, where application migration is under user control. It can be delayed until after system migration and scheduled on an application-by-application basis.

However, some users are still concerned about managing the introduction of new features and functions, particularly system-level features; for them, it is imperative that this be done without impacting the production service.

This IBM Gold Consultants Technical Paper on Function Level activation with Db2 12 for z/OS continuous delivery is aimed at exactly that audience, and in particular it is targeted at decision makers and executives responsible for the production service. It describes how to turn off those features, and provides detailed guidance on eliminating or managing the impact of any incompatible or unavoidable changes even when advancing function levels. This allows Db2 customers to position themselves for the next release of Db2 for z/OS, which requires that the latest Db2 12 function level be activated before starting the migration process.
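
To make the mechanics concrete, here is a minimal, hedged sketch of the commands involved (the function level, collection and package names are illustrative): the function level is activated with a Db2 command, while each application's exposure to new SQL behavior is controlled separately through the APPLCOMPAT bind option.

  -- Check the current code level, catalog level, and function level
  -DISPLAY GROUP DETAIL

  -- Activate the new function level; no outage is required
  -ACTIVATE FUNCTION LEVEL (V12R1M500)

  -- A package keeps its existing SQL behavior until it is deliberately
  -- rebound with a higher APPLCOMPAT value
  REBIND PACKAGE(COLL1.PAYROLL) APPLCOMPAT(V12R1M100)

A lower function level can also be activated to back off new function (it is then reported with an asterisk, for example V12R1M100*), which is the "turning off" capability the paper discusses.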

Read more…

This article is the second of a four-part series addressing Db2 for z/OS and modern development utilizing an Agile methodology and DevOps processes. The first article of the series can be found here. In this article we try to define modern development terms in a manner that people most familiar with Db2 for z/OS might better understand. While Db2 for z/OS is not the first database that comes to mind when considering a DevOps development methodology, there is no reason why it shouldn’t be!

I took some of the terms used in modern application development and attempted to approach them in a manner that most long-term Db2 for z/OS professionals might understand. Hopefully, for the Db2 for z/OS DBA this is a nice condensed version of what may be a whole set of mysteries surrounding modern application development. I did the searching, researching, and discussing with colleagues so you don’t have to. As our information technology departments transition their development practices, you do not have to be left out in the cold. Use this information as a jumping-off point to kick-start your transition to these modern technologies!

To read the entire article follow this link:

https://bit.ly/Db2ZDevOpsBlog2

Read more…

Db2 for z/OS and JSON

I wrote a very short story about using the JSON/SQL support in Db2 for z/OS. I also wrote an introductory article on JSON support in Db2 for z/OS for IDUG.

The story, which links to the IDUG article, is here:
https://www.db2expert.com/db2expert/introduction-to-db2-for-z-os-json-sql-support/

The IDUG article itself requires an IDUG login account (which is free).

Read more…

This article is the first of a four-part series addressing Db2 for z/OS and modern development utilizing an Agile methodology and DevOps processes. In this article we define traditional versus modern development and how Db2 for z/OS, while remaining the premier database for high volume system of record applications, might be left behind when it comes to best practices involving innovation. Did you notice that “Db2” follows “Traditional” in the title of this article, but “Database” follows “Modern”? It is my opinion that Db2 for z/OS is not the first database that comes to mind when considering a DevOps development methodology. However, there is no reason why it shouldn’t be!

The first thing we need to do is to identify what we mean by traditional versus modern development. In this series of articles, we will be speaking specifically of the waterfall methodology versus an Agile methodology and DevOps practices.

To read the entire article please follow this link:

https://community.ibm.com/community/user/hybriddatamanagement/blogs/daniel-luksetich/2021/06/21/traditional-db2-development-vs-modern-development

Read more…

IBM DB2 12 for z/OS became generally available on Oct. 21, 2016. It's perhaps sobering to reflect on the fact that DB2 was first announced in 1983 and released in 1985, but its roots—and the roots of all relational databases—go all the way back to mathematician and IBM Fellow Edgar F. Codd's ground-breaking 1970 paper, "A Relational Model of Data for Large Shared Data Banks."

Many major enhancements have been introduced since those early days, radically transforming DB2 for z/OS into the premier database for reliability, scalability and availability that it is today, with support for modern application programming paradigms and both non-relational and relational data. Updates by version include:

  • DB2 V2.1: DB2-managed referential integrity (RI)
  • DB2 V2.2: Initial distributed database support with private protocol (DB2 for MVS to DB2 for MVS)
  • DB2 V2.3: Support for packages, and strategically important distributed database support for Distributed Relational Database Architecture
  • DB2 V4.1: Most significantly data sharing, but also stored procedures
  • DB2 V5.1: Online REORG
  • DB2 V6.1: Triggers, large objects and user-defined functions
  • DB2 V7.1: Unicode support
  • DB2 V8.1: Online schema evolution and 64-bit virtual storage
  • DB2 9: The Universal Table Space (UTS) with partition by range and partition by growth table spaces, the native XML data type and native SQL stored procedures
  • DB2 10: Large-scale 64-bit architecture exploitation, leading to almost complete elimination of virtual storage constraint, temporal tables, and performance improvements for online transaction processing (OLTP) queries
  • DB2 11: Performance improvements for more complex queries, transparent archiving and JSON support

Trimmed down as it is, that's an intimidating list of enhancements, so how does DB2 12 match up to its predecessors? That's what we'll be exploring, starting with an overview of the themes and a look at some of the highlights of the new release, including the high-level performance expectations. We won't go into the technical details here—we'll cover those in a series of later articles.

When planning for DB2 12, DB2 for z/OS development set themselves a series of goals, based around four broad themes:

  • Application enablement
  • Database administrator (DBA) function
  • OLTP performance
  • Query performance

These themes propel DB2 for z/OS, combined with the IBM DB2 Analytics Accelerator for z/OS, into the era of the Hybrid Transactional/Analytical Processing (HTAP) database.

Application Enablement

DB2 development set about addressing a number of key customer requirements to expand the use of the existing features of DB2, as well as delivering mobile, hybrid cloud and DevOps enablement. To continue the HTAP journey, DB2 Analytics Accelerator functionality is extended to support an increased number of use cases, and a number of incremental improvements in the SQL and SQL/PL areas make DB2 ready for the next wave of applications.

DBA Function

Even with the existing capability to grow partitioned table spaces to 128 TB (the actual limit is dependent on page size and partition size or DSSIZE), some customers have been constrained by table and partition scalability limits. DB2 12 raises the current limit to an incredible 4 PB, providing capacity for many trillions of rows. Large table management is already a headache for many customers, so DB2 12 simplifies this by making it easier to add partitions (e.g., it’s now possible to insert a new partition between existing ones, complementing the existing capability to add partitions at the end of the table space).
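
As a hedged illustration of that capability (the table name and limit key are illustrative, and the table is assumed to be range-partitioned on a date column), inserting a partition into the middle of the table is a single statement:

  -- The new limit key falls between two existing partition boundaries,
  -- so DB2 12 inserts the new partition at that point rather than at the end
  ALTER TABLE TRANS_HISTORY
    ADD PARTITION ENDING AT ('2016-06-30');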

Although DB2 availability is second to none, DB2 12 removes some of the biggest remaining inhibitors to 24-7 continuous availability (e.g., by ensuring tables remain available while maintenance tasks are carried out).

OLTP Performance

OLTP performance remains a key requirement for DB2 customers, not just for improved response times, but to reduce the total cost of ownership, which is a pressing imperative for most IT organizations. Customers also need to be able to handle more throughput and higher transaction volumes. For these reasons, DB2 development set themselves a series of goals for DB2 12, building on the performance improvements already delivered in DB2 10 and DB2 11, to:

  • Reduce CPU consumption in the 5 to 10 percent range by exploiting in-memory features
  • Double the throughput when inserting into a non-clustered table
  • Remove system scaling bottlenecks associated with high n-way systems
  • Provide incremental improvements related to serviceability and availability

Query Performance

Query performance has become of increasing importance to customers over time, as they seek cost-effective ways to discover the valuable information often hidden in the vast amount of business and operational data. Improved analytical query performance enables them to make business decisions faster at less cost.

For DB2, query performance for online analytical processing (OLAP), business intelligence and other more complex workloads has come sharply into focus for customers, and DB2 12 targets four major improvements in this area, to build on the work done in DB2 11:

  • A 20 to 30 percent CPU reduction for complex query workloads
  • Improved efficiency delivered by reducing other resource consumption
  • An 80 percent performance improvement for UNION ALL
  • Simplified access path management, especially for dynamic SQL

Quick Hits

Let’s have a look at some of the highlights of DB2 12 before moving on to discuss the high-level performance expectations in a little more detail.

Scale and Speed

DB2 development has measured over 1 million inserts per second, and believes DB2 can scale even higher. DB2 can also support up to 256 trillion rows in a single table, using agile partition technology.

In-Memory Database

In-memory database is a major theme for this release, and DB2 development has measured up to a 23 percent CPU reduction for index lookup with advanced in-memory techniques.

Next Generation Application Support

DB2 12 continues the journey of enabling the next generation of applications, providing JSON support, investing in mobile for z Systems to allow discovery of data as a service, enabling simpler integration and consumption, and handling up to 540 million transactions per hour arriving through a RESTful web API into DB2.

Deliver Analytical Insights Faster

As discussed earlier, response time isn’t just a requirement for OLTP workloads, but also for analytical workloads, where DB2 can deliver up to a 2x speed-up for query workloads, and up to a 100x improvement for targeted queries.

High-level Performance Expectations

This is an early view of the performance expectations for DB2 12; more detail will be available in an IBM Redbooks performance publication.

In the area of system and OLTP performance, the expectations are that:

  • There will be a 2 to 3 percent CPU reduction without the Index In-Memory feature (the Index Fast Traversal Block, or FTB, area)
  • There will be a 5 to 10 percent CPU reduction by exploiting Index In-Memory (the FTB area); this will be discussed in a later article
  • Further reduction is possible with contiguous buffer pools, and/or with persistent threads bound with RELEASE(DEALLOCATE)

In the area of query performance, DB2 development expects a wide range of improvement:

  • Typically in the area of 0 to 20 percent without a new access path
  • Typically 10 to 40 percent with a new access path
  • DB2 development has observed up to a 90 percent reduction in their testing for some specific queries

Turning to concurrent insert against a table defined in a UTS with the MEMBER CLUSTER attribute, customers can expect a 5 to 10 percent CPU reduction, provided that the current bottleneck is in space search, or in space map page or data page contention.
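
For reference, MEMBER CLUSTER is an attribute of the table space; in data sharing it gives each member its own insert areas, which is what relieves the space map page and data page contention described above. A minimal sketch, with illustrative database and table space names:

  CREATE TABLESPACE TSORDERS IN DBORDERS
    MAXPARTITIONS 10        -- partition-by-growth UTS
    MEMBER CLUSTER          -- per-member insert areas
    LOCKSIZE ROW;

  -- An existing UTS can also be altered; this is a pending change,
  -- materialized by an online REORG
  ALTER TABLESPACE DBORDERS.TSORDERS MEMBER CLUSTER YES;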

Performance Focus

DB2 12 has over twice the number of performance enhancements compared to DB2 11, which was itself known for impressive query performance improvements. Many of the enhancements are targeted at SQL constructs seen in both new analytics and complex transactional workloads.

Firstly, DB2 12 delivers up to a 25 percent CPU improvement for traditional query workloads through optimizations for DISTINCT, GROUP BY, reduced work-file usage, multiple index access and list prefetch.

Secondly, it delivers an up to 2x improvement for modern SQL applications, focusing on performance improvements for next-generation SAP applications, for real-time analytics and for complex OLTP workloads. These optimizations are related to outer join, UNION ALL, stage 2 join predicates, CASE expressions, VARBINARY datatype indexability, DECFLOAT datatype indexability and others.

Parallel query child tasks are now 100 percent eligible for z Systems Integrated Information Processor (zIIP) in DB2 12. In prior releases there was a complicated formula to determine which parts of the parallel query were eligible for zIIP offload. In DB2 12 this becomes much easier, with all child tasks associated with the queries now being zIIP eligible.
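
Parallelism itself is still requested per application; as a hedged illustration (the package name is hypothetical):

  -- Dynamic SQL: allow parallelism for queries in this session
  SET CURRENT DEGREE = 'ANY';

  -- Static SQL: allow parallelism for the package; in DB2 12 the child
  -- tasks created for such queries are 100 percent zIIP-eligible
  REBIND PACKAGE(COLL1.RPTPKG) DEGREE(ANY)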

Deeper Look

That wraps up the first in this series looking at significant enhancements introduced in DB2. In subsequent articles we will move on to look at the following topics in more detail:

  • Performance for traditional workloads
  • Performance enablers for modern applications
  • Application enablement
  • Reliability, availability and scalability
  • DB2 utilities
  • Data sharing improvements
  • Migration to DB2 12 and the continuous delivery model


Gareth Z. Jones has worked in IT since 1985. Until 2000, he was an IBM customer, with experience as a systems programmer and DBA. He now works in DB2 for z/OS development, as a member of the SWAT Team, which is led by John Campbell. He has worked with many customers around the world to help them be successful in their use of DB2. He has written several technical papers and presented at many conferences and user group meetings. He can be contacted via email at jonesgth@uk.ibm.com.

Read more…

In the first part of this article, we introduced the new approach and how to prepare the artifacts that it needs. In this second part, we introduce how to perform DB2 migration with z/OSMF.

Automating DB2 migration in z/OSMF

Creating the workflow instance

For every DB2 subsystem to be migrated, a workflow instance must be created. The path of the workflow definition file must be provided for instantiating a workflow instance. The path of the workflow input variable file is optional. However, it is always best to save the variable values for a specific subsystem into the workflow input variable file and provide it when creating the instance. Otherwise, z/OSMF prompts you to provide the value during execution, preventing a fully automated migration.  

z/OSMF prompts you to input the name of the workflow instance. It is best to name it with the workflow purpose and the target system, for example, "V10 to V11 CM Non-data-sharing - DB2A": "V10 to V11 CM Non-data-sharing" indicates the migration is from Version 10 to Version 11 conversion mode, and DB2A is the name of the subsystem to be migrated.

All the steps in the workflow are assigned to the same person to fully automate the migration process. Dependencies between these steps have been set by the installation CLIST. Only the steps in the "Ready" state can be performed.

Validating JCL with symbolic substitution

It is best to validate the JCLs with variable substitution before executing them. The JCLs with variables substituted can be found on the "Perform" tab of a step. First review the values of the input variables.

 

By default, the values are loaded from the workflow input variable file, so they don’t need to be changed here. However, if changes are required, click the exclamation mark next to the variable name and check "Mark value editable." The input variable textbox then becomes editable. The changed value applies to this workflow instance only.

In the step "Create JOB statement", you are prompted to review and update "JOB statement JCL". Update the job statement with a job name, a proper CLASS or MSGCLASS. For the migration steps that require INSTALL SYSADM authorization, you can also add one line to specify USERNAME and PASSWORD.

Then review the content of the JCL, including the job statement. If anything needs to be updated, click the "Edit JCL" button and the JCL textbox becomes editable. A change to the JCL applies only to that particular workflow instance. If a change should apply to all workflow instances, it is best to make it in the JCL template before creating the workflow instances.

Because migration is a critical activity in an enterprise, every JCL must be verified  after the substitution, for every workflow instance.

Executing the workflow

After all workflow instances are tested, execute them. Some DB2 steps, such as DSNTIJPM, can be run repeatedly. For DSNTIJPM, select the option "Manually perform the selected step only".

Open the "Status" tab, the job status can be found, including job name, job ID, return code, and job outputs. However, return code 0 of DSNTIJPM does not mean no action should be taken. Follow the instructions on DB2 manual and check the output of each pre-migration reports to determine the actions to be taken.

After the first successful execution of a step, its status is marked "Complete". However, you can still re-execute the step. That is, if actions must be taken after the first execution of DSNTIJPM, take the actions and then return to z/OSMF and rerun the step. Check the reports until all pre-migration tasks are done.

Except for the first step, all other migration steps can be performed automatically during the migration window. Select the first option in the "Perform Automated Step" panel. With that option, z/OSMF executes the selected step and all the subsequent steps in the workflow, until all steps are executed successfully or an error occurs. An error in a step execution is not necessarily a nonzero return code: the maximum return code that a step can tolerate is defined in the workflow definition file, and it can be 0, 4, 8 or any number. In the DB2 sample installation and migration workflows, most steps have 0 or 4 as the maximum return code.

If any error occurs, check the job output on the z/OSMF "Status" tab. Problem diagnosis and resolution remain business as usual.

If everything goes smoothly, all the steps are marked as "Completed."

Migrating multiple DB2 subsystems

The biggest benefit of using z/OSMF comes when migrating multiple DB2 subsystems, in particular when the migration steps and the migration jobs are the same for all of these subsystems and only the parameters, such as subsystem name and buffer pool sizes, differ. With z/OSMF, the rule "define once, execute multiple times" applies.

The DB2 installation CLIST helps generate one workflow input variable file that fits one DB2 subsystem. To create workflow input variable files for other subsystems, duplicate the original workflow input variable file for each subsystem, and then edit each copy with the proper parameters for that subsystem.

The DB2 installation CLIST also provides an UPDATE function to generate a z/OSMF workflow input variable file based on an existing workflow input variable file and an input member such as DSNTIDxx.

For example, if the workflow artifacts were generated for an OLTP member but the DBA wants a customized workflow input variable file for an OLAP member, the DBA can use the UPDATE mode and provide the input member DSNTIDAP from the existing OLAP member.

Enter the source and the target workflow input variable files.

DSNTIVAP is then generated; it uses the same workflow variable definitions as DSNTIVTP but takes its values from the input member DSNTIDAP.

As a comparison, the value of the variable "ACCEL" in the source workflow input variable file DSNTIVTP is NO.

In the target workflow input variable file DSNTIVAP, the value is AUTO.

The DBA can use the same workflow definition file, the file templates and the new workflow input variable file to create a new workflow instance to migrate the OLAP member.

Migrating a data sharing group

To migrate a data sharing group, at least two sets of workflow artifacts need to be generated. The first set is for the first member to be migrated in that data sharing group. The workflow definition for the first member in a data sharing group is the same as for a non-data-sharing DB2 subsystem; it is also named DSNTIWMS. The second set, for the subsequent members, is named DSNTIWMD.

The process to generate the set of workflow artifacts for the first member is the same as the process for a non-data-sharing DB2 subsystem, except that YES should be specified for DATA SHARING and the member should be identified as the first member.

It is best to generate the second set, for the subsequent members, in a different data set, because the JCL jobs used by the first set and the second set might have the same names but different content.

Moving workflow artifacts

If the generated workflow artifacts need to be moved or copied to another location, the workflow input variable file must be updated with the name of the data set containing the JCL templates.

In the workflow definition file, the references to the JCL templates contain the variable NEWSAMP2, whose value is defined in the workflow input variable file.

After the JCL templates are moved to a new location, change the value of NEWSAMP2 in the new workflow input variable file.

References

DB2 11 for z/OS Installation and Migration Guide (GC19-4056)

IBM z/OS Management Facility Programming Guide (SA32-1066-04)

IBM DB2 for z/OS in the age of cloud computing

http://ibm.biz/BdXCZ6

 

Acknowledgements

 

Special thanks to Maryela Weihrauch who provided much inspiration and good advice during the writing of this article.

Authors

Kewei Wei (魏可伟) is a senior software engineer in DB2 for z/OS development.

Paul A. McWilliams is a content developer in DB2 for z/OS development.

If you have any questions about the solution, please feel free to contact Kewei (weikewei@cn.ibm.com).

Read more…

SQL interface to handle JSON data in DB2 11 for z/OS Complimentary Whitepaper


How can you access JSON data inside DB2 for z/OS without relying on the DB2 NoSQL JSON APIs? The following whitepaper focuses on a SQL interface recently introduced in DB2 11 for z/OS that allows extraction and retrieval of JSON data from BSON objects, and conversion from JSON to BSON.
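
As a hedged sketch of the kind of SQL the whitepaper covers (the table and column names are illustrative, and JSON_VAL and the SYSTOOLS conversion functions are assumed to be set up as the whitepaper describes):

  -- Store a JSON document in a BLOB column as BSON
  INSERT INTO CUSTJSON (ID, JSON_DATA)
    VALUES (1, SYSTOOLS.JSON2BSON('{"name":"Ann","city":"Leeds"}'));

  -- Extract a scalar value; 's:40' requests a VARCHAR(40) result
  SELECT JSON_VAL(JSON_DATA, 'city', 's:40')
    FROM CUSTJSON;

  -- Convert the stored BSON back to readable JSON text
  SELECT SYSTOOLS.BSON2JSON(JSON_DATA)
    FROM CUSTJSON;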

 

https://ibm.biz/BdEwL8

Read more…

Information has never been so critical in running a business. Organizations are having to leverage new and existing sources of information in more innovative ways than ever before – and the volumes of data are growing exponentially. As the mainframe contains so much business-critical data stored in DB2 for z/OS, it becomes a primary resource for today’s business analytics and decision making. The openness of the platform enables integration with other sources of data, and its market-leading qualities of service lend themselves to becoming an information hub for big data initiatives. Join the webcast and listen to Carl Olofson, IDC analyst, share his vast knowledge and experience of the platform, the ever-increasing dependence of large enterprise customers on it, and how it is positioned and being used to deliver in the brave new world of big data. Mark Simmonds will also highlight the information management portfolio roadmap for System z as it pertains to big data.


Register to receive "The Mainframe as a Big Data and Analytics platform," a complimentary paper written by Carl Olofson, IDC.

Read more…

We are proud to announce that as of today, there is a third product certification test available for DB2 11 for z/OS.

This test is intended for application programmers.

Test number: 313
Test title: DB2 11 Application Developer for z/OS

Please refer to the following link for additional information about this test:
http://www-03.ibm.com/certify/certs/08002601.shtml

Read more…

 

Register today and take advantage of the complimentary DB2 10 for z/OS New Functions and Migration Planning Workshop, co-presented by Julian Stuhler, IBM Gold Consultant, and Mike Bracey, IBM DB2 for z/OS Systems Engineer. The workshop will be held at the IDUG Technical Conference in Berlin on Sunday, 4th November 2012, at 9.30am.


Attendees can expect to broaden their understanding of the features delivered in DB2 10 for z/OS, as well as the business benefits of an upgrade. The migration process will be explained and material will be provided to enable attendees to plan their own DB2 10 for z/OS migration.


Who should attend?
• Application Developers
• Database Administrators
• System Administrators
• Architects, IT decision makers, and Project Managers


What can you expect?
• An understanding of the features delivered with DB2 10 and how they can benefit your enterprise.
• Clarity on the migration process.
• References for many subjects, including migration, fallback, and prerequisites and preparations.
• You will leave with presentation materials, checklists, and a project plan framework.
• Networking and contacts


Click here to register for the complimentary workshop or find out more.

Read more…

I decided I need to blog on big data - in particular on how the IBM System z platform and DB2 for z/OS fit into this new emerging world of "I want to know everything about everything". For so long, organizations have focused on getting more out of the core data they already have - most of which is highly structured data, rich in content (I'm talking about the record/transaction-based data stored in databases, used by home-grown and packaged applications). It's trusted, you know where it comes from, you understand the provenance behind it. It's estimated that 95% of Fortune 1000 companies store some of their data on System z because of its integrity and its ability to store, secure and process data, to scale and to be resilient. But then there's all that "other stuff" - someone else's problem (now I'm referring to emails, social media, machine/sensor data, time series, geospatial data and so forth - "differently structured" data). But organizations started to realize just how valuable this "other stuff" could be. It could potentially provide different insights and perspectives on who a customer is, what their real needs and wants are, and how they "feel" about a service, a product or your company.

Over the coming months I'm going to take you on my journey as I discover more about the realities of big data and what it means for businesses, governments and - you and me - the people. Let me start by stating this: yes, it's a paradigm, a strategy, but do you suddenly stop what you're doing today and start "big data" projects? Eh.... no. In all likelihood you are probably doing it already - particularly if you have the System z platform. So what is big data, I hear you asking? In a nutshell, it's your ability to process, integrate and understand data from anywhere that is relevant to your business. Of course it's not without its challenges. So why do it? Because when it comes to doing business, big data augments and expands on what you already know about the market, a product, a customer, etc. The more we know, the better we can manage risk and costs and identify opportunities for growth - that's big data in a nutshell.

So stay with me as I show why DB2 for z/OS and System z can be integral to the success of big data in the enterprise, and what we are doing to help you make big data a reality. Think BIG. Think z.

Read more…
Business Benefits of DB2 9 for z/OS

• Managing Mission Critical Databases Cost Effectively
• Protecting the Database - IBM offers security improvements in DB2 version 9 and beyond
• Client-focused Innovation - The System z server continues to evolve to meet emerging IT needs
Read more…
OK - so you are on here because you love IBM DB2 for z/OS. Well... IBM is inviting System z customers to participate in an Early Release Program for InfoSphere Master Data Management Server for z/OS - which, of course, is based on DB2 for z/OS. We invite customers to evaluate the capabilities of the market-leading Master Data Management offering on one of the highest-performance, most business-resilient and secure platforms on the market. If you are interested or would like to participate in this program, please contact Anthony Bosanac (abosanac@ca.ibm.com) or Henk Alblas (halblas@ca.ibm.com).

Read more…