
Db2 v PostgreSQL - Mark Gillis

By Mark Gillis 

Mark Gillis has been doing some migration work: porting a Db2 database to PostgreSQL. You could say that is going from an enterprise-strength solution to a simpler but less expensive option, but it's not a choice Mark is in a position to ignore.

Customers are being presented with a wealth of database options as they migrate to the Cloud, and many of them are embracing simpler and less licence-hungry products.

There are many positives to PostgreSQL, but there are some pitfalls in attempting such a migration.

Find out more from Mark in Db2 v PostgreSQL



Read more…

Business transformation and agility are critical in today's fast-moving world. Clients demand insights faster, more often, and with greater accuracy, based on up-to-the-second transactional data that is frequently housed in mainframe systems. The dependency on technology and the rate at which information is created and consumed continue to grow exponentially. Applications have to adapt rapidly to take account of dynamic digital business models and environments. All this drives the need for flexible IT infrastructures that are built on a rock-solid foundation to deliver consistent dependability, performance, and security in the face of rapid change.

Our z Systems portfolio is a perfect match for delivering on these needs. It is well recognized that mainframe hardware and software technology deliver unmatched levels of quality, reliability, security, and scalability. On top of this, z/OS clients have historically moved their systems forward conservatively, which has further helped to build the mainframe's reputation for rock-solid stability, essential for the most demanding business-critical workloads. A corollary is that the mainframe is often perceived as a stagnant platform that cannot move quickly and cannot support the agile or DevOps needs of modern applications. Nothing, of course, could be further from the truth. However, challenges do remain.

DB2 for z/OS, the mainframe’s flagship relational database product, is changing to a continuous delivery (CD) model to help further address these challenges. With CD, DB2 will deliver new features to the market faster, and in increments that will be much easier for customers to consume. Let’s take a closer look.

DB2 12 for z/OS is the latest release of DB2 and is currently in “beta” testing, or ESP (Early Support Program) testing as we call it. DB2 12 will deliver many new features for mobile, cloud, and analytics applications, while also bringing many new innovations to market to improve performance, availability, and security.

New DB2 versions have historically arrived about every three years, and our customers have grown comfortable with this cadence over the years. However, upgrading to a new DB2 release can be a major effort for customers. In the past, we have introduced innovations such as online rolling version upgrades (for data sharing) and APPLCOMPAT. These features allow IT groups to upgrade in a more streamlined way without having to take outages or involve application groups. Nonetheless, a DB2 version upgrade can still be a cumbersome project. As a result, coupled with the conservative nature of mainframe environments, some customers don't implement new DB2 versions until several years after GA of the product. With our traditional three-year delivery cycle, along with the version upgrade delays, it can be five to six years or more between the time that we complete the development of a new feature and the point at which that feature actually becomes available on a live system. In today's fast-changing world, this is no longer sufficient.

With CD, we will deliver new features continuously, as they become ready, on future DB2 releases. This will allow application developers and DBAs to access important new features more quickly without having to wait five to six years or more for the next DB2 version.

How will this be done so that it’s consumable and non-disruptive for customer environments?  Our approach to CD must meet or exceed the quality and stability requirements that the z Systems customer base demands, while making the delivery of new features consumable. Customers will receive defect fixes and new features in the same stream. They will be able to apply their DB2 maintenance upgrades just like they always have, including the ability to roll in maintenance upgrades across their data sharing groups while keeping the databases continuously available. In fact, this should become much easier because there will be fewer APARs with ++HOLD actions caused by toleration APARs (the new “function level” concept will ensure that necessary maintenance is applied across the DB2 group, therefore removing the need for many of these existing ++HOLD actions).

What is different is that the new maintenance will contain new features that are initially dormant. The customer can choose when to activate the new features via a new system activation command. APPLCOMPAT controls will be provided to ensure that applications remain stable and to allow for controlled exposure to the newly activated features. We will provide easy to access documentation on which new features are included in which function levels. We will work closely with ISVs to ensure that upcoming features are effectively communicated ahead of time so that the overall DB2 ecosystem remains stable as DB2 changes are incorporated.

We see the road ahead for DB2 for z/OS as being an exciting and rewarding journey for both IBM and its customers. Agile, quality-focused development will allow us to continuously deliver robust production-ready features to DB2 users much more rapidly than we could in the past; therefore, enhancing the vitality of the DB2 product and greatly easing the task of DB2 upgrades for customers.

For more information, register for our live webcast with Q&A, which we will be hosting on 27th September 2016 at 11am EST. The webcast will also be available on replay.

Read more…

By Sueli Almeida and Paul McWilliams.

We recently published a sample Db2 software services template. As a service provider, you can use it to create services (DBaaS) that rapidly provision one or more standalone Db2 subsystems from scratch, and later deprovision them, in IBM Cloud Provisioning and Management for z/OS.

The sample Db2 software services template is intended for service providers, who configure and make the Db2 system provisioning services available to the consumers of the service in their shops.

The sample template provisions non-data-sharing Db2 12 for z/OS subsystem instances in a "typical Db2 configuration." For more information about the configuration of the provisioned subsystems, see "About the sample Db2 software service template".
For detailed instructions, and to download the sample Db2 software services template, see the GitHub repository: Db2ZTools/DevOps/Db2SystemServices/Db2ProvisionSystemNonDS/

Sueli Almeida is Db2 for z/OS DevOps and Cloud Provisioning Technical Leader and Paul McWilliams is an Information Developer for Db2 for z/OS documentation.

Always get the latest news about Db2 for z/OS from the IBM lab! How to subscribe
Follow us on Twitter: @DB2zLabNews
Read more…

IDUG is pleased to offer these complimentary workshops free of charge, to help you squeeze the most educational value out of your conference.

Sunday, Nov 13th

Certification Preparation Courses:

Pre-Certification Workshop: IBM DB2 11 DBA for z/OS & DB2 10.1 Fundamentals (Exam 610 & Exam 312)  
Pre-Certification Workshop: DB2 10.1 DBA for LUW (Exam 611) and DB2 10.5 DBA for LUW Upgrade (Exam 311)

Thursday, Nov 17th

Read more…

We have decided to extend the Early Bird Registration up to and including 10th October. This means you will be able to take advantage of the low rate for a little bit longer.

Register by October 10th. Save an additional €225 using EARLYEMEA discount code.


Read more…

The IDUG Mentor Program gives IDUG members the opportunity to pass on the valuable skills they have learned over the years to fellow DB2 professionals.

If you wish to motivate a brand-new IDUG attendee and apply for a 60% Mentor discount coupon, you must fall into one of the following categories:

   - Loyal IDUG attendees (attended 3 major IDUG conferences in the past)
   - IBM Champions
   - Regional User Groups (find a local User Group through IDUG)

Visit the IDUG website to learn more!

Read more…

In the first part of this article, we introduced the new approach and how to prepare the artifacts it needs. In this second part, we introduce how to perform DB2 migration with z/OSMF.

Automating DB2 migration in z/OSMF

Creating the workflow instance

For every DB2 subsystem to be migrated, a workflow instance must be created. The path of the workflow definition file must be provided to instantiate a workflow instance. The path of the workflow input variable file is optional. However, it is always best to save the variable values for a specific subsystem into the workflow input variable file and provide that file when creating the instance. Otherwise, z/OSMF prompts you to provide the values during execution, preventing a fully automated migration.
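Instance creation can also be scripted through the z/OSMF workflow REST services (documented in the IBM z/OSMF Programming Guide). As a sketch, the JSON body for creating one instance per subsystem might be built like this; the field names follow the z/OSMF workflow REST API, while the paths, system nickname, and owner ID are hypothetical:

```python
import json

# Documented z/OSMF REST endpoint for creating workflow instances:
ZOSMF_WORKFLOWS = "/zosmf/workflow/rest/1.0/workflows"

def build_workflow_request(subsystem, definition_file, variable_input_file, owner):
    """Build the JSON body for creating one workflow instance per DB2 subsystem.

    Supplying variableInputFile up front is what keeps the run fully
    automated; without it, z/OSMF prompts for each variable during execution.
    """
    return {
        "workflowName": f"V10 to V11 CM Non-data-sharing - {subsystem}",
        "workflowDefinitionFile": definition_file,
        "variableInputFile": variable_input_file,  # optional, but recommended
        "system": "SYS1",      # hypothetical: target z/OS system nickname
        "owner": owner,        # all steps assigned to one user for automation
        "assignToOwner": True,
    }

# One request body per subsystem to be migrated (hypothetical paths):
body = build_workflow_request(
    "DB2A",
    "/u/dba/workflows/DSNTIWMS.xml",
    "/u/dba/workflows/db2a_variables.properties",
    "DBA1",
)
print(json.dumps(body, indent=2))
```

The same builder can be called once per subsystem, changing only the subsystem name and the variable input file.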



z/OSMF prompts you to input the name of the workflow instance. It is best to name it with the workflow purpose and the target system. For example, "V10 to V11 CM Non-data-sharing - DB2A". "V10 to V11 CM Non-data-sharing" indicates the migration is from Version 10 to Version 11 conversion mode. DB2A is the name of the subsystem to be migrated.

All the steps in the workflow are assigned to the same person to fully automate the migration process. Dependencies have been set to these steps by the installation CLIST. Only the steps in the "Ready" state can be performed.




Validating JCL with symbolic substitution

It is best to validate the JCL jobs with variable substitution before executing them. The JCL with variables substituted can be found on the "Perform" tab of a step. First, review the values of the input variables.




By default, the values are loaded from the workflow input variable file, so they don't need to be changed here. However, if changes are required, click the exclamation mark next to the variable name and check "Mark value editable." The input variable textbox then becomes editable. The changed value applies to this workflow instance only.




In the step "Create JOB statement", you are prompted to review and update the "JOB statement JCL". Update the job statement with a job name and a proper CLASS or MSGCLASS. For the migration steps that require INSTALL SYSADM authorization, you can also add a line to specify USER and PASSWORD.

Then review the content of the JCL, including the job statement. If anything needs to be updated, click the "Edit JCL" button and the JCL textbox becomes editable. A change to the JCL applies only to that particular workflow instance. If a change applies to all workflow instances, it's best to make it in the JCL template before creating the workflow instances.




Because migration is a critical activity in an enterprise, every JCL job must be verified after substitution, for every workflow instance.

Executing the workflow

After all workflow instances are tested, execute them. Some DB2 steps can be run repeatedly, such as DSNTIJPM. For DSNTIJPM, select the option "Manually perform the selected step only".





On the "Status" tab, you can find the job status, including job name, job ID, return code, and job outputs. However, a return code of 0 from DSNTIJPM does not mean that no action should be taken. Follow the instructions in the DB2 manual and check each pre-migration report in the output to determine the actions to be taken.




After the first successful execution of a step, its status is marked "Complete". However, you can still re-execute the step. That is, if actions must be taken after the first execution of DSNTIJPM, take those actions, then return to z/OSMF and rerun the step. Check the reports until all pre-migration tasks are done.

Except for the first step, all other migration steps can be performed automatically during the migration window. Select the first option in the "Perform Automated Step" panel. With that option, z/OSMF executes the selected step and all subsequent steps in the workflow, until all steps execute successfully or an error occurs. An error in a step execution is not necessarily a nonzero return code. The maximum return code that a step can tolerate is defined in the workflow definition file; it can be 0, 4, 8, or any other number. In the DB2 sample installation and migration workflows, most steps have 0 or 4 as the maximum return code.
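The stop-on-error behavior described above can be sketched as a simple step runner. The step names below are real DB2 sample jobs, but the return codes and per-step maximum tolerated return codes are purely illustrative, and `submit` stands in for submitting the step's JCL and collecting its return code:

```python
def run_workflow(steps, submit):
    """Run workflow steps in order, stopping at the first intolerable RC.

    Each step carries its own maximum tolerated return code (maxRC), as
    defined in the workflow definition file; most DB2 sample steps use 0 or 4.
    """
    completed = []
    for step in steps:
        rc = submit(step["name"])
        if rc > step["maxRC"]:          # an error is not just "nonzero RC"
            return completed, (step["name"], rc)
        completed.append(step["name"])  # step is marked "Complete"
    return completed, None

# Illustrative step list and return codes:
steps = [
    {"name": "DSNTIJPM", "maxRC": 4},
    {"name": "DSNTIJTC", "maxRC": 0},
    {"name": "DSNTIJIN", "maxRC": 0},
]
rcs = {"DSNTIJPM": 4, "DSNTIJTC": 0, "DSNTIJIN": 8}
done, failed = run_workflow(steps, lambda name: rcs[name])
print(done, failed)  # DSNTIJPM's RC of 4 is tolerated; DSNTIJIN's 8 is not
```

Note that DSNTIJPM completes here despite a nonzero return code, because its RC does not exceed its tolerated maximum, while the run stops at DSNTIJIN.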




If any error occurs, check the job output on the z/OSMF "Status" tab. Problem diagnosis and resolution remains business as usual.

If everything goes smoothly, all the steps are marked as "Completed."

Migrating multiple DB2 subsystems

The biggest benefit of using z/OSMF comes when migrating multiple DB2 subsystems, in particular when the migration steps and migration jobs are the same for all of those subsystems, and only the parameters, such as subsystem name and buffer pool sizes, are different. With z/OSMF, the rule "define once, execute multiple times" applies.

The DB2 installation CLIST can help generate one workflow input variable file that fits one DB2 subsystem. To create workflow input variable files for other subsystems, duplicate the original workflow input variable file for each subsystem, then edit each copy with the proper parameters for that subsystem.
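Because the per-subsystem differences are only parameter values, the duplicate-and-edit step itself is easy to script. A minimal sketch, assuming the variable file is a simple name=value format; the variable names and values below are illustrative, not taken from a real CLIST-generated file:

```python
def clone_variable_file(template_text, overrides):
    """Duplicate a workflow input variable file, overriding selected values.

    Lines that don't match an override (or aren't name=value pairs)
    are carried through unchanged.
    """
    lines = []
    for line in template_text.splitlines():
        name, sep, _ = line.partition("=")
        if sep and name.strip() in overrides:
            line = f"{name.strip()}={overrides[name.strip()]}"
        lines.append(line)
    return "\n".join(lines)

# Illustrative variable file generated for subsystem DB2A:
db2a_file = "SSID=DB2A\nBP0_SIZE=4000\nACCEL=NO"

# Re-target it for a second subsystem with different parameters:
db2b_file = clone_variable_file(db2a_file, {"SSID": "DB2B", "BP0_SIZE": "8000"})
print(db2b_file)
```

The same template is then reused for every subsystem, with one small override map per subsystem.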

The DB2 installation CLIST also provides an UPDATE function to generate a z/OSMF workflow input variable file based on an existing workflow input variable file and an input member such as DSNTIDxx.

For example, if the workflow artifacts were generated for an OLTP member, but the DBA wants to customize the workflow input variable file, the DBA can use the UPDATE mode and provide the input member DSNTIDAP from the existing OLAP member.




Enter the source and the target workflow input variable files.




DSNTIVAP is then generated; it uses the same workflow variable definitions as DSNTIVTP, but takes its values from the input member DSNTIDAP.

For comparison, the value of the variable "ACCEL" in the source workflow input variable file DSNTIVTP is NO.



In the target workflow input variable file DSNTIVAP, it is AUTO.


The DBA can then use the same workflow definition file, the same file templates, and the new workflow input variable file to create a new workflow instance to migrate the OLAP member.

Migrating a data sharing group

To migrate a data sharing group, at least two sets of workflow artifacts need to be generated. The first set is for the first member to be migrated in the data sharing group; its workflow definition is the same as for a non-data-sharing DB2 subsystem and is also named DSNTIWMS. The second set, for the subsequent members, is named DSNTIWMD.

The process to generate the set of workflow artifacts for the first member is the same as the process for a non-data-sharing DB2 subsystem, except that YES should be specified for DATA SHARING and the first member should be specified.

It is best to generate the second set, for the subsequent members, in a different data set, because the JCL jobs used by the first set and the second set might have the same names but different content.

Moving workflow artifacts

If the generated workflow artifacts need to be moved or copied to another location, the workflow input variable file must be updated with the name of the data set containing the JCL templates.

In the workflow definition file, the references to the JCL templates contain the variable NEWSAMP2, whose value is defined in the workflow input variable file.


After the JCL templates are moved to a new location, change the value of NEWSAMP2 in the new workflow input variable file.




DB2 11 for z/OS Installation and Migration Guide (GC19-4056)

IBM z/OS Management Facility Programming Guide (SA32-1066-04)

IBM DB2 for z/OS in the age of cloud computing




Special thanks to Maryela Weihrauch who provided much inspiration and good advice during the writing of this article.


Kewei Wei (魏可伟) is a senior software engineer in DB2 for z/OS development.

Paul A. McWilliams is a content developer in DB2 for z/OS development.

If you have any questions about the solution, please feel free to contact Kewei.

Read more…

IBM DB2 for z/OS in the age of cloud computing


Reliability, availability, security and mobility




• Highly virtualized server that supports mixed workloads

• Self-serving capabilities in private, membership or hybrid cloud environments

• Division of support responsibilities

• Customized implementations to suit your business

• Platform foundation services for cloud use cases


Download this paper and learn:

What is cloud computing, and what is driving this market trend?


What is data as a service, and what are the drivers?


Why DB2 for z/OS?


Click here to download the full paper (over 700 downloads of this top-performing asset).

Direct link to the whitepaper without registration: DB2%20for%20zOS%20Age%20of%20the%20Cloud%20IMW14820USEN.pdf

Read more…

At a Glance #DB2z  Learn More

IBM will make DB2 12 for z/OS available to a select group of clients in a closed Early Support Program (ESP) on March 4, 2016.

The demands of the mobile economy combined with the explosive growth of data present unique opportunities and challenges for companies wanting to take advantage of their mission-critical resources.

Built on the proven availability, security, and scalability of DB2 11 for z/OS and the IBM z Systems platform, DB2 12 gives you the capabilities needed to meet the demands of mobile workloads and increased mission-critical data. It delivers world-class analytics and online transaction processing (OLTP) performance.

DB2 for z/OS delivers innovations in these key areas:

  • Scalable, low-cost, enterprise OLTP and analytics
  • Easy access, easy scale, and easy application development for the mobile economy
  • In-memory performance improvements
  • Easy access to your enterprise systems of record

Read More

Read more…