
In DB2 12 for z/OS: DRDA Applications and Application Compatibility – Part Two, Gareth Copplestone-Jones provides guidance on implementing server-side configuration.

Server-side configuration

When considering how to manage Application Compatibility – APPLCOMPAT – for your distributed applications which use the NULLID packages, the main alternative to client-side configuration (discussed in the previous article) is server-side or DB2-side configuration. Although not without its challenges, the advantage of server-side configuration is that much of the necessary configuration is done in one place, using system profiles. Continue reading part two


Read more…

Introduction

This, the first of two articles on how to manage the Application Compatibility level for DRDA applications, provides an introduction to the subject and considers two ways of doing this. In the second article, Gareth Copplestone-Jones concentrates on perhaps the most promising method and discusses its drawbacks.

A very brief history of Application Compatibility

With the release of DB2 11 for z/OS, IBM introduced Application Compatibility, which is intended to make migration from one DB2 release to another less burdensome by separating system migration from application migration, and by allowing you to migrate applications individually once system migration has completed. Application migration is managed using two controls: the APPLCOMPAT BIND option, with a default option provided by the APPLCOMPAT system parameter; and the CURRENT APPLICATION COMPATIBILITY special register.
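As an illustrative sketch of these two controls – the collection, package name, and compatibility level shown here are placeholders, not taken from the article:

```sql
-- Rebind an existing package at a chosen compatibility level
-- (collection and package names are placeholders).
REBIND PACKAGE(MYCOLL.MYPKG) APPLCOMPAT(V11R1)

-- For dynamic SQL, the special register can be set per connection;
-- it cannot be set higher than the package's APPLCOMPAT level.
SET CURRENT APPLICATION COMPATIBILITY = 'V11R1';
```

The BIND/REBIND option governs static SQL and sets the ceiling for dynamic SQL, while the special register lets an application run dynamic SQL at a lower level than the package allows.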

The original announcement was that DB2 11 would support the SQL DML syntax and behaviour of both DB2 10 and DB2 11, and that DB2 12 would support that of all three. Then along came DB2 12 with Continuous Delivery and Function Levels.

Application Compatibility was extended in DB2 12 in two ways: to support function levels as well as release levels; and to support SQL DDL and DCL as well as DML. It still supports an Application Compatibility setting of V10R1.

One of the big practical issues with Application Compatibility has always been how to manage dynamic SQL packages, and in particular how to manage the NULLID packages used by DRDA clients connecting via DB2 Connect or the IBM data server clients and drivers. That’s what this article is about. Continue reading


Read more…

Node.js Application and DB2 REST services

DB2 for z/OS delivered native REST services support at the end of 2016. I wrote two white papers on how to create a DB2 REST service and how to consume it from a mobile device. I then started getting enquiries about how to consume a DB2 REST service from a Node.js application. In the following blog post, I share my experience of implementing a Node.js application that invokes a DB2 REST service.

https://www.ibm.com/developerworks/community/blogs/e429a8a2-b27f-48f3-aa73-ca13d5b69759/entry/Node_js_Application_and_DB2_REST_services?lang=en

Read more…

IDUG is pleased to offer these complimentary workshops free of charge, to help you squeeze the most educational value out of your conference.

Sunday, Nov 13th

Certification Preparation Courses:

Pre-Certification Workshop: IBM DB2 11 DBA for z/OS & DB2 10.1 Fundamentals (Exam 610 & Exam 312)  
Pre-Certification Workshop: DB2 10.1 DBA for LUW (Exam 611) and DB2 10.5 DBA for LUW Upgrade (Exam 311)

Thursday, Nov 17th

Read more…

We have decided to extend the Early Bird Registration up to and including 10th October. This means you will be able to take advantage of the low rate for a little bit longer.

Register by October 10th and save an additional €225 using the EARLYEMEA discount code.

http://www.idug.org/p/cm/ld/fid=926

 

Read more…
Unable to attend the IDUG DB2 Tech Conference in Lisbon, Portugal this year? You can still experience featured sessions and Db2 panels taking place at the conference and ask questions live through uStream. Log in to idug.org to join the live stream and engage with IBM strategists and developers, consultants, and independent DB2 users remotely!

When: October 4, 2017
Time: Begins 11:00 AM WET (GMT + 1 hr)
Where: Live stream from anywhere!
Cost: Complimentary for IDUG members (Not a member? Join today!)
Session: IBM BLU for Spark – an Event Store for the Next Generation of Applications
Presenter: Namik Hrle, IBM Fellow
Abstract: This presentation provides a deep dive into the next generation of IBM data store for handling real-time event applications, from IoT to new event sourcing applications. The store is built on the open-source Spark platform and object storage; it can ingest millions of transactions per second and provide high-speed analytics on transactional data in real time. It is perfect for event sourcing applications that need the velocity and volume of data this platform can handle, and for structured data lake applications such as the Internet of Things.

Join now: http://ibm.biz/BdjMLk
Read more…

The IDUG Mentor Program gives IDUG members the opportunity to pass on the valuable skills they have learned over the years to fellow DB2 professionals.

If you wish to motivate a brand-new IDUG attendee and apply for a 60% Mentor discount coupon, you must fall into one of the following categories:

   - Loyal IDUG attendees (attended 3 major IDUG conferences in the past)
   - IBM Champions (https://www.ibm.com/developerworks/champion/ )
   - Regional User Groups (Find a local User Group at http://www.idug.org/page/user-groups-home )

Visit http://www.idug.org/p/cm/ld/fid=862 to learn more!

Read more…

My colleague Param (param.bng@in.ibm.com) and I (pallavipr@in.ibm.com) are exploring various aspects of Spark integration with DB2 and the DB2 Connect drivers. We decided to write a series of articles capturing our experimentation for the benefit of others, as we did not find any article that focuses on the different aspects of DB2 access via Spark.

Our first article in the series covered DB2 access via the Spark Scala shell. This second article focuses on accessing DB2 data from standalone Scala and Java programs in Eclipse, using the DB2 JDBC driver and the DataFrames API. Below are detailed step-by-step instructions. Note that the same instructions apply to DB2 on all platforms (z/OS, LUW, i) as well as Informix.

  1. Confirm that you have Java installed by running java -version from the Windows command line. JDK version 1.7 or 1.8 is recommended.

  2. Install Spark on your local machine by downloading Spark from https://spark.apache.org/downloads.html.

  3. We chose the pre-built binaries shown in Screenshot 1 (instead of the source code download) to avoid building Spark during the early experimentation phase.

    Screenshot 1

  4. Unzip the installation file to a local directory (say C:/spark).

  5. Download Scala Eclipse IDE from http://scala-ide.org/download/sdk.html

  6. Unzip scala-SDK-4.1.0-vfinal-2.11-win32.win32.x86_64.zip into a folder (say c:\Eclipse_Scala)

  7. Find eclipse.exe in the Eclipse folder and run it. Make sure you have 64-bit Java installed by running java -version from the command prompt. Incompatibility between the 64-bit Eclipse package and 32-bit Java will cause an error, and Eclipse will not start.

  8. Choose a workspace for your Scala project as shown in Screenshot 2.

    Screenshot 2

  9. Create a new Scala project using File->New Scala Project.

  10. Add the Spark libraries, from the Spark directory you unzipped earlier, to the newly created Scala project as shown in Screenshot 3.

    Screenshot 3

  11. You may see an error about more than one Scala library, as shown in Screenshot 4, since Spark ships with its own copy of the Scala library.

Screenshot 4



  12. Remove the Scala library reference from the Java build path, as shown in Screenshot 5, to clear the error.

    Screenshot 5

  13. You may see another error: “The version of scala library found in the build path of DB2SparkAccess (2.10.4) is prior to the one provided by scala IDE (2.11.6). Setting a Scala Installation Choice to match.” Right-click Project->Properties->Scala Compiler and change the project setting to 2.10 as shown in Screenshot 6.

    Screenshot 6

  14. After clicking OK, the project is rebuilt, and you will only see a warning about different Scala versions that you can ignore.

  15. Now right-click the DB2SparkAccess project and choose New Scala App as shown in Screenshot 7. Enter an application name and click Finish.


Screenshot 7

  16. Copy the following source code into the new Scala application you created (.scala file) and change the database credentials to your own.

    import org.apache.spark.sql.SQLContext
    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    object DB2SparkScala extends App {
      val conf = new SparkConf()
        .setMaster("local[1]")
        .setAppName("GetEmployee")
        .set("spark.executor.memory", "1g")
      val sc = new SparkContext(conf)
      val sqlContext = new SQLContext(sc)
      val employeeDF = sqlContext.load("jdbc", Map(
        "url" -> "jdbc:db2://localhost:50000/sample:currentSchema=pallavipr;user=pallavipr;password=XXXX;",
        "driver" -> "com.ibm.db2.jcc.DB2Driver",
        "dbtable" -> "pallavipr.employee"))
      employeeDF.show()
    }

  17. Right-click the application and select Run As->Scala Application as shown in Screenshot 8.

    Screenshot 8

  18. You may see the following exception: Exception in thread "main" java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver. To resolve it, select Project->Properties and configure the Java Build Path to include the IBM DB2 JDBC driver (db2jcc.jar or db2jcc4.jar) as shown in Screenshot 9. The JDBC driver can be downloaded from http://www-01.ibm.com/support/docview.wss?uid=swg21385217

    Screenshot 9

  19. Now click your Scala application and select Run As->Scala Application again, and you should see the employee data retrieved from the DB2 table as shown in Screenshot 10.

    Screenshot 10

  20. To perform similar access via a standalone Java program, click Project->New->Other as shown in Screenshot 11.

    Screenshot 11

  21. Select Java->Class and click Next, which takes you to Screenshot 12.

    Screenshot 12

  22. Enter a name for your Java class and click Finish as shown in Screenshot 13.

    Screenshot 13

  23. Paste the following code into your newly created class (.java file), with the database credentials changed to your own.

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class DB2SparkJava {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        conf.setMaster("local[1]");
        conf.set("spark.executor.memory", "1g");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);
        Map<String, String> options = new HashMap<String, String>();
        options.put("url",
            "jdbc:db2://localhost:50000/sample:currentSchema=pallavipr;user=pallavipr;password=XXXX;");
        options.put("driver", "com.ibm.db2.jcc.DB2Driver");
        options.put("dbtable", "pallavipr.employee");
        DataFrame jdbcDF = sqlContext.load("jdbc", options);
        jdbcDF.show();
      }
    }

  24. Right-click your newly created Java application and select Run As->Java Application. You should see the same employee data retrieved as with the Scala application.

Read more…

Hi,

Greetings !!

The Kolkata India DB2 User Group (KIDUG) is a Regional Users Group (RUG): an organized group of individuals at the local level who share an interest in IBM's DB2 family of products or similar information management topics. The group started taking shape in 2013 and has been gaining momentum ever since, with professionals from different organizations showing interest. Details about the group can be found at

www.idug.org/rug/kidug.

An event was organized last year in Kolkata, which was quite a success: it was attended by around 120 people, and a number of topics covering both DB2 for z/OS and DB2 for LUW tracks were presented. Attendees included professionals from renowned organizations such as IBM and Cognizant, as well as students from technical institutes such as Techno India, and an entry fee was collected from each delegate to cover the expenses of the event. This being the first such initiative in Kolkata, it generated a lot of interest and positive feedback, and given that success, a similar event is planned for June 14th, 2014.

The idea is to spread the message to a wider base of professionals from various IT organizations in and around Kolkata, such as TCS, Accenture, Wipro, Capgemini and HCL, so that there are better networking opportunities and exchanges of ideas. The message could also be spread to non-IT companies that are either using DB2 or are potential users of it. We would also like to bring in distinguished speakers from different organizations to share their DB2 experiences.

The different set of professionals who are expected to benefit by attending these sessions are:

1)      DB2 programmers
2)      DB2 DBAs
3)      Data Architects
4)      People working on Migration/Replatforming projects
5)      IT Project Managers


Thanks and Regards,

Kolkata India DB2 User Group

Read more…

We are proud to announce that as of today, there is a third product certification test available for DB2 11 for z/OS.

This test is intended for application programmers.

Test number is 313
Test title is DB2 11 Application Developer for z/OS

Please refer to the following link in order to get additional information about this test:
http://www-03.ibm.com/certify/certs/08002601.shtml

Read more…

Triton Consulting, the UK's largest independent information management consultancy, is delighted to announce that two of its Directors have been invited to speak at the prestigious IDUG DB2 Technical Conference in Berlin, Germany, in November this year.


Taking place between the 4th and 9th of November, IDUG is described as 'the foremost independent, user-driven community that provides a direct channel to thousands of professional DB2 users across the globe'. There will be a comprehensive programme of technical education sessions throughout the week: Triton's Julian Stuhler, Director/Head of Solutions Delivery and Paul Stoker, Sales and Marketing Director, will be bringing their respective areas of expertise to the delegates.

Julian Stuhler


On Tuesday 6th November at 2.15pm Julian, a previous IDUG Board Director and Past President, will present 'Memory Management In DB2 10 For z/OS'. He explains that 'with the advent of DB2 10 for z/OS, users are finally freed of the DBM1 virtual storage constraints that have been such a limiting factor for DB2 for z/OS scalability. Now the focus has to move from virtual to real memory management in order to maintain system performance and availability.' Julian will be looking at the major storage-related changes introduced by DB2 10 and providing practical advice on what and how to monitor in a DB2 10 for z/OS environment.


The presentation is aimed at Database Administrators and Systems Programmers with intermediate experience.


Julian is a Principal Consultant with Triton Consulting and has over 24 years' relational database experience. He possesses practical knowledge in many aspects of the IBM Information Management portfolio, including experience in application programming, database administration, technical architecture, performance tuning and systems programming. In 1999, Julian was invited to join the IBM Gold Consultants programme, used to recognise the contributions and influence of the world's 100 leading database consultants. In May 2008, Julian was recognised as one of IBM's inaugural Data Champions - acknowledging individuals with outstanding contributions to the data management community.


Julian is an IBM Redbook author and won the Best Overall Speaker award at the 2000 International DB2 User Group meeting. He has lectured widely on DB2 subjects throughout Europe and the US - his presentation in Berlin should not be missed.


Paul Stoker


On Thursday 8th November at 9.45am Paul will showcase 'Mission Impossible: Improving DB2 Scalability, Availability AND Performance While Doubling Workload'. As the lead DB2 DBA for the Digital Banking Division of a large UK-based retail bank, Paul explains that 'in a recent project, the task of dramatically improving the scalability and availability of a DB2 z/OS hosted 24x7 internet banking application was demanding enough, within a one-year timeframe. The additional complexity of a doubling in transaction and customer numbers, whilst maintaining performance, made this an even more challenging assignment.' Paul will be sharing his experience of working on the project, discussing design, strategy and implementation, including Data Sharing, high insert performance and dynamic workload balancing.


The presentation is aimed at Database Administrators and Systems Programmers at beginner and intermediate levels.


Paul is also a Principal Consultant with Triton Consulting and has worked with DB2, predominantly on the mainframe platform, for over 20 years. Paul's sectors of expertise include banking, insurance, central Government and telecoms. His roles have included Infrastructure Architect, Physical Data Modeller, Development DBA, Performance Tuning Specialist, System Programmer and Product DBA.

More information on the IDUG DB2 Technical Conference, including delegate registration, is available on the IDUG website. Triton's homepage can be found at http://www.triton.co.uk, where you can find more information about the many areas of DB2 expertise provided by Triton Consulting.

Read more…

Summary
DB2 10.5 has recently been made available. One of its most interesting features is column-organized tables and the BLU technology. This article will help you get up and running with your first BLU table in no time.

Introduction
This week IBM made available DB2 10.5 for download and you can get a trial version of the product free of charge.

I have been looking forward to BLU tables, and this release was a great temptation to give them a try.
Indeed, one of the most significant features of DB2 10.5 is the introduction of BLU Acceleration and columnar tables. This is a new technology (to DB2 LUW) that has the potential to dramatically accelerate analytics queries. A key advantage is that BLU is embedded directly into the DB2 kernel, so its implementation should be transparent to us, the users – apart from its performance advantages, of course.
We can now create column-organized tables, as opposed to traditional row-organized tables, in DB2 databases. DB2 10.5 embeds a vector processing engine in its core that handles them. Some publications describe queries qualifying and touching 10 terabytes of data being resolved in seconds, or less. A claim strong enough that few of us can remain indifferent, right?
Another very interesting characteristic of column-organized tables is simplicity. Whereas a row-organized table requires the usual set of interactions, from designing the physical model to creating the best indexes for performance, BLU tables just need to be created and loaded. And that's it. No more questions asked.
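As a minimal sketch of that simplicity (the table and column names here are invented for illustration), creating a BLU table is a single DDL statement:

```sql
-- DB2 10.5: a column-organized (BLU) table -- no indexes required.
CREATE TABLE sales_fact (
  sale_date DATE,
  store_id  INTEGER,
  amount    DECIMAL(12, 2)
) ORGANIZE BY COLUMN;
```

The database configuration parameter DFT_TABLE_ORG can also be set to COLUMN so that new tables are column-organized by default, without the explicit clause.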

These notes are a recollection of the steps that I followed, and the problems that I encountered, while creating my first set of BLU tables. Hopefully these notes will help you to get started quickly with BLU as well.

Read more here: http://www.toadworld.com/platforms/ibmdb2/b/weblog/archive/2013/06/16/getting-up-and-running-with-db2-blu-tables-db2-10-5-very-quickly.aspx

Read more…