Carol Davis-Mann's Posts (43)


In part one of this blog, James Cockayne looked at what might happen to a DB2 database that falls victim to ransomware encryption.

In part two James shares four dos and four don’ts to help protect DB2 databases from ransomware attacks.

https://www.triton.co.uk/ransomware-and-the-db2-database-part-two/


Read more…

By Mark Gillis

There are easily accessible means of checking what dependent objects your Stored Procedure needs (SYSCAT.ROUTINEDEP, basically). So, what if you find one or more Stored Procs marked as needing a REBIND and then, when you do that rebind, you get an SQL0440 indicating that “something” is missing? How do you go about checking that situation out? Find out here
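As a starting point, the dependency information can be pulled straight from the catalog. The sketch below uses the ibm_db driver; the connection details and the MYSCHEMA.MYPROC names are placeholders, and the exact SYSCAT column names should be checked against your DB2 level.

```python
# Minimal sketch (assumptions: ibm_db is installed; connection details and the
# MYSCHEMA.MYPROC names are placeholders; SYSCAT column names should be checked
# against your DB2 version's documentation).
import ibm_db

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2inst1;PWD=password;", "", "")

# List the objects a stored procedure depends on via SYSCAT.ROUTINEDEP,
# joining SYSCAT.ROUTINES to resolve the routine's specific name.
sql = """
SELECT d.BTYPE, d.BSCHEMA, d.BNAME
FROM   SYSCAT.ROUTINEDEP d
JOIN   SYSCAT.ROUTINES   r
       ON  r.ROUTINESCHEMA = d.ROUTINESCHEMA
       AND r.SPECIFICNAME  = d.SPECIFICNAME
WHERE  r.ROUTINESCHEMA = 'MYSCHEMA'
AND    r.ROUTINENAME   = 'MYPROC'
"""
stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["BTYPE"], row["BSCHEMA"], row["BNAME"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```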


Read more…

By James Cockayne

By now I’m sure everyone has heard of the malicious practice known as ransomware attacks, where miscreants break into a corporate network and encrypt data before demanding huge sums of money to provide a method to decrypt that data and make it accessible again.  The attacks tend to be insidious – sometimes the attacker is in the network for months before they gain access to the systems they are interested in, and they are known to target backup servers as well as the primary systems to cause maximum inconvenience to the target organisation. 

Find out what an attack on a DB2 Database would look like. Continue reading James Cockayne's latest blog. 

https://www.triton.co.uk/ransomware-and-the-db2-database-part-1/

 

 

Read more…

DB2 on Apple Silicon

By James Cockayne

Apple’s Macs have been a popular development platform for many years now, but IBM has never really committed to supporting DB2 servers on macOS. There was a version of DB2 Express-C v9.7 made available some years ago, but it was lacking features and is obviously far too out of date to consider these days. A useful solution was to run Docker containers, or full virtual machines with Linux or Windows, to make DB2 available locally, but the switch to the new Apple Silicon processors (the M1 chip) means the Mac now uses the ARM64 instruction set rather than the Intel/AMD x86-64 architecture common to Windows and Linux platforms – and there is no option to download a copy of DB2 that runs on this processor.

So is that the end of having DB2 installed locally?  Not quite.

Click to continue reading


Read more…

By Julian Stuhler

Background

With the advent of DB2 12 for z/OS, IBM has moved to a more agile approach for delivering new function to DB2 customers, known as “continuous delivery”. Major new releases of DB2 will now happen less frequently, with smaller packets of new functionality being delivered via the routine product maintenance process. This allows IBM to develop and release new features much more frequently, thereby reducing “time to value” – a familiar DevOps message.

To allow DB2 customers to absorb this new function in a flexible and efficient way, IBM has also delivered a comprehensive set of capabilities that allow the “function level” of the overall DB2 system to be easily progressed while insulating individual applications from the impact of any changes via the “application compatibility level” set for each program.
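For a quick sense of how this surfaces to an application, a connected program can check the application compatibility level in effect for its own packages. A minimal sketch, assuming the ibm_db driver and placeholder connection details:

```python
# Minimal sketch: check the application compatibility level in effect for this
# connection. Connection details are placeholders; the CURRENT APPLICATION
# COMPATIBILITY special register is per the DB2 11/12 for z/OS documentation.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2PROD;HOSTNAME=zoshost;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbauser;PWD=secret;", "", "")

stmt = ibm_db.exec_immediate(
    conn,
    "SELECT CURRENT APPLICATION COMPATIBILITY FROM SYSIBM.SYSDUMMY1")
row = ibm_db.fetch_tuple(stmt)
print("APPLCOMPAT in effect:", row[0])

# The activated function level of the subsystem itself is normally checked by a
# DBA with the -DISPLAY GROUP command rather than from application code.
ibm_db.close(conn)
```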

Read Julian Stuhler's article in full


Read more…

Introduction

In the previous blogs in this series, we’ve run through installing IBM Open Enterprise Python for z/OS and IBM Z Open Automation Utilities (ZOA Utilities), which are the required prerequisites for Ansible to perform actions on z/OS. In this third blog of the series, we will look at the installation of Ansible on Linux and an example playbook execution to gather simple information and perform some tasks.

 

Architecture

In this example, we will be working with a control node – where the Ansible script (playbook) will execute – and one or more z/OS hosts which will be the target(s). In this case the control node will be on Linux. Note that connectivity is via SSH, which needs to be configured and available for the userids that will be used on the target z/OS hosts.
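As a rough illustration of the moving parts, the sketch below builds a tiny inventory for a z/OS target and launches a playbook from the Linux control node. The host name, user ID, interpreter path and playbook name are assumptions for the example; in practice the inventory and variables would live in files maintained alongside the playbook.

```python
# Rough sketch of driving a playbook from the Linux control node.
# Assumptions: ansible (plus the ibm.ibm_zos_core collection) is installed,
# the z/OS host name, user ID, Python interpreter path and zos_info.yml
# playbook below are placeholders, and SSH key authentication is already set up.
import subprocess
import tempfile

inventory = """\
[zos]
zos1.example.com ansible_user=ibmuser ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r8/pyz/bin/python3
"""

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(inventory)
    inventory_path = f.name

# Run the playbook against the z/OS host(s) over SSH.
subprocess.run(
    ["ansible-playbook", "-i", inventory_path, "zos_info.yml"],
    check=True)
```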

Continue Reading


Read more…

As the world mourns the loss of the digital dance duo Daft Punk, Triton Consulting is sending its very own duo of digital pioneers Mark Gillis and Damir Wilder around the world – virtually.

IBM Champions Mark and Damir were delighted to get the call that they had been selected to present at IDUG Australasia and IDUG North America Tech Conferences.

They were eagerly planning which IDUG polo shirts to pack, but it wasn’t long before reality set in… this year’s IDUG conferences would be virtual ones. Continue reading


Read more…

In DB2 12 for z/OS, DRDA Applications and Application Compatibility Part Two, Gareth Copplestone-Jones provides guidance on implementing server-side configuration.

Server-side configuration

When considering how to manage Application Compatibility – APPLCOMPAT – for your distributed applications which use the NULLID packages, the main alternative to client-side configuration (discussed in the previous article) is server-side or DB2-side configuration. Although not without its challenges, the advantage of server-side configuration is that much of the necessary configuration is done in one place, using system profiles. Continue reading part two
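To give a flavour of what that looks like, the sketch below defines a profile that sets the CURRENT APPLICATION COMPATIBILITY special register for distributed threads matching a given authorisation ID. The filtering criteria, APPLCOMPAT value and connection details are illustrative assumptions only; the DSN_PROFILE table layouts should be checked against your DB2 12 documentation, and the profile must then be activated with -START PROFILE.

```python
# Illustrative sketch only: the profile filtering criteria (AUTHID) and the
# APPLCOMPAT value are placeholders; check the DSN_PROFILE_TABLE /
# DSN_PROFILE_ATTRIBUTES column definitions for your DB2 12 level before use.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2PROD;HOSTNAME=zoshost;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbauser;PWD=secret;", "", "")

# 1. Define a profile that matches distributed threads for a given auth ID.
ibm_db.exec_immediate(conn, """
INSERT INTO SYSIBM.DSN_PROFILE_TABLE (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (101, 'APPUSER', 'Y')
""")

# 2. Attach an attribute that sets the special register for matching threads.
ibm_db.exec_immediate(conn, """
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES (PROFILEID, KEYWORDS, ATTRIBUTE1)
VALUES (101, 'SPECIAL_REGISTER',
        'SET CURRENT APPLICATION COMPATIBILITY = ''V12R1M500''')
""")

# 3. A DBA then activates the change with the -START PROFILE command.
ibm_db.close(conn)
```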


Read more…

Introduction

This, the first of two articles on how to manage the Application Compatibility level for DRDA applications, provides an introduction to the subject and considers two of the ways of doing this. In the second article Gareth Copplestone-Jones will concentrate on perhaps the most promising method and discuss its drawbacks.

A very brief history of Application Compatibility

With the release of DB2 11 for z/OS, IBM introduced Application Compatibility, which is intended to make migration from one DB2 release to another less burdensome by separating system migration from application migration, and by allowing you to migrate applications individually once system migration has completed. Application migration is managed using two controls: the APPLCOMPAT BIND option, with a default option provided by the APPLCOMPAT system parameter; and the CURRENT APPLICATION COMPATIBILITY special register.

The original announcement was that DB2 11 would support the SQL DML syntax and behaviour of both DB2 10 and DB2 11, and that DB2 12 would support that of all three. Then along came DB2 12 with Continuous Delivery and Function Levels.

Application Compatibility was extended in DB2 12 in two ways: to support function levels as well as release levels; and to support SQL DDL and DCL as well as DML. It still supports an Application Compatibility setting of V10R1.

One of the big practical issues with Application Compatibility has always been how to manage dynamic SQL packages, and in particular how to manage the NULLID packages used by DRDA clients connecting via DB2 Connect or the IBM data server clients and drivers. That’s what this article is about. Continue reading


 

Read more…

By James Cockayne

By enabling clients to interact via HTTP GET/POST requests, the REST API functionality provides a lightweight, modern interface to data stored in DB2 databases.  In this series we look at how to get started with the REST API from the DBA’s perspective, starting in part one with how to get the service up and running.
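For a rough feel of what the end result looks like from the client side, here is a sketch of calling a DB2 REST service with Python’s requests library. The host, port, endpoint paths, service name and payload shapes are illustrative assumptions rather than the documented API; the actual URLs depend on how the REST service is configured, which is what part one of the series covers.

```python
# Rough illustration only: the host/port, endpoint paths and payload layout
# below are assumptions, not the documented DB2 REST API - they will differ
# depending on how the service is set up (see the blog series for the real
# configuration steps).
import requests

BASE = "https://db2rest.example.com:50050"

# Authenticate against the REST service to obtain a token (hypothetical payload).
auth = requests.post(f"{BASE}/v1/auth", json={
    "dbParms": {
        "dbHost": "db2server.example.com",
        "dbName": "SAMPLE",
        "dbPort": 50000,
        "username": "db2inst1",
        "password": "password",
    }
}, verify=False)
token = auth.json()["token"]

# Call a previously defined service that returns rows as JSON (hypothetical name).
resp = requests.post(
    f"{BASE}/v1/services/getCustomers/1.0",
    headers={"authorization": token},
    json={"parameters": {"CUSTOMER_ID": 42}},
    verify=False)
print(resp.json())
```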

Read part one of this four-part blog series.


Read more…

This is the last article in the series by Gareth Copplestone-Jones on locking for developers, and is a wrap-up of miscellaneous points arising from previous articles.

 

Row-level locking

The first item under discussion is row-level locking. I mentioned previously that the design default lock size should be page-level, with row-level locking only being used where justified. This is to reduce the CPU and elapsed time overhead typically incurred by row-level locking, and especially to avoid the data sharing overhead involved. The DB2 for z/OS documentation has further information about locking in a data sharing environment, but for the purposes of this article it’s important to stress that row level locking in a data sharing environment can and typically does introduce significant overheads.
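For reference, lock size is an attribute of the table space, so the choice between page-level and row-level locking is made in the DDL. A minimal sketch, with placeholder database, table space and connection names:

```python
# Minimal sketch: switching a table space's lock size. Database/table space
# names and connection details are placeholders; LOCKSIZE PAGE is the usual
# design default, with LOCKSIZE ROW reserved for cases that justify it.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2PROD;HOSTNAME=zoshost;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbauser;PWD=secret;", "", "")

ibm_db.exec_immediate(conn, "ALTER TABLESPACE MYDB.MYTS LOCKSIZE PAGE")
# ...or, only where the concurrency benefit outweighs the CPU and data sharing
# overhead discussed above:
# ibm_db.exec_immediate(conn, "ALTER TABLESPACE MYDB.MYTS LOCKSIZE ROW")
ibm_db.close(conn)
```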

Continue reading the final instalment in Gareth Copplestone-Jones’s series of blogs on DB2 for z/OS Locking for Application Developers.


Read more…

This ninth article in the series on DB2 Locking for Application Developers provides information about coding your application not only for data integrity, which is the principal focus of this series of articles, but also for performance and concurrency, taking into account the transaction isolation levels in effect at run time. Background information about DB2 for z/OS locking semantics and mechanisms, transaction isolation levels, data anomalies and more is discussed in previous articles. This article concentrates on coding techniques, mostly for combining read-only cursors with searched update statements, that will provide protection against data anomalies, most specifically the lost update anomaly.

Let’s start with a restatement of why this is important. In DB2 for z/OS, the recommended programming technique for reading rows via a cursor and then updating some or all of those rows is to specify the FOR UPDATE clause on the cursor declaration and use positioned updates – UPDATE WHERE CURRENT OF. This has the advantage that, when you read a row, DB2 takes a U lock on the row or page. This allows concurrent readers with an S lock, but any concurrent transactions requesting U or X locks will have to wait until the U lock is released. When the transaction issues UPDATE WHERE CURRENT OF <cursor-name>, DB2 attempts to promote the U lock to an X lock. This ensures that no other transaction can have updated the row between the SELECT, which protects the row with a U lock, and the UPDATE.

However, it’s not always possible to use FOR UPDATE cursors. Find out why. Click here to continue reading. 


Read more…

In the first in Triton Consulting's series of blogs on the latest DB2 release, Iqbal Goralwalla takes a look at Consolidation of DB2 Editions.

Consolidation of DB2 Editions – simplicity is the key

One of the highlights in DB2 11.5 for me was the simplification of the plethora of editions: from 13 editions and several charge metrics to just three editions (Community, Standard, and Advanced) and one charge metric. The icing on the cake, however, was that all DB2 features are available in each of the three editions.

Read the blog in full

Read more…

By James Cockayne

Jupyter notebooks are a popular way to query DB2 databases.  The ease of setting up a notebook and running complex queries with rich visualisations makes them ideal for quickly demonstrating concepts and data trends.  As the notebooks run Python code, this gives us the opportunity to use another tool useful for spinning up demonstration environments – Docker.  To do this we use the Docker SDK for Python – in this blog we look at how to spin up a Docker container and run a query against the container, all from one notebook.  If you would like to find out more or use this code yourself, visit the Triton Consulting website by clicking here
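The gist of the approach, as a minimal sketch: the image name, credentials and the crude sleep-based readiness wait are simplifying assumptions made here for brevity, and the full write-up linked above has the complete notebook.

```python
# Minimal sketch of the idea: start a DB2 community edition container with the
# Docker SDK, wait for it to initialise, then run a query with ibm_db.
# Image name, credentials and the sleep-based wait are assumptions for brevity.
import time
import docker
import ibm_db

client = docker.from_env()
container = client.containers.run(
    "ibmcom/db2",
    detach=True,
    privileged=True,                       # required by the DB2 image
    environment={"LICENSE": "accept",
                 "DB2INST1_PASSWORD": "passw0rd",
                 "DBNAME": "demodb"},
    ports={"50000/tcp": 50000})

time.sleep(300)                            # DB2 setup inside the container takes a while

conn = ibm_db.connect(
    "DATABASE=demodb;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2inst1;PWD=passw0rd;", "", "")
stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM SYSCAT.TABLES")
print(ibm_db.fetch_tuple(stmt)[0])

ibm_db.close(conn)
container.stop()
```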


Read more…

Welcome to part eight of this blog series on DB2 Locking for Application Developers, which is about considerations for coding your application not only for data integrity (the main focus of this series of articles), but also for performance and concurrency, taking into account the transaction isolation levels in effect at run time.

This article continues the discussion in part seven about read-only cursors running at a transaction isolation level of cursor stability (CS), supplemented by the fairly common practice of combining read-only cursors with searched updates. I’ll explain later in the article the perils of access-dependent cursors and why you should avoid using them. The article concludes with a brief discussion of the concept of optimistic locking, also called optimistic concurrency control.
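As a taste of the optimistic locking idea: read the row along with a version (or update timestamp) column, perform the searched update only if that value is unchanged, and treat zero updated rows as a signal that another transaction got there first. The sketch below is illustrative only; the ACCOUNTS table, its VERSION column and the key value are assumptions.

```python
# Illustrative sketch of optimistic locking: the ACCOUNTS table, its VERSION
# column and the key value are assumptions for the example.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
    "UID=db2inst1;PWD=password;", "", "")

# 1. Read the row without holding an update lock, remembering its version.
stmt = ibm_db.prepare(
    conn, "SELECT BALANCE, VERSION FROM ACCOUNTS WHERE ACCT_ID = ?")
ibm_db.execute(stmt, (42,))
balance, version = ibm_db.fetch_tuple(stmt)

# 2. Later, apply the change only if the row has not been updated in between.
upd = ibm_db.prepare(conn, """
    UPDATE ACCOUNTS
    SET    BALANCE = ?, VERSION = VERSION + 1
    WHERE  ACCT_ID = ? AND VERSION = ?""")
ibm_db.execute(upd, (balance - 100, 42, version))

if ibm_db.num_rows(upd) == 0:
    # Another transaction changed the row first: re-read and retry (or fail).
    print("Lost the race - row was updated by someone else")
ibm_db.close(conn)
```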

Visit the Triton Consulting tech blog to continue reading. 


Read more…

By Gareth Copplestone-Jones, Triton Consulting

This is the seventh article in the series on DB2 Locking for Application Developers. So far we’ve covered a lot of topics, including the ACID database transaction properties, lock size, lock mode and lock duration, transaction isolation levels and data anomalies, all with a view to understanding how data integrity and application performance requirements can only be achieved by the application being coded according to the isolation level and data currency options in use.

In this article we’ll start off by looking at which isolation levels are susceptible to which data anomalies before moving on to discuss the related topic of data currency.

Click to continue reading


Read more…

This is the sixth article in the series on DB2 for z/OS Locking for Application Developers. The primary focus of this series is to help DB2 for z/OS application developers to guarantee data integrity while optimising for performance by designing and coding applications which take into account the following: the ACID properties of database transactions; lock size, lock mode and lock duration in DB2 for z/OS; compatible and incompatible locks; and how DB2 isolation levels provide application-level controls to establish the balance between (i) the degree of transaction isolation required to guarantee data integrity and (ii) the need for concurrency and performance.

In this article, I’m going to move on to look at some of the data anomalies that applications are exposed to and which isolation levels are susceptible to those data anomalies. I’ll then discuss two factors which complicate the task of managing the effect of DB2 locking semantics on data integrity: ambiguous cursors; and the CURRENTDATA option for BIND.

Continue reading part 6  


Read more…

Welcome to this, the fifth in the series on DB2 Locking for Application Developers. So far, we’ve covered a lot of ground, including the ACID properties of database transactions, the locking semantics implemented by DB2 – lock size, lock mode and lock duration – and issues such as lock contention and lock escalation, all with a view to understanding how data integrity and application performance depend on an effective partnership between DB2 and the application.

The next stage in this journey takes in the four isolation levels supported by DB2 (RR, RS, CS and UR), and the way they provide differing degrees of compromise between isolation (remember, the isolation property specifies that “all transactions must be executed, from a data point of view, as though no other transactions are being executed at the same time”) and concurrency – multiple transactions potentially accessing the same data running alongside each other.
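One practical knob worth knowing alongside the package-level ISOLATION bind option is the statement-level isolation clause, which lets a single query run at a different isolation level from the rest of the package. A small sketch, with placeholder table and connection names:

```python
# Small sketch: overriding the isolation level for a single statement with the
# WITH clause (UR, CS, RS or RR). Table name and connection details are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2PROD;HOSTNAME=zoshost;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbauser;PWD=secret;", "", "")

# A reporting query that tolerates uncommitted data can run at UR...
stmt = ibm_db.exec_immediate(
    conn, "SELECT COUNT(*) FROM MYSCHEMA.ORDERS WITH UR")
print(ibm_db.fetch_tuple(stmt)[0])

# ...while a query that must see stable rows for the life of the statement
# can ask for RS or RR on that statement alone.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT ORDER_ID, STATUS FROM MYSCHEMA.ORDERS WHERE STATUS = 'OPEN' WITH RS")
ibm_db.close(conn)
```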

Continue reading Part 5


Read more…

This is the fourth article in the series on locking for DB2 for z/OS developers. To summarize the main thrust of this series of articles, data integrity and application performance are dependent on application programs being designed and coded to take into account the locking strategy used by the DBMS. Following on from the previous articles, this one wraps up the discussion on lock size and lock mode with the topic of lock escalation and provides some recommendations on lock size before moving on to describe the final component of the locking mechanism covered in this series, lock duration.
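For context, lock escalation on DB2 for z/OS is governed per table space by the LOCKMAX attribute (together with the NUMLKTS subsystem parameter). A minimal DDL sketch with placeholder names:

```python
# Minimal sketch: controlling lock escalation per table space. Names and
# connection details are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DB2PROD;HOSTNAME=zoshost;PORT=446;PROTOCOL=TCPIP;"
    "UID=dbauser;PWD=secret;", "", "")

# Escalate to a gross lock once a transaction holds more than 1000 locks here...
ibm_db.exec_immediate(conn, "ALTER TABLESPACE MYDB.MYTS LOCKMAX 1000")

# ...or LOCKMAX 0 to disable escalation for this table space entirely
# (LOCKMAX SYSTEM defers to the NUMLKTS subsystem parameter).
# ibm_db.exec_immediate(conn, "ALTER TABLESPACE MYDB.MYTS LOCKMAX 0")
ibm_db.close(conn)
```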

Click here to continue reading Part 4. 


Read more…

By Gareth Copplestone-Jones, Triton Consulting

This is the third article in the series on locking for DB2 for z/OS developers. To recap, data integrity and application performance are dependent on application programs being designed and coded to take into account the locking strategy used and working in collaboration with the DBMS. The previous article concentrated on lock size and discussed gross locks and intent locks at the tablespace level; in this article I go into more detail about lock modes, and discuss incompatible locks.
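To make the idea of incompatible locks concrete, the sketch below has one connection take an X lock on a row by updating it without committing, while a second connection attempting to update the same row must wait (and may eventually time out or deadlock if the first never commits). The table, column and key names are placeholder assumptions.

```python
# Illustrative sketch of incompatible locks: connection A's uncommitted update
# holds an X lock, so connection B's update of the same row must wait.
# Table/column names and connection details are placeholders.
import ibm_db

dsn = ("DATABASE=sample;HOSTNAME=localhost;PORT=50000;PROTOCOL=TCPIP;"
       "UID=db2inst1;PWD=password;")

conn_a = ibm_db.connect(dsn, "", "")
conn_b = ibm_db.connect(dsn, "", "")
ibm_db.autocommit(conn_a, ibm_db.SQL_AUTOCOMMIT_OFF)
ibm_db.autocommit(conn_b, ibm_db.SQL_AUTOCOMMIT_OFF)

# Connection A updates the row and keeps the X lock by not committing yet.
ibm_db.exec_immediate(
    conn_a, "UPDATE ACCOUNTS SET BALANCE = BALANCE - 10 WHERE ACCT_ID = 42")

# Connection B now requests an incompatible lock on the same row: this call
# waits until A commits or rolls back, or fails with a timeout/deadlock SQLCODE.
try:
    stmt_b = ibm_db.exec_immediate(
        conn_b, "UPDATE ACCOUNTS SET BALANCE = BALANCE + 10 WHERE ACCT_ID = 42")
    if not stmt_b:
        print("Lock wait ended in:", ibm_db.stmt_errormsg())
except Exception as exc:
    print("Lock wait ended in:", exc)

ibm_db.commit(conn_a)       # releasing A's X lock lets B proceed on retry
ibm_db.close(conn_a)
ibm_db.close(conn_b)
```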

Click here to continue reading.


Read more…