Troy Coleman's Posts (17)


Comparing DB2 12 Function Levels M100 and M500

DB2utor Post: Comparing DB2 12 Function Levels M100 and M500

I was recently reading the DB2 12 for z/OS What's New guide. It's interesting to compare the features available in the initial function level M100 with what you get when you activate function level M500.

While most new capabilities in the initial DB2 12 release are enabled only after activation of M500, M100 does offer benefits:

  • Virtual storage enhancements – All enhancements are available in M100; no new enhancements are delivered with M500.
  • Subsystem parameters – All SQL optimization enhancements are available in M100 as long as the statement goes through a full prepare. No new enhancements are delivered in M500.
  • SQL and application compatibility – New SQL capabilities become available after the activation of a function level. No new SQL capabilities are delivered with M100. (I'll list separately the SQL capabilities delivered with M500.) You can also continue to execute SQL with DB2 10 or DB2 11 new-function mode behavior by using application compatibility values "V10R1" or "V11R1."

Now for the M500 enablement enhancements. I'll write plenty more about specific features in 2017, but here are some highlights:

  • Advanced triggers – DB2 12 introduces support for advanced triggers. Any trigger created before activation of M500 is considered a basic trigger.
  • Additional array support – DB2 11 introduced array data types. DB2 12 adds the capability to define a global variable as an array data type. In addition, the ARRAY_AGG function can now be invoked without a GROUP BY clause and can be used with an associative array.
  • Additional support for global variables – BLOB, CLOB and DBCLOB data types are added.
  • Additional support for pureXML – There are performance improvements with multi-document updates using XMLMODIFY.
  • Additional support for JSON – A quick list: the JSON_VAL function's argument is no longer required to be a BLOB. In addition, a view column, CASE expression, table expression with UNION ALL, a trigger transition variable, and a SQL PL variable or parameter are now supported.
  • MERGE statement enhancements – There's greater functionality and improved compatibility with the DB2 family, including: table-reference for source data, multiple MATCHED clauses, additional predicates with MATCHED or NOT MATCHED, a DELETE operation, and IGNORE and SIGNAL actions.
  • SQL pagination support – This allows mobile devices to access the next set of data using an OFFSET clause or a row value expression.
  • Unicode columns in EBCDIC tables.
  • Piece-wise deletion of data – This is designed to help avoid locking contention when a large number of rows are deleted in a single SQL statement.
  • Support for temporal referential constraints – You can use PERIOD BUSINESS_TIME when creating an application-period referential constraint.
  • More flexibility in defining application periods for temporal tables – You can now define an application period to be inclusive-inclusive, so the end date is included in the period.
  • Support for temporal logical transactions – The application can set a built-in global variable with a system time period, and all data, regardless of commit processing, will contain the same logical time period.
  • PERCENTILE function support.
  • DRDA fast load – Enables quick and easy loading of data from files on a distributed client.
  • ODBC enhancements – Performance and portability of the DB2 ODBC driver are improved.
  • Obfuscated source code for SQL routines and triggers – The SQL logic is rendered unreadable, enabling the delivery of SQL routines and triggers without the need to share the intellectual property of the SQL PL logic.
  • Data sharing support for global transactions.
  • Support for maintaining session data on the target server.
  • Resource limits for static SQL statements.
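To make two of these highlights concrete, here's a sketch of the pagination and piece-wise delete syntax. The table and column names are hypothetical, and both statements assume function level M500 is active with application compatibility V12R1M500:

```sql
-- SQL pagination: fetch the third "page" of 10 rows
-- (hypothetical ORDERS table)
SELECT ORDER_ID, ORDER_DATE, TOTAL_AMT
  FROM ORDERS
  ORDER BY ORDER_ID
  OFFSET 20 ROWS
  FETCH FIRST 10 ROWS ONLY;

-- Piece-wise delete: remove old rows in small batches to limit
-- locking contention; the application repeats this (with commits)
-- until SQLCODE +100 indicates no rows remain
DELETE FROM ORDER_HISTORY
  WHERE ORDER_DATE < '2010-01-01'
  FETCH FIRST 5000 ROWS ONLY;
```

Deleting in bounded chunks like this, with a commit between iterations, is the pattern the piece-wise delete feature is designed to support.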
Read more…

Continuous Delivery and DB2 12 Function Levels

DB2utor post: 

Continuous Delivery and DB2 12 Function Levels

I recently attended a regional DB2 user group meeting, and the subject of continuous delivery came up frequently. People were wondering about the process of new features being added through the maintenance stream, and how to fall back if it's determined that the new features are causing a problem.

As it happens, the IBM Knowledge Center has new documentation about continuous delivery and DB2 12 function levels:

New DB2 capabilities and enhancements are continuously delivered in a single maintenance stream as the code becomes ready. You can activate the new capabilities in a data sharing group or DB2 subsystem after a function level is delivered. A function level corresponds to a single PTF that enables a specific set of enhancements that have shipped in previous PTFs.

The above link directs you to a list of available DB2 12 function levels, so by all means, read the whole thing.

I believe any DB2 systems programmer responsible for maintenance, installs and migrations will love continuous delivery. Now instead of having to undertake lengthy upgrade projects, you'll simply apply maintenance just as you always have -- but on an ongoing basis.

Prior to DB2 12, you may have applied a PTF or an RSU to fix something. Without even realizing it, through that process you were also adding new features to DB2. The difference with DB2 12 is that IBM is now documenting when new features become available by function level, so DB2 pros can control when these new features become active and when applications can start using them.

DB2 11 introduced application compatibility, which allows you to determine the version of DB2 code that SQL statements execute under. This allows DB2 sysprogs to take advantage of new system level performance features without having to upgrade the application. It also removes the "big bang" approach, in that you don’t have to test all your different applications and get them all to agree before activating new features.

In DB2 11, the available application compatibility parameter settings are V11R1 and V10R1. Starting with DB2 12, new functionality is introduced through the maintenance stream, and the settings take the form V12R1M### (where ### is the function level).

To activate a given level of new functionality, invoke the ACTIVATE FUNCTION LEVEL command. After the system is running at a given function level, applications can take advantage of the new capabilities once the application compatibility parameter is set to that function level. (Note: With DB2 12, application SQL statements can continue to run with DB2 10 or DB2 11 expected behavior by using V10R1 or V11R1.)
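As a sketch of the activation flow (the collection and package names here are hypothetical), the system-level step and the application-level step look like this:

```sql
-- System level: activate the function level on the subsystem or
-- data sharing group (issued as a DB2 command, shown with the
-- conventional leading hyphen)
-ACTIVATE FUNCTION LEVEL (V12R1M500)

-- Application level: let one package use the new SQL capabilities
-- by rebinding it with the matching application compatibility value
REBIND PACKAGE(COLL1.MYPKG) APPLCOMPAT(V12R1M500)
```

Note that activating the function level by itself changes nothing for applications; each package opts in (or stays at an older behavior) through its own APPLCOMPAT setting.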

Once you migrate to DB2 12, you'll have the following function levels:

  • V12R1M100: Identifies compatibility with the function level in effect after migration to DB2 12, before new function is activated (function level 100). V12R1M100 is the same as V11R1.
  • V12R1M500: Identifies compatibility with the function level that enables new function in the initial DB2 12 release (function level 500). V12R1M500 is the same as V12R1.
  • V11R1: Specifies DB2 11 compatibility behavior. V11R1 must not be specified in the body of a trigger.
  • V10R1: Specifies DB2 10 compatibility behavior. V10R1 must not be specified in the body of a trigger.
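For dynamic SQL, these same values can be set per connection through the CURRENT APPLICATION COMPATIBILITY special register, a sketch:

```sql
-- Allow dynamic SQL on this connection to use DB2 12 M500 features
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M500';

-- Or pin a connection to DB2 11 behavior while the system runs at M500
SET CURRENT APPLICATION COMPATIBILITY = 'V11R1';
```

This is what makes the incremental approach workable: one application can start exploiting new SQL while others continue running, unchanged, with the older expected behavior.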

For more, see the SQL Reference Guide or visit the IBM Knowledge Center.

Making it possible for customers to rapidly put new features into production through normal maintenance is huge for the mainframe's continued viability. Businesses will now have a more positive view of running applications on DB2 for z/OS because they know they'll no longer have to wait years for required features.

Read more…

Cloud's Future on the Mainframe

The latest DB2utor blog posting: Taking a look at IDAA in the cloud.

Cloud's Future on the Mainframe

November 15, 2016

In last week's IBM DB2 Analytics Accelerator for z/OS V6.1 announcement summary, I mentioned that I'd get to the other piece of news, which is the introduction of IBM DB2 Analytics Accelerator on Cloud V1.1.

I wanted to cover this in a separate post because cloud storage is such a hot topic -- and a somewhat touchy subject in the mainframe space. Many mainframe shops are reluctant to move their core business data into the cloud, given the sensitivity of the data. While this concern is understandable, mainframe enterprises are increasingly dabbling in cloud technology. As a means of getting started, some enterprises are putting non-critical data -- e.g., system performance and log data -- in the cloud and using analytic reporting tools against this data.

As mainframe enterprises become more comfortable with cloud technology, observers expect them to see increasing value in moving core business data to the cloud. For instance, Gartner projects that, by 2020, a corporate "no-cloud" policy will be as rare as a "no-Internet" policy is today.

DB2 Analytics Accelerator for z/OS on Cloud is accessible only through DB2 for z/OS, which provides a very secure environment. It is hosted in IBM data centers on the IBM SoftLayer global cloud infrastructure and is based on dashDB software. Data is encrypted at rest and flows across the network through a secured VPN tunnel, making this a very secure platform. That security, along with the capability to accelerate queries against enterprise data in a cost-effective, flexible, and easy-to-use cloud environment, should prove attractive to enterprises.

DB2 Analytics Accelerator for z/OS on Cloud V1.1 includes these features and capabilities:

  • A hosted offering that is built on the IBM SoftLayer global cloud infrastructure and based on dashDB software.
  • Ability to accelerate queries against enterprise data in a cost-effective, flexible, and easy-to-use cloud environment.
  • Native encryption of data at rest, ensuring control of your data.
  • Secure VPN tunnel for data in motion.
  • Speed and reliability from a dedicated, bare-metal machine to optimize performance and enhance security.

I believe this announcement will open doors, leading to more mainframe workloads being offloaded to the cloud. What do you think? Please share your thoughts in comments.

Read more…