There are several use cases where data extracted from live data streams such as Twitter needs to be persisted into external databases. In this example, you will learn how to filter incoming live Twitter data and write relevant subsets of it into IBM DB2. The sample program works against all flavors of IBM DB2 databases, i.e., DB2 for z/OS, DB2 distributed, dashDB and SQLDB.
We will use Spark Streaming to receive live data streams from Twitter and filter the tweets by a keyword. We will then extract the Twitter user names associated with the matching tweets and insert them into DB2. These user names extracted from Twitter can have many applications – for example, a more comprehensive analysis of whether these Twitter users are account holders of a bank, by joining with other tables such as a customer table.
1) For a background on Spark Streaming, refer to http://spark.apache.org/docs/latest/streaming-programming-guide.html.
2) We will use the TwitterUtils class provided by Spark Streaming. TwitterUtils uses Twitter4J under the covers, which is a Java library for the Twitter API.
3) Create a table in DB2 called TWITTERUSERS using -
CREATE TABLE TWITTERUSERS (NAME VARCHAR(255))
4) Create a new Scala class in Eclipse with contents available at this link; a minimal sketch of such a program also appears after this list. Change the database and Twitter credentials to your own (as shown in Step 7).
5) Make sure the Project Build Path contains the jars db2jcc.jar (DB2 JDBC driver), spark-assembly-1.3.1_IBM_1-hadoop2.6.0.jar and spark-examples-1.3.1_IBM_1-hadoop2.6.0.jar, as shown below -
6) Lines 12 to 15 load the DB2 driver class, establish a connection to the database and prepare an INSERT statement that is used to insert Twitter user names into DB2.
7) Lines 17 to 24 set the system properties for consumerKey, consumerSecret, accessToken and accessTokenSecret that will be used by the Twitter4J library to generate OAuth credentials. You obtain these by configuring a consumer key/secret pair and an access token/secret pair in your account at https://dev.twitter.com/apps. Detailed instructions on how to generate the two pairs are available at http://ampcamp.berkeley.edu/big-data-mini-course/realtime-processing-with-spark-streaming.html.
8) Lines 26 and 27 create a local StreamingContext with 16 threads and a batch interval of 2 seconds. The StreamingContext is the entry point for all streaming functionality.
9) Using the StreamingContext created above, Line 30 creates a DStream object called stream. A DStream is the basic abstraction in Spark Streaming and is a continuous stream of RDDs containing objects of type twitter4j.Status (http://twitter4j.org/javadoc/twitter4j/Status.html). A filter (“Paris”) is also specified, which selects only those tweets that contain the keyword “Paris”.
10) In Line 31, a map operation on stream maps each status object to its user name, creating a new DStream called users.
11) Line 32 returns a new DStream called recentUsers in which user names are aggregated over a 60-second window.
12) Lines 34 to 41 iterate over each RDD in the DStream recentUsers to report the number of users every 60 seconds and insert those users into the database table TWITTERUSERS through JDBC.
13) Line 44 starts the actual processing and awaits termination.
14) The following screenshot shows a snippet of the console output when the program is run. Of course, you can change the filter in line 29 to any keyword.
15) You can also run SELECT * from TWITTERUSERS on your database to confirm that the Twitter users get inserted.
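The full program is available at the link in Step 4. For readers who cannot access it, here is a minimal sketch of what such a program might look like. The structure follows the steps above, but the object name, line positions and credentials are placeholders and will not match the original exactly.

import java.sql.DriverManager
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils

object TwitterToDB2 {
  def main(args: Array[String]): Unit = {
    // Load the DB2 driver, connect and prepare the INSERT statement (Step 6)
    Class.forName("com.ibm.db2.jcc.DB2Driver")
    val conn = DriverManager.getConnection(
      "jdbc:db2://localhost:50000/sample", "db2user", "password") // placeholder credentials
    val insertStmt = conn.prepareStatement("INSERT INTO TWITTERUSERS (NAME) VALUES (?)")

    // System properties used by Twitter4J to generate OAuth credentials (Step 7)
    System.setProperty("twitter4j.oauth.consumerKey", "<consumerKey>")
    System.setProperty("twitter4j.oauth.consumerSecret", "<consumerSecret>")
    System.setProperty("twitter4j.oauth.accessToken", "<accessToken>")
    System.setProperty("twitter4j.oauth.accessTokenSecret", "<accessTokenSecret>")

    // Local StreamingContext with 16 threads and a 2-second batch interval (Step 8)
    val conf = new SparkConf().setMaster("local[16]").setAppName("TwitterToDB2")
    val ssc = new StreamingContext(conf, Seconds(2))

    // DStream of twitter4j.Status objects, filtered by the keyword "Paris" (Step 9)
    val stream = TwitterUtils.createStream(ssc, None, Seq("Paris"))

    // Map each status to its user name, then aggregate over a 60-second window (Steps 10-11)
    val users = stream.map(status => status.getUser.getScreenName)
    val recentUsers = users.window(Seconds(60), Seconds(60))

    // Report the number of users every 60 seconds and insert them into DB2 (Step 12)
    recentUsers.foreachRDD { rdd =>
      val names = rdd.collect()
      println("Users seen in the last 60 seconds: " + names.length)
      names.foreach { name =>
        insertStmt.setString(1, name)
        insertStmt.executeUpdate()
      }
    }

    // Start the actual processing and wait for termination (Step 13)
    ssc.start()
    ssc.awaitTermination()
  }
}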
The simple Twitter program above can be extended to more complex use cases: using Spark Streaming to analyze social media data more effectively, persist subsets of social media data into databases, and join social media data with relational data to derive additional business insights.
You can reach us for questions (Pallavi pallavipr@in.ibm.com or Param param.bng@in.ibm.com).
We are seeing a trend of DB2 data being accessed by modern distributed applications written in new APIs and frameworks. JavaScript has become extremely popular for Web application development, and its adoption was revolutionized by Node.js, which makes it possible to run JavaScript on the server side. There is increasing interest amongst developers in writing analytics applications in Node.js that need to access DB2 data (both z/OS and distributed). Modern DB2 provides a Node.js driver that makes Node.js connectivity straightforward. Below are step-by-step instructions for a basic end-to-end Node.js application on Windows for accessing data from DB2 for z/OS and DB2 distributed -
1) Install Node and its companion NPM. NPM is a tool to manage Node modules. Download the installer from https://nodejs.org/dist/v0.12.7/x64/node-v0.12.7-x64.msi.
2) Note that the DB2 Node.js driver does not yet support Node 4 on Windows. Node 4 support is already available for Mac and Linux, and Windows support is coming very soon.
3) Install a 64-bit version of Node, since the DB2 Node.js driver does not support 32-bit.
4) Run the installer (in my case node-v0.12.7-x64.msi). You should see a screen like Screenshot 1.
5) Follow the instructions on license and folder choice until you reach the screen for the features you want installed. The default selection is recommended; click Next to start the install (Screenshot 2).
6) Verify that the installation is complete by opening the command prompt and executing node -v and npm -v as shown in Screenshot 3.
7) You can write a simple JavaScript program to test the installation. Create a file called Confirmation.js with the contents console.log('You have successfully installed Node and NPM.');
8) Run the file with node Confirmation.js. The output looks like Screenshot 4.
9) Now install the DB2 Node.js driver using the following command from the Windows command line: npm install ibm_db (for Node.js 4+, the installation command is different: npm install git+https://git@github.com/ibmdb/node-ibm_db.git#v4_support).
10) Under the covers, the npm command downloads the node-ibm_db package from GitHub and includes the DB2 ODBC CLI driver to provide connectivity to the DB2 backend. You should see the following output (Screenshot 5).
11) Copy the following simple DB2 access program into a file called DB2Test.js and change the database credentials to your own -
var ibmdb = require('ibm_db');

// Open a connection to DB2; replace the placeholders with your own credentials.
ibmdb.open("DRIVER={DB2};DATABASE=<dbname>;HOSTNAME=<myhost>;UID=db2user;PWD=password;PORT=<dbport>;PROTOCOL=TCPIP", function (err, conn) {
  if (err) return console.log(err);

  // Run a trivial query to verify connectivity, then close the connection.
  conn.query('select 1 from sysibm.sysdummy1', function (err, data) {
    if (err) console.log(err);
    else console.log(data);

    conn.close(function () {
      console.log('done');
    });
  });
});
12) Run the following command from Windows command line to execute the program: node DB2Test.js. You should see Screenshot 6, containing the output of SQL SELECT 1 from SYSIBM.SYSDUMMY1. Your simple Node application can now access DB2.
13) For connecting to DB2 for z/OS, modify the Connection URL, DB name, port, user name and password to DB2 for z/OS credentials.
14) DB2 for z/OS access needs DB2 Connect license entitlement. In most production DB2 for z/OS systems with DB2 Connect Unlimited Edition licensing, server side license activation would have already been done, in which case you don't need to do anything about licensing. If you get any license error on executing the program, server side activation may not have been done. In that case, copy the DB2 Connect ODBC client side license file into ibm_db/installer/clidriver/license folder.
15) Also make sure that the DB2 for z/OS server you are testing against has CLI packages already bound (this would have been already done as part of DB2 Connect setup on the DB2 z/OS server).
16) Run the program with DB2 for z/OS credentials and you will observe output similar to Step 12.
17) Attached is a Node.js program (NodeDb2zosSelect.js) that fetches rows from the DB2 for z/OS Employee table in the sample database (DSN8A10.EMP). To run the same program against DB2 distributed, make sure to change not only the database credentials, but also the table name in the SELECT SQL to EMPLOYEE. In both DB2 for z/OS and DB2 distributed, you should see output as shown in Screenshot 7.
Continue enjoying your Node.js test drive with DB2!
My colleague Param (param.bng@in.ibm.com) and I (pallavipr@in.ibm.com) are exploring various aspects of Spark integration with DB2 and DB2 Connect drivers. We have decided to write a series of blogs capturing our experimentation for the benefit of others as we did not find any article that focuses on different aspects of DB2 access via Spark.
Currently, the Spark shell is available in Scala and Python. This article covers accessing and filtering DB2 data via the Scala shell using the DB2-supplied JDBC driver (IBM Data Server Driver for JDBC and SQLJ). Below are the step-by-step instructions -
1) Confirm that you have Java installed by running java -version from Windows command line. JDK version 1.7 or 1.8 is recommended.
2) Install Spark on your local machine by downloading Spark from https://spark.apache.org/downloads.html.
3) We chose the pre-built binaries as shown in Screenshot 1 (instead of the source code download) to avoid building Spark in the early experimentation phase.
4) Unzip the installation file to a local directory (say C:/spark).
5) Start Windows command prompt.
6) Navigate to the directory that has bin folder of spark installation (c:/spark/bin).
7) Download the DB2 JDBC driver jar (db2jcc.jar or db2jcc4.jar) from http://www-01.ibm.com/support/docview.wss?uid=swg21385217 into C:\ or any other location you desire.
8) Set SPARK_CLASSPATH to the location of the DB2 driver by running SET SPARK_CLASSPATH=c:\db2jcc.jar
9) Run the spark-shell.cmd script found in the bin folder to start the Spark shell using Scala.
10) If installation was successful, you should see output like Screenshot 2, followed by a Scala prompt as in Screenshot 3.
11) In Screenshot 3, you see that two important objects have already been created for you –
11.1) SparkContext – Any Spark application needs a SparkContext, which tells Spark how to access a cluster. In shell mode, a SparkContext is already created for you in a variable called sc.
11.2) SQLContext – This is needed to construct DataFrames (equivalent to relational tables) from database data and serves as the entry point for working with structured data.
12) Once you have Spark up and running, you can issue queries to DB2 on z/OS as well as DB2 LUW through the DB2 JDBC driver. Tables from DB2 database can be loaded as a DataFrame using the following options on load -
12.1) url
The JDBC URL to connect to
12.2) dbtable
The JDBC table that should be read. Note that anything that is valid in a `FROM` clause of a SQL query can be used – for example, a subquery in parentheses (see the sketch after this list).
12.3) driver
The class name of the JDBC driver needed to connect to this URL.
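To illustrate the dbtable note in 12.2, the load call shown in Step 13 can also point at a subquery instead of a full table, which pushes filtering down to DB2. This is only a sketch; the schema, columns and credentials mirror the example in Step 13 and should be changed to your own -

val designersDF = sqlContext.load("jdbc", Map("url" -> "jdbc:db2://localhost:50000/sample:currentSchema=pallavipr;user=pallavipr;password=XXXXXX;", "driver" -> "com.ibm.db2.jcc.DB2Driver", "dbtable" -> "(SELECT EMPNO, FIRSTNME, LASTNAME, JOB FROM pallavipr.employee WHERE JOB = 'DESIGNER') AS designers"))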
13) From Scala command line, issue
val employeeDF = sqlContext.load("jdbc", Map("url" -> "jdbc:db2://localhost:50000/sample:currentSchema=pallavipr;user=pallavipr;password=XXXXXX;","driver" -> "com.ibm.db2.jcc.DB2Driver","dbtable" -> "pallavipr.employee"))
14) You should see output containing the table metadata as shown in Screenshot 4 -
Screenshot 4
15) To see the contents of the EMPLOYEE table, issue employeeDF.show() from the Scala command line, which shows the contents of the DataFrame as captured in Screenshot 5. show() returns the first 20 records from the table by default (out of the ~40 rows that exist).
16) You can further narrow the search results above by using selection and filter criteria. For example, if you want to see only the employee id, firstname, lastname and job title columns out of all existing columns, issue – employeeDF.select("empno","firstnme","lastname","job").show(). This gives the results shown in Screenshot 6.
Screenshot 6
17) Now if you want to see only those rows that have the job title DESIGNER, issue the following from the Scala shell - employeeDF.filter(employeeDF("job").equalTo("DESIGNER")).show(). You will see the results shown in Screenshot 7.
Screenshot 7
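These operations can also be chained. As a small sketch that is not taken from the article, combining the filter from Step 17 with the column selection from Step 16 -

employeeDF.filter(employeeDF("job").equalTo("DESIGNER")).select("empno","firstnme","lastname","job").show()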
The application paradigm has evolved rapidly – new programming and scripting languages geared towards Web and Mobile spring up regularly. DB2 has embraced new developments in the application space and continues to be the database of choice for application developers. It has kept up with new programming trends: whenever a new programming language or framework gets adopted by the developer community, DB2 steps up and adds or enhances support. Developers select a programming language based on several factors such as skills, performance, usage scenario, and whether libraries are available for the desired functionality.
Primary APIs for DB2 include C and C++, Visual Basic and Visual C# (for .NET applications), and Java (JDBC and SQLJ). DB2's CLI/ODBC and JDBC drivers serve as the base for several open source wrappers provided by DB2 – such as Perl, PHP, Python, Ruby and Node.js. The advantage of using the JDBC and ODBC/CLI drivers as the foundation is that they not only implement the standard APIs, but also provide advanced features such as workload balancing, failover, security, connection management and monitoring that the wrappers can take advantage of to build robust enterprise applications. DB2 also contributes actively to the open source communities to keep them up to date. We are also seeing adoption of frameworks and Object-Relational Mapping tools such as Hibernate, JPA, iBatis and Spring for enterprise applications that take advantage of accelerators provided by DB2 for ease of use and improved performance.
Expect to see the diagram below grow quickly over time as application developers experiment with new APIs and DB2 continues down the path of supporting those application developers, further strengthening its position as an ideal database server for Cloud, Analytics and Mobile.
There are several use cases where data in Spark needs to be persisted to a backend database. Enterprise-wide analytics may require loading data into Spark from different data sources, applying transformations, performing in-memory analytics and writing the transformed data back to an enterprise RDBMS such as DB2.
In this blog, we show simple techniques using the latest Spark release to load data from a JSON file into Spark and write that data back to DB2 using the DB2-supplied JDBC driver.
Step 1)
Download the latest pre-built Spark library (1.4.1) from http://spark.apache.org/downloads.html. With the rapid evolution of Spark, many methods from 1.3 have been deprecated, and it is best to experiment with the latest release.
Step 2)
In your Eclipse Scala IDE build path, add Spark library and DB2 JDBC driver as shown below -
Step 3)
Create a JSON file with the following contents -
{ "EMPNO":10, "EDLEVEL":18, "SALARY":152750, "BONUS":1000 }
{ "EMPNO":20, "EDLEVEL":18, "SALARY":94250, "BONUS":800 }
Step 4)
Create a Scala application with the following logic -
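Note that the listing below assumes the following imports at the top of the file; they are not shown in the original, so treat the exact packages as our assumption for Spark 1.4 -

import java.util.Properties
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext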
1: val DB2_CONNECTION_URL = "jdbc:db2://localhost:50000/sample:currentSchema=pallavipr;user=pallavipr;password=XXXXXX;traceFile=C:/Eclipse_Scala/trace_scala.txt;";
2:
3: val conf = new SparkConf()
4: .setMaster("local[1]")
5: .setAppName("GetEmployee")
6: .set("spark.executor.memory", "1g")
7:
8: val sc = new SparkContext(conf)
9: val sqlcontext = new SQLContext(sc)
10: val path = "C:/Eclipse_Scala/empint.json"
11:
12: val empdf = sqlcontext.read.json(path)
13: empdf.printSchema()
14: empdf.show()
15:
16: Class.forName("com.ibm.db2.jcc.DB2Driver");
17:
18: val prop = new Properties()
19: prop.put("spark.sql.dialect" , "sql");
20:
21: empdf.write.jdbc(DB2_CONNECTION_URL, "PALLAVIPR.EMPLOYEESALARY", prop)
Step 5)
The JSON file is loaded into Spark in Line 12 using the new DataFrameReader introduced in Spark 1.4.0.
Step 6)
The DB2 JDBC driver is loaded in Line 16 to carry out the write operation to DB2.
Step 7)
On running this Scala program, you will see the following schema output from the printSchema method on the DataFrame created from the JSON file -
Step 8)
Printing the DataFrame using its show method produces the following output -
Step 9)
The final write to DB2 is done using the DataFrameWriter jdbc API introduced in 1.4.0 (as shown in Line 21), which under the covers generates the CREATE TABLE and INSERT SQLs for table EMPLOYEESALARY.
Step 10)
You can verify that the table was created and the JSON data inserted into DB2 using any tool of your choice; a quick read-back from the same Scala program is sketched below -
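For example, as a small sketch that is not part of the original program, the same SQLContext can read the table back through the JDBC driver, assuming the write in Line 21 succeeded -

val verifydf = sqlcontext.read.jdbc(DB2_CONNECTION_URL, "PALLAVIPR.EMPLOYEESALARY", prop)
verifydf.show()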
Note that we are working through a couple of issues with write-back of String data types from Spark to DB2; however, that should not stop you from experimenting with numeric types. Watch this space for more blogs and updates on DB2-Spark.
