DataStage – Timestamp from Datetime

Timestamp from date is one of those data type conversions that I occasionally have to do in DataStage but can never seem to remember. So, I thought I would write this quick post to document the data type conversion code, which is easy once I finally remember how to do it again.

The TimestampFromDateTime Function

I use the TimestampFromDateTime(%date%,%time%) function to do this data type conversion. I’m sure there are other ways to achieve the result, but I find this method clean and easy to perform. The TimestampFromDateTime(%date%,%time%) function is in the Functions > Date & Time menu.

To populate the function, you need only add your date field and use '00:00:00' as your time element.

Time Element example

TimestampFromDateTime(<<Date Field Here>>, '00:00:00')

Example transformer code With Date Field

TimestampFromDateTime(Lnk_10.POSTED_DATE, '00:00:00')
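
For instance, assuming the default timestamp format, a POSTED_DATE value of 2017-06-09 would yield:

2017-06-09 00:00:00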

InfoSphere Knowledge Resources

IBM Knowledge Center

The IBM Knowledge Center is the IBM product documentation site, designed to help you plan, install, configure, use, tune, monitor, troubleshoot, and maintain the product.

IBM InfoSphere Information Server Version 11.7.0 documentation

The IBM InfoSphere Information Server Version 11.7.0 documentation provides IIS-specific documentation about how to install and use InfoSphere Information Server.

IBM Support Portal

The IBM Support Portal is IBM's technical support resource for all IBM products and services, including downloads, fixes, drivers, APARs, product documentation, Redbooks, whitepapers, and more.

IBM InfoSphere Information Server Content Hub

The IBM InfoSphere Information Server content hub provides additional information on Information Server components, including documentation, blog entries, videos, and more.

IBM developerWorks

IBM developerWorks provides technical articles, tutorials, forums, and community resources for IBM products, including InfoSphere Information Server.

InfoSphere forbidden user IDs, passwords, and restrictions


DataStage – Netezza Connector Action Column

Over the years I have occasionally used the action column feature; however, in the last month or so I have found myself using it quite a lot. This is especially true in relation to data sets, and not just in relation to the Change Capture stage.

The first thing you need to know is that, if you want to prevent getting the 'no action column found' notice on the target stage, you need to ensure that the action column has been coded as a single-character field, Char(1). Otherwise, the Netezza Connector stage will not recognize your field as an action column.

While most developers will commonly work with the action column feature in relation to the Change Capture stage, it can also be very useful if you have derived a field from one or more inputs to indicate what behavior the row requires. I have found that this approach can be very useful and efficient under the right circumstances.

Example Pattern for Action Column Using Multiple Source Selects

Action column configuration example

Action Column Field Type

Change Code Values Mapping to Action Column

  • Here's a quick reference table mapping each change type code to the one-character action column value into which it needs to be translated.

Change Type                   Change Type Code    Action Column Value
Copy (data without changes)   0                   (no value for this change type)
Insert                        1                   I
Delete                        2                   D
Update                        3                   U

Example Transformer Stage, Derivation

  •  Here is a quick Transformer stage derivation coding example that takes advantage of the action column capabilities. The derivation needs a final Else; an empty string is safe there if the copy rows have been removed. If you haven't already handled the removal of the copy rows, you may also want to add a constraint (see the sketch after the derivation).
  • The combination I most frequently find myself using is the insert and update combination.
If Lnk_Out_To_Tfm.change_code = 1 Then 'I'
Else If Lnk_Out_To_Tfm.change_code = 2 Then 'D'
Else If Lnk_Out_To_Tfm.change_code = 3 Then 'U'
Else ''
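
If the copy rows (change type code 0) have not already been removed upstream, a minimal constraint sketch for the same Transformer stage might be:

Lnk_Out_To_Tfm.change_code <> 0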

Related References

IBM Knowledge Center: InfoSphere Information Server 11.7.0 > InfoSphere DataStage and QualityStage > Developing parallel jobs > Introduction to InfoSphere DataStage Balanced Optimization > Job design considerations > Specific considerations for the Netezza connector

Essbase Connector Error – Client Commands are Currently Not Being Accepted

DataStage Essbase Connector

While investigating a recent InfoSphere Information Server (IIS) DataStage Essbase Connector error, I found the explanations of the probable causes of the error not to be terribly meaningful. So, now that I have run the error to ground, I thought it might be nice to jot down a quick note on the potential causes of the 'Client Commands are Currently Not Being Accepted' error, which I gleaned from the process.

Error Message Id

  • IIS-CONN-ESSBASE-01010

Error Message

An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.

Possible Causes of The Error

This error is a problem with access to the Essbase object or with the security within the Essbase object. This can be a result of multiple issues, such as:

  • Object doesn't exist – the Essbase object didn't exist in the location specified;
  • Communications – the location is unavailable or cannot be reached;
  • Path security – security prevents access to the Essbase object location;
  • Essbase security – security within the Essbase object does not support the user or filter being submitted; the Essbase object security may also be corrupted or incomplete;
  • Essbase object structure – the Essbase object was not properly structured to support the filter, or the Essbase filter is malformed for the current structure.

Related References

IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase


Parallel jobs on Windows fail with APT_IOPort::readBlkVirt error

APT_IOPort::readBlkVirt Error Screenshot

This is a known error for Windows systems; it applies to DataStage and QualityStage jobs using any or all of the three join-type stages (Join, Merge, and Lookup).

Error Message

  • <<Link name>>,0: APT_IOPort::readBlkVirt: read for block header, partition 0, [fd 4], returned -1 with errno 10,054 (Unknown error)

Message ID

  • IIS-DSEE-TFIO-00223

Applies To

  • Windows systems only
  • Parallel engine jobs using the three join-type stages (Join, Merge, and Lookup); it does not apply to server engine jobs
  • InfoSphere Information Server (IIS) DataStage and QualityStage 9.1 and higher

The Fix

  • Add APT_NO_IOCOMM_OPTIMIZATION to the project in Administrator and set it to blank or 0. I left it blank so it would not impact other jobs
  • Add the environment variable to the job producing the error and set it to 1 (see the sketch below)
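
A minimal sketch of the job-level setting, assuming the variable has been added to the job as an environment-variable job parameter:

$APT_NO_IOCOMM_OPTIMIZATION = 1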

What APT_NO_IOCOMM_OPTIMIZATION Does

  • Sets shared memory as the transport type, rather than the default sockets transport type.
  • Note that in most cases the sockets transport type is faster, so you likely will not want to set this across the project as the default for all jobs. It is best to apply it as necessary to problematic jobs.

Related References

InfoSphere DataStage and QualityStage, Version 9.1 Job Compatibility

IBM Support, JR54078: PARALLEL JOBS ON WINDOWS FAIL WITH APT_IOPORT::READBLKVIRT; ERROR

IBM Support, Information Server DataStage job fails with unknown error 10,054.

 

DataStage – How to Pass the Invocation ID from one Sequence to another

DataStage Invocation ID Passing Pattern Overview

When you are controlling a chain of sequences in the job stream and taking advantage of reusable (multiple-instance) jobs, it is useful to be able to pass the Invocation ID from the master controlling sequence and have it passed down and assigned to the job run. This can easily be done without needing to manually enter the values in each of the sequences, by leveraging the DSJobInvocationId variable. For this to work:

  • The job must have 'Allow Multiple Instance' enabled
  • The parent sequence must provide the Invocation ID; its Job Activity must have the invocation name entered
  • The receiving child sequence must have the invocation variable (DSJobInvocationId) entered as its invocation name
  • At runtime, a DataStage instance of the multi-instance job is generated for that invocation ID, with its own logs (see the sketch after this list)
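
A minimal sketch of the pattern, assuming a hypothetical multiple-instance job Load_Customer and invocation name NIGHTLY_LOAD:

Master sequence, Job Activity > Invocation name:  NIGHTLY_LOAD
Child sequence,  Job Activity > Invocation name:  DSJobInvocationId

At runtime, the child sequence's Job Activity resolves DSJobInvocationId to the value passed down from the master, so the job runs as Load_Customer.NIGHTLY_LOAD with its own log.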

Variable Name

  • DSJobInvocationId

Note

This approach allows for the reuse of jobs and the assignment of meaningful instance extension names, which are managed from a single point of entry in the object tree.

Related References

IBM Knowledge Center > InfoSphere Information Server 11.5.0

InfoSphere DataStage and QualityStage > Designing DataStage and QualityStage jobs > Building sequence jobs > Sequence job activities > Job Activity properties

DataStage – How to use single quoted parameter list in an Oracle Connector

Data Integration

While working with a client's 9.1 DataStage version, I ran into a situation where they wanted to parameterize SQL WHERE-clause lists in an Oracle Connector stage, which honestly was not very straightforward to figure out. First, if APT_OSL_PARAM_ESC_SQUOTE is not set and single quotes are used in the parameter, the job creates unquoted, invalid SQL when the parameter is populated. Second, I found much of the information confusing and/or incomplete in its explanation. After some research and some trial and error, here is how I resolved the issue. I'll endeavor to be concise, but holistic, in my explanation.

When this Variable applies

This is where I know this process applies; there may be other circumstances to which it is applicable, but I'm listing the ones here with which I have recent experience.

Infosphere Information Server Datastage

  • Versions 9.1, 11.3, and 11.5

Oracle RDBMS

  • Versions 11g and 12c

Configuration process

Here is a brief explanation of the steps I used to implement the WHERE clause as a parameter. Please note that in this example I am using a job parameter to populate only a portion of the WHERE clause; you can certainly pass the entire WHERE clause as a parameter, if it is not too long.

Configure Project Variable in Administrator

  • Add APT_OSL_PARAM_ESC_SQUOTE to the project in Administrator
  • Populate the APT_OSL_PARAM_ESC_SQUOTE variable with the escape character for single quotes (typically a backslash, \)

APT_OSL_PARAM_ESC_SQUOTE Project Variable

Create job parameter

Following your project naming convention or standard practice (if your customer and/or project do not have established naming conventions), create the job parameter in the job. See the jp_ItemSource parameter in the image below.

Job Parameter In Oracle Connector

Add the Job Parameter to the Custom SQL in the Oracle Connector Stage

Once the job parameter has been created, add the job parameter to the SQL statement of the job.

Job Parameter In SQL
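
A minimal sketch of what the custom SQL might look like; the table, columns, and parameter value are hypothetical:

SELECT ITEM_ID,
       ITEM_DESCRIPTION,
       ITEM_SOURCE
  FROM INV.ITEM_MASTER
 WHERE ITEM_SOURCE IN (#jp_ItemSource#)

With APT_OSL_PARAM_ESC_SQUOTE set, a runtime parameter value such as 'WEB','STORE','CATALOG' keeps its single quotes, and the generated SQL remains valid.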

Related References

IBM Knowledge Center > InfoSphere Information Server 11.5.0

Connecting to data sources > Databases > Oracle databases > Oracle connector

IBM Support > Limitation of the Parameter APT_OSL_PARAM_ESC_SQUOTE on Plugins on Parallel Canvas

IBM Knowledge Center > InfoSphere Information Server 11.5.0

InfoSphere DataStage and QualityStage > Reference > Parallel Job Reference > Environment Variables > Miscellaneous > APT_OSL_PARAM_ESC_SQUOTE

InfoSphere Information Server Suites

The Six Suites of Infosphere Information Server

InfoSphere Information Server 11 Series is, primarily, made up of six suites/packages, which provide a mix of components:

  • InfoSphere Information Governance Catalog
  • BigInsights BigIntegrate
  • BigInsights
  • InfoSphere Information Server for Data Integration
  • InfoSphere Information Server for Data Quality
  • InfoSphere Information Server Enterprise Edition (inclusive of all five packages above)

IIS capabilities / components

These license suites comprise differing combinations of IIS capabilities/components, which consist of:

  • InfoSphere DataStage®
  • InfoSphere QualityStage®
  • InfoSphere DataStage and QualityStage Designer
  • InfoSphere Data Click
  • InfoSphere FastTrack
  • InfoSphere Information Analyzer
  • InfoSphere Information Governance Catalog
  • InfoSphere Information Governance Dashboard
  • InfoSphere Information Services Director

Additional Purchasable Infosphere capabilities

Additional InfoSphere capabilities can be licensed/purchased to supplement the primary suite licenses:

  • Cognos® Business Intelligence
  • BigInsights Data Scientist
  • InfoSphere Change Data Delivery
  • InfoSphere Data Architect

Table of Component/Capability Alignment to IIS Suites

[Table: a matrix aligning the nine components listed above to the six suites; a checkmark (✓) indicates that a component is included in a suite, and ✓* indicates inclusion with possible usage and license restrictions.]

Footnotes:

* = usage and license restrictions may apply.

Related References

IBM Knowledge Center, InfoSphere Information Server, Version 11.5.0, Overview of IBM InfoSphere Information Server, Introduction to InfoSphere Information Server, Components in the InfoSphere Information Server suite

What are the dashDB isolation levels?

dashDB

 

Isolation levels are part of the ACID (Atomicity, Consistency, Isolation, Durability) paradigm in database control. Isolation levels allow developers and users to trade off consistency for a potential gain in performance. Therefore, it is essential to understand them and how they apply in Structured Query Language (SQL). The dashDB RDBMS has four isolation levels:

Repeatable Read (RR)

  • The repeatable read (RR) isolation level locks all the rows that an application references during a unit of work (UOW). If an application issues a SELECT statement twice within the same unit of work, the same result is returned each time. Under RR, lost updates, access to uncommitted data, non-repeatable reads, and phantom reads are not possible.
  • Under RR, an application can retrieve and operate on the rows as many times as necessary until the UOW completes. However, no other application can update, delete, or insert a row that would affect the result set until the UOW completes. Applications running under the RR isolation level cannot see the uncommitted changes of other applications. This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • Every referenced row is locked, not just the rows that are retrieved. For example, if you scan 20,000 rows and apply predicates to them, locks are held on all 20,000 rows, even if, say, only 200 rows qualify. Another application cannot insert or update a row that would be added to the list of rows referenced by a query if that query were to be executed again. This prevents phantom reads.
  • Because RR can acquire a considerable number of locks, this number might exceed limits specified by the locklist and maxlocks database configuration parameters. To avoid lock escalation, the optimizer might elect to acquire a single table-level lock for an index scan, if it appears that lock escalation is likely. If you do not want table-level locking, use the read stability isolation level.
  • While evaluating referential constraints, dashDB might occasionally upgrade the isolation level used on scans of the foreign table to RR, regardless of the isolation level that was previously set by the user. This results in additional locks being held until commit time, which increases the likelihood of a deadlock or a lock timeout. To avoid these problems, create an index that contains only the foreign key columns, which the referential integrity scan can use instead (see the sketch below).
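
A minimal sketch of the suggested foreign-key-only index, using hypothetical table and column names:

-- An index on just the foreign key columns lets the referential
-- integrity scan avoid holding RR locks on the whole table.
CREATE INDEX IX_ORDERS_CUSTOMER_FK
    ON ORDERS (CUSTOMER_ID);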

Read Stability (RS)

  • The read stability (RS) isolation level locks only those rows that an application retrieves during a unit of work. RS ensures that any qualifying row read during a UOW cannot be changed by other application processes until the UOW completes, and that any change to a row made by another application process cannot be read until the change is committed by that process. Under RS, access to uncommitted data and non-repeatable reads are not possible. However, phantom reads are possible. Phantom reads might also be introduced by concurrent updates to rows where the old value did not satisfy the search condition of the original application but the new updated value does.
  • For example, a phantom row can occur in the following situation:
    • Application process P1 reads the set of rows n that satisfy some search condition.
    • Application process P2 then inserts one or more rows that satisfy the search condition and commits those new inserts.
    • P1 reads the set of rows again with the same search condition and obtains both the original rows and the rows inserted by P2.
  • In a dashDB environment, an application running at this isolation level might reject a previously committed row value, if the row is updated concurrently on a different member. To override this behavior, specify the WAIT FOR OUTCOME option.
  • This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • The RS isolation level provides both a high degree of concurrency and a stable view of the data. To that end, the optimizer ensures that table-level locks are not obtained until lock escalation occurs.
  • The RS isolation level is suitable for an application that:
    • Operates in a concurrent environment
    • Requires qualifying rows to remain stable for the duration of a unit of work
    • Does not issue the same query more than once during a unit of work, or does not require the same result set when a query is issued more than once during a unit of work

Cursor Stability (CS)

  • The cursor stability (CS) isolation level locks any row being accessed during a transaction while the cursor is positioned on that row. This lock remains in effect until the next row is fetched or the transaction terminates. However, if any data in the row was changed, the lock is held until the change is committed.
  • Under this isolation level, no other application can update or delete a row while an updatable cursor is positioned on that row. Under CS, access to the uncommitted data of other applications is not possible. However, non-repeatable reads and phantom reads are possible.
  • Cursor Stability (CS) is the default isolation level.
  • Cursor Stability (CS) is suitable when you want maximum concurrency and need to see only committed data.
  • In a dashDB environment, an application running at this isolation level may return or reject a previously committed row value, if the row is concurrently updated on a different member. The WAIT FOR OUTCOME option of the concurrent access resolution setting can be used to override this behavior.

Uncommitted Read (UR)

  • The uncommitted read (UR) isolation level allows an application to access the uncommitted changes of other transactions. Moreover, UR does not prevent another application from accessing a row that is being read, unless that application is attempting to alter or drop the table.
  • Under UR, access to uncommitted data, non-repeatable reads, and phantom reads are possible. This isolation level is suitable if you run queries against read-only tables, or if you issue SELECT statements only, and seeing data that has not been committed by other applications is not a problem.
  • Uncommitted Read (UR) works differently for read-only and updatable cursors.
  • Read-only cursors can access most of the uncommitted changes of other transactions.
  • Tables, views, and indexes that are being created or dropped by other transactions are not available while the transaction is processing. Any other changes by other transactions can be read before they are committed or rolled back. Updatable cursors operating under UR behave as though the isolation level were CS.
  • If an uncommitted read application uses ambiguous cursors, it might use the CS isolation level when it runs. To prevent this escalation, modify the cursors in the application program to be unambiguous and/or change the SELECT statements to include the FOR READ ONLY clause (see the sketch below).
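
A minimal sketch of statement-level isolation in the DB2-family SQL that dashDB uses; the table, columns, and values are hypothetical:

-- FOR READ ONLY keeps the cursor unambiguous, so UR is honored
-- rather than being escalated to CS.
SELECT ORDER_ID, ORDER_STATUS
  FROM ORDERS
 WHERE CUSTOMER_ID = 1001
   FOR READ ONLY
  WITH UR;

-- Alternatively, set the default isolation level for the session:
SET CURRENT ISOLATION = UR;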

 

Related References

IBM dashDB

Accessing remote data sources with fluid queries on dashDB Local, Developing for federation