Technology – Understanding Data Model Entities

Data modeling is an established technique for comprehensively documenting an application or software system with the aid of symbols and diagrams. It is an abstract methodology for organizing the numerous data elements and thoroughly highlighting how these elements relate to each other. A graphical representation of the data requirements and elements of a database is called an Entity Relationship Diagram, or ERD.

What is an Entity?

Entities are one of the three essential components of ERDs and represent the tables of the database. An entity is something that depicts only one information concept. For instance, order and customer, although related, are two different concepts, and hence are modeled as two separate entities.

A data model entity typically falls into one of five classes – locations, things, events, roles, and concepts. Examples of entities are vendors, customers, and products. These entities also have attributes associated with them, which are the details we would want to track about them.

A particular example of an entity is referred to as an instance. Instances form the various rows or records of the table. For instance, if there is a table titled ‘students,’ then a student named William Tell will be a single record of the table.
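As a minimal sketch of this idea in SQL (the table and column names are illustrative assumptions, not taken from any particular system), the ‘students’ entity becomes a table and the instance becomes a row:

CREATE TABLE students (
    student_id  INTEGER PRIMARY KEY,  -- attribute identifying each instance
    first_name  VARCHAR(50),          -- attributes we want to track
    last_name   VARCHAR(50)
);

INSERT INTO students (student_id, first_name, last_name)
VALUES (1, 'William', 'Tell');        -- one instance (row) of the students entity

Each additional student would simply be another row in the same table.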

Why Do We Need a Data Model Entity?

Data is often stored in various forms. An organization may store data in XML files, spreadsheets, reports, and relational databases. Such fragmented data storage can present challenges during application design and data access. Writing maintainable and efficient code becomes all the more difficult when one has to think about easy data access, scalability, and storage. Additionally, moving data from one form to another is difficult. This is where the Entity Data Model comes in. By describing the data in the form of entities and relationships, the structure of the data becomes independent of the storage methodology. As the application and data evolve, so does the Entity Data Model. The abstract view allows for a much more streamlined method of transforming or moving data.

SQL Server Length Function Equivalent

The Purpose of the LENGTH Function in SQL

The SQL LENGTH function returns the number of characters in a string. The LENGTH function is available in many Database Management Systems (DBMS).

The LENGTH Function Syntax

  • LENGTH(string)

LENGTH Function Notes

  • If the input string is empty, LENGTH returns 0.
  • If the input string is NULL, LENGTH returns NULL.
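As a small, hedged illustration (this form runs as-is on databases such as MySQL, MariaDB, or PostgreSQL; Oracle would need FROM DUAL and treats the empty string as NULL, so it returns NULL rather than 0 for the second case):

SELECT LENGTH('William') AS name_length,   -- returns 7
       LENGTH('')        AS empty_length,  -- returns 0
       LENGTH(NULL)      AS null_length;   -- returns NULL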

Length Function Across Databases

When working as a technical consultant, one has to work with customers' databases, and as you move from one database to another you will find that the function names may vary, assuming the database has an equivalent function.

Working with VQL and SQL Server got me thinking about the LENGTH() function, so here is a quick reference list, which includes SQL Server.

IBM DB2

  • LENGTH()

IBM Informix

  • CHAR_LENGTH() or CHARACTER_LENGTH()

MariaDB

  • LENGTH()

Microsoft SQL Server

  • LEN()

MySQL

  • CHAR_LENGTH() or CHARACTER_LENGTH()

Netezza

  • LENGTH()

Oracle

  • LENGTH()

PostgreSQL

  • CHAR_LENGTH() or CHARACTER_LENGTH()

SOQL (Salesforce)

  • SOQL has no LENGTH function

VQL (Denodo)

  • LEN()
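As a quick sketch of how the same request looks on a few of these platforms (the customers table and first_name column are hypothetical):

-- Oracle, DB2, MariaDB, Netezza (and other databases with LENGTH)
SELECT LENGTH(first_name) FROM customers;

-- Microsoft SQL Server and Denodo VQL
SELECT LEN(first_name) FROM customers;

-- Informix, MySQL, PostgreSQL (character-count form)
SELECT CHAR_LENGTH(first_name) FROM customers;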

Technology – Denodo Data Virtualization Project Roles

A Denodo virtualization project typically classifies the project duties of the primary implementation team into the following primary roles.

Denodo Data Virtualization Project Roles

  • Data Virtualization Architect
  • Denodo Platform Administrator
  • Data Virtualization Developer
  • Denodo Platform Java Programmer
  • Data Virtualization Internal Support Team

Role To Project Team Member Alignment

While each Denodo project role is associated with a set of security permissions and duties, it is important to note that the assignment of roles among project team members can be very dynamic. Which team member performs a given role can change over the lifecycle of a Denodo project: one team member may hold more than one role at any given time, or acquire or lose roles based on the needs of the project.

Denodo Virtualization Project Role Duties

Data Virtualization Architect

The knowledge, responsibilities, and duties of a Denodo Data Virtualization Architect include:

  • A deep understanding of Denodo security features and data governance
  • Defines and documents best practices for users, roles, and security permissions
  • Has a strong understanding of enterprise data/information assets
  • Defines data virtualization architecture and deployments
  • Guides the definition and documentation of the virtual data model, including delivery modes, data sources, data combinations, and transformations

Denodo Platform Administrator

The knowledge, responsibilities, and duties of a Denodo Platform Administrator include:

  • Denodo Platform installation and maintenance, such as:
    • Installs Denodo Platform servers
    • Defines Denodo Platform update and upgrade policies
    • Creates, edits, and removes environments, clusters, and servers
    • Manages Denodo licenses
    • Defines Denodo Platform backup policies
    • Defines procedures for artifact promotion between environments
  • Denodo Platform configuration and management, such as:
    • Configures Denodo Platform server ports
    • Configures platform memory and Java Virtual Machine (JVM) options
    • Sets the maximum number of concurrent requests
    • Sets up database configuration, including the cache server
    • Configures authentication for users connecting to the Denodo Platform (e.g., LDAP)
    • Secures (SSL) communications connections of Denodo components
    • Provides connectivity credential details for client tools/applications (JDBC, ODBC, etc.)
    • Configures resources
    • Sets up Version Control System (VCS) configuration for Denodo
    • Creates new virtual databases
    • Creates users and roles, and assigns privileges/roles
    • Executes diagnostics and monitoring operations, analyzes logs, and identifies potential issues
    • Manages load-balancing variables

Data Virtualization Developer

The Data Virtualization Developer role is divided into the following sub-roles:

  • Data Engineer
  • Business Developer
  • Application Developer

The knowledge, responsibilities, and duties of a Denodo Data Virtualization Developer, by sub-role, include:

Data Engineer

The denodo data engineer’s duties include:

  • Implements the virtual data model construction by
    • Importing data sources and creating base views, and
    • Creating derived views, applying combinations and transformations to the datasets (see the sketch after this list)
  • Writes documentation, defines testing to eliminate development errors before code promotion to other environments
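As a rough, generic SQL-style sketch of the derived-view idea referenced above (the view and column names are hypothetical, and actual Denodo base views are created from imported data sources in Virtual DataPort rather than with plain DDL like this):

CREATE VIEW customer_orders AS        -- derived view combining two base views
SELECT c.customer_id,
       c.customer_name,
       o.order_id,
       o.order_total
FROM   bv_crm_customer c              -- base view over a CRM data source
JOIN   bv_erp_order    o              -- base view over an ERP data source
       ON o.customer_id = c.customer_id;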

Business Developer

The denodo business developer’s duties include:

  • Creates business views for a specific business area from derived and/or interface views
  • Implements data services delivery
  • Writes documentation

Application Developer

The denodo application developer’s duties include:

  • Creates reporting views from business views for reports and/or datasets frequently consumed by users
  • Writes documentation

Denodo Platform Java Programmer

The Denodo Platform Java Programmer role is an optional, specialized role, which:

  • Creates custom denodo components, such as data sources, stored procedures, and VDP/iTPilot functions.
  • Implements custom filters in data routines
  • Tests and debugs any custom components using Denodo4e

Data Virtualization Internal Support Team

The Denodo data virtualization internal support team's duties include:

  • Access to and knowledge of the use and troubleshooting of developed solutions
  • Tools and procedures to manage and support project users and developers

denodo Virtualization – Useful Links

Here are some denodo Virtualization references, which may be useful.

  • denodo Home Page: https://www.denodo.com/en/about-us/our-company
  • denodo Platform 7.0 Documentation: https://community.denodo.com/docs/html/browse/7.0/
  • denodo Knowledge Base and Best Practices: https://community.denodo.com/kb/
  • denodo Tutorials: https://community.denodo.com/tutorials/
  • denodo Express 7.0 Download: https://community.denodo.com/express/download
  • Denodo Virtual DataPort (VDP) Naming Conventions: https://community.denodo.com/kb/download/pdf/VDP%20Naming%20Conventions?category=Operation
  • JDBC / ODBC drivers for Denodo: https://community.denodo.com/drivers/
  • Denodo Governance Bridge – User Manual: https://community.denodo.com/docs/html/document/denodoconnects/7.0/Denodo%20Governance%20Bridge%20-%20User%20Manual
  • Virtual DataPort VQL Guide: https://community.denodo.com/docs/html/browse/7.0/vdp/vql/introduction/introduction
  • Denodo Model Bridge – User Manual: https://community.denodo.com/docs/html/document/denodoconnects/7.0/Denodo%20Model%20Bridge%20-%20User%20Manual
  • Denodo Connects Manuals: https://community.denodo.com/docs/html/browse/7.0/denodoconnects/index
  • Denodo Infosphere Governance Bridge – User Manual: https://community.denodo.com/docs/html/document/denodoconnects/7.0/Denodo%20Governance%20Bridge%20-%20User%20Manual

PostgreSQL – Useful Links


Here are some PostgreSQL references, which may be useful.

  • PostgreSQL Home Page & Download: https://www.postgresql.org/
  • PostgreSQL Online Documentation: https://www.postgresql.org/docs/manuals/
  • Citus – Scalability and Parallelism Extension: https://github.com/citusdata
  • Free PostgreSQL Training: https://www.enterprisedb.com/free-postgres-training
  • PostgreSQL Wiki: https://wiki.postgresql.org/wiki/Main_Page

The Business Industries Using PostgreSQL

PostgreSQL is an open-source database, which was released in 1996, so it has been around a long time. Beyond the many companies and industries that know they are using PostgreSQL, many others are using it without knowing it, because it is embedded as the foundation in some other application's software architecture.

I hadn't paid much attention to PostgreSQL, even though it has been on the list of leading databases used by business for years. Mostly, I have been focused on the databases my customers were using (Oracle, DB2, Microsoft SQL Server, and MySQL/MariaDB). However, during a recent meeting I was surprised to learn that I had been using and administering PostgreSQL embedded as part of another software vendor's application, which made me take the time to pay attention to PostgreSQL. Especially, who is using PostgreSQL, and what opportunities might that provide for evolving my career?

The Industries Using PostgreSQL

According to enlyft, the major industries using PostgreSQL are Computer Software and Information Technology and Services companies.

  PostgreSQL Consumers Information

Here is the link to the enlyft page, which provides additional information about the companies and industries using PostgreSQL:

How To Get A List Of Oracle Database Schemas?

Well, this is one of those circumstances, where your ability to answer this question will depend upon your user’s assigned security roles and what you actually want. 

To get a complete list, you will need to use the DBA_ administrator tables to which most of us will not have access.  In the very simple examples below, you may want to add a WHERE clause to eliminate the system schemas from the list, like ‘SYS’ and ‘SYSTEM,’ if you have access to them.

Example Administrator (DBA) Schema List

SELECT DISTINCT OWNER AS SCHEMA_NAME
FROM DBA_OBJECTS
ORDER BY OWNER;

Example Administrator (DBA) Schema List Results Screenshot
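If you do have access to the DBA_ views, a hedged variant of the query above that filters out the system schemas mentioned earlier might look like this (the full set of system schemas varies by Oracle version, so extend the list as needed):

SELECT DISTINCT OWNER AS SCHEMA_NAME
FROM DBA_OBJECTS
WHERE OWNER NOT IN ('SYS', 'SYSTEM')   -- add other system schemas as appropriate
ORDER BY OWNER;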

Fortunately, for the rest of us, there are the ALL_ user views, from which we can get a listing of the schemas to which we have access.

Example All Users Schema List

SELECT DISTINCT OWNER AS SCHEMA_NAME
FROM ALL_OBJECTS
ORDER BY OWNER;

Example All Users Schema List Results Screenshot

Related References

Oracle Help Center > Database > Oracle > Oracle Database > Release 19

Technology – A 720-Degree View of the Customer

The 360-degree view of the consumer is a well-explored concept, but it is not adequate in the digital age. Every firm, whether it is Google or Amazon, is deploying tools to understand customers in a bid to serve them better. A 360-degree view demanded that a company consult its internal data to segment customers and create marketing strategies. It has become imperative for companies to look outside their own channels, to platforms like social media and reviews, to gain insight into the motivations of their customers. The 720-degree view of the customer is further discussed below.

What is the 720-degree view of the customer?

A 720-degree view of the customer refers to a three-dimensional understanding of customers, based on deep analytics. It includes information on every customer's level of influence, buying behavior, needs, and patterns. A 720-degree view will enable retailers to offer relevant products and experiences and to predict future behavior. If done right, this concept should assist retailers in leveraging emerging technologies, mobile commerce, social media, cloud-based services, and analytics to sustain lifelong customer relationships.

What Does a 720-Degree View of the Customer Entail?

Every business desires to cut costs, gain an edge over its competitors, and grow its customer base. So how exactly will a 720-degree view of the customer help a firm advance its cause?

Social Media

Social media channels help retailers interact more effectively and deeply with their customers. They offer reliable insights into what customers would appreciate in products, services, and marketing campaigns. Retailers can not only evaluate feedback, but they can also deliver real-time customer service. A business that integrates its services with social media will be able to assess customer behavior through tools like likes and dislikes. Some platforms also enable customers to buy products directly.

Customer Analytics


Customer analytics will construct more detailed customer profiles by integrating different data sources like demographics, transactional data, and location. When this internal data is added to information from external channels like social media, the result is a comprehensive view of the customer's needs and wants. A firm will subsequently make more-informed decisions on inventory, supply chain management, pricing, customer segmentation, and marketing. Analytics further come in handy when monitoring transactions, personalized services, waiting times, and website performance.

Mobile Commerce

The modern customer demands convenience and device compatibility. Mobile commerce also accounts for a significant amount of retail sales, and retailers can explore multi-channel shopping experiences. By leveraging a 720-degree view of every customer, firms can provide consumers with the personalized experiences and flexibility they want. Marketing campaigns will also be very targeted as they will be based on the transactional behaviors of customers. Mobile commerce can take the form of mobile applications for secure payment systems, targeted messaging, and push notifications to inform consumers of special offers. The goal should be to provide differentiated shopper analytics.

Cloud

Cloud-based solutions provide real-time data across multiple channels, which gives an enhanced view of the customer. Real-time analytics influence decision-making in retail, and they also harmonize the physical and digital retail environments. Management will be empowered to detect sales trends as transactions take place.

The Importance of the 720-Degree Customer View

Traditional marketers were all about marketing to groups of similar individuals, which is often termed segmentation. This technique is, however, giving way to the more effective concept of personalized marketing. Marketing is currently channeled through a host of platforms, including social media, affiliate marketing, pay-per-click, and mobile. The modern marketer has to integrate the information from all these sources and match it to a real name and address. Companies can no longer depend on a fragmented view of the customer, as there has to be an emphasis on personalization. A 720-degree customer view can offer benefits like:

Customer Acquisition

Firms can improve customer acquisition by depending on the segment differences revealed from a new database of customer intelligence. Consumer analytics will expose any opportunities to be taken advantage of while external data sources will reveal competitor tactics. There are always segment opportunities in any market, which are best revealed by real-time consumer data.

Cutting Costs

Marketers who rely on enhanced digital data can contribute to cost management in a firm. It takes less investment to serve loyal and satisfied consumers because a firm is directly addressing their needs. Technology can be used to set customized pricing goals and to segment customers effectively.

New Products and Pricing

Real-time data, in addition to third-party information, has a crucial impact on pricing. Only firms with robust and relevant competitor and customer analytics can take advantage of this. Marketers with a 720-degree view of the consumer across many channels will be able to pursue opportunities for new products and personalized pricing to support business growth.

Advance Customer Engagement

The first 360 degrees include an enterprise-wide and timely view of all consumer interactions with the firm. The other 360 degrees consists of the customer’s relevant online interactions, which supplements the internal data a company holds. The modern customer is making their buying decisions online, and it is where purchasing decisions are influenced. Can you predict a surge in demand before your competitors? A 720-degree view will help you anticipate trends while monitoring the current ones.

720-degree Customer View and Big Data

Firms are always trying to make decision-making as accurate as possible, and this is being made more accessible by Big Data and analytics. To deliver customer-centric experiences, businesses require a 720-degree view of every customer collected with the help of in-depth analysis.

Big Data analytical capabilities enable monitoring of after-sales service-associated processes and the effective management of technology for customer satisfaction. A firm invested in staying ahead of the curve should maintain relevant databases of external and internal data with global smart meters. Designing specific products for various segments is made easier with the use of Big Data analytics. The analytics will also improve asset utilization and fault prediction. Big Data helps a company maintain a clearly defined roadmap for growth.

Conclusion

It is the dream of every enterprise to tap into customer behavior and create a rich profile for each customer. The importance of personalized customer experiences cannot be overstated in the digital era. The objective remains to develop products that can be advertised and delivered to customers who want them, via their preferred platforms, and at a lower cost.

OLTP vs Data Warehousing

OLTP Versus Data Warehousing

I've tried to explain the difference between OLTP systems and a Data Warehouse to my managers many times, as I've worked at a hospital as a Data Warehouse manager/data analyst for many years. Why was the list that came from the operational applications different from the one that came from the Data Warehouse? Why couldn't I just get a list of the patients lying in the hospital right now from the Data Warehouse? So I explained, and explained again, and explained to another manager, and another. You get the picture.
In this article I will explain this very same thing to you, so you know how to explain it to your manager. Or, if you are a manager, you might understand what your data analyst can and cannot give you.

OLTP

OLTP stands for Online Transaction Processing. In other words: getting your data directly from the operational systems to make reports. An operational system is a system that is used for the day-to-day processes.
For example: when a patient checks in, his or her information gets entered into a Patient Information System. The doctor puts scheduled tests, a diagnosis, and a treatment plan in there as well. Doctors, nurses, and other people working with patients use this system on a daily basis to enter and get detailed information on their patients.
The data in operational systems is stored in a way that lets it be used efficiently by the people working directly on the product, or with the patient in this case.

Data Warehousing

A Data Warehouse is a big database that fills itself with data from operational systems. It is used solely for reporting and analytical purposes. No one uses this data for day-to-day operations. The beauty of a Data Warehouse is, among other things, that you can combine the data from the different operational systems. You can actually combine the number of patients in a department with the number of nurses, for example. You can see how far a doctor is behind schedule and find the cause of that by looking at the patients. Does he run late with elderly patients? Is there a particular diagnosis that takes more time? Or does he just oversleep a lot? You can use this information to look at the past and see trends, so you can plan for the future.

The difference between OLTP and Data Warehousing

This is how a Data Warehouse works:

The data gets entered into the operational systems. Then the ETL processes Extract this data from these systems, Transform the data so it will fit neatly into the Data Warehouse, and then Load it into the Data Warehouse. After that, reports are created with a reporting tool from the data that lies in the Data Warehouse.
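As a minimal, hedged sketch of the Load step (the schema, table, and column names below are made-up examples, not taken from any specific hospital system), an ETL job might take transformed staging rows and load them into a warehouse fact table like this:

INSERT INTO dw.fact_admission (patient_key, admission_date_key, department_key, length_of_stay_days)
SELECT p.patient_key,                      -- surrogate keys looked up in the dimension tables
       d.date_key,
       dep.department_key,
       s.length_of_stay_days               -- measure already computed during the Transform step
FROM   staging.admission s
JOIN   dw.dim_patient    p   ON p.patient_number    = s.patient_number
JOIN   dw.dim_date       d   ON d.calendar_date     = s.admission_date
JOIN   dw.dim_department dep ON dep.department_code = s.department_code;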

This is how OLTP works:

Reports are made directly from the data inside the database of the operational systems. Some operational systems come with their own reporting tool, but you can always use a standalone reporting tool to make reports from the operational databases.

Pros and Cons

Data Warehousing

Pros:

  • There is no strain on the operational systems during business hours
    • As you can schedule the ETL processes to run during the hours the least amount of people are using the operational system, you won’t disturb the operational processes. And when you need to run a large query, the operational systems won’t be affected, as you are working directly on the Data Warehouse database.
  • Data from different systems can be combined
    • It is possible to combine finance and productivity data for example. As the ETL process transforms the data so it can be combined.
  • Data is optimized for making queries and reports
    • You use different data in reports than you use on a day-to-day basis. A Data Warehouse is built for this. For instance: most Data Warehouses have a separate date table where the weekday, day, month, and year are saved. You can write a query to derive the weekday from a date, but that takes processing time. By using a separate table like this you'll save time and decrease the strain on the database (see the sketch after this list).
  • Data is saved longer than in the source systems
    • The source systems need to have their old records deleted when they are no longer used in the day-to-day operations, so those records get deleted to maintain performance.
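As a hedged sketch of the separate date table mentioned in the list above (all names are illustrative only, reusing the hypothetical fact table from the earlier ETL sketch):

CREATE TABLE dim_date (
    date_key       INTEGER PRIMARY KEY,  -- e.g., 20240115
    calendar_date  DATE NOT NULL,
    weekday_name   VARCHAR(10),
    day_of_month   SMALLINT,
    month_number   SMALLINT,
    year_number    SMALLINT
);

-- A report can then group on the pre-computed columns instead of deriving them from a date
SELECT d.weekday_name, COUNT(*) AS admissions
FROM   dw.fact_admission f
JOIN   dim_date d ON d.date_key = f.admission_date_key
GROUP BY d.weekday_name;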

Cons:

  • You always look at the past
    • A Data Warehouse is updated once a night, or even just once a week. That means that you never have the latest data. Staying with the hospital example: you never know exactly how many patients are in the hospital right now, or which surgeon didn't show up on time this morning.
  • You don’t have all the data
    • A Data Warehouse is built for discovering trends, showing the big picture. The little details, the ones not used in trends, get discarded during the ETL process.
  • Data isn’t the same as the data in the source systems
    • Because the data is older than the data in the source systems, it will always be a little different. But also because of the Transformation step in the ETL process, the data will be a little different. It doesn't mean one or the other is wrong; it's just a different way of looking at the data. For example: the Data Warehouse at the hospital excluded all transactions that were marked as cancelled. If you try to get the same reports from both systems, and don't exclude the cancelled transactions in the source system, you'll get different results.

Online Transaction Processing (OLTP)

Pros:

  • You get real time data
    • If someone is entering a new record now, you’ll see it right away in your report. No delays.
  • You’ve got all the details
    • You have access to all the details that the employees have entered into the system. No grouping, no skipping records, just all the raw data that’s available.

Cons:

  • You are putting strain on an application during business hours.
    • When you are making a large query, you can take processing space that would otherwise be available to the people that need to work with this system for their day to day operations. And if you make an error, by for instance forgetting to put a date filter on your query, you could even bring the system down so no one can use it anymore.
  • You can’t compare the data with data from other sources.
    • Even when the systems are similar, like an HR system and a payroll system that work with each other, the data is always going to be different, because it is granular at a different level, or not all data is relevant for both systems.
  • You don’t have access to old data
    • To keep the applications at peak performance, old data that's irrelevant to day-to-day operations is deleted.
  • Data is optimized to suit day to day operations
    • And not for report making. This means you’ll have to get creative with your queries to get the data you need.

So what method should you use?

That all depends on what you need at that moment. If you need detailed information about things that are happening now, you should use OLTP.
If you are looking for trends, or insights on a higher level, you should use a Data Warehouse.


Databases – Database Isolation Level Cross Reference

Database And Tables

 

Here is a quick reference of some common database and/or connection types that use connection-level isolation, along with their equivalent isolation levels. It may prove useful as a job aid when working with, and making decisions about, isolation levels.

Database isolation levels

For each data source below, the equivalent isolation levels are listed in order from most restrictive to least restrictive.

  • Amazon SimpleDB: Serializable, Repeatable read, Read committed, Read Uncommitted
  • dashDB: Repeatable read, Read stability, Cursor stability, Uncommitted read
  • DB2® family of products: Repeatable read, Read stability*, Cursor stability, Uncommitted read
  • Informix®: Repeatable read, Repeatable read, Cursor stability, Dirty read
  • JDBC: Serializable, Repeatable read, Read committed, Read Uncommitted
  • MariaDB: Serializable, Repeatable read, Read committed, Read Uncommitted
  • Microsoft SQL Server: Serializable, Repeatable read, Read committed, Read Uncommitted
  • MySQL: Serializable, Repeatable read, Read committed, Read Uncommitted
  • ODBC: Serializable, Repeatable read, Read committed, Read Uncommitted
  • Oracle: Serializable, Serializable, Read committed, Read committed
  • PostgreSQL: Serializable, Repeatable read, Read committed, Read committed
  • Sybase: Level 3, Level 3, Level 1, Level 0
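As a hedged example of how a session or transaction isolation level is typically requested in SQL (the exact statement and the supported level names vary by database, as the list above shows):

-- Standard SQL form, accepted (with minor variations) by SQL Server, MySQL, and MariaDB
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- PostgreSQL can also set the level when the transaction starts
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;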

 


What are the dashDB isolation levels?

dashDB

 

Isolation levels are part of the ACID (Atomicity, Consistency, Isolation, Durability) paradigm in database control. Isolation levels allow developers and users to trade off consistency for a potential gain in performance. Therefore, it is essential to understand them and how they apply in Structured Query Language (SQL). The dashDB RDBMS has four isolation levels:

Repeatable Read (RR)

  • The repeatable read (RR) isolation level locks all the rows that an application references during a unit of work (UOW). If an application issues a SELECT statement twice within the same unit of work, the same result is returned each time. Under RR, lost updates, access to uncommitted data, non-repeatable reads, and phantom reads are not possible.
  • Under RR, an application can retrieve and operate on the rows as many times as necessary until the UOW completes. However, no other application can update, delete, or insert a row that would affect the result set until the UOW completes. Applications running under the RR isolation level cannot see the uncommitted changes of other applications. This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • Every referenced row is locked, not just the rows that are retrieved. For example, if you scan 20 000 rows and apply predicates to them, locks are held on all 20 000 rows, even if, say, only 200 rows qualify. Another application cannot insert or update a row that would be added to the list of rows referenced by a query if that query were to be executed again. This prevents phantom reads.
  • Because RR can acquire a considerable number of locks, this number might exceed limits specified by the locklist and maxlocks database configuration parameters. To avoid lock escalation, the optimizer might elect to acquire a single table-level lock for an index scan, if it appears that lock escalation is likely. If you do not want table-level locking, use the read stability isolation level.
  • While evaluating referential constraints, dashDB might occasionally upgrade the isolation level used on scans of the foreign table to RR, regardless of the isolation level that was previously set by the user. This results in additional locks being held until commit time, which increases the likelihood of a deadlock or a lock timeout. To avoid these problems, create an index that contains only the foreign key columns, which the referential integrity scan can use instead.

Read Stability (RS)

  • The read stability (RS) isolation level locks only those rows that an application retrieves during a unit of work. RS ensures that any qualifying row read during a UOW cannot be changed by other application processes until the UOW completes, and that any change to a row made by another application process cannot be read until the change is committed by that process. Under RS, access to uncommitted data and non-repeatable reads are not possible. However, phantom reads are possible. Phantom reads might also be introduced by concurrent updates to rows where the old value did not satisfy the search condition of the original application but the new updated value does.
  • For example, a phantom row can occur in the following situation:
    • Application process P1 reads the set of rows n that satisfy some search condition.
    • Application process P2 then inserts one or more rows that satisfy the search condition and commits those new inserts.
    • P1 reads the set of rows again with the same search condition and obtains both the original rows and the rows inserted by P2.
  • In a dashDB environment, an application running at this isolation level might reject a previously committed row value, if the row is updated concurrently on a different member. To override this behavior, specify the WAIT_FOR_OUTCOME option.
  • This isolation level ensures that all returned data remains unchanged until the time the application sees the data, even when temporary tables or row blocking is used.
  • The RS isolation level provides both a high degree of concurrency and a stable view of the data. To that end, the optimizer ensures that table-level locks are not obtained until lock escalation occurs.
  • The RS isolation level is suitable for an application that:
    • Operates in a concurrent environment
    • Requires qualifying rows to remain stable for the duration of a unit of work
    • Does not issue the same query more than once during a unit of work, or does not require the same result set when a query is issued more than once during a unit of work

Cursor Stability (CS)

  • The cursor stability (CS) isolation level locks any row being accessed during a transaction while the cursor is positioned on that row. This lock remains in effect until the next row is fetched or the transaction terminates. However, if any data in the row was changed, the lock is held until the change is committed.
  • Under this isolation level, no other application can update or delete a row while an updatable cursor is positioned on that row. Under CS, access to the uncommitted data of other applications is not possible. However, non-repeatable reads and phantom reads are possible.
  • Cursor Stability (CS) is the default isolation level.
  • Cursor Stability (CS) is suitable when you want maximum concurrency and need to see only committed data.
  • In a dashDB environment, an application running at this isolation level may return or reject a previously committed row value, if the row is concurrently updated on a different member. The WAIT FOR OUTCOME option of the concurrent access resolution setting can be used to override this behavior.

Uncommitted Read (UR)

  • The uncommitted read (UR) isolation level allows an application to access the uncommitted changes of other transactions. Moreover, UR does not prevent another application from accessing a row that is being read, unless that application is attempting to alter or drop the table.
  • Under UR, access to uncommitted data, non-repeatable reads, and phantom reads are possible. This isolation level is suitable if you run queries against read-only tables, or if you issue SELECT statements only, and seeing data that has not been committed by other applications is not a problem.
  • Uncommitted Read (UR) works differently for read-only and updatable cursors.
  • Read-only cursors can access most of the uncommitted changes of other transactions.
  • Tables, views, and indexes that are being created or dropped by other transactions are not available while the transaction is processing. Any other changes by other transactions can be read before they are committed or rolled back. Updatable cursors operating under UR behave as though the isolation level were CS.
  • If an uncommitted read application uses ambiguous cursors, it might use the CS isolation level when it runs. To prevent this escalation, modify the cursors in the application program to be unambiguous and/or change the SELECT statements to include the FOR READ ONLY clause.
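Because dashDB uses the DB2 SQL dialect, an individual statement can also request its own isolation level with an isolation clause; here is a small, hedged sketch (the table name is hypothetical):

SELECT ORDER_ID, ORDER_STATUS
FROM ORDERS
FOR READ ONLY
WITH UR;    -- run just this statement at Uncommitted Read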

 

Related References

IBM dashDB

Accessing remote data sources with fluid queries on dashDB Local, Developing for federation