Db2 for z/OS – Thirty-Five and Still Hip

Db2 for z/OS is one of the most business-critical products in the IBM portfolio and remains core to many transaction processing, advanced analytics, and machine learning initiatives.

Looking back…

Over the past 35 years Db2 has been on an exciting and transformational journey. Retired IBM Fellow Edgar F. Codd published his famous paper, “A Relational Model of Data for Large Shared Data Banks,” in 1970. From that, “SEQUEL” – later renamed SQL – was born.

Db2 launched in 1983 on MVS, but Don Haderle (retired IBM Fellow and considered to be the “father of Db2”) views 1988 as a seminal point in its development, when DB2 Version 2 proved it was viable for online transaction processing (OLTP), the lifeblood of business computing at the time.

Thus was born a single database and the relational model for transactions and business intelligence.

Success on the mainframe led to ports to open-systems platforms such as UNIX and Linux, on both IBM and non-IBM hardware.

Db2 helped position IBM as an overall solution provider of hardware, software and services. Its early success, coupled with IBM WebSphere in the 1990s, put it in the spotlight as the database system for several Olympic Games – Barcelona 1992, Atlanta 1996 and the 1998 Winter Olympics in Nagano. Performance was critical: any failure or delay would be visible to viewers and the world’s press as they waited for event scores to appear.


Mainframes continue to store some of the world’s most valued data. The platform is capable of 110,000 million instructions per second (MIPS), which (doing the math) translates into a theoretical 9.5 quadrillion instructions per day. With such high-value data, some of which holds highly sensitive financial and personal information, the mainframe becomes a potential target for cyber-criminals. Thankfully, the IBM Z platform is designed to be one of the most securable platforms. Another key capability is the integrity of the z/OS system and IBM’s commitment to resolve any integrity-related issues.
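Doing that math explicitly (assuming the 110,000 MIPS figure is sustained around the clock):

```python
# Back-of-the-envelope check of the instruction-rate claim above.
mips = 110_000                    # million instructions per second
per_second = mips * 1_000_000     # 1.1e11 instructions per second
per_day = per_second * 86_400     # 86,400 seconds in a day
print(f"{per_day:.3e}")           # ~9.5e15 instructions per day
```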

Db2 for z/OS is a strong foundation for the IBM Z analytics portfolio, with the latest iteration, Version 12, providing enhanced performance over the previous version. Db2 leverages the reliability, availability and serviceability capabilities of the IBM Z platform, which delivers five-nines (99.999 percent) availability – near-continuous access to data.

Advanced in-memory techniques result in fast transaction execution with less CPU, making Db2 an in-memory database. Rich in security, resiliency, simplified management and analytics functionality, Db2 continues to provide a strong foundation to help deliver insight to the right users, at the right time.

The ability to ingest hundreds of thousands of rows each second is critical for more and more applications, particularly in mobile computing and the Internet of Things (IoT), where website clicks, call data records for mobile network carriers, and events generated by “smart meters” and other embedded devices can all produce huge volumes of transactions.

Many consider a NoSQL database essential for high data ingestion rates. Db2 12, however, allows for very high insert rates without having to partition or shard the database — all while being able to query the data using standard SQL with Atomicity, Consistency, Isolation, Durability (ACID) compliance.
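The ingest pattern itself is plain SQL. As an illustrative sketch – using Python’s built-in SQLite purely as a stand-in for Db2, since the point is the pattern, not the engine – batched inserts inside a single transaction stay fast while remaining ACID and queryable with standard SQL:

```python
import sqlite3

# Illustrative only: SQLite stands in for Db2 here. The point is that
# high-rate ingest can still use plain SQL inside ACID transactions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meter_events (device_id INTEGER, reading REAL)")

# Batch the inserts and commit once: a single transaction amortizes the
# per-row overhead, while atomicity guarantees all-or-nothing ingest.
batch = [(device, device * 0.5) for device in range(100_000)]
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.executemany("INSERT INTO meter_events VALUES (?, ?)", batch)

count = conn.execute("SELECT COUNT(*) FROM meter_events").fetchone()[0]
print(count)  # 100000
```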

In 2016 Db2 for z/OS moved to a continuous delivery model that delivers new capabilities and enhancements through the service stream in weeks, and sometimes days, instead of multi-year release cycles. This delivers greater agility while maintaining the quality, reliability, stability and security its customer base demands.

We also enhance performance with every release, now providing millions of inserts per second, trillions of rows in a single table, staggering CPU reductions – the list goes on.

Db2 for z/OS is the data server at the heart of many of today’s data warehouses, powering IBM analytics solutions such as Cognos, SPSS, QMF, Machine Learning for z/OS, the IBM Db2 Analytics Accelerator and more. In short, Db2 creates a sense of “data gravity”: its high value prompts organizations to co-locate their analytics solutions with their data, which removes unnecessary network and infrastructure latencies and helps reduce security vulnerabilities. The sheer volume and velocity of transactions, the richness of data in each transaction, and the data in log files are a potential gold mine for machine learning and AI applications to do smarter work, more intelligently and more securely. And so Machine Learning for z/OS was released – built on open-source technology and leveraging the latest innovations, while making any perceived complexities of the platform transparent to data scientists through the IBM Data Science Experience interface.


The future is hybrid cloud. Customers will always need on-premises data and applications, but the move to cloud (public or private) is in high demand. We see the opportunity to help customers reduce capital and management costs so they can focus on using their data and advanced analytics to create business advantage, while we provide a dynamic, elastic scale-out infrastructure in the cloud from any of our data centers around the world. Cloud-enabling applications and middleware such as Db2 for z/OS also helps clients rapidly provision new services and instances on demand – again, for both public and private clouds.

To the end user, the processing platform is (and should be) transparent, just as it is transparent to the applications that connect to or through Db2 for z/OS.

We recognize the draw of cloud — and how fast it’s changing. It’s why this DBMS offering continues to leverage a continuous delivery model to speed this transformational journey.

Our “One Team” approach has made this work possible. Many talented people participate, but some of the key players driving the effort are IBM Fellow Namik Hrle and Distinguished Engineers Jeff Josten and John Campbell.

Your next move…

To stay connected to what’s happening next for Db2 for z/OS, I encourage you to check in regularly at ibm.com/analytics/db2/zos and at the World of DB2.

Dinesh Nirmal,
VP IBM Analytics Development
Follow me on Twitter @DineshNirmalIBM

Should’ve, Could’ve, Would’ve – Making the Optimal Decision.

In a previous blog I talked about the value of machine learning and how it could help organizations make smarter predictions by continually learning and adapting models as it consumed new interactions, transactions and data. I compared that to how my son embraced learning about the world around him to become gradually smarter and more knowledgeable. But that doesn’t always mean he is going to make the best decision, because he may not have all the information or be able to foresee or correlate past events.

So, I guess there are times when we wonder – or get asked by others – whether the decision we just made was the best possible one. Think of your last hi-tech or car purchase. It’s often difficult to judge at the time whether it was the best choice, because so many parameters are involved, including (but not limited to) logic, price, value, fit to our needs and wants, emotion and politics. How many times have you sought to justify that purchase immediately afterward by doing even more research on reviews by others? Don’t worry – it’s a natural human reaction known as post-purchase cognitive dissonance.

The capability to prescribe the best decision is called prescriptive analytics, and it relates to descriptive and predictive analytics as shown in Figure 1 below.


Figure 1: Descriptive, predictive and prescriptive analytics relationship.

Making the optimal decision at a business or boardroom level is even more complex and exasperating, involving many people and large sums of money. Every decision needs to balance risk with cost and benefit. Quite often there are conflicts of interest, bias and power struggles involving political and emotional agendas, which can result in suboptimal decisions for the business. Oh, the frailties and flaws of humankind! The crux of the matter is that there are far too many parameters, data points, correlations and patterns for us humans to take in to make the optimal decision for every transaction and interaction. So there are times we need these capabilities to augment and balance our own judgments. The decisions range from a simple purchase, to where best to distribute warehouse stock, to where to place emergency services ahead of a pending disaster, to financial and risk-based choices, through to decisions that ultimately help preserve or save lives in the medical field.

Decision Optimization at every transaction.

IBM has been providing decision optimization for many years as part of the business logic within applications, with products such as IBM Decision Optimization and its market-leading CPLEX Optimizer engines. Together these offerings provide a collaborative environment for key personas to deliver powerful applications that help line-of-business users make better and faster decisions in planning, scheduling and resource assignment. In short, the aim of Decision Optimization is to remove the guesswork from decision-making by “prescribing” and automating the best decision for you. And to put your mind at rest: the business user or planner can still interact with and change that decision – it is a recommendation, and the end user still owns the choice of whether it is operationalized.
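To make the idea concrete, here is a toy sketch of the kind of assignment decision such engines automate. The 3×3 cost matrix is hypothetical, and the brute-force search stands in for the mathematical programming CPLEX uses at industrial scale:

```python
from itertools import permutations

# Toy assignment problem: cost[w][t] is the (made-up) cost of worker w
# doing task t. A real solver handles millions of variables; brute force
# over permutations is only feasible for tiny instances like this one.
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]

def best_assignment(cost):
    """Return (min_cost, tasks) where tasks[w] is worker w's task."""
    n = len(cost)
    best = (float("inf"), None)
    for tasks in permutations(range(n)):
        total = sum(cost[w][t] for w, t in enumerate(tasks))
        best = min(best, (total, tasks))
    return best

total, plan = best_assignment(cost)
print(total, plan)  # the prescribed (cheapest) assignment
```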

Three key personas are involved in building a decision optimization solution as part of the optimization application development cycle (Figure 2): the business analyst, the operations research (OR) expert and the application developer – plus the line-of-business (LOB) consumer, of course.


Figure 2: Optimization Application Development Cycle.

Together these personas have a broad range of responsibilities and needs such as:

  • Create optimization models and algorithms.
  • Solve models with the IBM CPLEX Optimizers.
  • Analyze results.
  • Use REST APIs to embed optimization in decision-making applications.
  • Collaborate in a development environment to build and deploy enterprise-scale optimization-based applications.
  • Visualize trade-offs across multiple plans, scenarios and KPIs.
  • Optimize costs versus robustness.
  • Recommend decisions hedged against data uncertainty.

You don’t have to be an Operations Research Expert to use IBM Decision Optimization.

The goal is for decision optimization to be consumable, making this powerful technology available to decision makers who are not necessarily trained in mathematical modeling. Business managers need to be able to use their own business language to define a decision model. To support this, the IBM Decision Optimization R&D team created a proof of concept for “Cognitive Optimization”. Its goal is to let business users directly create and work with decision optimization models, without requiring the intervention of a mathematical whiz. It uses a combination of the business data and the business user’s input, in natural language, to figure out the business intent, suggest potential decision models, and then use those models for “what-if” and trade-off analysis.

Similar to what was announced for the Watson Machine Learning service, Optimization as-a-Service will also be available as part of the Data Science Experience (DSX), in addition to being a standalone cloud, hybrid and on-premises offering, so the personas mentioned above can experience decision optimization as part of a prescriptive analytics solution. The high-level flow of Optimization as-a-Service looks very similar to machine learning as-a-service, as shown in Figure 3, and the two share many commonalities – which makes DSX an ideal collaborative environment for machine learning and operations research practitioners.


Figure 3: Optimization-as-a-Service high level workflow

Clear business benefits: reduced time, risk and cost.

In summary, the combination of IBM Watson Machine Learning and Optimization as-a-Service can help organizations across every industry generate progressively smarter insights and act on optimized decisions, while removing the human frailties of emotional, political and personal bias.

Organizations are already experiencing the benefits. For example, a global tire manufacturer uses IBM decision optimization solutions to help:

  • Optimize long-term production planning, considering up to 10 million constraints across all products and plants
  • Predict when major machine bottlenecks are likely to occur, enabling staff to take corrective action early on
  • Drive smarter decision-making with 30 times more what-if scenarios evaluated
  • Save up to 30% of planners’ time, allowing them to focus on value-add initiatives

Your Optimal Decision

Over the next few months this is going to be an exciting space to watch as decision optimization moves forward with three key design principles – simplicity, collaboration and convergence of the technologies – helping more organizations complete their cognitive journey. So what do YOU do next? What’s YOUR next best decision? Sign up for Optimization as-a-Service as part of the Data Science Experience here.

Dinesh Nirmal,  

Vice President, Analytics Development

Follow me on Twitter @DineshNirmalIBM

Business Differentiation through Machine Learning.

I look at my child and marvel as he embraces the fast-moving world around him, adapting to new experiences, grasping technology, absorbing a bombardment of information from so many sources. It’s staggering to watch his progress: from accepting the facts he is taught, to augmenting those facts with his own knowledge, to asking questions and using that knowledge to express his opinions and values to others, to challenging facts and hypotheses he once accepted and adapting his knowledge, understanding, decisions and value system. And then he starts reasoning with his parents! His brain is like a sponge, using his senses of sight, hearing, touch, smell and taste to absorb information, experiences and emotions, constantly connecting new events and situations with those from the past – pushing boundaries and testing us.

Watching my son learn was fascinating but quite frankly businesses can’t wait years to learn about their data and how best to act on it.


From Business Intelligence to the Intelligent Business.

The business world is complex, with many fast-moving dynamics. We see only the tip of the iceberg of the total information that exists at any point in time, and that information grows exponentially. Nor are we able to fully harness the collective thoughts or full knowledge of the people around us. This is where cognitive technologies, such as the many forms of machine learning (ML), can be a huge advantage to business people.

Simply put, machine learning is the capability of computers to learn without being explicitly programmed.

Imagine systems that have total information awareness: “nodes” on premises and in the cloud that connect and learn about each other like synapses in the brain. Harnessing the collective consciousness of these “machines” (don’t restrict your thoughts to hardware here) and applying intelligence to advise on the optimal decisions, resulting in the optimal outcomes, is what every organization would want. I think I just defined the killer app for business.

What does it look like? Well, I envisage it as the graphic below – a cycle, or workflow, of constant learning.


Figure 1: Machine learning – a cycle of constant learning
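The cycle in Figure 1 – predict, observe the outcome, measure the error, adapt – can be sketched as a tiny online learner. The synthetic data stream and learning rate below are invented purely for illustration:

```python
# Sketch of the "cycle of constant learning": predict, observe the true
# outcome, measure the error, update the model, repeat on the next event.

def stream(n):
    """Synthetic event stream whose hidden relationship is y = 3*x + 2."""
    for i in range(n):
        x = i % 10
        yield x, 3 * x + 2

w, b = 0.0, 0.0          # the model starts knowing nothing
lr = 0.01                # learning rate (chosen for the example)
for x, y in stream(5000):
    pred = w * x + b     # 1. predict
    err = pred - y       # 2. compare with the observed outcome
    w -= lr * err * x    # 3. adapt the model...
    b -= lr * err        # ...and continue learning on the next event

print(round(w, 2), round(b, 2))  # converges toward the hidden 3 and 2
```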


What’s out there today?

The good news is that many of these capabilities exist in IBM solutions today, with much more in our research labs yet to emerge and potentially change the industry.

So how do you get started? There are four main states when considering knowledge: you know what you know, you know what you don’t know, you don’t know what you know, and you don’t know what you don’t know. And it all starts with your data. We have solutions that can catalogue the information you have, discover information you didn’t know you had, and help identify information that is missing or untrustworthy. These help shrink the “don’t know what you don’t know” category.

Next you need capabilities that can ingest information wherever it is, without having to move it. From this data we can apply machine learning (ML) to discover relationships, rationalize, predict what might happen next and become genuinely knowledgeable about the data and all previous outcomes. This can only be achieved through continuous learning and adaptation. (There are further steps around optimizing that learning, which can be read here.) Many vendors claim to provide one form of ML capability or another, and while that may be interesting in its own right, on its own it has minimal value. Combining all forms of learning and analytics takes you closer to the bigger picture of “cognitive”, in which IBM strives to be a leader. I’ll expand a little more on cognitive later in this blog.


From Tic-Tac-Toe to Jeopardy, Crime Prevention, Cancer Treatment and more.

In my early student days I learned how to code simple recursive algorithms, using rote learning, to produce a game of Tic-Tac-Toe that could not be beaten after five or six games. It did this by minimizing its losses – avoiding moves that would lead to defeat. In 2011 “Watson” competed on Jeopardy and beat the top contestants. It had access to 200 million pages of structured and unstructured content, consuming terabytes of disk storage and including the full text of Wikipedia, but was not connected to the Internet during the game. For each clue, Watson’s three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game’s signaling device, but had trouble in a few categories, notably those with short clues containing only a few words – an ambiguity problem much like the ones humans face with language. Its natural language processing, predictive scoring and models were key to its success.
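For the curious, the “never lose” Tic-Tac-Toe behavior described above can be reached directly with exhaustive minimax search rather than rote learning – a sketch, not the program I wrote back then:

```python
# Minimax sketch of an unbeatable Tic-Tac-Toe player.
# Board: list of 9 cells holding 'X', 'O' or ' '.

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    win = winner(board)
    if win:
        return (1 if win == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)   # opponent's best reply
        board[m] = ' '
        best = max(best, (-score, m))      # our score is the negation
    return best

# X has two in a row (cells 0 and 1); the search finds the winning cell 2.
board = list("XX OO    ")
score, move = minimax(board, 'X')
print(score, move)
```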

In February 2013, IBM, health insurer WellPoint Inc. (now Anthem Inc.) and Memorial Sloan Kettering Cancer Center announced the first commercially developed Watson-based cognitive computing technology, implemented to provide utilization management recommendations to physicians treating lung cancer. At the time of writing this blog, nearly 43,000 organizations have registered to use the IBM Watson Healthcare Analytics Platform. (1)


Machine Learning 101

I am often asked, “How confident are you in that decision?” I used to base my answer on the strength of the information I had – some of it intuition, gut feel and life experience. Now I can scientifically put a figure on it – as precise as the underlying information allows, of course. In fact, well-established IBM analytics products have been using predictive models and scoring algorithms (included in Watson Analytics) across many industries for years to better manage risk and identify potential fraud (I’ll tell you about my credit card experience while attempting to rent a vehicle in a later blog). In 2015 IBM donated SystemML to the Apache™ Software Foundation. SystemML is a flexible machine learning system designed to auto-scale on Spark and Hadoop® clusters, extending the core machine learning in the Apache Spark™ MLlib libraries. We also have machine learning in many other forms available as-a-service, including but not limited to Natural Language Classifier, Retrieve and Rank, and AlchemyVision, which we will describe in more detail later. Below is a diagram of how machine learning is implemented as a generic model.


Figure 2: A generic machine learning model


Beyond Traditional ML – Learning through Senses.

As mentioned earlier, IBM has many other forms of machine learning technology that help differentiate our cognitive capabilities from other vendors’. Using vision, language and other ML technologies begins to more closely simulate the human behaviors of understanding, reasoning and learning. Below are some of the key ML technologies being used by organizations around the world.

AlchemyVision is an API that can analyze an image and return the objects, people and text found within it. AlchemyVision can enhance the way businesses make decisions by integrating image cognition. Organizations across a variety of industries, from publishing and advertising to eCommerce and enterprise search, can make images part of the big data analytics behind critical business decisions: better targeting ads, organizing image libraries, improving the consumer experience, monitoring brands, profiling target markets and improving research. The Tabelog case study is particularly interesting: over 40,000 foodies visit the Tabelog site, confident that it will provide accurate and reliable recommendations, and more than 200,000 registered restaurants use the site to help brand and promote their establishments. Try it now

Natural Language Classifier is a service that enables developers without a background in machine learning or statistical algorithms to create natural language interfaces for their applications. The service interprets the intent behind text and returns a corresponding classification with associated confidence levels. The return value can then be used to trigger a corresponding action, such as redirecting the request or answering a question. The Natural Language Classifier is tuned and tailored to short text (1000 characters or less) and can be trained to function in any domain or application. Typical usage scenarios are:

  • Tackle common questions from your users that are typically handled by a live agent.
  • Classify SMS texts as personal, work or promotional.
  • Classify tweets into a set of classes, such as events, news or opinions.

Based on the response from the service, an application can control the outcome for the user – for example, start another application, respond with an answer, begin a dialog, or any number of other possible outcomes.

You can try it out by clicking here.
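The shape of what such a classifier returns – a ranked list of classes with confidences – can be sketched with a toy word-overlap model. The training pairs, classes and scoring below are all invented; the real service trains its model through its API, not like this:

```python
from collections import Counter

# Hypothetical training pairs of (short text, class label).
TRAINING = [
    ("when does my flight leave", "travel"),
    ("book me a hotel room", "travel"),
    ("what is my account balance", "banking"),
    ("transfer money to savings", "banking"),
]

def train(pairs):
    """Count the words seen with each class label."""
    model = {}
    for text, label in pairs:
        model.setdefault(label, Counter()).update(text.split())
    return model

def classify(model, text):
    """Score each class by word overlap, normalized into confidences."""
    words = set(text.split())
    scores = {label: 1 + sum(counts[w] for w in words)
              for label, counts in model.items()}
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(label, score / total) for label, score in ranked]

model = train(TRAINING)
result = classify(model, "check my account balance")
print(result)  # ranked (class, confidence) pairs; "banking" on top
```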

Retrieve and Rank is a service that helps users find the most relevant information for their query, using a combination of search and machine learning algorithms to detect “signals” in the data. Built on top of Apache Solr, the service lets developers load their data, train a machine learning model on known relevant results, then use that model to return improved results to end users based on their questions or queries. Retrieve and Rank can be applied to many information retrieval scenarios: an experienced technician going onsite who needs help troubleshooting a problem, a contact center agent who needs assistance with an incoming customer issue, or a project manager finding domain experts in a professional services organization to build out a project team. You can try it out here.
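A minimal sketch of the retrieve-then-rank pattern follows. The documents, query and hand-weighted “ranking model” are all invented; the real service pairs Apache Solr retrieval with a trained ranker:

```python
# Stage 1 pulls cheap candidates; stage 2 reorders them with a richer score.
DOCS = {
    "d1": "reset the router by holding the power button",
    "d2": "router configuration guide for the power user",
    "d3": "how to bake bread at home",
}

def retrieve(query, docs, k=2):
    """Stage 1: keep the k docs with the most query-term overlap."""
    q = set(query.split())
    scored = sorted(docs, key=lambda d: -len(q & set(docs[d].split())))
    return scored[:k]

def rerank(query, candidates, docs):
    """Stage 2: reorder candidates; here a toy bonus for exact phrase hits
    stands in for a trained ranking model."""
    def score(d):
        overlap = len(set(query.split()) & set(docs[d].split()))
        phrase_bonus = 2.0 if query in docs[d] else 0.0
        return overlap + phrase_bonus
    return sorted(candidates, key=score, reverse=True)

query = "reset the router"
candidates = retrieve(query, DOCS)
ranked = rerank(query, candidates, DOCS)
print(ranked)  # most relevant document first
```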


From Crystal Ball Predictions to Prescriptive Actions.

Predicting a hurricane, recognizing images and video, or understanding the importance of a particular piece of information is just a first step. You then need to take prescriptive action on that understanding – for example, to survive the hurricane, or to capture a business opportunity that may last only a short time.

IBM has the capability not only to provide an advanced and wide range of machine learning functions, but also to act on them in the “right time” – for example, predicting whether a trade or bank transaction is legitimate or fraudulent 15,000 times a second. In business, speed is of the essence.


Cognitive + Cloud = Optimal Business Outcomes

So it seems that, given complete and accurate data, machine learning can be smarter and faster than humans in certain situations. Combining the wide range of advanced machine learning capabilities described above with the ability to act prescriptively on what has been learned – using IBM’s full cognitive analytics capabilities – can yield many opportunities to out-maneuver your competition, act with confidence and help your organization become the optimal Intelligent Business. Cloud is key here, because it has the potential for everyone and everything (including data) to be interconnected.

Machine learning can help enable cognitive systems to learn, reason and engage with us in a more natural and personalized way. These systems will get smarter and more customized through interactions with data, devices and people. They will help us take on what may have been seen as unsolvable problems by using all the information that surrounds us and bringing the right insight or suggestion to our fingertips right when it’s most needed. Over the next five years, machine learning applications could lead to new breakthroughs to help amplify human abilities, assist us in making good choices, look out for us and help us navigate our world in powerful new ways.

In summary, machine learning in all its forms has the potential to bring the collective knowledge and consciousness of humans and machines together to help make the world a better, safer place.

In the following months the team and I will be taking you on a journey exploring many more aspects of IBM’s cognitive and learning capabilities.

Of course you won’t be able to predict where I’m taking you next.   🙂

For more information on IBM’s cognitive and machine learning capabilities, visit ibm.com/outthink


Dinesh Nirmal,

Vice President, Development, Next Generation Platforms, Big Data & Analytics

Follow me on Twitter @DineshNirmalIBM


Jean Francois Puget (PhD),

Distinguished Engineer, Chief Architect IBM Analytics Solutions

Follow me on Twitter @JFPuget


Footnotes

(1) IBM Watson Analytics Team, based on the number of registrations as of June 27, 2016

TRADEMARK DISCLAIMER: Apache, Apache Hadoop, Hadoop, Apache Spark, Spark are trademarks of The Apache Software Foundation.