The 5 Attributes Of Useful AI, According To IBM

As people become dependent on computers to do their jobs, an increasing amount of data is generated. In principle, that data should be an adequate resource for training AI. But there is a problem: as the world changes, so does the data.

Building an AI or machine learning model means building a way for us to look at the world we're in. But as the world and the data change, AI models need to adapt.

There is a gap between a great AI prototype and AI in operation. And according to IBM, building a great AI model is just the first step in creating a useful AI.

An AI model on its own is too brittle for the real world. For it to keep working over time, it needs to live inside a larger system that is fluid and adaptable to change.

First of all, there are some key components that AI should have:

- Capable of dealing with voluminous amounts of data;
- adaptive to its environment through machine learning;
- reactive to conditions, making decisions as they arise;
- forward-looking, anticipating possible scenarios;
- concurrent, handling multiple people or systems simultaneously.

If an AI has all of the above, it then needs the following attributes to be considered useful:

1. Managed

To do real and lasting work, AI and machine learning need to be managed thoughtfully, durably, and transparently.

For example, identifying and correcting issues with bad or missing data requires integration with controlled data sources. Each model version should record its inputs, because regulators need to know them in order to manage the system.
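As a minimal sketch of what "managed" can mean in practice (the field names and structure here are illustrative, not an IBM prescription), a pipeline can flag and correct records with missing data, and tag every trained model with a record of the inputs that produced it:

```python
from dataclasses import dataclass, field

def clean_records(records, default_amount=0.0):
    """Flag and correct records with bad or missing data.

    Missing amounts are filled with a default, and each record is
    marked so downstream consumers know whether it was corrected.
    """
    cleaned = []
    for rec in records:
        rec = dict(rec)  # don't mutate the caller's data
        if rec.get("amount") is None:
            rec["amount"] = default_amount
            rec["corrected"] = True
        else:
            rec["corrected"] = False
        cleaned.append(rec)
    return cleaned

@dataclass
class ModelVersion:
    """Records which inputs produced a given model version,
    so its lineage can be reported to auditors or regulators."""
    version: str
    training_data: str          # hypothetical dataset identifier
    features: list = field(default_factory=list)

records = [{"amount": 12.5}, {"amount": None}]
cleaned = clean_records(records)
v1 = ModelVersion("1.0", "transactions_2023Q4.csv", ["amount"])
```

The point is not the specific fields but the discipline: every correction is visible, and every model version can answer the question "what data went into you?"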

2. Resilient

For an AI to adapt to the real world, it needs to be resilient to change. If it ever falls out of sync, researchers should provide equivalent data for testing, and run that testing frequently without burning unnecessary time.

The system should expose thresholds that researchers can set, so it automatically alerts them when its models need attention.
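One way to picture such a threshold (a sketch of the idea, not a specific IBM mechanism): track the model's recent accuracy against a floor the researchers choose, and raise an alert when performance drops below it.

```python
def needs_attention(recent_outcomes, threshold=0.9):
    """Return True when the model's recent accuracy falls below
    the researcher-set threshold.

    recent_outcomes: list of booleans, True where the model's
    prediction matched the ground truth once it became known.
    """
    if not recent_outcomes:
        return False  # nothing to judge yet
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy < threshold

# A window where 7 of 10 predictions were right: 0.7 < 0.9, so alert.
alert = needs_attention([True] * 7 + [False] * 3, threshold=0.9)
```

In a real system the alert would feed a dashboard or a pager rather than a boolean, but the shape is the same: a measurable signal compared against a threshold the team controls.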

Then comes the decision: should the AI be retrained on old data sets? Should new data be gathered? Or should the model be re-engineered from scratch? The answer depends on the data and on the model itself.

3. Performant

To learn anything, AI needs a lot of computational resources.

Once deployed, models need to score transactions in mere milliseconds, not seconds or minutes. This is where performance and efficiency matter most. Without that level of performance, fraud slips through and fleeting opportunities are missed.

Ideally, there should be enough GPUs for training and high-performance CPUs for deployment, with memory to spare.
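To make the millisecond requirement concrete, here is a hedged sketch of timing a model's per-transaction scoring latency (the scoring function is a trivial stand-in, not a real fraud model):

```python
import time

def score(transaction):
    """Stand-in scoring function; a real model would run inference here."""
    return 1.0 if transaction["amount"] > 1000 else 0.0

def mean_latency_ms(transactions, scorer, repeats=100):
    """Average wall-clock scoring time per transaction, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        for tx in transactions:
            scorer(tx)
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(transactions)) * 1000.0

txs = [{"amount": a} for a in (10, 2000, 500)]
latency = mean_latency_ms(txs, score)
```

Measuring this number routinely, under production-like load, is what tells a team whether "performant" is actually true rather than assumed.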

And an AI should not only be performant and efficient; it also needs to be error-free, regardless of where it is deployed.

4. Measurable

As the world changes, the data follows, and budget allocation has to be managed accordingly.

At the start of a project there may be a generous budget, intended to fund improvements in data access and data volume, improvements in model accuracy, and ultimately improvements to the bottom line. But if the data science team can't deliver concrete results, that budget dries up quickly.

Here, knowledge of the data is required. Don't just think about what needs to be measured right now; think about the future measurements that will matter as the data science work matures. Is the system fluid enough to adapt?
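One hedged way to keep results concrete (the metric and figures below are invented for illustration, not from IBM): log the improvement each model release delivers over an agreed baseline, so there is always a measurable number to show when budget questions come up.

```python
def improvement_report(baseline, releases):
    """Percent improvement of each release's metric over the baseline."""
    return {name: (value - baseline) / baseline * 100.0
            for name, value in releases.items()}

# Hypothetical accuracy figures for successive releases.
report = improvement_report(0.80, {"v1": 0.84, "v2": 0.88})
```

Here `v1` works out to a 5% improvement and `v2` to 10%; whatever the real metric is, the habit of reporting it against a baseline is what keeps the work measurable.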

5. Continuous

A fluid AI continuously learns as the world changes. Data doesn't sit still; AI needs to adapt to the changes if it wants to stay useful.

At the same time, researchers should keep learning too, evolving their knowledge of the advantages and disadvantages of particular technologies, algorithms, programming languages, data sets, tools, and so forth.
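Continuous learning can be sketched as incremental updates rather than periodic batch retraining. This is a toy example, not a production recipe: a one-parameter model that nudges its weight with every new observation, so it drifts along with the data instead of going stale.

```python
def sgd_step(weight, x, y, lr=0.1):
    """One online update for a one-parameter linear model y ~ weight * x.

    Each new (x, y) pair nudges the weight in the direction that
    reduces the squared error, so the model tracks the current
    relationship in the data stream.
    """
    prediction = weight * x
    gradient = 2 * (prediction - y) * x
    return weight - lr * gradient

# The (assumed) true relationship is y = 3x; start from 0 and stream data in.
weight = 0.0
for x in [1.0, 2.0, 1.5, 0.5, 1.0] * 20:
    weight = sgd_step(weight, x, 3.0 * x)
```

After streaming the observations, the weight settles near 3. If the underlying relationship later shifted, the same update rule would pull the weight toward the new value, which is the essence of "continuous."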

Moving Forward

AI requires IT platforms that are hybrid right from the start.

Since AI won't work well on any single technology architecture, it needs interconnected systems that deliver fast data ingestion, analytics at large scale, high-volume concurrent transaction processing, and machine learning models to bring it all together.

The goal is to make AI capable of making decisions on its own, without human intervention.

Large technology companies can have most of the world's computing power and data streams, so it's not a surprise that the biggest momentum for AI came from them. However, the availability of abundant, affordable compute power in the cloud, and free and open source software for big data and machine learning means that AI can quickly spread beyond large tech companies.

And as more companies dive into AI, another problem needs to be solved: human talent.

Developers, like many other professionals, are finding that they have to continually update their skills to remain relevant.

To keep up with AI's pace, the programming languages that power machine learning should become part of developers' skill sets. Developers have to invest in learning those languages if they want to help build AIs, in the hope that AIs will soon aid them in return.