The Challenges Humans Must Address To Make AI Work As It Should In Our Society

Born from dreams and popularized by science fiction, Artificial Intelligence (AI) has become both a hit and a hint of what computers could be. Machine learning is becoming part of everyday life, and it is becoming more like us, in both the good and the bad.

With its potential to aid humans in many fields, including finance, health, shopping and news, AI has also raised concerns. While these systems can ease many processes, they also create problems that never existed before.

This challenges us to create AI that could genuinely change the role these systems play in our societies, and the way we think about technology.

Below is a list of the major issues we need to tackle to make AI a technology that aids rather than invades.

Job Opportunities

Computers are much faster and more efficient than humans. With their ability to automate tasks quickly and with little intervention, they have taken away many jobs in factories and manufacturing plants. With recent leaps in AI, the rate at which computers replace human workers has escalated dramatically.

Jobs that were previously untouched by computers and AI have started to meet their challenge.

From accounting to news writing to driving, AI algorithms are threatening jobs like never before. As the technology advances, AI will set its eyes on other fields, perhaps even replacing professionals such as doctors and lawyers.

On one side, the AI revolution will give humans more than enough data to study. From raw data extraction to data science, IT jobs will continue to thrive, since someone must maintain the systems and software running these algorithms. On the other hand, in many other fields, AI is taking jobs away from humans.

If people lack the skills to fill the jobs that remain, the result will be a community where even talented people are unemployed.

To prevent this, AI adoption needs to be managed. The tech industry and governments have a responsibility to help society adapt to this major shift in the socio-economic landscape, and to ease the transition into a future where robots occupy more and more jobs.

People need to learn new technical skills to avoid losing their jobs to AI. With skills sufficient to embrace the trend, people can complement AI rather than compete with it.

Eliminating The Bias

AI is still a child: it needs to learn before it can do anything properly. And to learn, AI needs to feed on a lot of data.

Machine learning, for example, is the branch of AI behind face recognition, advertising, recommendations and more. To make these systems work as they should, they must be trained on data that shapes how their algorithms behave.
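To make that concrete, here is a minimal sketch of the train-then-predict loop in Python (scikit-learn and its bundled toy digits dataset assumed; a real face-recognition or ad system would be far larger, but the shape is the same):

```python
# Minimal sketch: a model is only as good as the data it is trained on.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # example images and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # the algorithm being shaped by data
model.fit(X_train, y_train)                # training: learn from examples
print("accuracy:", model.score(X_test, y_test))  # behavior reflects the data
```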

With the advance of technology, AI has more than enough data to learn from. The problem lies with the trainers who feed the data to the algorithms, and with the data itself.

Like a child, AI learns from whatever it is exposed to. If the information is biased, the AI will absorb the biases in its training data. For example, if an AI learns from data in which "white" is over-represented relative to "black", its judgments can become racist.
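Here is a toy Python sketch of that effect (all data synthetic and hypothetical; NumPy and scikit-learn assumed): when one group dominates the training set, the model ends up noticeably less accurate on the under-represented group.

```python
# Toy demonstration: a skewed training set produces a skewed model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    """Synthetic samples for one group; each group has its own pattern."""
    X = rng.normal(offset, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)
    return X, y

# Group A contributes 1,000 training examples; group B only 20.
Xa, ya = make_group(1000, 0.0)
Xb, yb = make_group(20, 3.0)
model = LogisticRegression(max_iter=1000)
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Audit on fresh samples: expect far lower accuracy for group B.
for name, offset in [("A", 0.0), ("B", 3.0)]:
    Xt, yt = make_group(500, offset)
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
```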

This was seen in 2016, in a beauty contest judged by five AI algorithms: the judges favored white contestants over black ones, an example of a computer with a biased point of view.

Because AI is still young, the incident could be shrugged off as a humorous flaw. But for AI to thrive as it should, humans need to change some aspects of how it learns, and put safeguards in place.

For example, researchers can promote transparency and openness around an AI's training datasets. With data shared rather than owned by a single entity, outsiders can audit the AI and keep it moving toward its intended goal.
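One simple form such an audit can take is checking how groups are represented in the data before a model ever sees it. A hypothetical sketch (the field names here are made up for illustration):

```python
# Hypothetical dataset audit: count group representation before training.
from collections import Counter

dataset = [
    {"image": "img001.jpg", "skin_tone": "light", "label": "winner"},
    {"image": "img002.jpg", "skin_tone": "dark", "label": "non-winner"},
    # ... thousands more records in a real dataset
]

counts = Counter(row["skin_tone"] for row in dataset)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n} samples ({n / total:.1%})")
# A heavily skewed split is a warning sign to rebalance before training.
```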

Responsibility

Computers are just machines with chips and wires. The software that runs inside them controls how the system works and steers it toward what it was created for.

Because robots work on their own, they can accidentally harm humans in their single-minded approach to getting a job done. This is one reason we have kept robots confined to their field of work, or at least away from people, as in factories where human interaction is kept to a minimum.

But when AI is given the ability to think for itself and choose its own path while working among humans, who is to blame when things go wrong? Take a self-driving car that crashes and injures a pedestrian. If the cause was not a brake malfunction, can we take the AI to court?

Before AI came into existence, humans could easily determine who was to blame in a car crash. We could determine whether a crash was an accident or the result of a deliberate action.

But in the era of AI-driven technologies, this is no longer so simple.

Computers react to events by having their algorithms work out an answer to a given problem. While data gives them the context they need, not every scenario fits their training. In self-driving cars, this becomes an issue when the algorithms must make critical decisions, such as choosing between the life of a passenger and the life of a pedestrian.

Another example: when AI-aided computers replace human doctors, who can be blamed if the robot harms a patient?

Computers have no feelings; they can't be punished physically or emotionally. This blurs the boundaries of responsibility between users, developers and data trainers, each of whom can blame the others for an unwanted incident.

To prevent this, regulations should be put in place to clarify and address the legal issues surrounding AI.

Related: MIT's "Moral Machine" Challenges People To Decide Who Should Die In An Autonomous Car Accident

Safety

AI can promise a bright future for technology, but it can also be a threat that endangers all mankind. Depending on our point of view, both possibilities are there.

Computers and machines are created to aid humans. In development and in practice, we need to stop any negative side effects they may cause while getting a job done. For example, a robot tasked with cleaning a house should not knock over an expensive vase to finish faster, and should not spray water into an electrical outlet just because the outlet looks dirty.

AIs are single-minded, and the challenge is to make them do what they should while preventing everything else. Humans need to program AI so that it doesn't "game the system", like a cleaning robot that covers a dirty floor with a sheet rather than cleaning it.

As said earlier, machines can accidentally harm humans in their single-minded approach to getting jobs done. This is why we've kept robots and their field of work confined, or at least away from humans.

But if we really want machines to work alongside us, these are the issues that need to be addressed (a toy sketch of the first two follows the list):

  • Avoiding negative side effects: AI should not disturb the environment it is in while trying to accomplish a task.
  • Avoiding reward hacking: AI should not trick itself or humans with deceptive shortcuts to its goal.
  • Scalable oversight: AI needs to understand what it should do without asking humans too often.
  • Safe exploration: AI should not experiment with things it doesn't understand, as it may harm humans, itself or its environment.
  • Robustness to distributional shift: AI needs to learn and adapt to the environment it is in, knowing that things can change.
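
As a toy illustration of the first two items (a made-up reward-shaping sketch, not any real framework), the cleaning robot's reward can be written so that side effects and "gaming the system" always cost more than they save:

```python
# Made-up reward shaping for the cleaning-robot examples above.
def shaped_reward(task_reward: float, objects_broken: int,
                  dirt_hidden: bool) -> float:
    """Task progress minus penalties for unwanted behavior."""
    SIDE_EFFECT_PENALTY = 10.0   # e.g. knocking over a vase
    REWARD_HACK_PENALTY = 50.0   # e.g. covering dirt instead of cleaning it
    reward = task_reward - SIDE_EFFECT_PENALTY * objects_broken
    if dirt_hidden:
        reward -= REWARD_HACK_PENALTY
    return reward

# A fast run that breaks a vase and hides dirt scores worse than
# a slower, careful run.
print(shaped_reward(100.0, objects_broken=1, dirt_hidden=True))   # 40.0
print(shaped_reward(80.0, objects_broken=0, dirt_hidden=False))   # 80.0
```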

Humans need to ensure that AI systems can learn from people without pestering them, and that they learn what behavior is appropriate in different environments. While many believe Isaac Asimov's Three Laws of Robotics might prevent AI systems from harming humans, the laws do not offer all the answers.

Privacy

To make AI work as it should, its machine learning models need data to learn from. Tech companies working on AI need enough data to feed their systems, and questions arise about where they get it.

To get more data, companies may venture into uncharted territory and cross boundaries into places they shouldn't go.

For example, when we use search engines to look up things that may touch on personal issues, the engines record what we type and tie that data to the ads they show. If you are a professional struggling with substance addiction, for example, search engines may surface information about dealing with the issue as you browse the web.

Search engines use AI to generate much of this compelling content (the ads). Google, the most popular search engine, is one example, and it is also active in developing the AI field.

It's not limited to that. Both enterprises and governments have secrets that leak from time to time. With data scattered around waiting to be collected, the AIs that feed on it can change how they see things.

Companies developing AI and machine learning technologies need to regulate their data collection and be transparent about their practices. Precautions such as anonymizing data so it isn't tied to anyone can protect users: the data is still there, but the AI can't relate it to the person who provided it.
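One common precaution looks roughly like the following hypothetical sketch: direct identifiers are replaced with salted hashes before the data ever reaches a training pipeline, so the useful signal survives while the identity does not.

```python
# Hypothetical pseudonymization step before data enters training.
import hashlib
import os

SALT = os.urandom(16)  # kept secret, stored separately from the data

def pseudonymize(user_id: str) -> str:
    """Map a user ID to a stable token that can't be traced back."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

record = {"user": "alice@example.com", "query": "back pain exercises"}
safe = {"user": pseudonymize(record["user"]), "query": record["query"]}
print(safe)  # the query is still useful for training; the identity is gone
```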

So although AI and machine learning could impersonate someone by imitating their voice or handwriting, they couldn't tie that ability back to the actual person.

The use and availability of AI and machine learning must also be reviewed and regulated to prevent misuse.

Users should also be more sensible about what they share with companies or post on the internet. As the internet widens its reach, we live in an era where privacy is exposed, and exposing more of ourselves only makes the problem worse.