Google reveals five security issues concerning Artificial intelligence
In a recent article published by Google, they’ve revealed five major security problems related to Artificial Intelligence. From now on, companies will have to follow a guide on their future Al system to control robots before they can interact with humans.
The artificial intelligence is designed to mimic the human brain, or at least its logic when it comes to making decisions. Before worrying about whether an artificial intelligence (AI) could become so powerful that can dominate humans, it would be better to make sure that robots (also called our future colleagues and household companions) are trustworthy. That’s what Google has tried to explain to us. Google’s artificial intelligence specialists have worked with researchers from the Universities of Stanford and Berkeley (California, USA) and with the Association OpenAI on concrete security issues that we must work to resolve.
In white paper titled “Concrete Problems in AI Safety” this team describes five “practical problems” of accidents in artificial intelligence-based machine could cause if they aren’t designed properly. Al specialists define accidents as “unexpected and harmful behavior that may emerge from poor design of real world machine learning systems”. In short, these are not potential errors of robots we should be feared but those of their designers.
To concretely illustrate their point of view, the authors of this study voluntarily took a random example of a “cleaning robot”. However, it’s quite clear that the issues apply to all forms of AI controlling a robot.
Pour prévenir ce cas de figure, la solution pourrait consister à créer des « contraintes de bon sens » sous la forme de pénalités infligées à l’IA lorsqu’elle cause une perturbation majeure à l’environnement dans lequel le robot évolue.
- A robot may disrupt the environment :
The recommended dosage of sildenafil or cialis discount price as treatment for male sexual dysfunction is 25mg to 100mg. ACE inhibitors may also be connected with birth defects. pill viagra It boosts soft tab viagra physical health and improves sensation in the genitals. Keep communication lines open You buy brand cialis should talk openly about the pre-mature ejaculation.
The first two risks identified by the researchers from Google and their acolytes are related to a poor coordination and allocation of the main objective. There is first what they call “Avoiding Negative Side Effects”. Specifically, how to avoid environment related problems caused by a robot while it’s accomplishing its mission. For example, the cleaner could well topple or crush what is on his way because he calculated the fastest route to complete its task. To prevent this scenario, the solution may be to create “constraints of common sense” in the form of penalties imposed on the IA when he causes a major disruption to the environment in which the robot moves.
- The machine can cheat :
Second risk of Al based machines is to avoiding reward hacking. For IA, the reward is the success of the goal. Avoid the quest reward from turning into a game and the machine trying to get by all means, even skip steps or cheat. In the case of cleaning robot, it would for example to hide the dirt under the rug in order to say “that’s it, I’m done.”
A difficult problem to solve as an Al can be interpreted in many different ways a task and the environment it meets. One of the ideas in the article is to truncate the information so that the program does not have a perfect knowledge of how to get a reward and thus does not seek to go shorter or easier.
- How to setup the robot go to the basics?
The third risk is called scalable oversight. More the goal is complex, AI will have to validate his progress with his human referent, which would quickly become tiresome and unproductive. How to proceed so the robot can accomplish itself certain stages of its mission to be effective while knowing seek approval in situations that he will know how to interpret? Example: tidy and clean the kitchen, but ask what to do in the saucepan on the stove. It would simplify to the maximum step of the cooking task so that the robot goes to the point without coming to disturb you during your nap every time.
- How much independence can you give to an AI?
The next identified problem is the safe exploration of Al. How much independence can you give an AI? The whole point of artificial intelligence is that it can make progress by experimenting different approaches to evaluate the results and decide to keep the most relevant scenarios to achieve its objective. Thus, Google says, if our brave robot would be well advised to learn to perfect its handling of the sponge, we wouldn’t want it to clean an electrical outlet! The suggested solution is to train these Al with simulated environment in which their empirical experiments will not create any risk of accident.
- Does AI will adapt the change?
Fifth and final problem: robustness to distributional shift or how to adapt to change. “How to be ensured that AI recognizes and behaves properly when it is in a very different environment from the one in which it was being driven? It is clear that we wouldn’t want the robot who was trained to wash the floor of a factory with detergent products does not apply the same technique if asked to clean home.
The article ends by saying that these problems are relatively easy to overcome with the technical means currently available but it’s better to be prudent and develop security policies that can remain effective as the autonomous systems will gain in power. Google is also working on an “emergency stop” button for all menacing AI, if eventually one or several of these risks were not fully mastered,