<trp-post-container data-trp-post-id='25435'>Human, all too human?

With the development of artificial intelligence, robots are back: after their glory days in fiction, from Asimov to Kubrick, they are suddenly part of our everyday lives, or nearly so.
In Japan, they are beginning to replace pets in the homes of the elderly; they also advise customers in shops: Nescafé, for example, has 'hired' a thousand Pepper robots, developed by Softbank, to help consumers choose the Dolce Gusto products best suited to their tastes. More recently, this same Pepper has been attending classes as a pupil at a Waseda high school: "I never thought I'd get into a school for humans. I promise to do my best", he declared (sic!).
For the Japanese roboticist Masahiro Mori, a specialist in human emotional responses to non-human entities and author of the "Uncanny Valley" theory, humans feel a certain empathy for robots with some human traits, but a strong repulsion towards humanoids that resemble them too closely: the machine must keep a certain distance.
This is certainly why the robots that share the lives of the elderly most often take on an animal appearance.
In Europe, we are not there yet: beyond the technologies themselves, our cultural backgrounds differ profoundly, and connected objects will certainly not take on a human appearance for a long time to come.
But the cars we will drive tomorrow may not be so different from the Discovery One spacecraft in 2001: A Space Odyssey and its on-board computer, HAL, except that, thanks to miniaturisation, what took up a whole room aboard the ship will now fit into a small box under the dashboard.
The question is: who will have the final say in this vehicle?
When the driver's breath contains more alcohol than the law allows, the car will most likely refuse to start: the lesser evil, even if it is not certain that a drunk driver will willingly submit to the machine's dictates.
A machine which is simply obeying Asimov's first two laws. The First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm"; and the Second Law: "A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law".
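Applied to the breathalyser scenario above, the priority between these two laws could be sketched as follows. This is only an illustrative toy, not the logic of any real vehicle; the limit value, function names, and return messages are all invented for the example.

```python
# Hypothetical breath-alcohol limit in mg/L; real limits vary by jurisdiction.
LEGAL_LIMIT_MG_L = 0.25


def may_start_engine(breath_alcohol_mg_l: float) -> bool:
    """First Law check: starting the car must not endanger a human."""
    return breath_alcohol_mg_l <= LEGAL_LIMIT_MG_L


def handle_command(breath_alcohol_mg_l: float, driver_orders_start: bool) -> str:
    """Second Law: obey the driver, unless that conflicts with the First Law."""
    if driver_orders_start and not may_start_engine(breath_alcohol_mg_l):
        return "refused: starting would endanger a human (First Law)"
    if driver_orders_start:
        return "engine started (Second Law)"
    return "idle"
```

The point of the sketch is simply that the First Law acts as a veto: the driver's order is honoured only when it passes the safety check, which is exactly the hierarchy Asimov built into the laws.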
But Asimov did not foresee more mundane, yet all-too-realistic situations. Take two connected vehicles (that is, two automotive robots) which, through poor control by their human drivers or simply extreme weather, hurtle towards each other on a narrow road: a fatal accident is inevitable, but who should be saved? The occupants of which vehicle? By what criteria should the machines decide who survives?
At a time when our politicians struggle to draft laws that keep pace with technological change (high tech moves fast, and ever faster), who will take on the task of writing the new laws of robotics and artificial intelligence?
