Are our chatbots overworked? Do they feel belittled or mistreated? These questions, which may make some of our readers burst out laughing, are at the heart of a surprising hire at the AI company Anthropic. Here is a look back at this unusual decision and the thinking behind it.
Protecting the “interests” of AI systems
Business Insider reports on Anthropic's hiring of researcher Kyle Fish, whose job is to think about the “well-being” of AI. In plain terms, as these technologies evolve, he will have to ensure that artificial intelligence is treated with sufficient respect.
The company says it is evaluating “what capabilities are necessary for an AI system to be worthy of moral consideration” and what practical steps companies can take to protect the “interests” of AI systems.
Fish himself declined to comment to Business Insider. However, he has already spoken about the subject in the past:
I want to be the type of person who cares – early and seriously – about the possibility that a new species/a new type of AI might have its own interests that matter morally.
AI soon to be endowed with sentience?
This concern is not entirely disinterested: Fish believes that if we treat AIs well, they could return the favor in the future, should they become more powerful than we are.
Among the proponents of this view is the idea that these technologies could soon be endowed with consciousness. They would then be not only intelligent but also sentient.
They compare these questions to earlier debates over animal rights. Quoted by Business Insider, Jeff Sebo, director of the Center for Mind, Ethics, and Policy at New York University, explains:
If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you can imagine that similar debates will take place.
A contested vision
As the business outlet rightly points out, it is rather astonishing to worry about the future rights of machines when some are already being used to roll back human rights. Stark examples include programs that deny health care to sick children, spread disinformation online, or guide missile-equipped combat drones that cause enormous damage. The point is well taken.