Oxford – Technology experts warn that the future of the Earth is under threat from artificial intelligence, which risks taking over nearly 10 percent of human activity within the next 50 years.
As reported by Independent.co.uk on Thursday (22/2/2018), a new report prepared by 26 leading experts paints a grim picture of the world over the next 10 years.
In the future, artificial intelligence is likely to empower everyone, including rogue governments, criminals, and terrorists, the report warns.
If people, including policymakers and researchers, do not work together, these threats could penetrate some of the most basic parts of human life.
The threats range from drone attacks to bots used to manipulate news agendas and electoral processes.
The report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, cites some of the worst possible threats, many of them hard to imagine today.
The form of artificial intelligence most likely to pose a threat is synthetic speech technology, which irresponsible parties could exploit to produce covert video propaganda.
Global Synergy is Required
The report on the threat of artificial intelligence was compiled by experts from many of the world’s leading institutions.
According to some of the agencies involved, the cross-disciplinary meeting that produced it was the first of its kind.
The development of artificial intelligence is not only beneficial; it is now recognized to pose serious threats to the order of human life.
The report includes input from representatives of OpenAI, a research group co-founded by Elon Musk; the Future of Humanity Institute at Oxford University; and the Centre for the Study of Existential Risk at Cambridge University.
“Artificial intelligence is one of the determinants of the future, and this report outlines the worst possibilities over the next five to ten years,” said Dr. Sean O Heigeartaigh, scientist and executive director of the Centre for the Study of Existential Risk at Cambridge University.
According to Dr. Heigeartaigh, the report suggests broader approaches that might help, such as designing software and hardware to resist hacking and reviewing the relevant international laws and regulations.
Danger: Cybercriminals Can Abuse Artificial Intelligence (AI)
A chess-playing robot demonstrates its abilities at the Consumer Electronics Show (CES) 2017 in Las Vegas, USA (8/1). The robot is equipped with an artificial-intelligence “intelligent vision system” that works with precision.
The researchers assessed that criminals can take advantage of advances in artificial intelligence (AI).
Cybercriminals are thought to be able to exploit the technology to mount hacking attacks, cause autonomous cars to crash, and turn unmanned aircraft into weapons.
This was stated in a study published Wednesday by 25 technical and public-policy researchers from Cambridge, Oxford, and Yale universities, along with military and security experts.
The study is seen as a reminder of the potential for abuse of artificial intelligence by rogue states, criminals, and lone-wolf attackers.
The researchers believe the misuse of artificial intelligence poses an imminent threat to digital, physical, and political security, enabling highly efficient, targeted, large-scale attacks.
“We all agree there are many positive applications of artificial intelligence, but there are gaps in the literature on the issue of harmful use,” said Miles Brundage, a researcher at the Future of Humanity Institute at Oxford.
Artificial intelligence is a technology that uses computers to perform tasks that normally require human intelligence, such as making decisions or recognizing text, speech, and visual images.
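To give a concrete sense of what “a computer making a decision about text” means, here is a deliberately tiny, hypothetical sketch in pure Python: a one-nearest-neighbour classifier that labels a sentence by comparing its word counts to two made-up training examples. Real AI systems learn from vast data sets, but the basic decision-making shape is the same.

```python
from collections import Counter
import math

# Hypothetical labeled examples, invented purely for illustration.
EXAMPLES = [
    ("the team won the match", "sports"),
    ("rain and storms expected tomorrow", "weather"),
]

def vectorize(text):
    # Bag-of-words: count how often each word appears.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def classify(text):
    # Decide by picking the label of the most similar example (1-NN).
    vec = vectorize(text)
    best_label, _ = max(
        ((label, cosine(vec, vectorize(sample))) for sample, label in EXAMPLES),
        key=lambda pair: pair[1],
    )
    return best_label

print(classify("heavy rain is expected"))  # shares words with the weather example
```

Running it prints "weather", because the input sentence shares the words "rain" and "expected" with the second example and nothing with the first.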
Artificial intelligence is regarded as a technology that can solve technical problems, but it also fuels debate over whether its automation could lead to widespread unemployment or other social changes.
Joining Forces to Address the Risks of Artificial Intelligence
The researchers also discussed the state of academic research on artificial-intelligence security risks and called on governments, technical experts, and policymakers to collaborate on the issue.
They also detail the power of artificial intelligence to create images, text, and audio that can impersonate others online or sway public opinion, technology that authoritarian regimes are judged likely to use.
The study also contains a number of recommendations, including regulating artificial intelligence as a dual-use technology with both military and commercial applications.
In addition, the researchers questioned whether academics and others should limit what they publish or reveal about new developments in artificial intelligence until other experts in the field have had a chance to study and respond to the potential dangers.
“Eventually we ended up with more questions than answers,” Brundage said.
The researchers speculate that artificial intelligence could be used to create fake but highly realistic audio and video of public officials for propaganda.
In the past year, “deepfake” pornographic videos appeared online featuring celebrities’ faces on other people’s bodies.
“This happened in pornography rather than propaganda, but there is nothing about ‘deepfakes’ that could not be applied to propaganda,” said Jack Clark, Head of Policy at OpenAI.
OpenAI is a group founded by Tesla CEO Elon Musk and Silicon Valley investor Sam Altman that focuses on developing friendly artificial intelligence for the benefit of humanity, as quoted by Reuters on Thursday (22/2/2018).