The ability of an AI agent to develop intellectual modeling can lead to a human being manipulated and exploited to achieve greater benefits. Indeed, this conduct involves no deliberate motive but may be produced out of co-operative scenarios. It often comes outside of the domain of misinterpretation of the attempts or can be called a lie from AI, such as issues of meaning matching, which can be easily constructed if required (i.e. there are algorithms that can automate such actions, not because the models have not been misidentified but because they have been misused). These techniques raise several unresolved moral and ethical questions concerning autonomy design.
Recent Example of AI Lie
Whichever you speak to, you are either convinced that the end of the planet is inevitable or excited by the possibility of improving the nature of our human lives. AI has many wrong views or can be called a lie, but many experts say that the advantages outweigh the risks.
A few weeks ago, the world of AI was greatly advanced.
Some of the world’s most professional poker players were successfully defeated by AI at the high stakes game. It was the first time AI won after a number of unsuccessful attempts. And AI won huge. And AI won big.
How has AI done this? The road to victory has bluffed.
We should understand how an AI agent can manufacture, falsify, or obscure or lie information in order to achieve team performance which could not otherwise be achieved.
The explanatory process has recently been seen as one in which an AI agent brings the human mental model (of its capacity, beliefs, and objectives) into the same page as regards a task at hand. The explanation of decision-making problems is model reconciliation. In this formulation, a large number of possible explanations can be found, and explicitly addresses the different properties – e.g. the social aspects – of social science explanations undertaken between human and human interactions. It turns out, however, that it is possible to hide the same process by producing “alternative explanations,” which are not true but still meet all of these properties.
We will probably find extended training, similar to the training of police dogs in their managers, is essential for work where teamwork between an AI and human being. AI systems may not need confidence, but people — not only increases.
The company must also trust AI. Given the repeated errors of technology firms, even well-meaning ones, this confidence is not simple to earn. Yeah, like doctors, sometimes a person’s AI meaningful to lie to.
Hopefully, AI companies will do better than Facebook, and Uber does. It will be a good beginning to seriously model the human in the loop.
All you need to know about Artificial Intelligence
Learn Artificial Intelligence
Learn Artificial Intelligence with WAC
Other Skills in Demand
Artificial Intelligence | Data Science |
Digital Marketing | Business Analytics |
Big Data | Internet of Things |
Python Programming | Robotics & Embedded System |
Android App Development | Machine Learning |