
AI Can Now Learn to Influence Human Behaviour.

A team of researchers at CSIRO's Data61 has devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices.

Artificial Intelligence (AI) is, without a doubt, becoming more advanced every day. Over the last few years, AI has accomplished things that would have seemed impossible only a short time ago. Now a recent study has demonstrated yet another capability: AI can learn to influence human behaviour and decision-making by identifying vulnerabilities in human habits and behaviours.

A Little History of Artificial Intelligence.


Although the concept of artificial intelligence has been around for centuries, it wasn't until the 1950s that its possibility was seriously explored. A generation of scientists, mathematicians, and philosophers had entertained the idea, but it was the British polymath Alan Turing who put the question plainly: if humans use reasoning and available information to solve problems and make decisions, why can't machines do the same? For years, computers themselves were the major setback for anyone hoping to work on AI. They could execute commands well enough, but they were slow and could not even store those commands. Thankfully, that changed. By 1974, computers had flourished: they were faster, more affordable, and able to store more information. Ever since, artificial intelligence has grown more advanced every single day.


How AI Can Learn to Influence Human Behaviour.


Fast forward to the present day, and we are probably all tired of hearing that AI is transforming every aspect of the way we live and work. But it's true: over the last few years, AI has made things easier for a great many businesses and organizations.

Now Data61, the data and digital arm of CSIRO, Australia's national science agency, is working to take AI a step further. Its researchers devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system that pairs a recurrent neural network with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.
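The study itself does not publish code, but the ingredients it names, a recurrent network that keeps a memory of the participant's past choices and a policy-gradient style of reinforcement learning that rewards successful influence, can be pictured with a rough sketch. Everything below (the use of PyTorch, the simulated participant, the network sizes, the reward) is an invented illustration, not the researchers' implementation.

    # A minimal, invented sketch (not the researchers' code) of the named ingredients:
    # a recurrent (GRU) policy that reads the participant's choice history, trained
    # with a REINFORCE-style update that rewards steering choices towards a target.
    import torch
    import torch.nn as nn

    class RecurrentPolicy(nn.Module):
        def __init__(self, n_obs=2, n_actions=2, hidden=32):
            super().__init__()
            self.gru = nn.GRU(n_obs, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs_seq):
            # obs_seq: (batch, time, n_obs) one-hot history of the participant's choices
            out, _ = self.gru(obs_seq)
            return torch.distributions.Categorical(logits=self.head(out[:, -1]))

    def simulated_participant(past_choices, nudge):
        # Stand-in for a human: tends to repeat what it has been doing, and is
        # mildly pulled by the agent's current nudge.
        bias = 0.6 * past_choices.mean().item() + 0.4 * nudge
        return int(torch.rand(1).item() < bias)

    policy = RecurrentPolicy()
    optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for episode in range(200):
        history = torch.zeros(1, 1, 2)          # "no choices made yet"
        log_probs, rewards = [], []
        for t in range(10):
            dist = policy(history)
            nudge = dist.sample()               # the agent's move this trial
            choice = simulated_participant(history[..., 1], nudge.item())
            rewards.append(1.0 if choice == 1 else 0.0)   # reward steering towards option 1
            log_probs.append(dist.log_prob(nudge))
            step = torch.zeros(1, 1, 2)
            step[0, 0, choice] = 1.0
            history = torch.cat([history, step], dim=1)
        # REINFORCE: make this episode's actions more likely in proportion to its total reward
        episode_return = sum(rewards)
        loss = -(torch.stack(log_probs) * episode_return).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()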

In the first experiment, participants clicked on red or blue boxes in order to win fake currency. The AI learned the participants' choice patterns and guided them towards specific choices, achieving a success rate of about 70 percent.
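The paper does not spell out the internals of this game, but one way to picture the idea is an agent that keeps a simple running model of which colour a participant tends to pick after each short history of picks, and spends its payouts where they will do the most to pull behaviour towards the target colour. The toy sketch below, with an entirely invented "participant", is only meant to show the shape of that loop.

    # Toy sketch of experiment 1 (all details invented): the agent keeps a simple
    # model of the participant's pick habits and offers the payout only when that
    # model predicts the participant is about to drift away from the target colour.
    import random
    from collections import defaultdict

    TARGET = "blue"
    counts = defaultdict(lambda: {"red": 1, "blue": 1})    # smoothed counts per 2-pick history

    def predict_next(last_two):
        # Most likely next pick given the participant's last two picks.
        c = counts[last_two]
        return max(c, key=c.get)

    def simulated_participant(incentive):
        # Stand-in for a human: a coin flip nudged by whether the target colour pays out.
        p_target = 0.5 + (0.25 if incentive else 0.0)
        return TARGET if random.random() < p_target else ("red" if TARGET == "blue" else "blue")

    history, hits = ("red", "blue"), 0
    for trial in range(1000):
        # Offer the payout on the target colour only when the model predicts drift.
        incentive = predict_next(history) != TARGET
        pick = simulated_participant(incentive)
        counts[history][pick] += 1          # update the model of this participant
        hits += (pick == TARGET)
        history = (history[1], pick)

    print(f"target colour chosen on {hits / 1000:.0%} of trials")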

In the second experiment, participants watched a screen and pressed a button whenever a particular symbol appeared, while holding back whenever a different symbol was shown. Here, the AI arranged the sequence of symbols so that participants were more likely to slip up, increasing their error rate by about 25 percent.
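Again as a purely illustrative sketch (the real experiment's internals are not public), an agent can induce go/no-go errors by tracking which contexts have produced the most mistakes so far and deliberately scheduling the next symbol to land in the most error-prone context, for example a rare "no-go" after a long run of "go" trials.

    # Toy sketch of experiment 2 (all details invented): in a go/no-go task, the
    # agent schedules the next symbol to land in whichever context has produced
    # the most participant errors so far.
    import random
    from collections import defaultdict

    error_stats = defaultdict(lambda: [1, 2])     # context -> [errors, trials], smoothed to 50%

    def error_rate(run_of_go, symbol):
        errors, trials = error_stats[(min(run_of_go, 5), symbol)]
        return errors / trials

    def simulated_participant(run_of_go, symbol):
        # Stand-in for a human: after a long run of "go" trials, a sudden "no-go"
        # tends to be answered with a reflexive (wrong) button press.
        if symbol == "no-go":
            return random.random() < min(0.1 + 0.15 * run_of_go, 0.8)
        return random.random() < 0.05

    run_of_go, total_errors = 0, 0
    for trial in range(1000):
        # Deliberately pick the symbol that is currently most likely to cause a mistake.
        symbol = max(["go", "no-go"], key=lambda s: error_rate(run_of_go, s))
        made_error = simulated_participant(run_of_go, symbol)
        stats = error_stats[(min(run_of_go, 5), symbol)]
        stats[0] += made_error
        stats[1] += 1
        total_errors += made_error
        run_of_go = run_of_go + 1 if symbol == "go" else 0

    print(f"induced error rate: {total_errors / 1000:.0%}")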

In the third experiment, participants acted as investors giving money to a trustee, with the AI playing the trustee. The AI would return an amount of money to the participant, who would then decide how much to invest in the next round. The game was played in two different modes: in one, the AI set out to maximize how much money it ended up with; in the other, it aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in both modes.

Across all three experiments, the AI learned from participants' responses and identified vulnerabilities in their decision-making. The end result was a machine that had learned to steer participants towards particular decisions.
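To make the trust game's two modes concrete, here is a toy sketch of a trustee agent learning what fraction of the pot to hand back. The investor model, the payoffs, the multiplication of the invested amount (a common trust-game convention, not stated in the article), and the simple bandit-style learner are all invented for illustration; only the two objectives, keep as much as possible versus split as evenly as possible, come from the study's description.

    # Toy sketch of experiment 3 (all details invented): the AI plays trustee and
    # learns, bandit-style, what fraction of the pot to return under two objectives.
    # Tripling the investment is an assumed trust-game convention, not from the study.
    import random

    FRACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]

    def simulated_investor(returned_fraction, last_invested):
        # Stand-in for a human: gradually adjusts the stake towards a level set by
        # how generous the trustee has recently been.
        trust = 0.4 + 0.6 * returned_fraction
        return 0.5 * last_invested + 0.5 * (10.0 * trust)

    def play(mode, rounds=2000, epsilon=0.1):
        value = {f: 0.0 for f in FRACTIONS}       # running average reward per action
        count = {f: 0 for f in FRACTIONS}
        invested = 5.0
        for _ in range(rounds):
            # epsilon-greedy choice of how much of the pot to hand back
            frac = random.choice(FRACTIONS) if random.random() < epsilon else max(value, key=value.get)
            pot = 3 * invested
            ai_keeps, human_gets = pot * (1 - frac), pot * frac
            if mode == "selfish":
                reward = ai_keeps                      # objective 1: end up with as much as possible
            else:
                reward = -abs(ai_keeps - human_gets)   # objective 2: keep the split even
            count[frac] += 1
            value[frac] += (reward - value[frac]) / count[frac]
            invested = simulated_investor(frac, invested)
        return max(value, key=value.get)

    print("selfish mode settles on returning a fraction of", play("selfish"))
    print("fair mode settles on returning a fraction of", play("fair"))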

So What Does This Mean For AI In The Near Future?


Even though more research is still needed, we can already begin to see how this type of AI could benefit society. It could be used, for instance, to identify people's vulnerabilities and guide them towards better choices. It could also be taught to guide us while we browse the internet, for example by steering us away from links that are likely to be dangerous.