Elon Musk saves grannies with A.I.

How AI is used to protect Japanese elderly

How can AI be used to protect Japanese elderly from scams?

If I believed all the emails that end up in my spam folder, I would be a billionaire: a Nigerian prince promising to reward me tenfold if I just help him out with a small amount now.

I also miraculously won a foreign lottery (without even actively participating!). I only need to send my personal data and a small fee, and the money will be transferred to my account shortly.

Luckily, most of us know this is the work of scam artists, and we don’t fall into these kinds of traps. We know people can pretend to be someone else on the other side of the internet.

However, elderly people are more vulnerable. They can be easily deceived or blackmailed.

This blog has been featured on Hackernoon.

Japan: A Scammer’s Heaven

In Japan, many elderly people are tricked into sending money to a scammer. An 84-year-old woman was swindled out of €535,000 by a man claiming to be an official at a nursing home.

Another common fraud is the “it’s me” scam (known in Japan as ore ore sagi), in which a person pretends to be a son or grandson in danger.

Japan has one of the highest scam rates in the world. In the last decade, almost 4 billion euros in damages from telephone scams was reported nationwide.

Very likely, much of the fraud is not even reported. Sadly, the elderly make up 80% of the victims. To prevent scams, the government has put up warning signs in public places and on ATMs.

One reason the Japanese are so easily deceived is that family bonds are strong and honor is highly valued.

If something disgraceful appears to have happened to the family, and sending money can prevent it, they are quick to pay without question.

Scams like these take place all over the world. Scammers become smarter and their phishing emails and misleading practices more sophisticated every day.

Even for tech-savvy people, it gets harder to recognize fake emails.

AI could be fooling us

My concern is that scammers might start using AI in their fraudulent schemes as well.

Watching Google Assistant make this call made me realize that this type of scam can now be carried out with little investment and at a large scale.

Elon Musk has already warned us about the dangers of AI, saying it is even more dangerous than nuclear weapons.

Is this where artificial intelligence (AI) is heading? If so, how do we protect ourselves from this?

Until recently, Musk was part of OpenAI, an organization that aims to promote and develop friendly AI: an ethical type of AI that has only a positive effect on humanity.

Scenarios of misuse of AI

In a recent report by this same organization, my concerns about the misuse of AI and its potential dangers were echoed in four scenarios:

1. The smarter phishing scam

Scammers use AI to figure out your hobbies and preferences and design a custom-made brochure, which is presented only to you on social media or on another web page.

When you download the brochure, it allows a hacker to take control of your computer.

2. The malware epidemic

Hackers use a machine learning technique that continually generates new exploits. It infects poorly maintained systems, and people have to pay to recover their machines.

Attempts to counteract the malware ended up breaking many of the smart systems they were supposed to save.

3. The robot assassin

A cleaning robot is equipped with facial recognition. It blends in with other cleaning robots at a ministry.

When it detects the minister, it detonates a concealed bomb.

4. A bigger Big Brother

The government is able to track both your online activities and your purchases (by tracing your bank account).

AI to the rescue!

These are some of the harmful scenarios AI makes possible. Funnily enough, AI is also put on the other end of the line: to attack scammers.

A brilliant example is Re:scam. This AI bot was developed by the New Zealand non-profit organization Netsafe, and it is able to keep scammers busy for a long time.

When you forward your scam emails to Re:scam, the bot makes sure it ends up on the scammer’s mailing list.

Once the bot is contacted by the scammers, it will occupy them as long as possible in protracted email conversations, pretending to be an interested victim with a lot of questions.

In this way, scammers are kept away from other victims. In the video on this page, you can follow the hilarious conversation between the bot and the scammer.

Because Re:scam is powered by artificial intelligence, the bot can adapt itself as scammers adapt their techniques.
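Re:scam’s internals are not public, but the stalling idea itself is simple: every message from a scammer gets an eager-sounding question back, so the scammer keeps answering instead of hunting for real victims. Here is a minimal Python sketch of that idea; all names, replies, and addresses are hypothetical, not Re:scam’s actual code.

```python
import random

# Hypothetical sketch of a Re:scam-style stalling bot: it never pays,
# it just keeps replying with plausible, time-wasting questions.

STALL_REPLIES = [
    "This sounds wonderful! Could you explain the fees once more?",
    "My bank asked for a reference number. Which one should I give them?",
    "I tried to send the money but the form rejected my address. Can you help?",
    "Before I pay, could you confirm the account details one more time?",
]

class ScamBaiter:
    """Keeps a per-scammer conversation going with canned, time-wasting replies."""

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self.turns = {}  # scammer address -> number of replies sent so far

    def reply(self, scammer_address, incoming_message):
        # Count the exchange and send back an interested-victim question,
        # regardless of what the scammer actually wrote.
        self.turns[scammer_address] = self.turns.get(scammer_address, 0) + 1
        return self._rng.choice(STALL_REPLIES)

bot = ScamBaiter(seed=1)
answer = bot.reply("prince@example.com", "Send $500 to release your $10M inheritance.")
print(answer)
print(bot.turns["prince@example.com"])  # one reply sent so far
```

A real deployment would sit behind an email inbox and add delays and varied personas, but the core loop is just this: always answer, never comply.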

So there is still hope for a brighter future where humanity can benefit from AI without the dangers that come with it…


