I'm Not a Robot
How reCAPTCHA Works
Are you a robot? You must have answered this question a thousand times, for example when filling in a form. Sometimes it comes with an annoying image challenge. And how stupid it feels when you fail to give the correct answer. How do these reCAPTCHA tests work, and are they really just there to verify that you’re not a robot?
How to Distinguish a Human from a Bot
Let’s first have a look at the definition. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. A ‘Turing Test’ is an experiment proposed by Alan Turing in 1950 to test whether a machine shows human intelligence. The machine passes if you can’t tell whether you are talking to a human or a computer. A CAPTCHA, though, is actually a reversed Turing Test: here the computer is the judge, and it has to verify that the user is a human and not a bot.
CAPTCHA was invented by Luis von Ahn. In 2000 he attended a talk at Yahoo!, where he heard that the company needed a way to distinguish humans from bots, because spam bots were trying to register email accounts in bulk.
Von Ahn accepted the challenge. In his research he found that humans are remarkably good at reading text in all kinds of shapes and on all kinds of surfaces; we read traffic signs and handwritten notes without any trouble. That is why the first CAPTCHA test showed a stretched-out, scribbly word with a line through it.
Improving the Language Model
In 2007 CAPTCHA was upgraded to reCAPTCHA. The tests now consisted of two words. The first word worked like the old test and told humans and bots apart. The second word was used to make the Artificial Intelligence (AI) system behind reCAPTCHA smarter: it was pulled from a scanned book or article that software could not read on its own. If a human got the first word right, it was quite probable that their answer to the second word was correct as well, especially if other people gave the same answer. When enough answers agreed, the language model behind reCAPTCHA had learned a new word.
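To get a feel for that crowdsourcing step, here is a small, purely illustrative Python sketch of the consensus idea described above. The function name, the vote threshold of three, and the data format are my own assumptions for the example, not how reCAPTCHA was actually implemented.

```python
from collections import Counter

def label_unknown_word(responses, min_votes=3):
    """Illustrative consensus check.
    'responses' pairs each user's result on the known control word
    (did they get it right?) with their guess for the unknown word.
    Only guesses from users who passed the control word count as votes."""
    votes = Counter(
        guess.strip().lower()
        for control_correct, guess in responses
        if control_correct
    )
    if not votes:
        return None
    guess, count = votes.most_common(1)[0]
    # Accept the transcription only once enough trusted users agree on it.
    return guess if count >= min_votes else None

# Example: three users who passed the control word all read the scanned word as "harbour".
print(label_unknown_word([(True, "harbour"), (True, "harbour"), (False, "x"), (True, "harbour")]))
```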
After Google bought reCAPTCHA in 2009, its learning capabilities increased rapidly. Google used reCAPTCHA to digitize millions of books and articles and to train its machine learning model, building a huge image library of distorted words and characters. The more people used the test, the smarter the model became. It got so clever that at a certain point the AI behind reCAPTCHA could recognize warped words it had never seen before.
When AI Gets Too Smart
As the language model of reCAPTCHA improved, the tests were made more difficult so humans could still prove they were not robots. But at a certain point machines simply became better at recognizing distorted words: in 2014 humans could guess the correct words with about 33% accuracy, while AI got them right with 99.8% accuracy. That was the moment reCAPTCHA V2 was developed.
With reCAPTCHA V2, you have to check a box and complete an image challenge. Why? Because humans are better than machines at recognizing objects in different settings, from different angles, and in different weather conditions. Once again, humans could prove they were not bots.
Just like before, the test was used for machine learning. This time the AI was taught to identify real-world objects like traffic lights, crosswalks, and street signs. Most of the objects are traffic-related because Google uses this information to teach its self-driving cars to recognize them. It is also used to digitize street names and house numbers for Google Maps.
Behavior Testing
The AI of reCAPTCHA outsmarted us again as it became better at recognizing objects. This is why Google developed NoCAPTCHA (an upgraded version of V2) and reCAPTCHA V3. These new versions verify humans by their behavior. How do you move your mouse before you click? How fast do you type? Do you have a browser history? By constantly tracking your behavior on the web, the test can figure out whether you are a human or a robot.
In NoCAPTCHA you still have to complete a challenge if the system isn’t sure you are human. In reCAPTCHA V3 no interaction is needed at all: it simply returns a score that indicates how likely it is that you are a human rather than a bot. If the score is too low, the website owner can, for example, require an extra verification step such as two-factor authentication. It’s a blind Turing Test that constantly tracks your behavior in the background.
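To make that concrete, here is a minimal sketch of how a website owner might check such a score on the server, assuming Google’s standard siteverify endpoint. The secret key placeholder, the 0.5 threshold, and the verify_recaptcha function are illustrative assumptions for this example, not an official client library.

```python
# Minimal sketch: server-side check of a reCAPTCHA V3 token (illustrative only).
import requests

SECRET_KEY = "your-secret-key"  # placeholder: issued by Google when you register your site
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_recaptcha(token: str, min_score: float = 0.5) -> bool:
    """Send the token produced in the browser to Google and judge the returned score."""
    resp = requests.post(VERIFY_URL, data={"secret": SECRET_KEY, "response": token})
    result = resp.json()
    # 'success' says the token was valid; 'score' (0.0-1.0) estimates how human the visitor is.
    return result.get("success", False) and result.get("score", 0.0) >= min_score

# Example: fall back to an extra verification step when the score is too low.
# if not verify_recaptcha(token_from_form):
#     ask_for_two_factor_check()  # hypothetical fallback in your own application
```

The threshold is a design choice: a stricter value blocks more bots but also sends more real users to the fallback check, so site owners tune it to their own traffic.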
Final Thoughts
It’s a good thing that we can ban malicious bots that try to make purchases, leave comments, or create fake accounts. However, it is a scary thought that AI is outsmarting us so quickly at recognizing text and objects in images. What if it starts to outsmart us in our (website) behavior as well? Furthermore, tracking our behavior raises questions about privacy. Can our information just be used to improve AI? And how is our information used? Is it all for profit, like selling self-driving cars? Or is it used for the public good, like reducing traffic accidents? It isn’t that black and white, but we should keep an eye on it.
Want to Learn More?
Do you want to read, watch, or hear more about this subject? Have a look at the links below.