A CAPTCHA (an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”) is a type of challenge-response test used in computing to determine whether the user is a human or a computer. As computing becomes pervasive and computerized tasks and services become commonplace, the need for increased security has driven the development of this way for computers to ensure that they are dealing with humans in situations where human interaction is essential to security. Activities such as online commerce transactions, search engine submissions, Web polls, Web registrations, free e-mail service registration and other automated services are subject to software programs, or bots, that mimic human behavior in order to skew the results of an automated task or perform malicious activities, such as harvesting e-mail addresses for spamming or ordering hundreds of tickets to a concert.
To validate a digital transaction using the CAPTCHA system, the user is presented with a distorted word, typically placed on top of a distorted background, and must type the word into a field to complete the process. Computers have a difficult time decoding the distorted words, while humans can decipher the text easily. Some CAPTCHAs now use pictures instead of words: the user is presented with a series of pictures and asked what the common element among all of them is. By entering that common element, the user validates the transaction and the computer knows it is dealing with a human rather than a bot.
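The challenge-response flow described above can be sketched in a few lines of server-side code. This is a minimal illustration, not any particular CAPTCHA implementation: the word list, token store, and function names are assumptions for the example, and a real system would render the chosen word as a distorted image rather than returning it directly.

```python
import secrets

# Hypothetical word list; a real service would draw from a large pool
# and render each word as a distorted image.
WORDS = ["overlook", "inquiry", "lantern", "harvest"]

_pending = {}  # maps a one-time token to the expected answer


def issue_challenge():
    """Pick a challenge word and pair it with a single-use token."""
    word = secrets.choice(WORDS)
    token = secrets.token_hex(16)
    _pending[token] = word
    # The server would send the token plus a distorted rendering of `word`.
    return token, word


def verify(token, response):
    """Check the user's typed answer; the token is consumed either way."""
    expected = _pending.pop(token, None)
    return expected is not None and response.strip().lower() == expected


token, word = issue_challenge()
print(verify(token, word))   # a correct answer validates the transaction
print(verify(token, word))   # the token cannot be replayed by a bot
```

Consuming the token on every attempt, correct or not, is what prevents a bot from replaying a previously solved challenge or brute-forcing answers against one token.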
The word public in the term refers to the fact that the algorithm used is made public rather than kept secret. The idea is that breaking the security of a CAPTCHA requires advances in artificial intelligence; discovering the algorithm itself does not defeat the security measure. The term was coined by Luis von Ahn, Manuel Blum and Nicholas J. Hopper of Carnegie Mellon University, and John Langford of IBM, in 2000.