Two-factor authentication (2FA) is something of a buzzword at the moment, its profile raised by Google introducing it as an optional way to increase the security of user accounts (including Gmail).
What is it?
One-factor authentication requires a single step to verify your identity, such as knowing your username and password. 2FA provides another layer of protection against hackers by also requiring something you have (in Google’s case, your smartphone).
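Google’s phone factor is typically a time-based one-time password (TOTP): the phone and the server share a secret provisioned at setup, and both derive a short code from it every 30 seconds. A minimal sketch of the standard algorithms involved (HOTP from RFC 4226 and TOTP from RFC 6238), not Google’s actual server code:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte choose an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP keyed to the clock."""
    return hotp(secret, int(time.time()) // timestep, digits)
```

Because both sides compute the same code independently, an attacker who steals your password alone still cannot log in; they would also need the current code from the device holding the secret.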
Two-factor authentication is common in secure physical workplaces, where in addition to needing passcodes/door codes etc. (i.e. what you know), employees are required to carry a smartcard, USB thumb drive, or similar physical object to prove what they have.
The requirement for both a bank card and a PIN when using an ATM is another good, commonly encountered example of two-factor authentication.
By requiring proof of ‘what you know’ and ‘what you have’, two-factor authentication greatly improves security.
The more layers of authentication used, the more secure a system is, so some highly secure systems add a ‘who you are’ component. At its most basic this can be a photo ID, but more sophisticated methods such as fingerprint, retina pattern, handwriting style, voice pattern recognition, etc. are becoming increasingly common.
The biggest problem with 2FA is that it’s an added hassle, and in a world where ‘password’ and ‘123456’ are the most commonly used passwords, many users simply can’t be bothered with it.
While much more secure than one-factor authentication, 2FA (and 3FA for that matter) is still vulnerable to man-in-the-middle, man-in-the-browser, keylogging, and other well-known hacking attacks. Each authentication factor added, however, makes such attacks much less likely to succeed.