The IRS/ID.me Debacle: A Teachable Moment for Tech



Last year, when the Internal Revenue Service (IRS) signed an $86 million contract with ID.me, an identity verification provider, to provide biometric identity verification services, it was a huge vote of confidence for this technology. Taxpayers could now verify their identities online using facial biometrics, a move designed to make the administration of federal tax affairs more secure.

However, after loud opposition from privacy groups and bipartisan lawmakers voicing privacy concerns, the IRS reversed course in February and abandoned the plan. Critics objected to the requirement that taxpayers submit their biometrics in the form of a selfie as part of the new identity verification program. Since then, the IRS and ID.me have offered additional options that give taxpayers the choice of using ID.me’s facial biometric service or verifying their identity through a live, virtual video interview with an agent. While this move may appease the parties that have expressed concerns, including Senator Jeff Merkley (D-OR), who introduced the No Facial Recognition at the IRS Act (S. 3668) at the height of the debate, the highly public misunderstanding of the IRS’s deal with ID.me has marred public opinion of biometric authentication technology and raised bigger questions for the cybersecurity industry as a whole.

While the IRS has since agreed to offer ID.me’s facial biometric technology as an identity verification method that taxpayers can opt out of, confusion persists. The high-profile complaints against the IRS deal have unnecessarily weakened public confidence in biometric authentication technology, at least for now, and fraudsters are no doubt relieved. Still, there are lessons for both government agencies and technology providers to consider as the ID.me debacle fades in the rearview mirror.

Don’t underestimate the political value of a controversy

This recent controversy highlights the need for better education and understanding of the nuances of biometric technology: the difference between one-to-one facial matching and one-to-many facial recognition, the use cases and potential privacy issues arising from these technologies, and the regulations needed to better protect consumer rights and interests.

For example, there is a huge difference between using biometrics with the user’s explicit informed consent for a single, one-time purpose that benefits the user, such as identity verification and authentication to protect the user’s identity from fraud, and scraping biometric data without consent or using it for unauthorized purposes such as surveillance or even marketing. Most consumers do not understand that their facial photos on social media and other internet sites can be collected into biometric databases without their express consent. When platforms such as Facebook or Instagram do disclose such activity, the disclosure is usually buried in the privacy policy, described in terms that are incomprehensible to the average user. As the ID.me episode shows, companies deploying biometric technology must be required to educate users and obtain explicit informed consent for the use case they enable.

In other cases, different biometric technologies that appear to perform the same function are simply not created equal. Benchmarks such as the NIST Face Recognition Vendor Test (FRVT) provide a rigorous evaluation of biometric matching technologies and a standardized way to compare their accuracy and their ability to avoid problematic performance biases across demographic characteristics such as skin color, age or gender. Biometric technology companies must be held accountable not only for the ethical use of biometrics, but also for equitable performance across the entire population they serve.
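To make the demographic-equity point concrete, benchmarks like FRVT report, among other metrics, the false non-match rate (FNMR) each demographic group experiences on genuine comparisons: the fraction of same-person pairs the system wrongly rejects. The sketch below is a minimal, hypothetical illustration of that per-group calculation (the function name and data layout are my assumptions, not FRVT’s actual methodology):

```python
from collections import defaultdict

def fnmr_by_group(trials):
    """Compute the false non-match rate (FNMR) per demographic group.

    Each trial is a (group, matched) tuple for a *genuine* comparison,
    i.e., both samples come from the same person. FNMR for a group is
    the fraction of those genuine pairs the system failed to match.
    """
    totals = defaultdict(int)   # genuine comparisons seen per group
    misses = defaultdict(int)   # genuine comparisons wrongly rejected
    for group, matched in trials:
        totals[group] += 1
        if not matched:
            misses[group] += 1
    return {g: misses[g] / totals[g] for g in totals}
```

A large gap between groups (say, an FNMR of 0.1 for one group and 0.2 for another) is exactly the kind of inequity such benchmarks exist to surface.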

Politicians and privacy activists place high demands on vendors of biometric technology. And they should: the stakes are high and privacy matters. As such, these companies need to be transparent, clear and, perhaps most importantly, proactive in communicating the nuances of their technology to those audiences. One misinformed, fiery speech by a politician trying to win hearts during a campaign can undermine otherwise consistent and focused consumer education. Sen. Ron Wyden, a member of the Senate Finance Committee, proclaimed, “No one should be forced to submit to facial recognition to access critical government services.” In doing so, he mischaracterized one-to-one facial matching as facial recognition, and the damage was done.

Perhaps Senator Wyden didn’t realize that millions of Americans submit to facial matching every day when using critical services: at the airport, at government agencies and in many workplaces. But by failing to address this misunderstanding from the start, ID.me and the IRS allowed the public to be misinformed and the agency’s use of facial biometric technology to be cast as unusual and nefarious.

Honesty is a business necessity

Despite a deluge of misinformation from third parties, ID.me’s response was late and muddled, if not misleading. In January, CEO Blake Hall said in a statement that ID.me does not use 1:many facial recognition technology, the comparison of one face against others stored in a central repository. Less than a week later, in the latest of a series of inconsistencies, Hall recanted, stating that ID.me does use 1:many facial recognition, but only once, during enrollment. An ID.me engineer referenced that incongruity in a prescient Slack channel post:

“We could disable the 1:many face search, but lose a valuable anti-fraud tool. Or we can change our public stance on using 1:many face search. But it looks like we can’t keep doing one thing and saying another, because we’re definitely going to be in hot water.”
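The 1:1 versus 1:many distinction at the heart of this dispute can be sketched in code. In a minimal embedding-based model (the names, threshold and vectors below are illustrative assumptions, not ID.me’s actual system), 1:1 verification compares a probe face only against the single identity the user claims, while 1:many identification searches an entire central gallery:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe, claimed_template, threshold=0.6):
    """1:1 verification: does the probe match the one claimed identity?"""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify_1_to_many(probe, gallery, threshold=0.6):
    """1:many identification: search a central gallery of enrolled faces
    and return the best-matching identity above the threshold, if any."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The privacy stakes differ accordingly: 1:1 verification needs only the one enrolled template the user has claimed, while 1:many search requires maintaining, and querying, a central repository of faces, which is precisely the capability the engineer’s post describes.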

Transparent and consistent communication with the public and key influencers, using print and digital media and other creative channels, will help combat misinformation and provide assurance that facial biometric technology, when used with explicit informed consent to protect consumers, is more secure than legacy alternatives.

Get ready for regulation

Rampant cybercrime has led to more aggressive state and federal legislation, placing policymakers at the center of the push-pull between privacy and security, and they must act from there. State officials may argue that their legislative efforts are fueled by a commitment to their constituents’ safety, security and privacy, but Congress and the White House must decide what sweeping regulations will protect all Americans from the current cyberthreat landscape.

There is no shortage of regulatory precedents to refer to. The California Consumer Privacy Act (CCPA) and its European forerunner, the General Data Protection Regulation (GDPR), model how to ensure that users understand the types of data organizations collect from them, how it is used, the measures available to monitor and manage that data, and how to opt out of data collection. To date, officials in Washington have left the data protection infrastructure to the states. The Biometric Information Privacy Act (BIPA) in Illinois, along with similar laws in Texas and Washington state, regulates the collection and use of biometric data. These laws require organizations to obtain consent before collecting or disclosing a person’s likeness or biometric data. They must also store biometric data securely and destroy it in a timely manner. BIPA imposes fines for violating these rules.

If legislators draft and pass a law that combines the principles of CCPA and GDPR regulation with the biometric-specific rules set out in BIPA, there could be greater confidence around the security and convenience of biometric authentication technology.

The future of biometrics

Biometric authentication providers and government agencies must be good shepherds of the technology they offer, and acquire, especially when it comes to educating the public. Some hide behind the apparent fear of giving cybercriminals too much information about how the technology works. But the fortunes of these companies rest on the success of every implementation, and wherever there is a lack of communication and transparency, there will be opportunistic critics eager to publicly misrepresent facial biometric technology to promote their own agendas.

While multiple lawmakers have portrayed facial recognition and biometrics companies as bad actors, they have missed the opportunity to go after the real culprits: cybercriminals and identity thieves.

Tom Thimot is CEO of authID.ai.



