BankInfoSecurity.com reported that “The U.S. Department of Homeland Security said it will eschew biased artificial intelligence decision-making and facial recognition systems as part of an ongoing federal effort to promote ‘trustworthy AI.’” The September 18, 2023 article entitled “US DHS Announces New AI Guardrails” (https://www.bankinfosecurity.com/us-dhs-announces-new-ai-guardrails-a-23106) included these comments:
U.S. Secretary of Homeland Security Alejandro Mayorkas also announced that departmental CIO Eric Hysen will serve as the “chief AI officer” while staying on in his original position.
Fighting biased outcomes can be more difficult than it might appear, since AI systems may be used to discriminate even without obviously biased inquiries. Closing off obvious proxies for characteristics such as ethnicity, such as prompts based on ZIP code and income, still leaves open the possibility that the same intentionally biased results could be obtained through inquiries based on other factors. AI is effective because it draws on vast pools of data, allowing computers to make connections between seemingly unrelated data points. With enough data, there is no need for a close proxy such as ZIP code.
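To make the proxy problem concrete, here is a minimal synthetic sketch. All feature names, numbers, and the data itself are invented for illustration; the point is only that two features which are each weak stand-ins for a protected attribute can, in combination, reconstruct it fairly reliably, even with ZIP code and income removed entirely.

```python
# Hypothetical illustration: no ZIP code or income appears anywhere below,
# yet a simple combination of "innocuous" features recovers a protected
# attribute. All data is synthetic; feature names are invented.
import random

random.seed(42)

def make_record():
    # Synthetic protected attribute (0 or 1); it is never given to the model.
    group = random.randint(0, 1)
    # Two invented features, each only loosely correlated with the group.
    shopping_score = group * 0.6 + random.gauss(0, 0.5)
    commute_minutes = 20 + group * 8 + random.gauss(0, 6)
    return group, shopping_score, commute_minutes

data = [make_record() for _ in range(5000)]

def accuracy(predict):
    # Fraction of records where the rule guesses the hidden group correctly.
    return sum(predict(s, c) == g for g, s, c in data) / len(data)

# Each feature alone is a mediocre predictor of the protected attribute...
acc_shopping = accuracy(lambda s, c: 1 if s > 0.3 else 0)
acc_commute = accuracy(lambda s, c: 1 if c > 24 else 0)

# ...but a centered, variance-scaled combination of the two does noticeably
# better, with no close proxy like ZIP code anywhere in the inputs.
acc_combined = accuracy(
    lambda s, c: 1 if ((s - 0.3) / 0.5 + (c - 24) / 6) > 0 else 0
)

print(f"shopping alone: {acc_shopping:.2f}")
print(f"commute alone:  {acc_commute:.2f}")
print(f"combined:       {acc_combined:.2f}")
```

A real model trained on thousands of such features would not need anyone to hand-craft the combination; it would find it on its own, which is exactly why removing the obvious proxies is not enough.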
Critics of facial recognition have also questioned whether human review will be sufficient to prevent wrongful arrests based on facial recognition matches. The New York Times reported in August that six individuals have reported being falsely accused of a crime as a result of a facial recognition search that matched a photo of an unknown offender to a photo in a database. “You’ve got a very powerful tool that, if it searches enough faces, will always yield people who look like the person on the surveillance image,” a psychology professor told the Times.
What do you think?
First published at https://www.vogelitlaw.com/blog/will-new-ai-guardrails-work-for-us-dhs