
Adobe Files Patent for AI-Powered ‘Diversity Auditing’ System

By Michelle Hawley
Adobe wants to bring AI to the diversity auditing world with a new patent application. How would it work and what are the implications for the workplace?

Adobe wants to bring AI into the realm of diversity, equity and inclusion (DEI) with a new patent application for its AI-based “diversity auditing” system.

The system uses facial detection and image classification to analyze photos of people, categorize those people based on attributes like age, gender and race, and generate a “diversity score.”

How does the tool work and what potential applications does it have for the workplace? 

How Adobe’s Diversity Auditing System Works 

As first reported by Patent Drop, Adobe’s “System and Methods for Diversity Auditing,” laid out in patent application No. 20230267764, published August 24, scans a multitude of images to detect faces and then classifies those faces based on predicted “sensitive attributes.”

Sensitive attributes, according to Adobe, are “characteristics that can describe a person’s appearance, or related to a protected class of individuals.” Examples given include race, gender identity and age.

The system, for instance, could classify images in an internal employee database. It then calculates a diversity score for the set of images using machine learning, comparing the classified images to a comparison population — for example, census data, polling results or employment data.
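The patent does not disclose the exact scoring formula, but the comparison it describes can be sketched roughly: estimate an attribute distribution from the classified images, then measure how closely it matches a reference population such as census data. The metric below (one minus total variation distance) and all names are illustrative assumptions, not Adobe's implementation.

```python
from collections import Counter

def diversity_score(predicted_attributes, reference_distribution):
    """Illustrative diversity score: compare the attribute distribution
    predicted from a set of images against a reference population.

    The metric (1 minus total variation distance) is an assumption for
    illustration; the patent does not disclose the actual formula.
    """
    counts = Counter(predicted_attributes)
    total = sum(counts.values())
    observed = {label: n / total for label, n in counts.items()}
    # Total variation distance between observed and reference distributions
    categories = set(observed) | set(reference_distribution)
    tvd = 0.5 * sum(
        abs(observed.get(c, 0.0) - reference_distribution.get(c, 0.0))
        for c in categories
    )
    return 1.0 - tvd  # 1.0 means the distributions match exactly

# Example: gender labels predicted by a classifier vs. hypothetical census data
labels = ["female", "male", "male", "female", "male", "nonbinary"]
census = {"female": 0.51, "male": 0.48, "nonbinary": 0.01}
score = diversity_score(labels, census)
```

A set of images whose predicted attributes mirror the comparison population would score near 1.0; a homogeneous set would score lower.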

FIG. 7 depicts an example of diversity auditing as described in the patent.
  

In the patent filing, Adobe said the system could audit a set of images to determine if the people included are “representative of a certain level of diversity in one or more sensitive attributes.” 

The system also has the ability to “augment” the set of images to “increase diversity” using additional retrieved images “until a certain threshold of diversity is met.” 
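That augmentation step can be sketched in the same spirit: keep retrieving additional images until a diversity threshold is met. Everything below — the function names, the toy scoring and retrieval logic, and the threshold — is a hypothetical illustration; the patent does not specify how retrieval or scoring is implemented.

```python
def augment_to_threshold(images, score_fn, retrieve_fn, threshold=0.9, max_rounds=10):
    """Hypothetical augmentation loop in the spirit of the patent filing:
    add retrieved images until the diversity score meets a threshold."""
    rounds = 0
    while score_fn(images) < threshold and rounds < max_rounds:
        # Fetch more images (e.g., of underrepresented groups) and retry
        images = images + retrieve_fn(images)
        rounds += 1
    return images

# Toy stand-ins: labels "a"/"b" represent classified images of two groups
def toy_score(imgs):
    # Balance score: fraction of the set belonging to the minority label
    return min(imgs.count("a"), imgs.count("b")) / len(imgs)

def toy_retrieve(imgs):
    # Retrieve one image of whichever label is underrepresented
    return ["b"] if imgs.count("b") < imgs.count("a") else ["a"]

result = augment_to_threshold(["a", "a", "a", "b"], toy_score, toy_retrieve, threshold=0.4)
```

The `max_rounds` cap guards against the case where retrieval can never push the score past the threshold.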

Related Article: Is DEI Sustainable in the Workspace?

Bias in Stock Images Presents Challenges

The inventors view their offering as an alternative to conventional diversity auditing systems, which rely on manual identification of images. “This manual approach does not scale to large image sets,” they write in the patent filing.

A 2022 paper titled “Generating and Controlling Diversity in Image Search” dove further into the ideas that spurred the invention.

Generations of systemic biases, the authors wrote, have led to some professions being more common among certain genders and races. That bias, they continued, is reflected in image searches and stock image repositories, presenting a challenge to content providers.

“The pursuit of a utopian world demands providing content users with an opportunity to present any profession with diverse racial and gender characteristics,” they wrote.  

Adobe’s Plans for Diversity Auditing System Unknown

Adobe has yet to outline its full plans for its diversity auditing system, and did not respond to a request for comment. 

Patent and research documentation point toward potential new augmented image capabilities. Right now, for example, the researchers say finding an image of a “male Asian administrative assistant” might produce limited results. Adobe’s system could open the door for more diverse representation in images. 

Use cases could also potentially exist in the human resources realm for companies looking to prioritize diversity, equity and inclusion in various ways. 

“There is a growing concern that the algorithmic tools used by HR professionals might make discriminatory decisions or conceal existing discrimination,” said Juan Pastor Merchante, an employment lawyer qualified to practice in Spain at Freshfields Bruckhaus Deringer.

However, he said, “Diversity auditing is in fact one of the instruments that is preferred by lawmakers to ensure that such tools conform to legal requirements under employment discrimination laws.” He pointed to New York City Local Law 144, which mandates that a “bias audit” be conducted on automated employment decision tools prior to their use.

“But what if the audit tool exacerbates or conceals the algorithmic discrimination it was designed to fight against?” he asked. 

Related Article: NYC's New AI Bias Law Is in Effect. Here's What It Entails

Adobe’s DEI Patent Poses Potential Problems 

AI is promising when it comes to processing extremely large data sets, and facial recognition continues to outperform the human eye, said Merchante.

And yes, he added, the technology has been shown to misidentify people based on race and gender. “Just like any other AI tool used in the HR/employment context, it is conceivably possible that Adobe’s tool might be biased against employees of protected classes.” 


Fortunately, based on the information available right now, he said, it doesn’t seem like Adobe’s new tool will be used to make direct employment decisions, like hiring and firing. But he still sees three distinct issues that could arise.  

The first, which could happen with a flawed scoring tool, is a false sense of comfort that a workforce is diverse when it really is not. In this case, said Merchante, companies might fail to adopt remedial action and incur legal liability for breach of employment discrimination laws. 

The second issue is transparency, or the “black box problem.” It’s our inability to determine how deep learning systems make their decisions. Even the developers of these algorithms are left in the dark. As a result, said Merchante, “Adobe might have a hard time explaining how the tool comes up with the scores it generates.” 

Last is the privacy issue. “Employees might refuse to disclose their ethnicity or gender. How is this observed when the tool relies on facial recognition?” he asked. And, he added, the system seems unlikely to address gender identity issues. “How is this factored into the diversity score?”

Related Article: Report Reveals Workers Most ‘Exposed’ to AI

Adobe’s AI Plans Unlikely to Face Regulation 

Ultimately, said Merchante, from the standpoint of US law, the use of AI in employment is largely unregulated. 

Some local and state legislatures have passed laws to regulate this technology, but the comprehensive Algorithmic Accountability Act proposed in Congress in 2022 did not pass into law, he explained. 

“Adobe can likely launch and market its AI-powered diversity auditing tool in the US without being subject to significant legal obligations or regulatory scrutiny.”

About the Author
Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As editorial director at Simpler Media Group, she oversees the day-to-day operations of VKTR, covering the world of enterprise AI and managing a network of contributing writers. She's also the host of CMSWire's CMO Circle and co-host of CMSWire's CX Decoded. With an MFA in creative writing and a background in both news and marketing, she offers unique insights on the topics of tech disruption, corporate responsibility, changing AI legislation and more. She currently resides in Pennsylvania with her husband and two dogs.

Main image: Adobe Stock