Dec 13, 2020
How software algorithms used in organizational decision-making affect minority populations, and how their potential harms can be contained, will be an increasingly important question for humankind to resolve as we embrace our AI future.
These critical issues were brought into even sharper focus earlier this month with the publication of a new report by the Center for Democracy & Technology entitled “Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”
Looking beyond just the employment sphere, a dedicated panel discussion at last week’s Sight Tech Global conference explored other important areas for people with disabilities impacted by algorithmic decision-making, such as the administration of welfare benefits, education and the criminal justice system.
The key messages emerging from both the panel discussion and the report convey a stark, unanimous warning.
Disability rights risk being eroded as they become entangled within wider society’s drive to achieve greater efficiency through the automation of processes that once required careful human deliberation.
This is dangerous for disabled people due to an inescapable tension between the way algorithmic tools work and the lived experience of many people with disabilities.
By their very nature, algorithms rely on large data sets that are used to model the normative, standardized behavior of majority populations.
The lived experience of disabled people naturally sits on the margins of big data. It also remains intrinsically difficult to reflect disabled people’s experiences through population-level modeling due to the individualized nature of medical conditions and prevailing socio-economic factors.
Jutta Treviranus is Director of the Inclusive Design Research Centre and contributed to a panel discussion at Sight Tech Global entitled “AI, Fairness and Bias: What technologists and advocates need to do to ensure that AI helps instead of harms people with disabilities.”
“Artificial intelligence amplifies, automates and accelerates whatever has happened before,” said Treviranus at the virtual conference.
“It’s using data from the past to optimize what was optimal in the past. The terrible flaw with artificial intelligence is that it does not deal with diversity or the complexity of the unexpected very well,” she continued.
“Disability is a perfect challenge to artificial intelligence because, if you’re living with a disability, your entire life is much more complex, much more entangled and your experiences are always diverse.”
Algorithm-driven hiring tools in recruitment
The use of algorithm-based assessment tools in recruitment is a particular pain point for the disability community. Estimates suggest the employment rate for people with disabilities in the U.S. stands at around 37%, compared to 79% for the general population.
Algorithm-driven hiring tools may involve several different exercises and components. These may include candidates recording videos for the assessment of facial and vocal cues, resume-screening software that flags red flags such as long gaps between periods of employment, and gamified tests to evaluate reaction speed and learning styles.
Algorithm-driven software is also marketed as being able to identify less tangible but potentially desirable characteristics in candidates, such as optimism, enthusiasm, personal stability, sociability and assertiveness.
Of course, outright platform inaccessibility is the immediate concern that springs to mind when considering interactions with disabled candidates.
It is entirely valid to wonder how a candidate with a vision impairment might access a gamified test involving graphics and images, how a candidate with motor disabilities might move a mouse to answer multiple-choice questions, or how an individual on the autism spectrum might react to an exercise in reading facial expressions from static photos.
Indeed, the Americans with Disabilities Act specifically prohibits the screening out of candidates with disabilities through inaccessible hiring processes or ones that do not measure attributes directly related to the job in question.
Employers may themselves think they are helping disabled candidates by removing traditional human bias and outsourcing the assessment to an apparently “neutral” AI.
This, however, is to set aside the fact that the tools have most likely been designed by able-bodied, white males in the first place.
Furthermore, approval criteria are often modeled on the pre-determined positive traits of an organization’s currently successful employees.
If the workforce lacks diversity, this is simply reflected back into the algorithm-based testing tool.
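The mechanism is easy to see in miniature. The sketch below is a hypothetical toy model, not any vendor’s actual system: it derives an “ideal candidate” profile by averaging traits of current employees, then scores applicants by how closely they match it. The feature names (`employment_gap_months`, a video-interview `eye_contact` metric) are illustrative assumptions chosen because both can penalize disabled candidates.

```python
# Toy illustration of approval criteria "modeled on current employees."
# All features and values are hypothetical.

def train_profile(employees):
    """Average each trait across the current workforce to form the 'ideal' profile."""
    keys = employees[0].keys()
    return {k: sum(e[k] for e in employees) / len(employees) for k in keys}

def score(candidate, profile):
    """Higher score = closer to the existing-employee average (negative L1 distance)."""
    return -sum(abs(candidate[k] - profile[k]) for k in profile)

# A homogeneous workforce: no employment gaps, uniform nonverbal-cue metric.
workforce = [
    {"employment_gap_months": 0, "eye_contact": 0.90},
    {"employment_gap_months": 1, "eye_contact": 0.85},
]
profile = train_profile(workforce)

# A qualified candidate whose disability produced a career gap and different
# nonverbal cues is penalized purely for diverging from the workforce average.
typical_candidate = {"employment_gap_months": 0, "eye_contact": 0.90}
disabled_candidate = {"employment_gap_months": 18, "eye_contact": 0.30}

assert score(typical_candidate, profile) > score(disabled_candidate, profile)
```

Nothing in the model encodes job competence; it only rewards resemblance to the people already hired, which is precisely how a non-diverse workforce is reflected back into the tool.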
By developing an over-reliance on these tools without understanding the pitfalls, employers run the very real risk of sleepwalking into the promotion of discriminatory practices at an industrial scale.
Addressing this point specifically, the report’s authors note, “In the end, the individualized analysis to which candidates are legally entitled under the ADA may be fundamentally in tension with the mass-scale approach to hiring embodied in many algorithm-based tools.”
“Employers must think seriously about not only the legal risks they may face from deploying such a tool, but the ethical, moral, and reputational risks that their use of poorly-conceived hiring tools will compound exclusion in the workforce and in broader society.”
During the Sight Tech Global panel discussion, Lydia X. Z. Brown, Policy Counsel for the Center for Democracy & Technology’s Privacy and Data Project, was asked whether algorithm-driven assessment tools really do represent a distinctly modern form of disability discrimination.
“Algorithm discrimination highlights existing ableism, exacerbates and sharpens existing ableism and only shows different ways for ableism that already existed to manifest,” responded Brown.
Brown later continued, “When we talk about ableism in that way, it helps us understand that algorithmic discrimination doesn’t create something new, it builds on the ableism and other forms of oppression that already existed throughout society.”
Yet, it is the scale and pace at which automation can further seed and embed discrimination that must be of greatest concern.
Building a more inclusive AI future
The CDT report does make some recommendations around the creation of more accessible hiring practices.
The key leap for organizations is to first develop an understanding of the inherent limitations of these tools for assessing individuals with varied and complex disabilities.
Once this reality-check takes hold at a leadership level, organizations can begin to proactively initiate policies to offset the issues.
This may start with a deep-dive into what these tests are actually measuring. Are positive but vague qualities such as “optimism” and “high self-esteem,” as elicited by a snapshot test, truly essential for the position advertised?
Through understanding and appropriately discharging their legal responsibilities, employers should seek to educate and inform all candidates on the specific details of what algorithmic tests involve.
It is only by communicating these details that candidates will be able to make an informed choice around accessibility.
For candidates who proceed with the test, organizations should be energetic in their data collection on accessibility issues.
For candidates who fear an algorithm may unfairly screen them out, a suite of alternative testing models should readily be made available without any implied stigma.
Finally, it should be incumbent on software vendors to keep accessibility at the forefront of the initial design process.
This can be further bolstered by more stringent regulation in this area, but the most useful measure vendors might adopt right now is to co-design alongside disabled people and take account of their feedback.
The simple truth is that AI isn’t just the future. It’s here already, and its presence is rapidly extending into every facet of human existence.
The destination may be set, but there is still time to modify the journey and, through best practice, take the more direct shortcuts to inclusion rather than the long road of learning from mistakes that risk leaving people behind.