
Discrimination by computer algorithm: recruitment

Disclaimer – please read
This page does not apply outside Great Britain.
Last updated 7th January 2021.

Artificial intelligence (AI) can have huge benefits, but may also be discriminatory. There is a danger that, when used in recruitment, it may discriminate against disabled people.

Summary

  • AI software to grade job candidates may be trained on “normal” people without disabilities. Even if efforts are made to make the software non-discriminatory for sex, ethnic origin etc, doing this for disability may be much more difficult, given the wide range of different disabilities.
  • An example of AI in recruitment is Recorded video recruitment, where AI assesses a recorded interview done on the job applicant’s computer or smartphone.
  • It seems very possible that AI software which takes into account eg speech patterns, tone of voice, and facial movements may mark people down because of their stammer and accompanying behaviours: see below How might this type of AI mark down people who stammer?
  • You could ask for reasonable adjustments such as being waved through the screening stage, or having a screening interview either by live video or face-to-face. Other types of Equality Act claim are also relevant.
  • The attitude of tribunals to this type of case is unclear. A particular issue for claimants is likely to be getting enough evidence that the AI marks them down because of the consequences of their disability. I suggest some ways one might get Evidence of detriment related to stammering. I have also tried to suggest how burdens of proof might work in this area.
  • Though this page focuses on the Equality Act, I include some notes on the UK GDPR, which is relevant even after Brexit.
  • Other ways AI may be used in recruitment include analysing job applications or a job applicant’s social media profile.
  • There is a Links section below, but I would like to highlight particularly ai-lawhub.com, including AI in recruitment (ai-lawhub.com).

AI and algorithms

Artificial intelligence (AI) is based on algorithms. An algorithm is basically a set of rules the computer uses. The rules may be written by a human. However, increasingly, computers write the rules themselves – this is known as machine learning.

There are various kinds of machine learning. In supervised learning what is particularly important is the training data with desired outputs that you feed in. You might “provide a training data set with answers, such as a set of pictures of animals along with the names of the animals. The goal of that training would be a model that could correctly identify a picture (of a kind of animal that was included in the training set) that it had not previously seen” (Machine learning algorithms explained (infoworld.com)). From this training data, the algorithm creates a “model” which can hopefully identify animals in further pictures. You may not understand what the rules of the model are. You can test what results it produces though – how far it identifies animals correctly – and can train it further if need be. See also Machine learning made easy (thenewstack.io).
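
To make that idea more concrete, here is a minimal sketch of supervised learning in Python – the animals example above, but with invented numerical features and using the scikit-learn library. It is purely illustrative, and not how any particular recruitment product works.

```python
# A minimal, hypothetical sketch of supervised learning (invented data).
# The training set pairs feature rows with desired outputs (labels); "training"
# builds a model, which is then asked to label an example it has not seen before.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: each row is [weight_kg, has_whiskers, barks]
training_features = [
    [5, 1, 0],   # cat
    [30, 0, 1],  # dog
    [4, 1, 0],   # cat
    [25, 0, 1],  # dog
]
training_labels = ["cat", "dog", "cat", "dog"]

model = DecisionTreeClassifier().fit(training_features, training_labels)

# The learned rules are not written by a human, but we can test the model's output.
print(model.predict([[28, 0, 1]]))  # expected: ['dog']
```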

The model depends on what data is fed in to train it. With models which aim to grade job candidates, a key concern is that the data may be based on “normal” people without disabilities. The model may assume that characteristics which are underrepresented in successful candidates are undesirable. For example, it is reported that AI models developed by Amazon from data of a male-dominated tech industry learnt to mark down résumés which mentioned “women’s” clubs etc, and to mark up masculine language such as “executed” and “captured”: Amazon scraps secret AI recruiting tool that showed bias against women (reuters.com), 2018. This is even more of a problem with disability, where there are many different disabilities with different characteristics and each group is statistically small: How algorithmic bias hurts people with disabilities (slate.com), 2020.
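
The following is a rough, hypothetical sketch of how this can happen, again in Python with invented data: historic hiring decisions that disadvantaged a small group are fed in as training data, and the model learns to treat membership of that group as a negative signal in itself.

```python
# Hypothetical illustration with invented data: the training labels reflect past
# biased hiring, so the fitted model learns a negative weight for group membership
# even though the group feature says nothing about ability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # what the employer actually cares about
group = rng.binomial(1, 0.05, size=n)     # a small group, eg a particular disability
# Historic "hired" outcomes: driven by skill, but the group was marked down by 1.5
hired = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

features = np.column_stack([skill, group])
model = LogisticRegression().fit(features, hired)
print(model.coef_)  # the coefficient on `group` comes out strongly negative
```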

Data scientists refer to ways to produce “algorithmic fairness”: Human bias and discrimination in AI systems (ico.org.uk), 2019. However this is sometimes different from what is legally required under the Equality Act: see “Algorithmic unfairness” & the recent ICO consultation (ai-lawhub.com), 2020.

The above is as I understand it, but I’m no expert on how AI and algorithms work.

Recorded video recruitment

AI is being used by some companies in the UK to assess recorded video interviews. One company offering facilities for this is HireVue. There are different ways of doing it, but for example:

A job applicant sits in front of the computer and answers pre-set interview questions, or does it on their smartphone. There is no-one live at the other end. However the interview (video and audio) is recorded, and is analysed and scored by the AI software, providing a recommendation to the employer on whether to reject a candidate or proceed to the next stage. The employer can also watch the video.

HireVue is understood to use AI “to analyze these videos, examining speech patterns, tone of voice, facial movements, and other indicators” (ainowinstitute.org/disabilitybiasai-2019.pdf). HireVue discusses how it uses both language and non-verbal communication at Nonverbal communication in interview assessments (hirevue.com).

How might this type of AI mark down people who stammer?

We don’t know for sure how this or other AI software assessing recorded interviews will deal with stammering and behaviours which often accompany it – for example dysfluency, filler words (eg “eh”, “you see”), different tone of voice, facial expressions, lack of eye contact. It is likely to be different for different people who stammer. However it sounds more than possible that software which takes into account eg speech patterns, tone of voice, and facial movements may mark down people because of their stammer and accompanying behaviours.

One preparation guide for HireVue interviews (mergersandinquisitions.com) – not from HireVue – gives a list of “types of stupid mistakes you should avoid”. These include “stuttering” and various other things which people often do precisely because they stammer, for example:

  • not maintaining eye contact with your webcam the whole time;
  • using too many “filler words” (um, ah, uh, like, etc.) or stuttering when you speak;
  • using too many hand gestures or too much body language.

Also HireVue says (in the “Non-verbal communication” link above) that language is an important part of the analysis. Therefore presumably voice recognition is important. As I discuss under Voice recognition telephone systems, many people who stammer find these systems cannot understand them. Also the language a person who stammers uses may be distorted by avoiding words they have difficulty saying – particularly in the unusual and stressful situation of an interview, and particularly where they fear the software will mark them down if they stammer.

Furthermore, stammering and related behaviours may be more severe in this type of video interview, so that it does not accurately reflect how you will communicate in the job: Assessment of oral skills in recruitment.

HireVue, to continue with that example, says it works to find and eliminate factors that cause bias (hirevue.com), and tests for adverse impact (hirevue.com), including for disability. However the “four-fifths rule” which it describes using to check for adverse impact is based on US law, not the law of the UK or EU (page 31 of Regulating for an equal AI: A new role for equality bodies (pdf, equineteurope.org)). More fundamentally, different disabilities are very different from each other, and there is no indication which if any individual disabilities the software has been tested on for discriminatory effects (see How algorithmic bias hurts people with disabilities (slate.com), 2020, on difficulties with testing for different disabilities).
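
For what it is worth, the four-fifths rule itself is a simple ratio test. Here is a minimal sketch in Python, with invented figures, just to show what the check involves:

```python
# The US "four-fifths rule": adverse impact is flagged if a group's selection
# rate is less than 80% of the selection rate of the most-selected group.
# The figures below are invented for illustration.
def four_fifths_check(group_rate: float, highest_rate: float) -> tuple[float, bool]:
    ratio = group_rate / highest_rate
    return ratio, ratio < 0.8

# Invented example: 10% of candidates who stammer pass the AI screen,
# against 25% of other candidates.
ratio, adverse_impact = four_fifths_check(0.10, 0.25)
print(ratio, adverse_impact)  # 0.4 True – flagged as adverse impact
```

As the slate.com piece above notes, disability groups are small and varied, so there may simply not be enough candidates in each group for a statistical check like this to be meaningful.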

Therefore, has the AI model the employer is using been tested specifically with people who stammer/stutter, and with what results? Even within stammering, what about different types and severity of symptoms?

  • A person with a mainly covert stammer, sounding largely fluent but using ‘tricks’ such as switching words, limiting answers to what they can say, using “filler” words, may give one kind of AI result (potentially negative), whereas
  • a person who stammers more openly and frequently may give a different kind of AI result (though also potentially negative).

Also within those broad subgroups of people who stammer, there may be wide variation between individuals.

One American report, Expanding employment success for people with disabilities (pdf, benetech.org), 2018, claims that HireVue’s method “massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as … speech disorders…”. It says that, in response to complaints from both AI and disability advocates, HireVue has suggested that employers allow applicants to opt out of using the HireVue tool. However the authors do not believe that many employers (in the US) do this. Even for those that do, the authors say it is unclear that these applicants are seriously considered.

HireVue say you can ask for more time or other adjustments (hirevuesupport.zendesk.com). More time may be helpful, but there is still the question of how the AI software will assess stammering and related behaviours, however long one is allowed.

Other ways AI may be used in recruitment

These include, for example:

  • analysing job applications;
  • analysing a job applicant’s social media profile.

There are some more examples in All the ways hiring algorithms can introduce bias (hbr.org).

This page does not deal specifically with those examples. Compared with AI-assessment of an interview, they have much less obvious potential to impact people who stammer, or perhaps people with other disabilities. However, one example of how there might be an impact would be if material being analysed mentions stammering or other disabilities:

I remember an AI exhibition at the Barbican in London, in 2019. There was an exhibit where you could type in a word, and an algorithm would generate loads of associated words. I typed in “stuttering” (it wouldn’t accept “stammering”, it was probably American). The machine generated a stream of words with largely negative associations.

Outside of recruitment, voice risk analysis (“lie detector”) technology, sometimes used in insurance claims by phone for example, is another use of algorithms that may discriminate against people who stammer.

Equality Act rights

Possible reasonable adjustments?

If you are concerned that an AI-assessed interview will not mark you fairly, or has not done so, you could ask for a “reasonable adjustment”. (There are also other relevant types of EqA claim). Steps which might be a reasonable adjustment include:

  • being waved through the screening stage;
  • having a screening interview either by live video (not computer-assessed) or face-to-face instead. On live video interviews see also Examples of reasonable adjustments: Recruitment>Telephone or video interviews;
  • possibly doing a recorded video interview but either with no computer assessment or with that assessment being disregarded. However it may not be possible to tell whether the employer has complied with this.

You could ask whether the software has been trained not to mark candidates down in relation to your disability, and if not or if (more likely) they don’t know, insist that you want your skills assessed in a way that doesn’t discriminate.

What is reasonable or proportionate under the Equality Act will depend on the facts. More generally, AI in recruitment is a fairly new area and it is uncertain how employment tribunals will approach important issues on it. If a disabled person brings a legal claim, a particular issue is likely to be how far there is evidence that the AI put them at a disadvantage related to their disability…

Evidence of detriment related to stammering

I summarise possible types of Equality Act claim below. However a key point on all of them may be: can the person show the AI software puts them at a detriment related to the stammer? (That is speaking very roughly; the exact legal test depends on the type of claim.) An AI model can be like a “black box”, at least to some extent. Things go in, other things come out, but no one quite knows what happens inside.

At this stage, we are not looking at whether the detriment is justified, eg whether the individual’s stammer and related behaviours mean the individual really does not have the skills for the job. It is for the employer to show the detriment is justified: see below Burdens of proof.

How could one get evidence of detriment related to the disability?

  • If the individual’s particular stammer and related behaviours affect characteristics which the particular AI software takes into account in its assessment (see the bullet points below on trying to get evidence of what these characteristics are), this may well suggest that the individual is at a disadvantage related to the stammer. Possible characteristics taken into account might be for example dysfluency/hesitation, filler words (eg “eh”, “you see”), tone of voice, facial expressions or movements, lack of eye contact, hand gestures and body movements, language used. A report from a speech and language therapist may be helpful as evidence of the detail of the individual’s stammering and related behaviours.
  • Information about what sort of speech, language and behaviour characteristics the software measures (and other useful information) may be volunteered by the employer to the job applicant, or in response to questions.
  • There may be requirements under the UK GDPR to include useful information in a privacy notice given to the job applicant, or to give it in response to a subject access request by the job applicant: below GDPR: Getting information about the AI software.
  • In the case of a public sector employer, information might be obtainable through a freedom of information request.
  • Information on the particular technology may be publicly available online, including on the website of the company marketing it, or any independent research on it. Information may also be publicly available through litigation or official complaints about it in other countries.
  • Information provided to unsuccessful candidates may suggest disadvantage related to the stammer.
  • Depending on the individual’s own stammer and related behaviours, points made in Recorded video recruitment and How might this type of AI mark down people who stammer? above may be suggestive that the software will put the person at a disadvantage.
  • If there is evidence that the AI software uses voice recognition (see above How might this type of AI mark down people who stammer?) it may be helpful if the individual can give evidence that he is commonly not understood by voice recognition software. However there could easily be other kinds of detriment even if he is understood by voice recognition software.
  • An employment tribunal will normally order each side to disclose to the other relevant documents in their control or possession. This may be helpful to allow the claimant to see documents the employer has on the AI software.
  • AI in recruitment (ai-lawhub.com), an excellent resource, suggests ways in which one could get evidence of detriment or disadvantage, particularly in the context of indirect discrimination. (Some points mentioned above come from there).

Types of Equality Act claim

Various types of Equality Act claim may be relevant, in broad terms:

  • failure to make reasonable adjustments (s.20 EqA);
  • discrimination arising from disability (s.15 EqA);
  • indirect discrimination (s.19 EqA).

In addition, public authorities are required to have due regard to the need to eliminate unlawful discrimination and advance equality of opportunity, under the Public Sector Equality Duty (PSED). One could ask whether they have considered disability equality in deciding to use the AI, and whether one can see their Equality Impact Assessment.

Burdens of proof

The section below may be rather too complicated. However I include it in case anyone finds the thoughts helpful.

In Evidence of detriment related to stammering above, I discuss some evidence that may help a disabled person show the AI software puts them at a detriment related to their stammer or other disability, for the purpose of the different types of claim outlined above. I would say the primary hurdle for the claimant is to produce evidence of the detriment related to their disability, as opposed to evidence that the detriment is unjustified or unreasonable – though doubtless a claimant will also produce any evidence they have of the latter. This is because when it comes to justification and reasonableness – such things as whether the individual’s stammer and related behaviours mean they really do not have the skills for the job, or whether use of the AI (including for the claimant) is well-grounded in evidence and reasonable – the burden of proof should fall primarily on the employer, as outlined in the next sub-heading.

Justification/reasonableness and burden of proof

Speaking in very broad terms, once the claimant has put forward enough evidence, particularly of the software putting him at a detriment in relation to his disability, then the burden of proof shifts to the employer:

  • In a claim under s.15 (discrimination arising from disability) or s.19 (indirect discrimination), the burden is on the employer to show that the rejection of the candidate (under s.15), or the PCP such as the use of that software (under s.19), was a proportionate means of achieving a legitimate aim: see Objective justification defence. This generally includes, for example, the employer having to show that its aim could not reasonably have been achieved through less discriminatory means – though it is not enough just to show that. (Note that in a s.19 claim, before the employer has to show justification, the claimant needs to put forward enough evidence of particular disadvantage for people with the same disability as well as for himself, but that does not apply to a claim under ss.15 or 20.)
  • In a reasonable adjustments claim (s.20), once a claimant (who has established that a PCP puts him at a substantial disadvantage in comparison with non-disabled people) suggests a potentially reasonable adjustment, such as waiving the screening interview or having a live interview, then under s.136 EqA it is for the employer to show that is not reasonable: Reasonable adjustment rules: employment>Burden of proof.

Looking briefly at some points relevant to deciding whether the employer has shown justification or reasonableness:

  • AI in recruitment (ai-lawhub.com) says there is much evidence suggesting that facial recognition technology (FRT) does not accurately identify the best candidates. In any event, the burden should be on the employer to put forward evidence of the software’s accuracy.
  • Even if generally the software is valuable to help identify the best candidates, the employer may well have difficulty showing that it is reasonable or justified to use it on people with particular disabilities for which the software has not been designed. It might be argued (under ss.15 and 20 EqA) that an exception should be made for these disabled people. (If the employer thinks the software has been designed to be effective for people having the particular disability, without unjustified discrimination, again that should be for the employer to prove.)
  • On a s.15 claim, the employer might argue that even if its use of the AI was not justified, the rejection of that job applicant was justified because the applicant was not one of the best people for the job (possibly as discussed in Objective justification defence>Outcome rather than procedure?). Even if that’s a good argument in principle though, the employer may struggle to show that the claimant was not one of the best people, if the claimant was not considered in a fair, non-discriminatory way. As discussed at that link, the decision-making procedure is likely to be important in practice.

Might shift of burden of proof help in showing detriment?

Might the burden of proof shift to the employer somewhat earlier? Might the burden shift without the disabled person having actually shown, on a balance of probabilities (ie more likely than not), that the software puts him at a detriment related to his stammer? This could happen under s.136 EqA, which says very broadly that the burden of proof shifts to the employer etc if the claimant puts forward enough evidence to make out a prima facie case. S.136 is normally used to help claimants with the difficult task of showing why the employer acted as it did, eg was the disability or something arising from the disability a factor which influenced the employer’s decision, consciously or unconsciously. However can s.136 go further than this? To briefly take each of the three main types of claim:

  • Reasonable adjustments (s.20 EqA): In Project Management Institute v Latif the EAT observed in passing that it very much doubted whether the burden of proof shifts at all in respect of establishing the provision, criterion or practice, or demonstrating the substantial disadvantage. It said “These are not issues where the employer has information or beliefs within his own knowledge which the claimant cannot be expected to prove.” However that reasoning does not seem to apply – and one wonders whether a court would change its view – in a case involving an algorithm which, so far as one can know about it, is within the knowledge of the employer and of the supplier which the employer has chosen, presumably after due investigation by the employer.
  • Discrimination arising from disability (s.15 EqA) – I’m not aware of any cases on whether the shift in burden of proof under s.136 can apply to showing that the “something” (eg a low mark from the AI software) arose in consequence of the disability.
  • Indirect discrimination (s.19) – can the shift in burden of proof under s.136 apply to showing that the provision, criterion or practice (broadly the software or its use) puts people with the relevant disability, including the claimant, at a particular disadvantage? The position is not clear. However it has been suggested that the lack of transparency of the AI software may be an argument for shifting the burden of proof, in the light of a line of European authorities such as C-109/88 Danfoss, which established that a lack of transparency in a pay system could give rise to an inference of discrimination: see Data protection: Proving discrimination (ai-lawhub.com).

Liability of companies other than the employer

It could be considered whether the software producer or other companies involved in the AI software are liable under the Equality Act, on the facts. However I’m not going to go into that.

UK GDPR

GDPR (gdpr-info.eu) is a European Union Regulation. After 31st December 2020, under the Brexit legislation it continues in force as “UK GDPR” (gov.uk) subject to some adaptations (that “UK GDPR” link shows the text as amended by the UK adaptations). The UK will potentially be able to amend UK GDPR in future if it so chooses, though substantial changes are unlikely for the time being. I talk below mainly of UK GDPR, but the position was the same under the GDPR itself.

This page is mainly about the Equality Act. I’m not going to go into detail about the UK GDPR and data protection. However below are some notes on Articles of UK GDPR which may be relevant.

UK GDPR: Getting information about the AI software

I discuss above (in Evidence of detriment related to stammering) the potential difficulty of finding out what characteristics the particular AI software takes into account in assessing an interview, and some possible ways to find out about them. Might the UK GDPR help?

One set of provisions which could help seems to apply only if Article 22 applies (though recital 63 does not mention this limitation?). On Article 22, see below Some other points on UK GDPR. One requirement before Article 22 can apply is that the decision – here the decision whether to take the job application forward – is based solely (ico.org.uk) on the AI outcome. If that is the case, specified information about the AI software should be given to the job applicant in a GDPR privacy notice under Articles 13(2)(f) and 14(2)(g) UK GDPR. That includes “meaningful information about the logic involved”. According to guidance from the Information Commissioner’s Office (ICO) (ico.org.uk), this does not mean organisations have to “confuse people with over-complex explanations of algorithms” but it does include describing for example “the type of information you [the organisation] collect or use…”, which may be helpful as to what characteristics the particular software takes into account in assessing an interview. A GDPR subject access request for similar information can be made under Article 15(1)(h) if Article 22 applies.

However the employer will perhaps not – or will say it does not – base its decision solely on the AI result, so that Article 22 does not apply. Maybe a job applicant could argue that even where Article 22 is not applicable, other UK GDPR provisions might require that useful information on characteristics taken into account by the particular software be included in the privacy notice (Articles 13 and 14) or provided in response to a subject access request (Article 15).

For example, might it be argued that characteristics taken into account by the AI software are within Article 14 (on the basis that they are not data obtained from the individual) and should be included in a privacy notice under Article 14(1)(d) as “the categories of personal data concerned”, ie the categories of data being processed?

In any event, Article 15 gives a right to obtain from the data controller (the employer or software provider?) access to personal data held by the data controller, and to information on “the categories of personal data concerned”. A request under Article 15, known as a subject access request, is normally free of charge.

Some other points on UK GDPR

  • “Profiling” defined in Article 4(4) includes automated processing of personal data used to evaluate personal aspects “relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work…”.
  • On privacy notices under Articles 13 and 14, and subject access requests under Article 15, see above UK GDPR: Getting information about the AI software.
  • Under Article 21 the individual has the right to object to processing of data, including profiling, based on Article 6(1)(e) or (f) (public interest or official authority, or legitimate interest), subject to an exception.
  • Under Article 22 the individual has the right not to be subject to a decision based solely on automated processing, including profiling, “which produces legal effects concerning him or her or similarly significantly affects him or her”, but subject to important exceptions. Recital 71 specifically mentions “e-recruiting practices without any human intervention” as something that significantly affects people.
  • The data controller may be required to make a data protection impact assessment (DPIA) under Article 35. The ICO says that it requires an organisation to do a DPIA if, for example, the organisation plans to use “innovative technology” (which AI assessment of interviews may well be), or to profile individuals on a large scale (which may be the case depending on numbers involved). The “innovative technology” head is subject to one of the criteria from European guidelines being satisfied.

Cases

Visa application algorithm, August 2020. Case settled out of court.
The Home Office was using an algorithm described as a digital “streaming tool” to sift visa applications. The algorithm scanned applications and directed them into a fast lane (Green), a slower lane (Yellow), or a lane where any decision to allow an application had to be justified to a supervisor (Red). Slower lane applications were reviewed more carefully. It was suggested that “People from rich white countries get ‘Speedy Boarding’; poorer people of colour get pushed to the back of the queue.”

The Joint Council for the Welfare of Immigrants (JCWI) with Foxglove argued this was similar to what was found to be unlawful race discrimination in the Roma rights case, where UK immigration checks at Prague airport particularly targeted Roma travellers. The JCWI claim did not get to court as the Home Office agreed to withdraw the system in August 2020.

Links: How we got the government to scrap the visa streaming algorithm – some key legal documents (foxglove.org.uk), August 2020
Home Office says it will abandon its racist visa algorithm –  after we sued them (foxglove.org.uk), August 2020
Legal action to challenge Home Office use of secret algorithm to assess visa applications (foxglove.org.uk), October 2019.
