Can AI tell your age?

David Strom 19 May 2021

While social justice issues involving algorithms receive attention, there's little discussion around ageist algorithmic bias

Algorithms are under attack, but so far, the score seems to be Machines: 1, Humans: 0. While we haven’t quite reached the point of Skynet Armageddon, the machines are making significant strides in keeping track of, and taking advantage of, the various carbon-based life forms on the planet.

Before we dive into the finer points, let’s review where things stand in popular culture. 

We’ve already reviewed Netflix’s The Social Dilemma docufilm and pointed out the ways social media companies compromise our general privacy. I also took a look at the 2019 TV series All Hail the Algorithm. Its fourth episode focuses on the potential abuse of biometrics and, in particular, how we have little to no control over how our faces and other biometric data are used and can be abused. 

This is also the subject of Coded Bias, a Netflix documentary released last year. The film covers the problems of using algorithms to make decisions about hiring and firing employees and monitoring parolees, as well as how China has deployed its facial recognition software to track its citizens and authenticate their purchases and movements. 

The movie follows a few AI activists who are trying to get the US Congress to pass laws (such as the Algorithmic Accountability Act) to regulate how algorithms interact with us and to establish who ultimately has legal recourse when the algorithms go awry. It also documents the origin story of the Algorithmic Justice League, whose members are shown documenting some of the more egregious abuses of AI bias.

Finally, last fall, we covered the inherent bias in AI algorithms as part of our coverage of our CyberSec&AI Connected conference. We raised some of the numerous issues involving bias, issues that only seem to be getting worse. Take a look at this resource, where you can see which cities now have bans on facial recognition tools, which public spaces (such as airports) use them, and other data. Additionally, this post shows how biased AI is used to predict future criminals, predictions that then feed into the sentencing guidelines judges use in criminal trials.

While the social justice issues involving algorithms continue to receive some attention, there is little discussion around ageist algorithmic bias.

How ageism persists in modern culture

As the comedian Bill Maher proclaimed back in 2014, “Ageism is the last acceptable prejudice in America.” He was right then, and (sadly) he still is today. Much of the research discussed below concerns the hiring of older individuals.

But the problems AI creates can sometimes be solved by better-trained AI models, as this study points out: researchers have shown how AI models can be used to identify this kind of hiring discrimination. A rough sketch of one such check appears below.
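To make the idea concrete, here is a minimal sketch (in Python) of one common fairness check, the so-called four-fifths rule for disparate impact, applied to hypothetical hiring outcomes bucketed by applicant age. The group labels and numbers are invented for illustration and are not taken from the study cited above.

```python
# Minimal sketch of a disparate impact check across age groups.
# The thresholds and data below are illustrative assumptions only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return selected / applicants if applicants else 0.0

def disparate_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    `groups` maps a label to (selected, applicants). A ratio below 0.8
    is the conventional warning threshold for possible bias.
    """
    rates = {name: selection_rate(sel, total) for name, (sel, total) in groups.items()}
    best = max(rates.values())
    return {name: (rate / best if best else 0.0) for name, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical hiring outcomes bucketed by applicant age.
    outcomes = {"under_40": (120, 400), "40_to_55": (45, 300), "over_55": (12, 150)}
    for group, ratio in disparate_impact_ratios(outcomes).items():
        flag = "  <-- below 0.8, review for bias" if ratio < 0.8 else ""
        print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

This kind of check does not prove discrimination on its own, but it is one simple, auditable signal that a hiring pipeline (automated or not) is treating age groups very differently.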

If hiring is based on skills and performance, managers can level the playing field for older workers. As Hari Kolam, CEO of the people intelligence platform Findem, was quoted as saying in the piece above, “Ageism is ironic, since it is so prevalent and has such resounding effects on those searching for a job and the diversity of the workforce overall.” But it doesn’t have to be all that sophisticated to be biased: “If your listing is searching for candidates who are ‘energetic’ and ‘on a fast-growth career path’ versus ‘seasoned’ or ‘experienced,’ it could inadvertently be biased toward a younger set of applicants,” Kolam says. A toy illustration of this kind of wording check follows.
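As a hedged illustration of Kolam's point, the snippet below scans a job listing for phrases that tend to skew the applicant pool younger and suggests neutral alternatives. The phrase list is an assumption made for the example, not a vetted lexicon.

```python
# Illustrative only: flag age-coded phrases in a job listing.
# The phrase list and substitutions are assumptions, not a standard lexicon.

YOUNG_CODED_ALTERNATIVES = {
    "energetic": "motivated",
    "digital native": "comfortable with modern tools",
    "fast-growth career path": "growth opportunities",
    "recent graduate": "early-career or experienced candidates",
}

def flag_age_coded_language(listing: str) -> list[str]:
    """Return rewrite suggestions for phrases that may deter older applicants."""
    text = listing.lower()
    return [
        f'"{phrase}" -> consider "{alternative}"'
        for phrase, alternative in YOUNG_CODED_ALTERNATIVES.items()
        if phrase in text
    ]

if __name__ == "__main__":
    posting = "We want an energetic digital native on a fast-growth career path."
    for suggestion in flag_age_coded_language(posting):
        print(suggestion)
```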

The link between hiring bias and misinformation

But hiring bias is just one place where ageist algorithms show up. “For older adults, there are concerns that algorithmic bias could have discriminatory impacts in health care, housing, employment, and in banking and finance issues.” That warning comes from the Get Older Adults Online project, which is concerned that these biases could expose elders to misinformation or various forms of discrimination. It is certainly time to make AI algorithms more inclusive, both in terms of the ages and the cultural backgrounds of the people represented in their training data.


Further reading:
Why we spread misinformation — and what to do about it
The citizen’s guide to spotting fake news
Tired of fake news? Here’s how to avoid it


Aside from AI, there is a bigger issue at hand: filtering out misinformation is a huge task, and even the best of us sometimes inadvertently forward emails or share social media posts that contain fake news (or, worse, malware-infested links).

We touched on some of these topics briefly in a series of blog posts about online safety and how to prevent elders from getting scammed online. For seniors using social media, we recommend taking a look at how to better protect your social media accounts: use unique passwords and be careful what you click on, especially on sites with offers that seem too good to be true.
