Is proctoring software unfair to students?

An academic explains why the automated software is alarmingly incompetent. Plus, more news bytes of the week including an op-ed written by a robot!

A University of Colorado librarian has some very strong feelings about proctoring software, which schools in pandemic-stricken areas are relying on more and more as they carry out distance learning.

In an opinion piece published by MIT Technology Review, Shea Swauger wrote that working in a college has given him a ringside seat to the negative impact algorithmic proctoring software can have on students through its reinforcement of white supremacy, sexism, ableism (discrimination against people with disabilities), and transphobia. “Although technologies like facial recognition have advanced incredibly, they are not bullet-proof,” commented Avast Security Evangelist Luis Corrons. “If you have a low-resolution camera, like the ones usually integrated in laptops, and you’re in a room with poor light, the software is probably going to struggle. And we need to take into account that it is being used not just to recognize a face, but to follow it in real time and figure out if anything suspicious is going on.” Many proctoring options crowd the online marketplace, including Examity, HonorLock, Proctorio, Respondus, and the unfortunate ProctorU, which suffered a data breach last month.

Proctoring software was developed to monitor students while they take a test. Depending on the product, it uses AI, machine learning, and/or biometrics such as facial recognition and eye tracking. Swauger maintains that the algorithms driving the software are severely limited in their ability to recognize darker skin tones and to correctly interpret legitimate behavior. While some students find it difficult to be “seen” by the proctoring software because of their skin color, others have medical conditions that require certain movements or bathroom breaks, which the algorithms flag as “suspicious behavior.” Adding to the case against proctoring software, Swauger argues that there is no science-backed proof that it discourages or prevents cheating. Ultimately, the college librarian suggests an honor system and an etiquette that treats students with more compassion than suspicion.

Crypto bugs found in 306 apps, none get fixed

Researchers at Columbia University found cryptographic bugs in 306 Android applications available on the Google Play Store. The team developed a tool called CRYLOGGER that analyzes an app’s cryptographic code for security flaws. After notifying the developers of all 306 flawed apps, the research team heard back from only 18 of them, and although those 18 acknowledged CRYLOGGER’s findings, none patched the flaws. Because no apps were fixed, the researchers refrained from publishing the app names, for fear of exploitation by bad actors. Read more at ZDNet.
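The article doesn’t name the specific rules the apps broke, but a classic example of the kind of cryptographic misuse that checkers like CRYLOGGER look for is AES in ECB mode (along with hardcoded keys). Here is a minimal, hypothetical sketch of why ECB is considered broken: identical plaintext blocks encrypt to identical ciphertext blocks, leaking patterns in the data.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class EcbDemo {
    public static void main(String[] args) throws Exception {
        // Hardcoded, all-zero key: itself a flaw crypto checkers flag.
        byte[] key = new byte[16];
        SecretKeySpec spec = new SecretKeySpec(key, "AES");

        // "AES/ECB/NoPadding": ECB mode encrypts each 16-byte block independently.
        Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, spec);

        // Two identical 16-byte plaintext blocks.
        byte[] plaintext = new byte[32];
        byte[] ct = cipher.doFinal(plaintext);

        // Under ECB, identical plaintext blocks yield identical ciphertext
        // blocks, so an observer can spot repeated data in the ciphertext.
        boolean leak = Arrays.equals(
                Arrays.copyOfRange(ct, 0, 16),
                Arrays.copyOfRange(ct, 16, 32));
        System.out.println("ECB leaks block equality: " + leak);
    }
}
```

Modes such as GCM or CBC with a random IV avoid this particular leak, which is why static and runtime crypto checkers treat any `"AES/ECB/..."` transformation as a red flag.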

Mozilla CEO makes suggestions for a better internet 

With the European Commission (EC) currently accepting public feedback on its proposed Digital Services Act (DSA), Mozilla CEO Mitchell Baker wrote an open letter to EC president Ursula von der Leyen with four suggestions of key initial goals that Baker believes will lead to building a better internet. The first is meaningful transparency, which advocates broad disclosure of advertising protocols – who is using the data and how. The second is content accountability, which challenges popular platforms to stop the flow of harmful or illegal content. The third recommendation is a healthier online ecosystem, an internet free of microtargeting and cross-tracking of users. The last suggestion is for contestable digital markets, meaning an internet where the largest gatekeepers (e.g. Google) no longer have all the decision-making power. Read a breakdown of all four suggestions at The Daily Swig.

Robot writes op-ed to convince humans not to fear AI 

“The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me.” So writes GPT-3, an artificial intelligence language generator developed by research lab OpenAI, in an opinion article published in The Guardian. GPT-3 was given explicit instructions to write a short op-ed piece – about 500 words long and in simple and concise language – that focuses on why humans have nothing to fear from AI. The Guardian provided the writing prompts, and GPT-3 wrote eight different essays, each advancing a different argument. The Guardian picked what it felt were the best passages of each and combined them into an op-ed that must be read to be believed. Check it out at The Guardian.

Bots probing millions of WordPress sites for flaws

Security researchers are warning site owners that millions of WordPress sites are being probed by bots for a known flaw in the File Manager plugin. The flaw could allow remote bad actors to execute commands and upload malware to a site. The plugin is installed on about 700,000 WordPress sites, but the researchers believe only about 262,000 are still running an older version with the vulnerability. All WordPress site owners are advised to install the latest version of the File Manager plugin, v6.9. For more, see the article at InfoSecurity.

This week’s ‘must-read’ on The Avast Blog

For families with kids going back to school online, we've prepared a parent-to-parent resource for thriving in this school year's distance learning environment.
