
Yes, “algorithms” can be biased


2019 Jan 25, 11:36am   1,215 views  16 comments

by null

Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."

She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased—but she's basically right. Let's take a look at why.

First, some notes on terminology—and in particular a clarification for why I keep putting scare quotes around the word "algorithm." As anyone who has ever taken an introductory programming class knows, algorithms are at the heart of computer programming and computer science. (No, those two are not the same, but I won't go into that today.) In popular discourse, however, the word is widely misused.

Let's start with Merriam-Webster, which defines "algorithm" as:

[a] procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly: a step-by-step procedure for solving a problem or accomplishing some end

It is a step-by-step procedure. The word has been used in that sense for a long time, extending back well before computers. Merriam-Webster says it goes back to 1926, though the Oxford English Dictionary gives this quote from 1811:

It [sc. the calculus of variations] wants a new algorithm, a compendious method by which the theorems may be established without ambiguity and circumlocution.
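The dictionary's own example, finding the greatest common divisor, is a good illustration of "a step-by-step procedure in a finite number of steps that repeats an operation": Euclid's algorithm. A minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

    Each step shrinks b, so the loop terminates in finitely many steps;
    the last nonzero value is the greatest common divisor.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

Note there is nothing here to be "biased": the procedure is fully specified by the programmer and has one mathematically correct answer.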

Today, though, the word "algorithm" often means the mysterious step-by-step procedures used by the biggest tech companies (especially Google, Facebook, and Twitter) to select what we see. These companies do indeed use algorithms—but ones having very special properties. It's these properties that make such algorithms useful, including for facial recognition. But they are also at the root of the controversy over "bias."

More: https://arstechnica.com/tech-policy/2019/01/yes-algorithms-can-be-biased-heres-why/

#Algorithms #Bias #SciTech

Comments 1 - 16 of 16

1   RWSGFY   2019 Jan 25, 11:57am  

Explain to me how a facial recognition system can produce "biased output," because the article fails to explain it.

The purpose of a facial recognition system is to match a fucking face to a fucking name. That's it. It either fucking matches the image of a fucking face to a particular fucking guy or it doesn't.

The only place where the author lazily attempts to explain it is the following: "There is at least the perceived risk that, say, computerized facial recognition will be used for mass surveillance. Imagine the consequences if a biased but automated system differentially misidentified African-Americans as wanted criminals." What does that mean exactly? I mean, if it has a high rate of error in matching faces, why would this rate be higher when it tries to match a black face to a black criminal than a white face to a white criminal or an asian face to an asian criminal? Or is he trying to say that there is a possibility that the system would more often match black dudes to white wanted criminals than white dudes to black criminals? It makes no fucking sense.
2   anonymous   2019 Jan 25, 11:58am  

If a human is involved in any way whatsoever with something, there will be a bias transferred, even your own, and you do have some, just as everyone else in the world does.
3   RWSGFY   2019 Jan 25, 12:02pm  

Kakistocracy says
If a human is involved in any way whatsoever with something, there will be a bias transferred, even your own, and you do have some, just as everyone else in the world does.


How do you "transfer bias" into facial recognition in particular? What would be the manifestation of bias against blacks (this is what is assumed, right?) in, say, a system which goes through a surveillance video stream and tries to pick up wanted guys based on their pictures on file? Would it be more accurate for black subjects? Less accurate?
4   anonymous   2019 Jan 25, 12:07pm  

DASKAA says
How do you "transfer bias" into facial recognition in particular

Humans have to program and write the software, and there is no such thing as an unbiased human.

Think of it this way - "bias" is how each of us sees the world, and none of us sees it exactly the same way. Much as we would like to, we are not going to be able to get rid of it - mitigate and minimize it perhaps, but not eliminate it.

Perhaps at some point in the future, when robots/AI many generations removed from their creators are able to construct their own, bias may be eliminated - maybe - but then there would exist the possibility of bias against humans.
5   RWSGFY   2019 Jan 25, 12:35pm  

Kakistocracy says
DASKAA says
How do you "transfer bias" into facial recognition in particular

Humans have to program and write the software, and there is no such thing as an unbiased human.

Think of it this way - "bias" is how each of us sees the world, and none of us sees it exactly the same way. Much as we would like to, we are not going to be able to get rid of it - mitigate and minimize it perhaps, but not eliminate it.

Perhaps at some point in the future, when robots/AI many generations removed from their creators are able to construct their own, bias may be eliminated - maybe - but then there would exist the possibility of bias against humans.


Lots of vague words, still no answer. I repeat my question: how would "bias" in facial recognition manifest itself when applied to finding faces from a database of mugshots of wanted criminals in a live surveillance stream? Assuming the "bias" is "against blacks", will it result in more accurate identification of black subjects, less accurate, or some other outcome?
6   Heraclitusstudent   2019 Jan 25, 1:18pm  

Kakistocracy says
Humans have to program and write the software, and there is no such thing as an unbiased human.



There are 2 types of biases:
1 - It doesn't work well in some cases: https://www.nytimes.com/2019/01/24/technology/amazon-facial-technology-study.html
That's because of the training sets. Probably easy to correct, and there is nothing wrong with the algorithm.

2 - It tells the truth, but it hits a topic where, for social reasons, we don't want to stick to the exact "truth".
Ex1: It doesn't give loans to black people because it associates being black with "not paying back loans at the same rate".
Ex2: Amazon scraps 'sexist AI' that learned to give preference to men. https://www.bbc.com/news/technology-45809919

In which case, the problem is not that the algorithm is biased, it is that we want to be biased, and the algo isn't.
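To see how type 1 can come from training sets alone, here is a toy sketch. Everything in it is invented for illustration (synthetic data, two made-up groups with different true decision boundaries, a single learned cutoff) and has nothing to do with how a real face system works. The point is only that a model fit to minimize overall error on an imbalanced training set ends up tuned to the majority group, so the minority group gets the worse error rate even though no one wrote "bias" into the code:

```python
import random

random.seed(0)

def make_group(n, true_threshold):
    """Synthetic points: label is 1 when the feature exceeds the
    group's true threshold (the two groups genuinely differ)."""
    return [(x, int(x > true_threshold))
            for x in (random.random() for _ in range(n))]

# Imbalanced training set: group A dominates.
train_a = make_group(200, 0.5)   # majority group, true boundary at 0.5
train_b = make_group(10, 0.3)    # minority group, true boundary at 0.3
train = train_a + train_b

def best_threshold(data):
    """Pick the single cutoff that minimizes overall training error."""
    def errors(t):
        return sum(int(x > t) != y for x, y in data)
    return min((x for x, _ in data), key=errors)

t = best_threshold(train)

def error_rate(data, t):
    return sum(int(x > t) != y for x, y in data) / len(data)

test_a = make_group(1000, 0.5)
test_b = make_group(1000, 0.3)
print(f"learned cutoff: {t:.2f}")               # lands near the majority's 0.5
print(f"error on group A: {error_rate(test_a, t):.3f}")
print(f"error on group B: {error_rate(test_b, t):.3f}")
```

The learned cutoff sits near 0.5 because moving it toward 0.3 would fix a couple of minority-group errors at the cost of dozens of majority-group errors, so group B systematically gets misclassified in the 0.3-0.5 range.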
7   NDrLoR   2019 Jan 25, 1:42pm  

Kakistocracy says
"bias"
In the first place, what's wrong with it?
8   Bd6r   2019 Jan 25, 2:36pm  

Heraclitusstudent says
2 - It tells the truth, but it hits a topic where, for social reasons, we don't want to stick to the exact "truth".

Reality is biased in that case.
9   CBOEtrader   2019 Jan 25, 2:49pm  

Any algo that simply chooses the best fit for a job/loan/opportunity will end up being racist IF race is allowed to be a variable in the model.

Even if the algo is blind to race, the natural differences in competence produced by various cultures would be CONSIDERED racist by today's left, although this would simply be reality smacking us in the face.

The latter is what the left is afraid of, because there would be no way to rig a system like that without completely destroying the value of the algo in the first place.

Competence isn't evenly distributed, and an algo would identify this VERY quickly.
10   CBOEtrader   2019 Jan 25, 2:52pm  

d6rB says
Heraclitusstudent says
2 - It tells the truth, but it hits a topic where, for social reasons, we don't want to stick to the exact "truth".

Reality is biased in that case.


Wait till Harvard is half Asian, 30% Jewish, and 20% white/indian/etc...
11   anonymous   2019 Jan 25, 3:37pm  

P N Dr Lo R says
In the first place, what's wrong with it?


Can't seem to think of a thing - oh, wait... maybe this?

[image]
12   anonymous   2019 Jan 25, 3:40pm  

Or this [image]

or this [image]

or this [image]

13   anonymous   2019 Jan 25, 3:46pm  

Heraclitusstudent says
There are 2 types of biases:


Almost 200 cognitive biases rule our everyday thinking. A new codex boils them down to 4.

Aside from mythical spiritual figures and biblical kings, humans are not objective in how they react to the world. As much as we would like to be fair and impartial about how we deal with the situations that arise on a daily basis, we process them through a complex series of internal biases before deciding how to react. Even the most self-conscious of us cannot escape the full spectrum of internal prejudices.

Brain biases can quickly become a hall of mirrors. How you understand and retain knowledge about cognitive shortcuts will determine what, if any, benefits you can derive from the substantial psychological science that's been done around them. Here we take a look at different ways of understanding cognitive biases, and different approaches to learning from them. Enjoy!

https://bigthink.com/mind-brain/this-giant-cognitive-bias-codex-will-transform-your-understanding-of-yourself?rebelltitem=1#rebelltitem1
14   Heraclitusstudent   2019 Jan 25, 4:05pm  

Kakistocracy says
200 cognitive biases rule our everyday thinking


Human cognitive biases are one thing; AI software biases are another.
You cannot claim that biases magically jump from people to AI software. That's ridiculous.

The notion that everything we do, including engineering, is governed by prejudices is claptrap from people who have never sat in front of a computer and written software.
Do it and see how the computer deals with your "cognitive shortcuts".
15   Ceffer   2019 Jan 25, 4:55pm  

If they vaporize perps instantaneously on the basis of facial recognition algorithms, then maybe the issue of bias would be relevant.

There is very little made by humans that doesn't wind up, purposely and with an agenda, reflecting the biases of the creator or the creator's bosses. There is always a finger on the scale somewhere.
16   MisdemeanorRebel   2019 Jan 25, 6:02pm  

Kakistocracy says
[image]

Or this. [image]

It's amazing how black guys are always having to stop white guys from behaving badly.

Unless, of course, a White Female SJW from Australia is biased.

[image]

It's also impossible for a Black Democracy and Black Politicians in it to studiously ignore racial violence, including the murder of children. Just bringing it up makes you a racist. But bringing up 80 year old lynching photos, mostly of criminals hanged in fear the judge would only sentence them to prison, is righteous and a good reminder of bias.

After all "Shoot to kill the Boer" is just a Rebel Song, like the kind Bono's Uncle used to sing.

