For the record? As a self-employed small business owner with a preexisting condition, my pre-ACA health insurance suuuuuuuuucked. My current plan actually provides substantially better coverage than the old one and costs half as much. I only wish I’d done it sooner; getting on the exchange was, tbh, entirely a spite-motivated action on my part, and probably the sole positive impact of 45′s presidency on my life.
i put new yellow shoelaces on my boots to show people i am Cool. I walk to my work full of old lady cashiers. “I like your shoelaces,” one of them says to me. I see my life flash before my eyes as this ancient test is presented before me.
“I just want to clarify and say that Kianah was not flagged because she was African American,” says Joel Simonoff, Predictim’s CTO. “I can guarantee you 100 percent there was no bias that went into those posts being flagged. We don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. There’s no way for us to enter that into the algorithm itself.”
I tell them I am sure that they don’t have a ‘Do Racism’ button on their program’s dashboard, but wonder if systemic bias could nonetheless have entered into their datasets. Parsa says, “I absolutely agree that it’s not perfect, it could be biased, it could flag things that are not really supposed to be flagged, and that’s why we added the human review.” But the human review let these results stand.
“I think,” Simonoff says, “that those posts have indications that someone somewhere may interpret as disrespectful.”
…
Simonoff says Predictim “doesn’t look at words specifically or phrases. We look at the contexts. We call it vectorizing the words in the posts into a vector that represents their context. Then we have what’s called a convolution neural net, which handles classification. So we can say, ‘is this post aggressive, is it abusive, is it polite, is it positive, is it negative?’ And then, based on those outputs, we then have several other models on top of it which provide the risk levels and that provide the explainability.” (He and Parsa insist the system is trained on a combination of open source and proprietary data, but they refused to disclose the sources of the data.)
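What Simonoff describes sounds like a fairly standard text-classification pipeline: vectorize the words, run a convolutional classifier, then map the category scores to risk levels. Here is a minimal sketch of that kind of setup, assuming a Keras-style stack; every label, layer size, and training post below is invented for illustration and is not Predictim’s actual code or data.

```python
# Purely illustrative sketch of a "vectorize -> conv net -> risk level" pipeline.
# Labels, sizes, and data are invented; this is NOT Predictim's system.
import numpy as np
from tensorflow.keras import layers, models

# Toy training data (hypothetical posts with hand-made labels).
posts = np.array(["love hanging out with the kids",
                  "ugh people are the worst today"])
labels = np.array([[0.0, 0.0, 1.0],   # columns: aggressive, abusive, polite
                   [1.0, 0.0, 0.0]])

# "Vectorizing the words": map tokens to integer ids of fixed length.
vectorize = layers.TextVectorization(max_tokens=10_000, output_sequence_length=50)
vectorize.adapt(posts)
X = vectorize(posts)

# The "convolutional neural net" that handles classification.
model = models.Sequential([
    layers.Embedding(input_dim=10_000, output_dim=64),
    layers.Conv1D(128, 3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(3, activation="sigmoid"),   # one score per category
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, labels, epochs=5, verbose=0)

# A simpler rule "on top of it" turns category scores into a risk level.
scores = model.predict(vectorize(np.array(["you are all idiots"])), verbose=0)[0]
risk = "high" if scores[0] > 0.5 or scores[1] > 0.5 else "low"
print(dict(zip(["aggressive", "abusive", "polite"], scores.round(2))), risk)
```

Note that nothing in a pipeline like this inspects skin color directly; the question is what the training labels and text features have already absorbed.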
…
“The black woman being overly penalized—it could be the case that the algorithm learns to associate types of speech associated with black individuals, even if the speech isn’t disrespectful,” Kristian Lum tells me. Dr. Lum is the lead statistician at the Human Rights Data Analysis Group, and has published work in the prestigious journal Nature concluding that “machine-learning algorithms trained with data that encode human bias will reproduce, not eliminate, the bias.”
Lum says she isn’t familiar with Predictim’s system in particular, and that her commentary should be taken with a grain of salt. But basically, a system like Predictim’s is only as good as the data it’s trained on, and those systems are often loaded with bias.
“Clearly we’re lacking some context here in the way that this is processing these results,” Lum says, “and that’s a best case scenario. A worst case is that it’s tainted with humans labeling black people as disrespectful and that’s getting passed onto the algorithm.”
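Lum’s worst case is easy to reproduce on synthetic data: if human labelers flag one group’s posts as “disrespectful” more often for the same behaviour, a model trained on those labels reproduces the skew even when group membership is never an input. A toy sketch, with everything invented for the example:

```python
# Toy demonstration: biased labels get reproduced by the model, even though
# the demographic group is never an input. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)             # two demographic groups
tone = rng.normal(0, 1, n)                # "true" behaviour, identical across groups
slang = 0.3 * tone + 1.5 * group + rng.normal(0, 1, n)  # a feature correlated with group

# Biased labels: same tone, but group 1 gets flagged far more often.
flagged = (tone + 1.0 * group + rng.normal(0, 0.5, n)) > 1.0

# Train without the group column at all ("not even an algorithmic input").
X = np.column_stack([tone, slang])
clf = LogisticRegression().fit(X, flagged)

pred = clf.predict(X)
for g in (0, 1):
    print(f"group {g}: flagged rate = {pred[group == g].mean():.2f}")
# The model flags group 1 much more often: the label bias leaks in through
# the correlated feature, even though group was never given to it.
```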
So what was the company I read about a few weeks ago that used AI to judge applicants and choose the best ones, but decided to go back to having humans do the work because their AI (trained on their previous choices!) learned that they preferred to employ men and didn’t give women a chance? If your behaviour discriminates against others, don’t be surprised when the AI starts to discriminate too.
That was Amazon. Because they had a pattern of employing mostly men and paying them more, all they did was ask the algorithm to repeat those “successful” patterns and not “take risks” on this newfangled idea of women being just as deserving of careers and money as men. Maintaining that status quo keeps stockholders happy, and they hid behind the popular idea that machine learning is 100% impartial and objective.
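The reported mechanism follows the same pattern as above: train a model to imitate past hiring decisions that favoured men, and it learns to penalize résumé features associated with women, even with gender stripped from the inputs. Another toy sketch on invented data, not Amazon’s actual system:

```python
# Toy version of the reported Amazon story: imitate historical hiring
# decisions that favoured men, and the model penalizes a resume feature
# associated with women. Entirely synthetic, invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 30_000
skill = rng.normal(0, 1, n)               # actual qualification, gender-neutral
is_woman = rng.integers(0, 2, n)
womens_club = (is_woman & (rng.random(n) < 0.6)).astype(float)  # e.g. a "women's ..." resume line

# Historical decisions: same skill, but women were hired less often.
hired = (skill - 1.2 * is_woman + rng.normal(0, 0.5, n)) > 0.0

# The "impartial" model never sees gender, only the resume features.
X = np.column_stack([skill, womens_club])
clf = LogisticRegression().fit(X, hired)

print("weight on the 'women's ...' feature:", round(clf.coef_[0][1], 2))
# The weight comes out clearly negative: the model has learned that the
# feature predicted past rejection, i.e. it reproduces the old bias.
```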