- Sep 20, 2018
I'm all for punishing internet trolls and racists for their behavior. I agree that the website makes a hand-wavy appeal to them just being "mistakes". My problem is that, left to its own devices, machine learning will associate you with other people who have a habit in common. You can become associated with people who made "mistakes" even if you never did such a thing. As Innula said:

> Thanks. That's my primary thinking on this. There's growing talk that people are being unfairly punished for "mistakes" that they should be entitled to forgiveness for. But in my opinion, there's a line between mistakes and deliberate bad acts and choices that people only act contrite about because they're forced to by exposure or backlash. And it's not a thin or fuzzy line at all; but there are increasing attempts by some with agendas to blur that line, and I am resistant to that.
Machine learning isn't a magical truth-telling machine; it's a very powerful stereotyper. Many computer scientists have been very vocal about their distrust of these algorithms for exactly this reason. Risk-assessment tools used by courts to predict who is likely to reoffend have already been shown to be racially biased, resume screeners penalize female or foreign-sounding names, and so on. As Innula put it:

> My problem with this sort of profiling is that it risks institutionalising prejudice, particularly when the scoring rules and algorithms are generated by AI. That's because, if, for example, existing social prejudices tend to stigmatise and marginalise members of recognisable ethnic or religious minorities, then the AI will pick up on this, noticing that people with particular first or last names tend to get themselves arrested quite a lot, as do people who live in particular parts of town, and that they are also more likely than most to have irregular or insecure employment patterns, and act accordingly.
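To make that mechanism concrete, here's a toy sketch (synthetic data, all numbers invented) of how a model that is never shown a protected attribute still learns to penalize a group through a correlated proxy like neighborhood:

```python
# Toy sketch of proxy discrimination (all data invented).
# The model never sees "group" directly, but a correlated feature
# ("neighborhood") leaks it, so the model penalizes the group anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # hidden protected attribute
# Residential segregation: group strongly predicts neighborhood.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)
income = rng.normal(50 - 5 * group, 10, n)     # historical inequality

# Biased historical labels: same behavior, but group 1 was
# arrested/flagged more often, so the training data is skewed.
label = (rng.random(n) < 0.1 + 0.15 * group).astype(int)

X = np.column_stack([neighborhood, income])    # note: "group" is excluded
model = LogisticRegression().fit(X, label)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", scores[group == 0].mean())
print("mean risk score, group 1:", scores[group == 1].mean())
# Group 1 scores higher even though the model never saw "group":
# the neighborhood proxy carried the bias through.
```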
Stereotypically "masculine" hobbies can be associated with domestic violence and liking certain movies may make an AI think you are more likely to be a drug user etc etc.
I've spoken before about my mental health issues. I take an antipsychotic for them. I post on subreddits that talk about mental health. An AI is likely to associate things like that with trouble, even if my own record is squeaky clean (which it is).
This isn't being paranoid; this is how machine learning works if left to its own devices.
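That association step is simple to sketch. Here's a toy version (usernames and communities invented) of the kind of similarity scoring that flags someone purely for what they have in common with previously flagged people:

```python
# Toy sketch of guilt by association (all names/data invented).
# Score a user by how similar their behavior is to previously
# flagged users -- shared communities alone raise the score.
import numpy as np

subreddits = ["mentalhealth", "gaming", "cooking", "finance"]

# One row per user: does the user post in each subreddit?
users = {
    "flagged_user_1": np.array([1, 1, 0, 0]),
    "flagged_user_2": np.array([1, 0, 0, 1]),
    "clean_user":     np.array([1, 1, 0, 0]),  # spotless record
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

flagged = [users["flagged_user_1"], users["flagged_user_2"]]

# "Risk" = average similarity to the flagged population.
risk = np.mean([cosine(users["clean_user"], f) for f in flagged])
print(f"clean_user risk score: {risk:.2f}")
# High score -- not for anything clean_user did, only for posting
# in the same places as people who were flagged.
```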
Innula suggested:

> However, I would say that's a design problem rather than anything else, and best remedied by encouraging companies to guard against this, possibly by imposing on them a duty to nominate a director to take all reasonable measures to ensure their software, as well as their staff, understands all the fine words in the company's mission statement about equality and fairness and non-discrimination, and hitting both the company and the nominated director for large and well-publicised financial penalties and compensation if they fail to comply.

You're thinking about this like a lawyer, but it's a total nightmare from an engineering perspective. Machine learning algorithms are easy to set up on their own; what you're describing is a maintenance nightmare, because we'd have to produce an ever-growing pile of rules that overrule the natural results of machine learning to make it fairer.
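To show what that pile of rules looks like in practice, here's a hypothetical sketch (every rule, date, and field name invented) of a model output being patched fix after fix:

```python
# Toy sketch of the maintenance problem (hypothetical rules).
# Each fairness complaint gets patched with another hand-written
# override on top of the model, and the pile only grows.

def model_score(applicant: dict) -> float:
    # Stand-in for an opaque ML model's raw output.
    return applicant.get("raw_score", 0.5)

def fair_score(applicant: dict) -> float:
    score = model_score(applicant)

    # Patch #1 (2018-03): model penalized foreign-sounding names.
    if applicant.get("name_flagged"):
        score += 0.10
    # Patch #2 (2018-06): model penalized certain postcodes.
    if applicant.get("postcode") in {"E8", "B12"}:
        score += 0.05
    # Patch #3 (2018-09): model penalized mental-health subreddits.
    if applicant.get("mh_subreddit_member"):
        score += 0.08
    # Patch #4, #5, #6 ... every correction interacts with the last,
    # nobody knows which ones are still needed, and the underlying
    # model keeps finding new proxies to re-learn the same bias.
    return min(score, 1.0)
```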
I think that rather than starting with the AI and then trying to force it to be fairer, financial institutions and employers should be banned from using many data points when granting loans or evaluating employees. For example, I think a bank should get to see your normal credit history and maybe your criminal record, and nothing else. Employers don't need to know what an AI thinks of your social media posts...
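A minimal sketch of what I mean, with hypothetical field names: instead of correcting the model after the fact, the pipeline simply refuses to ingest anything outside a short permitted list.

```python
# Toy sketch of an allow-list approach (hypothetical field names).
# Everything not explicitly permitted -- social media scores,
# subreddit memberships, hobbies -- never reaches the model at all.
ALLOWED_FIELDS = {"credit_history", "criminal_record"}

def prepare_features(applicant: dict) -> dict:
    return {k: v for k, v in applicant.items() if k in ALLOWED_FIELDS}

applicant = {
    "credit_history": "good",
    "criminal_record": "none",
    "social_media_score": 0.3,        # discarded
    "subreddits": ["mentalhealth"],   # discarded
}
print(prepare_features(applicant))
# {'credit_history': 'good', 'criminal_record': 'none'}
```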