Facebook stepped in it again, this time by allowing advertisers on its platform to target users with vile anti-Semitic terms. Do I think the people, or the company itself, are anti-Semitic? Of course not.
I think the company* is anti-human. We get in the way of making money. We refuse to be upsold on cue, to consume on demand, or to follow the tidy “customer journeys” that conveniently spew forth the most profit.
I think Facebook’s problem was blind faith that algorithms can replace what people do. But algorithms can’t be written to demonstrate empathy. You can’t score common sense. Sure, algorithms can maximize Facebook’s advertising revenue. A side effect, as we now see, is that they minimize humanity when humans aren’t empowered to apply common sense.
There’s an article at The Atlantic asking whether Facebook could have caught the problem, and I expect you’ll see more pieces along those lines. That line of questioning is ridiculous. Of course they could have caught the problem. But nobody was looking.
People instantly saw the abhorrent nature of that kind of targeting and correctly shut it down. But I believe the people inside the company have been minimized as well. The techbro coder culture strips away people’s decision-making ability: the algorithm built by the expensive guy in the hoodie, living on Soylent, can’t possibly be wrong.
Silicon Valley algorithms are just the work of humans, riddled with their own biases, layered with the inevitable software bugs, and rushed out rapidly to maximize profit. To fix human problems, you need to empower humans and place them, not the algorithms, at the top of the heap.
Takeaway: Build the best tech you can. Make sure humans, with their ability to be empathetic, nuanced, and unpredictable, are always allowed to use judgment. Build up humanity. And win.
* I lump together Facebook, Google, Amazon, and other Silicon Valley companies that practice the destruction of value and human lives through “disruption.”