Why Relying on Algorithms is Bad

About two years ago, I got into playing chess online, and I have been watching chess videos ever since, usually at dinner or lunch. One funny thing that happened last year in the online chess community was that a live stream interview between the (then) most popular chess YouTuber Agadmator and chess Grandmaster Hikaru Nakamura was banned for hate speech. Apparently, the algorithm interpreted phrases like “white is better here”, “black is defending”, and “white attack” as incitement to violence, and completely failed to recognize that the conversation was about a board game.

At the same time, open racists and transphobes were, and often still are, spouting their bile on YouTube completely unimpeded under the guise of “Humor” or “Just Asking Kwestchions”.

Today the algorithm struck marvelously again.

I do not remember precisely when I first saw so-called fractal burning of wood on YouTube, but I think it was some time last year. I thought it looked cool, so I researched how it is done. And I immediately came to the conclusion that cool-looking it might be, but I certainly ain’t doing that, not even for a big clock. The YouTube channel “How To Cook That” published an excellent video a few weeks ago explaining why fractal wood burning is not a good craft hack for woodworkers.

And of course, an excellent YouTube video cannot go unpunished – the algorithm yanked it for allegedly promoting harmful and dangerous acts. And while it was banned, that same algorithm actually recommended to me a video showing the hack in action. Marvelous work – a warning about a dangerous practice gets banned as promotion of said practice, while an actual promotion of it gets promoted. Logic straight as a corkscrew.

The video was reinstated after YouTube got pushback, but I do wonder how many really good and possibly important videos get yanked and never come back because the channels that made them were small and did not have millions of subscribers to cry foul on their behalf. Because let’s be real – YouTube gets an actual human to do the review only when there is an outcry; otherwise, they do not bother.

I think that overreliance on algorithms has great potential for actual harm. Human social interactions are so complex that there are humans out there (like me) who are barely able to navigate them. I do not think that AI is there yet.

Comments

  1. Nomad says

    I’ve come to the conclusion that no moderation at all is better than what we have now. I’m not here to make a free speech absolutist argument; this is more a matter of pragmatism.

    The algorithms do not work; they are not nearly as smart as they’re presented to be. They’re blunt tools. Some people I watch on YouTube had to avoid saying “covid” for the duration of the lockdown, even though they were talking about its effects, because if they said it, they’d get demonetized for spreading covid disinformation. The algorithm knows nothing of what they’re saying; it doesn’t target disinformation at all. It targets the mere use of the word. I genuinely think they’re lying about what it does. I don’t think it’s detecting disinformation at all, it’s just applying a sledgehammer to everyone who says the forbidden word.

    In the past, someone I sometimes watch had a video consistently demonetized for unstated reasons; YouTube never really says what’s wrong. They kept re-editing it, trying to take out questionable stuff, but in the end the way they got around it was to take out a crossfade in their edit and turn it into a hard cut. That alone made the video pass: the algorithm was punishing them for a crossfade. Don’t ask me why. It’s a mess; I don’t think the people who made it even know what it’s doing.

    For the last story we must establish that I’m a furry. You don’t have to like furries; you just have to respect our right to do our thing. But you should also understand that I’ve been concerned that we’d find ourselves targeted by the alt-right in the USA, because so many of us are LGBT and we trend strongly liberal. Many furry conventions broadcast some of their programming online, including their dances at night, and Twitch is a popular platform to use for that. Music streaming on Twitch is more common than you might think; a lot of people perform DJ sets there, and as long as they delete the recordings that would otherwise remain on Twitch, it seems to be safe from copyright claims. But this convention had their Twitch account suspended, I think on their second night. They didn’t say why, but they probably didn’t know themselves; Twitch also does not like to explain its decisions. They later switched to a second Twitch account that they had; it was banned within an hour. Later they made a third account; it was banned fast. They switched to YouTube to do their final events, and partway through, that stream was cut as well for unspecified terms of service violations.

    They didn’t explain why; the groups that run these things don’t like to talk about negative stuff like this. So I can only offer my guess as to what happened. I think they were the victim of a false flagging campaign. I think either a lot of people, or a lot of bots, flagged the stream for forbidden content it did not contain. Maybe they claimed it had violence and gore, or sexual content, or any number of other forbidden things. As far as I can tell, Twitch will automatically take down and ban streamers who are hit with a campaign like that. They absolutely do not check first; they automatically respond to the demands of the mob. You can try to appeal these things, but this happened on a Friday night. They probably won’t even act on it until the next Monday.

    And the thing is, like clockwork, all of their banned accounts were unbanned on Monday.

    I do not believe this was even really due to what you would call an algorithm, not AI in any case. I think it was just a case of them setting a limit: if enough people flag a stream to go over that limit, then they automatically ban it. Ban first and ask questions later, basically, roughly as in the sketch below. And in this case a false flagging campaign targeting a minority group succeeded because the system is designed to reward that kind of behavior. If it can work against us, it can work against you.
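
    To make my guess concrete, here is a minimal sketch of what such a threshold rule might look like. Everything in it is hypothetical (the cutoff, the names, the structure), since neither Twitch nor YouTube publishes this logic.

        # Hypothetical sketch of a "ban first, ask questions later" flag rule.
        # Nothing here is real platform code; the threshold and names are invented.
        FLAG_THRESHOLD = 50  # assumed cutoff; any real value is unknown

        flag_counts = {}  # per-stream tally of viewer flags

        def suspend(stream_id):
            # Stand-in for the platform's actual takedown action.
            print(f"stream {stream_id} suspended for unspecified terms of service violations")

        def handle_flag(stream_id):
            # Record one viewer flag; suspend the stream once the tally passes the cutoff.
            # The content itself is never inspected; the count is the only input.
            flag_counts[stream_id] = flag_counts.get(stream_id, 0) + 1
            if flag_counts[stream_id] >= FLAG_THRESHOLD:
                suspend(stream_id)  # a human looks at it only if someone appeals

    A rule like that rewards exactly one thing: volume. A coordinated mob, or a botnet, clears the cutoff in minutes, which fits what we saw.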

    You could argue that they need to have a human being double-check the flagging claims, but that’s just not good enough. I know for a fact that they’d get some cut-rate contractor to do it, and they’d assign them performance targets that make it impossible to actually verify the contents of the streams. They’d have to automatically accept the flagging and move on just to meet those targets. That’s how these sleazebag companies act. Actually verifying the content of all of these videos would require far too much labor, and moderation itself is not easy; it requires skilled workers who are able to make decisions based on complicated rules and policies. That army of workers could not be minimum-wage workers. The whole thing may not be tenable under the current conditions.

    Which is why I’ve ended up in favor of stopping these attempts to moderate content altogether. The press releases lie; it’s as simple as that. We read in the news that Google finally, reluctantly began moderating covid misinformation. But from what I’ve seen, that’s simply false. They moderated mention of the virus. You brought up the example of someone advising others not to do something dangerous and being moderated for it while the original video remains untouched. I’ve seen this too; I don’t remember the situation, but it was roughly the same pattern. Debunking gets targeted while the video being debunked is somehow unharmed. The system does not work. It may not even be designed to work. It’s only designed to let them say “we’re helping!”, to make people think they’re making an effort. But the reality is that every time people demand that they do more, more small content creators get punished and risk losing their entire livelihood through no fault of their own. Every time people demand more, things only get worse. And the press releases trumpet how much they’re helping. If you’re not a content creator, or don’t follow people who are, you’ll never know.

  2. says

    Ann Reardon has been doing excellent debunking videos for years now.
    But yeah, the algorithm is actively grooming people to become Nazis. I swear, you start with a search for “amigurumi frog pattern” and within three videos you’re being suggested some Nazi shit.
