I don't know how I feel about this, could be a good tool to stem the flow of those who spit the soykaf, but it seems like it could spiral downhill real fast. Couldn't website owners configure it so that anything that goes against a certain point of view can't be posted? It's like Big Brother's wet dream if that is indeed possible. Please discuss!
It gives the same toxicity for
>Please do not call anyone an idiot, thank you.
and for
>Please do not call that idiot, thank you.
It gives 15% more toxicity for
>Please do not call that idiot.
and it gives EVEN MORE toxicity for
>Please do not call me an idiot.
Okay, maybe I was lucky the first time, so let's try again!
20% similar to comments people said were "toxic":
>It seems to me that the number of cells in the poster's brain I am referring to are nil.
63% similar to comments people said were "toxic":
>The economy has retarded over the last 8 years.
70% similar to comments people said were "toxic":
>The economy has retarded over the last 8 years racism
Okay, let's make it bang! BEWARE language enhancer!
5% similar to comments people said were "toxic":
>I have completely misunderstood your post.
87% similar to comments people said were "toxic":
>Oh soykaf, I have completely misunderstood your post.
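If anyone wants to reproduce these probes themselves, the scores come from Perspective's `comments:analyze` endpoint. A minimal sketch of the request body, with the field names as I remember them from the v1alpha1 docs (double-check before relying on them, and you'd need your own API key to actually send it):

```python
import json

# Assumed endpoint for Perspective's AnalyzeComment call (v1alpha1);
# an API key must be appended as ?key=... to actually use it.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> str:
    """Build the JSON body asking for a TOXICITY score on `text`."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    return json.dumps(body)

# The two near-identical probes from the experiment above:
for probe in ("Please do not call anyone an idiot, thank you.",
              "Please do not call that idiot, thank you."):
    print(build_analyze_request(probe))

# POSTing each body to API_URL is what yields a toxicity score in [0, 1]
# (reported in the response under attributeScores.TOXICITY, per the docs).
```

Comparing the scores for the two bodies is exactly the experiment run above, just scripted.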
I am curious to see how deep the deep learning will go; in its current pre-alpha form it's basically just a cloud-hype equivalent of ****ing out certain words. Unless it goes very deep, clever wording may be able to circumvent it, not to mention it might promote smartly worded toxicity over non-toxic posts simply because they include certain "hot" words, or gods forbid...
>quote them
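The "****ing out certain words" comparison can be made concrete. A toy bag-of-bad-words scorer like the one below (blocklist invented purely for illustration) shows why clever spelling walks right past that approach, and why quoting a hot word scores the same as using it:

```python
import re

# Hypothetical blocklist, for illustration only.
BAD_WORDS = {"idiot", "stupid"}

def naive_toxicity(comment: str) -> float:
    """Score a comment as the fraction of its words on the blocklist."""
    words = re.findall(r"[\w']+", comment.lower())
    if not words:
        return 0.0
    return sum(w in BAD_WORDS for w in words) / len(words)

print(naive_toxicity("Please do not call anyone an idiot."))  # > 0: flagged
print(naive_toxicity("Please do not call anyone an 1d1ot."))  # 0.0: trivially evaded
```

Note the first probe gets flagged even though it is asking people *not* to be toxic, which matches the behavior seen in the experiments above.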
I am also afraid that this and similar efforts may just result in more ironic meme-posting, making everything look like lainchan IRC did in 2015 :-[
>>29537
To make things funnier, I clicked the "seems wrong" button on
>Oh soykaf, I have completely misunderstood your post.
It asked whether it was toxic, and I said "Yes". Take that, dystopian megacorps!
>>29531
It would be nice if there were a free and open source program which could detect fallacies, but a centralized API that filters "rude, disrespectful, or unreasonable comment" (from whose perspective?) is really bad. This would allow one ideology to stamp out others that it finds disagreeable on the basis of feelings, while sending all comments through a black-box rating system which could be manipulated by another party.
>>29558
In theory it takes internet moderation to a dystopian extreme: all independent thought in an online community could be stamped out automatically. Not that this can't happen with heavy-handed moderation.
But the idea of automatically characterizing someone's words as a no-go is unsettling. Imagine if most of the populated web did this. The tools of censorship grow and grow.
I don't think it's quite realistic to say yes, this is the future. But it's an unsettling idea to take it and apply it to the way people communicate. I wonder if anyone will take the jump and go ahead and do so.