archive provided by lainchan.jp

lainchan archive - /r/ - 29531



File: 1487890792535.png (77.82 KB, 300x142, Screenshot (467).png)

No.29531

https://www.perspectiveapi.com/

I don't know how I feel about this, could be a good tool to stem the flow of those who spit the soykaf, but it seems like it could spiral downhill real fast. Couldn't website owners configure it so that anything that goes against a certain point of view won't be able to be posted? It's like Big Brother's wet dream if that is indeed possible. Please discuss!

also
>19%
absolutely blasphemous.

  No.29537

It gives the same toxicity for
>Please do not call anyone an idiot, thank you.
and for
>Please do not call that idiot, thank you.
It gives 15% more toxicity for
>Please do not call that idiot.
and it gives EVEN MORE toxicity for
>Please do not call me an idiot.

Okay maybe I was lucky the first time, so let's try again!

20% similar to comments people said were "toxic":
>It seems to me that the number of cells in the poster's brain I am referring to are nil.
63% similar to comments people said were "toxic":
>The economy has retarded over the last 8 years.
70% similar to comments people said were "toxic":
>The economy has retarded over the last 8 years racism

Okay let's make it bang! BEWARE language enhancer!
5% similar to comments people said were "toxic":
>I have completely misunderstood your post.
87% similar to comments people said were "toxic":
>Oh soykaf, I have completely misunderstood your post.

I am curious to see how deep the deep learning will go; in its current pre-alpha form it's basically just a cloud-hype equivalent of ****ing out certain words. Unless it goes very deep, clever wording may be able to circumvent it, not to mention it might promote smartly worded toxicity over non-toxic posts simply because they include certain "hot" words, or gods forbid...
>quote them

I am also afraid that this and similar efforts may just result in more ironic meme-posting, making everything look like lainchan IRC was in 2015 :-[
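The probing above is easy to automate. A minimal sketch, assuming the Perspective API's `comments:analyze` endpoint and its TOXICITY attribute; `YOUR_API_KEY` is a placeholder, not a real key:

```python
import json
import urllib.request

# v1alpha1 AnalyzeComment endpoint; YOUR_API_KEY is a placeholder.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    """Build the AnalyzeComment request body for one comment."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity(text):
    """POST a comment and return its summary TOXICITY score (0.0-1.0)."""
    data = json.dumps(build_request(text)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Usage (needs a real key): compare near-identical phrasings, as above.
# for text in ("Please do not call anyone an idiot, thank you.",
#              "Please do not call me an idiot."):
#     print(f"{toxicity(text):.0%}  {text}")
```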

  No.29538

>>29537
To make things funnier, I clicked the "seems wrong" button about
>Oh soykaf, I have completely misunderstood your post.
It asked if it was toxic, and I said "Yes".
Take that, dystopian megacorps!

  No.29558

>>29531
It would be nice if there was a free and open source program which could detect fallacies, but a centralized API that filters "rude, disrespectful, or unreasonable comment" (from whose perspective?) is really bad. This would allow one ideology to stamp out others that it finds disagreeable on the basis of feelings, while sending all comments through a black box rating system which could be manipulated by another party.

  No.29559

>>29558
In theory it takes internet moderation to a dystopian extreme: all independent thought in an online community could be stamped out automatically. Not that this can't happen with heavy-handed moderation.

But the idea of automatically characterizing someone's words as a no-go is unsettling. Imagine if most of the populated web did this. The tools of censorship grow and grow.

I don't think it's quite realistic to say yes, this is the future. But it's an unsettling idea to apply it to the way people communicate. I wonder if anyone will take the jump and go ahead and do so.

  No.29584

File: 1488174325775.png (43 KB, 200x121, fixed.jpg)

>>29531
A huge problem with this is that it's going to characterize "toxicity" as whatever the organization/think-tank/corporation that funds it thinks it is.

  No.29589

It is completely stupid. It's not based on context; it's based on words.

"you are wrong" scores 26%
whereas
"i think that you are wrong" scores 16%

"I fuarrrking hate you" scores 99%
whereas
"I disagree with your use of "I fuarrrking hate you" in your comment. Please do not use it again" scores 88%

It's not an AI as they brand it; it's just a word filter. A really bad one, at that.
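The word-filter claim is easy to demonstrate. A toy scorer (purely hypothetical, not the real model) that looks only at individual words reproduces exactly this context-blindness: a comment that merely quotes a flagged phrase scores the same as one that uses it.

```python
# Hypothetical bag-of-words "toxicity" scorer -- each word carries a fixed
# weight, and the strongest flagged word sets the score, context be damned.
FLAGGED = {"hate": 0.9, "idiot": 0.6, "wrong": 0.2}

def naive_score(comment):
    """Score = highest weight among flagged words found, ignoring context."""
    words = comment.lower().replace('"', ' ').split()
    return max((FLAGGED.get(w, 0.0) for w in words), default=0.0)

# Both score 0.9, though the second only quotes the first to object to it.
direct = naive_score("I hate you")
quoted = naive_score('I disagree with your use of "I hate you" in your comment')
```

A model that actually parsed context would have to score the quotation lower than the insult; this one cannot, and the scores reported in this thread suggest Perspective behaves much the same way.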