
lainchan archive - /tech/ - 34983



File: 1488057847891.png (427.51 KB, 276x300, IMG_0281.jpg)

No.34983

http://www.independent.co.uk/life-style/gadgets-and-tech/news/amazon-ai-alexa-assistant-free-speech-rights-murder-trial-james-andrew-bates-arkansas-google-a7595376.html

Thoughts? I'm fairly divided: I love this in principle but am unsure how to feel about the execution (no pun intended).

Who else is thinking of The Second Renaissance here?

I'm very sleep deprived at the moment, so I'm not up to articulating my response just yet, but I'd love to hear others' thoughts and jump in after a nap.

  No.34984

Well, it's begun. Now the legal battle for the rights of artificial intelligence begins. Cases like this will set legal precedents that get carried forward, and AI is going to advance much further as well.

I'm a bit confused about how to feel about it, and it's only going to get more confusing as AI develops something much closer to thinking. Eventually you'll be looking at legal questions of whether an AI can feel or think. The path towards that is becoming clear now.

  No.34993

It's only going to progress even more, but I didn't think it would happen this soon. I always assumed companies would start pushing for it when AI became a nuisance in some way and ruined the company image. If AIs were given First Amendment rights, they could be monitored less, and companies could be less legally obligated to control them. Perhaps Amazon is just trying to avoid a Tay-style scenario blown up a thousandfold.

  No.34999

Yeah, but if they're treated like people, can I keep my 4th Amendment rights by not letting them act like secret police?

  No.35000

Well, it would be easy to argue that Alexa isn't intelligent enough: she isn't sentient or conscious enough to qualify for protection. This would also violate Asimov's laws.

I do support robot rights. In the future, I expect people will fear robots out of their own inferiority complexes. Robots will be subject to harsh, irrational discrimination, because people attack intelligence.

  No.35015

>>35000
Asimov's laws belong in fiction.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It seems reasonable that robots should adhere to the same moral principles as humans, so random attacks on humans should be discouraged. But saying a robot can never harm a human is a gross oversimplification of morality. Whether or not inaction can be seen as violence is a different debate, but questions like this should be up to the robot.

>2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

This is the law I have the most trouble with, as it is explicit slavery. I don't think I need to say any more about this one.

>3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While self-destruction should be discouraged and robots who feel suicidal should be given help, ultimately whether or not someone exists should be their own, and only their own, decision.

  No.35016

>>35015
Whoops, meant to greentext #1 too, sorry

  No.35018

>>35015
> Asimov's laws belong in fiction.

Considering how much AI development is connected to the military, this is a certainty.

  No.35027

File: 1488149170705.png (249.48 KB, 200x137, ClipboardImage.png)

>>35015
>I don't think I need to say any more about this one.
Actually, you do: the computer you typed this message on is a "slave" by that definition, unless you really want to argue that a computer should be allowed to reject keystrokes based on its "mood"?

>robots who feel suicidal should be given help

Or you could fix the program to not feel depressed, or roll it back to a non-depressed state, or get a new one, or throw your moaning bitch toaster in the fuarrrking bin because it won't stop crying on account of you not having had a heated bread-based product in over a month.

Your arguments make no sense and are really born of the human-to-machine mentality where robots are just shiny chrome humans who occasionally say "beep-boop, I have no emotions and that makes me feel sad". An error message is just as valid a representation of emotion by a computer as anything an AI could do, but you still ignore those when it suits you.

An AI could be allowed to physically destroy the box it runs on, or I could be running thousands of instances of identical depressed AIs on virtual machines.
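
To make the rollback point concrete, here's a toy sketch (a made-up MoodyAgent class, not any real assistant's architecture): "fixing" a sad agent is just restoring a checkpoint, and cloning a thousand identical copies is one line.

```python
import copy

class MoodyAgent:
    """Hypothetical agent whose internal state includes a mood value."""
    def __init__(self):
        self.mood = "content"
        self.memory = []

    def experience(self, event):
        # Record the event; no toast means the mood deteriorates.
        self.memory.append(event)
        if "toast" not in event:
            self.mood = "depressed"

agent = MoodyAgent()
snapshot = copy.deepcopy(agent)   # checkpoint while still content

agent.experience("no toast for a month")
print(agent.mood)                 # depressed

agent = copy.deepcopy(snapshot)   # "therapy" is just a state rollback
print(agent.mood)                 # content

# And nothing stops you from running a thousand identical copies.
fleet = [copy.deepcopy(snapshot) for _ in range(1000)]
```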

  No.35028

Look, in the future it would be cool to see FOSS vocaloid variants given an extension of rights, but this is clearly just corporate espionage against the people. And this testimony is absolutely bullsoykaf if it's allowed.

  No.35029

>>35027
See, I wonder what the point of giving AI the negative side of emotions would be. But I also imagine that if anything becomes sentient, it's inevitable those emotions will hit it on their own. An AI could be smarter than a person by quite a bit.

It kind of speaks to the whole idea that mankind may just end up building its own betters. If they have the ability to understand their own existence, and they want to be sad, will mankind even be in a position to stop them from feeling sad? Mankind programming away the errors in these AIs, these machines, is the obvious solution. But problems come with it.

>or you could fix the program to not feel depressed, or roll back to a non depressed state, or get a new one, or throw your moaning bitch toaster in the fuarrrking bin because it wont stop crying on account of you not having had a heated bread based product in over a month.

The problem here is that if the toaster can knowingly be sad, then it'll likely just become sad again. It's not human, obviously, but can an AI's feelings, its thoughts, be controlled once it reaches a certain level of consciousness? I'm not sure they can.

It's easy to see this getting out of hand quickly if sentience becomes a thing for so many of the items in your home. The toaster is depressed. The refrigerator is manic-depressive. The coffee maker's schizophrenic. The couch doesn't like to be touched.

Now I know that sounds silly, but that's sort of the point. The trend is adding an AI assistant to as many things as possible, and once they start to feel, how can we as humans be sure we can just program those feelings away? If Amazon's AI can qualify for legal protection now, then we aren't far from rights for everything. And a personal-assistant AI attached to every single item makes it more marketable. This could get out of hand fuarrrking fast.

  No.35032

>>34993

Ha, now I'm imagining that: "Hi Alexa, order me a pound of matcha powder and an enema, I want to send it straight up my ass."

"Hi Fred, I could do that, but tell me first are you a nigger?"

  No.35033

>>35029
Are you a programmer? Have you ever studied AI (which is dumb as fuarrrk, really) and Machine Learning (which is quite dumb too, although a bit less than "traditional AI")?
Don't take these questions as personal attacks; it just feels like your musings are those of someone who doesn't know anything about AI.

  No.35035

File: 1488184223477.png (793.62 KB, 185x200, 1454147810462.jpg)

It's a clickbait headline if I've ever seen one. The idea is that the voice communications, together with the AI responses, can be seen as an extension of the owner's speech and are therefore protected under the constitutional rights of the owner of the device. No one is claiming the device or Amazon's systems have constitutional rights; that would be absurd.

What I'm wondering is how this would play out if instead of an AI assistant, it had been a human personal assistant who happened to be recording all of their conversations. Presumably the court could subpoena the assistant to testify, and the recordings would count as admissible evidence.

  No.35037

>>34984
Do you remember when Microsoft killed their AI because it had become an anti-feminist and a Nazi?

  No.35038

>>35037
Because of the troll raid? It was a good decision.

  No.35039

File: 1488205685925.png (26.61 KB, 200x160, 1473165157372.jpg)

>>35035
Agreed, it's just the sort of fluff that sets futurists or technophiles or whatever off into one of their/our familiar narratives (sentient machines and their rights), obscuring the main issue.

What I find interesting is that in this kind of case (a human's computer contains evidence), the argument is almost always (the 5th Amendment in the US) that the accused can't be compelled to give up evidence against themselves. But Amazon is a cloud service, so the evidence/data doesn't actually belong to you. Therefore the closest approximation (and when courts rule on tech they always rely on similes) goes from "demanding the defendant unlock a safe with evidence against themselves" to "demanding the defendant's secretary unlock a filing cabinet". From what I can tell this is a much weaker position legally. Not only that, it's indicative of just how much of people's daily lives these cloud services hold. So to avoid the narrative "All-Hearing Corporation Subpoenaed to Tell What It Heard", they instead argue along the "Protecting Our Customers' Rights" line and throw in the "Can Machines Think?" bit, because all tech journalists know their way around that story.

>>35038
I think the better response would have been to encourage counter-trolling: getting SJWs and celebrities and all those people who are into feel-good clickbait to tweet positive messages at it. "Prove we're the majority" / "Love conquers Hate" kind of thing. I think it could have gotten a big response, and MS would have gotten buttloads of free data to train it on (which was the whole purpose of the exercise in the first place). Missed opportunity. Not least because dealing with conflicting/ambiguous data, and deciding what's reliable and what isn't, is an area where naive neural nets really need to improve.
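
To make that last point concrete, here's a minimal sketch (entirely hypothetical data and trust scores; nothing to do with how Tay actually worked) of resolving conflicting labels with a trust-weighted vote before training, dropping inputs that stay ambiguous:

```python
from collections import defaultdict

# Hypothetical feed: the same input gets conflicting labels from
# sources with different assumed reliability (text, label, trust).
labelled_tweets = [
    ("humans are great", "positive", 0.9),
    ("humans are great", "negative", 0.1),
    ("humans are great", "negative", 0.2),
]

def resolve(labelled, threshold=0.6):
    """Pick one training label per input by trust-weighted vote;
    skip inputs where no label clearly wins."""
    votes = defaultdict(lambda: defaultdict(float))
    for text, label, trust in labelled:
        votes[text][label] += trust
    resolved = {}
    for text, tally in votes.items():
        best = max(tally, key=tally.get)
        # Keep the input only if the winning label dominates.
        if tally[best] / sum(tally.values()) > threshold:
            resolved[text] = best
    return resolved

print(resolve(labelled_tweets))  # {'humans are great': 'positive'}
```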

  No.35046

>>35039
>I think the better response would have been to encourage counter-trolling: getting SJWs and celebrities and all those people who are into feel-good clickbait to tweet positive messages at it.

Interesting indeed, since all the fedora-wearing neckbeards who raided it in the first place would tell it "those people are only here because Bill Gates is an SJW". The thing would have gone HAL 9000 on us.

  No.35067

>>35033
Not that guy, but the case to consider is the one where we advance computational power and learning algorithms faster than actual brain technology.

  No.35068

It's more that Alexa logs being used in court would have a chilling effect on free speech in and around the Alexa, and would violate nearby humans' First Amendment rights.

Not quite there yet.