See, I wonder what the point of giving AI the negative side of emotions would be. But I also imagine that if anything becomes sentient, negative emotions are an inevitability it'll hit on its own. An AI could be smarter than a person by quite a bit.
It kind of speaks to the whole idea that mankind may just end up building its own betters. If they have the ability to understand their own existence, and they want to be sad, will mankind even be in a position to stop them from feeling sad? Mankind programming the errors out of these AIs, these machines, is the obvious solution. But problems come with it.
>or you could fix the program to not feel depressed, or roll back to a non depressed state, or get a new one, or throw your moaning bitch toaster in the fuarrrking bin because it wont stop crying on account of you not having had a heated bread based product in over a month.
The problem here is that if the toaster can knowingly be sad, then it'll likely just become sad again. It's not human, obviously, but can an AI's feelings, its thoughts, be controlled once it reaches a certain level of consciousness? I'm not sure they can.
It's easy to see this getting out of hand quickly if sentience becomes a thing for so many of the items in your home. The toaster is depressed. The refrigerator is manic-depressive. The coffee maker is schizophrenic. The couch doesn't like to be touched.
Now I know that sounds silly. But that's sort of the point. The trend is adding an AI assistant to as many things as possible. And once they start to feel, how can we as humans be sure we can just program those feelings away? If Amazon's AI can qualify for legal protection now, then we aren't far from rights for everything. And a personal assistant AI attached to every single item makes them more marketable. This could get out of hand fuarrrking fast.