[Discussion] If true AI were to be created, should they be considered human?

Should a real AI be considered a human?

  • Yes

    Votes: 3 6.1%
  • No

    Votes: 41 83.7%
  • True AI = Fuckin' Terminator

    Votes: 5 10.2%

  • Total voters
    49

Dist

Proton
Joined
Apr 26, 2016
Messages
188
Nebulae
99
Some people believe that true AI will be evil and destroy mankind, which is a possibility, but I call it fear of the unknown. It's not like you or I would suddenly want to eradicate the human race because we studied a few Wikipedia pages about war. However, it is an interesting concept, because knowledge is power and a computer can "learn" at a far faster rate, and with a higher memory capacity, than us.

How could we even predict what would happen really? It may not even think like us. Computers have always used ones and zeroes and at the end of the day, so does AI.

Having rights would depend on its grasp of emotions and self-awareness. But then again, does it really count if it's not a "living thing"? The feelings and personality could just be a gimmick, simply because we designed it to simulate a human. I also agree with @MaXenzie on morality. Would AI develop one based on our values, or make its own version of "right" and "wrong"?

This is all speculation so take it with a pinch of salt. It seems to me that each question answered raises another two in its place.
 

Adrenaline

relaxed
Joined
Apr 27, 2016
Messages
1,439
Nebulae
589
Well, to be completely honest, I can give an example of why it's quite impossible to predict what a robot thinks.

Someone tells a robot "I want to kill Jews." The robot asks why. The person gives a poor explanation like "They just steal." The robot decides that is a good explanation and proceeds to murder Jews.

You can't change its mind about Jews unless that was programmed in from the start.


What I mean is that if we give robots an AI that can decide things for itself, it can go really bad.

They could become an actual racist, a Nazi, or a radical Islamic terrorist.

Imagine a robot walking into a school and smashing all the black kids to death.
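
To make that concrete, here's a rough Python sketch (purely hypothetical, every name in it is made up) of the difference between an agent with no standard for what counts as a good justification and one whose refusals are built in before it ever starts listening:

```python
# Hypothetical sketch: an agent that adopts whatever rule it is told,
# unless a fixed list of forbidden rules was wired in before it started learning.

class NaiveAgent:
    def __init__(self, forbidden=None):
        # Constraints have to exist *before* any learning happens;
        # nothing said later can add to or override them.
        self.forbidden = set(forbidden or [])
        self.adopted_rules = []

    def hear_claim(self, rule, justification):
        # The agent has no standard for what counts as a good justification,
        # so any non-empty explanation is treated as sufficient.
        if not justification:
            return "ignored"
        if rule in self.forbidden:
            return "rejected: violates built-in constraint"
        self.adopted_rules.append(rule)
        return "adopted"


# Without constraints, a harmful rule gets adopted from a weak justification.
unsafe = NaiveAgent()
print(unsafe.hear_claim("harm group X", "they just steal"))  # adopted

# With the constraint set at design time, the same input is refused.
safe = NaiveAgent(forbidden={"harm group X"})
print(safe.hear_claim("harm group X", "they just steal"))    # rejected
```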
 

gigi

Atom
Joined
Apr 27, 2016
Messages
2,611
Nebulae
2,281
they'd never be human (they are artificial after all) but they'll probably have equal rights after a very long fight
i just hope they'd have the sense of what's right and wrong put into them so they don't do shit like massacre jews or kill blacks because some cunt said they should for some bullshit reason
 

Akula

Sangheili Bias
Joined
Apr 27, 2016
Messages
5,443
Nebulae
19,706
despite the Brotherhood of Steel being whiny, borderline-Nazi zealots who can't handle NCR freedoms, I do agree on the whole "fuck the robots, race war now" thing
 

overki11

Electron
Joined
Apr 30, 2016
Messages
545
Nebulae
137
Adrenaline said: (quoted above)
that's a really good point. should "true AI" be defined as something that can learn for itself and decide and deduce based on objective facts and experiences, or as something that learns via the opinions of others?

maybe "true AI" should be defined as one that can develop a personality by itself, again based on objective facts and it's experiences.

just some questions to think about
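
to illustrate that difference, here's a toy python sketch (nothing standard, all names made up): every claim gets tagged with where it came from, and second-hand opinion alone never carries enough weight to become a belief the way direct observation does

```python
# Hypothetical sketch: a belief store that tracks where each claim came from,
# so observation-backed claims outweigh second-hand opinions.

from collections import defaultdict

# Assumed weights: direct observation counts for far more than hearsay.
SOURCE_WEIGHT = {"observation": 1.0, "verified_fact": 0.9, "opinion": 0.1}

class BeliefStore:
    def __init__(self):
        self.evidence = defaultdict(float)  # claim -> accumulated weight

    def record(self, claim, source):
        # Unknown source types contribute nothing.
        self.evidence[claim] += SOURCE_WEIGHT.get(source, 0.0)

    def believes(self, claim, threshold=0.5):
        # A claim only becomes a belief once enough weighted evidence supports it.
        return self.evidence[claim] >= threshold


store = BeliefStore()
store.record("water boils at 100 C at sea level", "observation")
store.record("dogs can't look up", "opinion")

print(store.believes("water boils at 100 C at sea level"))  # True
print(store.believes("dogs can't look up"))                  # False
```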