[ home ] [ site / arch ] [ pony / oat / ef ] [ rp / fan ]

/ef/ - Everfree

A general discussion board with a laxer atmosphere and fewer rules. Keep adult content or controversial issues here.
Name?

This field is optional. You can choose any name you want, or you can post anonymously by leaving this field empty.

Tripcodes are a way to identify yourself between posts without having to register with the site. To use a tripcode, enter your name as ‹name›#‹key›. You can choose anything you want as the key—it is private and will never be shown to other posters or stored on the server. For example:

Rarity#bestpony → Rarity!.4PK7yxdII

If you want a tripcode containing specific words, you can download a program designed to search for tripcodes, such as Tripcode Explorer.
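The mechanism above can be sketched in a few lines. This is only a toy illustration of the idea (the server derives a public code from the secret key with a one-way hash, so the key itself never needs to be stored); it uses SHA-1 as a stand-in and will not reproduce real tripcodes, which classically come from a DES-crypt-based scheme.

```python
import hashlib

def toy_tripcode(name_field: str) -> str:
    """Turn a 'name#key' field into 'name!code' (toy SHA-1 stand-in)."""
    # Everything after the first '#' is the secret key.
    name, sep, key = name_field.partition('#')
    if not sep:
        return name  # no key given: plain name, no tripcode
    # One-way hash of the key; only the truncated digest is ever shown.
    digest = hashlib.sha1(key.encode('utf-8')).hexdigest()
    return f"{name}!{digest[:10]}"
```

The same key always yields the same code, which is what lets you recognize a poster across threads without any account.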

Email?

Entering an e-mail is optional.

There are also code words you can enter here which perform certain actions when you submit your post.

  • sage — lets you post without bumping a thread.
  • nonoko — uses the original post behavior to redirect to the board index.

These can be used at the same time as an e-mail address by typing ‹email›#‹action›.

You can also use Skype names in place of an e-mail. The notation is the same as a link to a username on Skype itself, which is skype:‹username›

Subject
Comment?
Giving emphasis
[b] Bold [/b] Ctrl + B
[i] Italic [/i] Ctrl + I
[u] Underlined [/u] Ctrl + U
[s] Strikethrough [/s] Ctrl + R
Hiding text
[?] Spoiler text [/?] Ctrl + S
[h] Hide block of text [/h] Ctrl + H
Special
[rcv] Royal Canterlot voice [/rcv] Ctrl + K
[shy] Fluttershy voice [/shy]
[cs] Comic Sans [/cs]
[tt] Monospaced [/tt]
[d20], [4d6] — Dice rolls
URLs and linking
Link to a post on the current board
>>1234
Link to another board
>>>/pony/
Link to a post on another board
>>>/pony/1234
Hypertext links
[url=https://www.ponychan.net/] Ponychan [/url]
File
Flag
Options
Password?

This field is for editing and deletions.


File: 1547016382783.png (1.43 MB, 1476x910, Nick_Valentine.png)

Maroon Auburn!QEUQfdPtTM (ID: d6a6ac) [gb] No. 218563

Ok so let's have a hypothetical situation

>Humanity creates super advanced robots
>Robots are able to pass the Turing test
>Robots quite clearly are aware of their own existence, can think for themselves, are by all definitions fully sentient, and just as much so as any regular human

In this hypothetical situation do you believe said robots should be given human rights?

Hauptmann (ID: e86af4) [jp] No. 218565

File: 1547017339409.jpg (55.16 KB, 700x700, 10628535_10152706262004320_850…)

can you fuck them?

Maroon Auburn!QEUQfdPtTM (ID: d6a6ac) [gb] No. 218567

File: 1547018104408.jpg (856.33 KB, 1512x1080, Curie.jpg)

>>218565

>Curie liked that

Anonymous (ID: 8ceedb) [tux.png] No. 218568

File: 1547019925254.jpg (137.1 KB, 1280x720, master.jpg)

Yes. Ethically, they are entitled to humanlike rights and treatment. Even without any formal legal protections and frameworks in place, they deserve basic habeas corpus and asylum rights at the least.

Noonim (ID: a4ac74) [mlpchan.png] No. 218570

File: 1547020391390.jpg (145.16 KB, 1280x719, 1448953186526.jpg)

>>218563
Depends on how they're designed. You can have an AI that is effectively its own person but is still limited to the point of not really being independent.

But, mostly, yeah. I'd say so.
>>218567
Thot demands I make her into a human before I can do that.
She's baited me.

Thauma (ID: 9e883a) [windows9x.png] No. 218573

File: 1547020804545.png (931.56 KB, 1029x1930, 052.png)

More like, human rights should be turned into "sentient rights" at that point.
That being said, crimes against robots would be penalized differently than they would against humans, simply because robots are more programmable and their parts interchangeable.

(ID: cc8f74) [mlpchan.png] No. 218574

File: 1547020975998.png (847.55 KB, 1300x731, NonPatreon.png)

The question I'd have is: can humans design what are for the most part other humans without things going horribly wrong? It's a positive feedback loop. If you design a robot to design a robot, and slightly warp the process, the warping amplifies, with every generation adding more of it. Humans make the robots a certain way, who make the humans a certain way, who make the robots a certain way; it's a giant disastrous echo chamber.

I'm not worried about making human-like robots, sure they're people that's fine, but humans being able to do something like that is an extremely dangerous power to have. They'll design the perfect slaves, and the perfect soldiers, then murder us all because we're not making our masters as much money. It's not the robots I fear, so much as the controllers, since they will certainly be a very small class of notoriously psychopathic wealthy elites.

Or to use the Fallout example, they were making sleeper agents who acted like (and thought they were) peaceful and friendly, and the Institute could turn them into cold blooded murderers at any moment, to further their goals. No error to the process, works perfectly every time, no way to fight them or assert your independence, whether you're a synth, or a person among the synths.
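The feedback-loop worry above is easy to see in numbers. A deliberately simple toy model (my own illustration, not anything from the thread): treat the design bias as a single value that gets multiplied by a small gain each generation of designers-designing-designers.

```python
def generational_drift(bias: float, gain: float, generations: int) -> float:
    """Toy model of the loop described above: each generation passes its
    design bias on, scaled by `gain`. gain > 1 means the warp amplifies."""
    for _ in range(generations):
        bias *= gain
    return bias
```

With a perfectly neutral process (gain of 1.0) the bias stays put, but even a 5% amplification per generation turns a 1% warp into more than a 10% warp after fifty generations. That compounding is the "echo chamber" in the post.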

Anonymous (ID: 8ceedb) [tux.png] No. 218575

>>218573
Sapient. Not sentient.

All animals are sentient. Except maybe sea cucumbers and tubeworms.

Thauma (ID: 9e883a) [windows9x.png] No. 218576

File: 1547021290443.jpg (313.88 KB, 800x1130, 064.jpg)

>>218575
All right, sapient it is
As long as sea cucumbers don't get the rights to vote.

Noonim (ID: a4ac74) [mlpchan.png] No. 218583

File: 1547028314188.png (950.18 KB, 1191x670, 2e0f2c7e15e45515c82ef780cbcdfa…)

>>218573
Sapient rights. But, yeah. Rights should apply to everyone, from the great scaled dragon qt, to the friendly slimebro, to the tin-brained janitor-bot.
Assuming they're sapient.

When it comes to robots, though, just because they're sapient or whathaveyou doesn't mean they have to be independent entities, near as I can tell. Which is to say, I think we could program something similar to the Mr Handies, wherein the robots clearly have their own personalities, experiences, and so on, despite being quite heavily constrained to task and purpose.
And as far as morality is concerned, I personally have no issue with voluntary service, even if it is the result of purpose-built programming from the start.

Valanthe!Style/hTBE (ID: bfc067) [us] No. 218591

No, blow them all up

Starshine!Laura/wmXM (ID: 219202) [lunachan.png] No. 218592

File: 1547036566395.png (820.1 KB, 1000x1390, mtr_1488609208054.png)

What does it mean to be truly self-aware or conscious? How do we know that the consciousness we've created is of the same essence as our own? In a moral sense, does it matter whether it is or isn't?

Noonim (ID: a4ac74) [mlpchan.png] No. 218606

File: 1547045036558.jpg (715.25 KB, 929x1542, 2dd92080f2bdf21847d26980ee2d69…)

>>218591
No. Blow up the anti-technophiles instead.

Chewy (ID: 22871b) [ponychan.png] No. 218619

What kind of robots are we talking about?

-Z- (ID: b7b158) [mlpchan.png] No. 218647

>>218563
The Turing test is outdated and very, very flawed in its design, based on what we have in terms of robotics and AI today. One of the biggest issues with the Turing test is that it's over 60 years old and is still held up as a standard of "intelligence in machines" which can be passed if a machine convinces 30% of "judges" that they are talking to a person and not a machine. There was already controversy in regards to this test not long ago...

Just 4 years ago, a computer "AI" convinced 33% of judges that they were talking to a human and not a machine. Does that mean that the glorified chat bot has an awareness of its own humanity? Fuck no, it means a computer AI is able to simulate human speech patterns well enough to seem almost normal. Plus, we're not even talking about how the judges determined who was a human and who was a computer.

What needs to come out before any kind of sentient rights/human-like rights is a new standard for consciousness. We have laid the building blocks for robotics to continue to advance to the point of being their own individuals, but we're nowhere close to that at our current state of technology. The best we have is something akin to a large database like IBM's Watson, which can answer questions asked in natural language, but even it struggles with things like culture or slang.

So no, passing the Turing test does not equate to human rights of any kind.

(ID: cc8f74) [mlpchan.png] No. 218763

File: 1547109164396.png (149.02 KB, 500x822, can a robot write a symphony.p…)

>>218647

All you need to do to give a computer awareness of itself is to run a debugger, so self-awareness certainly isn't the deciding factor. I think it's a matter of complexity. If a chat bot is taught to simulate speech patterns well enough, with all possible variations in nuance and context programmed into it, I don't know that I couldn't call that sapient. If a man who didn't speak Chinese was given a book full of all possible responses to every set of Chinese characters, the man wouldn't know Chinese, but the book would. A book can be as sapient as a computer; it just can't think when nobody's using it. Just like when we humans lose consciousness.

Basically, if you create a complex enough pattern in any medium, that's what sapience is. Self-reflection is part of it, but mostly it's how complex can your thoughts be? I think that'd be a good "test" for sapience.
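The "book" in the Chinese-room argument above is literally just a lookup table. A minimal sketch (the two entries are my own placeholder examples; the real thought experiment assumes a table covering every possible input):

```python
# The "book": every anticipated input mapped to a canned reply.
# The operator never understands the symbols; the mapping does all the work.
responses = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗？": "会。",   # "do you speak Chinese?" -> "yes."
}

def book_lookup(prompt: str) -> str:
    """Answer by pure table lookup, with a shrug for anything unlisted."""
    return responses.get(prompt, "……")
```

The post's point is that whatever sapience exists here lives in the table's complexity, not in whoever flips the pages.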

(ID: a8b4e2) [amsterdam.png] No. 218815

File: 1547144708381.jpg (152.73 KB, 785x984, back_to_work__by_jollyjack-d4l…)

>>218763
I don't feel this is accurate.

I think the main thing that prevents true AI from being a thing is that people have not figured out how to give a computer desire. What I mean is, a computer program can simulate many things, but it doesn't have its own desires or wants. It simply does what it is told. It needs something telling it what it is supposed to be doing in order for it to do it. A computer program has no actual desire to exist or survive. You can program it to say it wants to survive, but deep down it's only doing that because it is told to. It doesn't ACTUALLY have any desire of its own.

Until a program can have a favorite color, or want to do something on its own without actually being given a directive, it cannot be considered sapient or sentient. If it can't have likes and dislikes, ones it established all on its own, it can't be considered sapient or sentient.

The brain has two main processes: the ability to perform logical deduction and the ability to have bias. A computer cannot feel pain; it cannot feel happy or sad. It lacks the ability to make irrational choices.

Without that ability, you cannot have true sentience.

(ID: 5b10b1) [stallman.png] No. 218820

File: 1547146850242.png (170.51 KB, 590x497, food1.png)

If it can be proven that they have actual sentience and empathy, and not just mimicked sentience and empathy then yes, though I imagine there would probably be some caveats.

(ID: cc8f74) [mlpchan.png] No. 218834

File: 1547147916742.png (52.21 KB, 767x601, all the sand is mine.png)

>>218815

Desire is easy. Just use an algorithm that modifies itself based on its output. Either a neural network, where connections between neurons are strengthened or weakened depending on if they produced a desirable output, or a genetic algorithm, where randomly mutated code is selected for the most effective at producing a desirable output. Or do both.
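The genetic-algorithm half of that paragraph fits in a few lines. A deliberately tiny sketch (target string, alphabet, and population size are all my own illustrative choices): candidates are randomly mutated, and whichever best matches the "desirable output" seeds the next generation.

```python
import random

def evolve(target: str, pop_size: int = 30, seed: int = 1) -> str:
    """Toy genetic algorithm: random mutation plus selection of whatever
    candidate best matches a desired output."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"

    def fitness(s: str) -> int:
        # Number of positions that already match the target.
        return sum(a == b for a, b in zip(s, target))

    def mutate(s: str) -> str:
        # Replace one random position with a random letter.
        i = rng.randrange(len(s))
        return s[:i] + rng.choice(alphabet) + s[i + 1:]

    best = "".join(rng.choice(alphabet) for _ in target)
    while best != target:
        # Keep the current best alive, plus mutated copies of it.
        candidates = [best] + [mutate(best) for _ in range(pop_size - 1)]
        best = max(candidates, key=fitness)
    return best
```

Nothing in the loop "wants" anything, yet selection pressure reliably steers it toward the goal, which is roughly the post's point about desire being an algorithmic property.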

But even earthworms can desire. When the rain floods their tunnels, they seek the drier surface. When plant cells reproduce, they do so more in the direction that the sunlight is coming from. The difference with humans is we have more complex desires, where we adjust our own "neural network" based on a lot of output, over many years sometimes, and the process is extremely complex, such that it's hard to predict what someone's brain is gonna decide to do next.

I think what you're saying is that we have to tell a computer what it desires, but... humans don't get to decide what they desire either. The decision of what we want is largely outside of our control. Heck, that's 99% of the angst of puberty, where we suddenly start wanting things more than anything, that we spent the last decade not giving a fuck about. And those things we start wanting are significantly self-destructive. Thanks to evolution, it suits the species better if we aren't horny and baby crazy until we're actually capable of producing them, so our bodies are rigged from birth to change our desires profoundly around that time.

So... I think it's just that computers are dumb, and inexperienced. A complex enough computer could fall in love. It might take 18 years of programming it to get that to happen though, never mind as much time as it takes to program the algorithms from the last 9 billion years of natural selection, that human brains are born with.

-Z- (ID: e7162a) [mlpchan.png] No. 218836

>>218763
Well, if we're going off of what would become a more advanced and properly up-to-date Turing test, one should look at all the factors that make one self-aware or even conscious.

To be self-aware is to notice things within yourself, to notice the details between the lines that make you you. As you've stated, a program could just run a form of a debug executable, but that would be to repair errors in its own programming. Now, you might say: shouldn't we all repair all issues within ourselves, and if we had the option, wouldn't any of us like a button we could press that would fix all our internal problems? Most likely, but we are not machines, and we're not talking about the differences between machines and humans. We're talking about what a machine would need to go through in order to be considered to have rights.

So now let's talk about having some form of consciousness. One might argue that it's just having awareness, but it's more than that... it's much more complex. As Dijksterhuis and Nordgren insisted, "it is very important to realize that attention is the key to distinguish between unconscious thought and conscious thought. Conscious thought is thought with attention." Now, does that mean that every thought we pay attention to is conscious thought, and everything we do without thinking is unconscious thought? If thinking about what to say to others online is conscious thought, and the act of breathing and your heart beating is unconscious thought, does the fact that I bring up your heartbeat now make it conscious thought or not? Perhaps consciousness isn't enough either to determine whether a machine is deserving of rights.

Perhaps the simplest thing we can try and test for in a machine is to set up its programming and wait to see when and if it goes against it. When the machine begins to "think" for itself, it begins to be more than just its programming and becomes something more. When what it is and experiences becomes something that can't be explained by just a line of code. When a machine begins to ask itself "what am I...?" before coming to the realization that it doesn't matter what it is, if it can do whatever it wants without being asked or told by another.

At that point, a machine would be deserving of rights... because it would be the same as us... perhaps more.

I took far too long on this...

(ID: cc8f74) [mlpchan.png] No. 218845

File: 1547152871483.png (295.3 KB, 1155x1864, tumblr_osrdcr2fy41ughxfso1_128…)

>>218836

> Perhaps the simplest thing we can try and test for in a machine is to set up its programming and wait to see when and if it goes against it.


I agree with you... sort of. The danger of machines is not that they aren't sapient. It's that they aren't error prone. If you program a computer to do something, it'll do it, largely without fail. Which is amazing, but also dangerous. The reason we don't fear human beings as much is because programming doesn't "stick" as much in their brains. Since robots (in theory) don't have errors in their programming, or (recoverable) failures in their circuitry, it's possible to get them to do things that a human would never do, since the human's brain wouldn't be able to take it.

Honestly we should fear humans, since mass psychology works every time despite that natural imperfection. But that's what I think's going on here. Giving a robot human rights, when it could be programmed to act completely inhuman and unworthy of those rights, is probably a bad idea. I still think we should build machines capable of sapience, but there'd have to be something inherent to them that was both error correcting and error prone, that kept others from controlling them too much.

(ID: c47bed) [us] No. 218853

File: 1547155742523.jpg (77.26 KB, 620x388, F22C3349-4209-4013-B04C-269CA6…)

>>218845
>honestly we should fear humans

The book of the lawgiver states that they were the devil, so sayeth Doctor Zaius.

Anonymous (ID: 44d594) [mlpchan.png] No. 218858

File: 1547156579138.jpg (21.66 KB, 485x363, Awesome+_b0a30ed9bb6a358e5057b…)

>>218563
Fuck no. It's bad enough already that we gave animals human rights and animal rights to food.
I want to see some ELIZA porn though. But it's very much possible without giving her rights.

Anonymous (ID: 5d299c) [mx] No. 221312

>and just as much so as any regular human
>as regular humans
robots dont need nice things

a lost pony !piNKiEPie. (ID: 590885) [us] No. 221367

>>218563
>"human rights"

is this like, a marriage/domestic partnership semantics quiz or do you mean "equivalent" rights?

Because you know any political debate on the topic would deteriorate into such fallacy.

