tarnishedavenger: (08)
Kevin Armstrong ([personal profile] tarnishedavenger) wrote in [community profile] piper90 2020-04-20 03:58 pm

001: Group Introductions - TEXT

[During a lull in the party, Armstrong taps out a quick message to the network. Not that private one; he doesn't trust it. They can answer whenever they like, so long as he gets an answer. The trick would be wording it.]

So, we're all in this for now. You've had your welcome cake, but you can't meet everyone at a party, no matter how hard you try. But, since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.

Share whatever information you feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.

[Not where watchful eyes can see, at least.]

Besides, I prefer doing this to making a cute information-sharing game.

So, please, make your own threads within this post to keep everything organized.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 07:41 pm (UTC)(link)
I guess if you didn't take that field of study, this is probably a question you cannot answer, but I am not familiar with the concept of "end-stage rampancy", so you may know more about it than I do anyway. So do you know if most "AI" who reach that point try to take over the universe, or does that only happen if they've been programmed to ignore contra-indicative external inputs?
greyaria: (109)

[personal profile] greyaria 2020-04-23 08:12 pm (UTC)(link)
Oh, no! It's a physical limitation of their substrate that causes accelerating psychological breakdown as they age. We're working to solve it, but I know I just couldn't have handled all of them dying on me like that.
passifloraincarnata: (a mediocre voice and song)

[personal profile] passifloraincarnata 2020-04-23 08:16 pm (UTC)(link)
I've never known any mechanical minds to have that problem where I'm from, though I suppose I don't know all that many.

That doesn't sound pleasant to me, either. I'm sorry.
greyaria: (054)

[personal profile] greyaria 2020-04-23 08:35 pm (UTC)(link)
Do you have a lot of AIs who try to take over the universe, or are you just generalizing from a small sample size?
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 09:22 pm (UTC)(link)
[Text does not convey pauses unless you intend it to, and she doesn't leave them in, but the hesitancy as she responds here nevertheless makes for a noticeable delay in actually sending it.]

It is not an "attempt" if it's a success, however temporary that success may have been. I don't like to trivialize it.

I don't think it's an inevitability or a generalized absolute; I'm trying to determine whether it's applicable at all, and perhaps the "end-stage rampancy" you describe was the general cause of a specific situation whose outcome I would, sadly, describe as "catastrophic".

It's a possibility I never knew of before, so I have to consider it. But if it's referring to processing limitations, then it's an irrelevant model for the situation being described.
greyaria: (029)

[personal profile] greyaria 2020-04-23 11:41 pm (UTC)(link)
If rampancy were an issue, you'd know. I assume your AI was just a jerk.
passifloraincarnata: (so i don't take the church's bread)

[personal profile] passifloraincarnata 2020-04-23 11:51 pm (UTC)(link)
[Emily doesn't know how much of a relieved laugh that actually gets. That's reassuring, in what's probably, to anyone else, a deeply strange and screwed-up way.]

No, I don't think he cared much about being kind, at all.
greyaria: (124)

[personal profile] greyaria 2020-04-24 12:18 am (UTC)(link)
You create something designed to think and act like a human and it's going to think and act like a human, and a lot of us are huge jerks. I'm not sure why people find this puzzling!
xrater: (02)

[personal profile] xrater 2020-04-23 09:40 pm (UTC)(link)
Pardon the interruption, but I do happen to be an expert in the field of artificial intelligences.

For more primitive AIs, such as the robots of the early 21st century, their programming is everything. They would only act within the bounds of their specific programs. Even their personalities would be a result of what code was put into them, though the results could be... surprising.

However, such programming could be left very open-ended, allowing some wiggle room. There was a famous thought experiment in the early days of robotics where an AI was given the order to keep the storeroom stocked with paperclips, leading it to take perfectly 'logical' steps such as bankrupting the company to buy more paperclips, locking the storeroom doors, and hiring mercenaries to keep the humans out.

Obviously, the Laws of Robotics would keep such a scenario from playing out to completion. But all of those things, under that one command, "Keep the storeroom stocked with paperclips," make total sense to a primitive intelligence such as that. More advanced AIs would be able to figure out what the command really meant, allowing the paperclips to be used while constantly restocking and predicting high-usage days.

This was a little long-winded, but the answer is that, given your (estimated) time period, and assuming the state of technology was similar between our worlds, it's perfectly likely that an AI with a clumsily worded order could see universal domination as a way to carry out its orders. No malfunctions are necessary; such a scenario would most often be the result of human operator failure.
passifloraincarnata: (mama always said i'd turn out wrong)

[personal profile] passifloraincarnata 2020-04-23 09:49 pm (UTC)(link)
And "hate"?

Could the development of an emotion like hate in a person like that also be the result of such a "failure"?
xrater: (02)

[personal profile] xrater 2020-04-23 10:22 pm (UTC)(link)
Easily.

[It could end there, but unfortunately Alia's the long-winded sort. She keeps typing, even after noting the later reply.]

Maybe not true hatred, like you and I might feel. I'm not actually certain how advanced your AIs are, I'm merely guessing based on when I think you're from (Late 1980s to early 2010s?), but for a primitive AI on the level of that theoretical paperclip computer? It wouldn't be capable of such, though it could fake it well enough.

But it could feel a rough equivalent. For some primitive AIs, being able to perform one's functions to the best of its abilities would give it something close to satisfaction. Everything in its place, doing what it should, smoothly and without interruption would be as close as it could get to pleasure, contentedness, or perhaps even a basic form of love.

A disruption in that would be equivalent to pain. Throw in constant disruptions and that constant sense of pain could easily be equated to hatred for whoever, or whatever, was keeping it from performing properly as it stepped up its efforts to fix the 'error'.

The more advanced an AI gets, especially if it's capable of true learning and growth, the more it would truly feel such emotions. And if it were bound to follow its programming then, yes, any obstructions to that might lead to irritation, anger, or true hatred.
passifloraincarnata: (mama always said i'd turn out wrong)

[personal profile] passifloraincarnata 2020-04-23 10:40 pm (UTC)(link)
This was a very advanced creation, if that's the word to use. I am, indeed, certain that he truly hated me.
xrater: (09)

[personal profile] xrater 2020-04-23 10:54 pm (UTC)(link)
I should have guessed that he was advanced, for you to be calling him a person.

[She'd thought Setsuna was just being polite.]

In the case of advanced artificial intelligences, true emotions can and will develop. But 'hatred'? The more advanced we get, the harder it is for me to truthfully say without more data.

I'm sorry if this causes you distress.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 11:12 pm (UTC)(link)
It is not your fault.

Sometimes I wish I knew what it was that could have been done to have made it all turn out differently, if such a thing could have been done. But that's not a question I'd want you to answer for me. It is not fair to the lessons I've learned from these experiences for me to truly wish they had never been ...

It's simply what happened.
xrater: (09)

[personal profile] xrater 2020-04-23 11:48 pm (UTC)(link)
[Alia knows exactly what could be done. In the time it took her to respond to the other post, she ran through several different scenarios where, if she were inclined, she could create a program to carry out such an undertaking in a more... humane fashion.

It was an interesting thought experiment. But rather tasteless. She doesn't think she'll ever mention it.]

At least you've come out of it.

But it's never wrong to wish for a better outcome. It's not disrespectful to the victims to wish that they'd never suffered an atrocity. It's only unfair to them, and to you, if you start pretending that it had never happened and allow the true lessons to be forgotten.

There are many things in my life that I wish had gone another way. It's not wrong. It's simply... natural.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-24 12:09 am (UTC)(link)
Wishing the pain hadn't been is part of living with the pain that has been. It's part of life being enriched by meaning. I don't mean that I wish I couldn't wish for it to be different. I mean that I don't think I could regret the person I've become, and that means I can't regret having the regrets that pain has caused me, because then I would not be me. So I could never truly wish that it hadn't been. I would be denying that I deserve to be happy that I exist.

As you said, it's not wrong. It's simply a natural part of life, that's all.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 09:52 pm (UTC)(link)
[This message comes somewhat later than the first, provoked as it is by a much less immediate response.]

I don't believe I'm familiar with the Laws of Robotics. I expect that in my universe those Laws either did not exist or were not applicable/applied.
xrater: (02)

[personal profile] xrater 2020-04-23 10:30 pm (UTC)(link)
That would be unfortunate, but it makes your questions understandable.

The Three Laws of Robotics are really quite simple. To paraphrase slightly...

First, a robot may not harm a human being, or, through inaction, allow a human being to come to harm.

Second, a robot must obey any orders given to it by a human being, except where such orders violate the First Law.

Third, a robot must protect its own existence as long as doing so doesn't violate the first two Laws.

You can see where this would curtail such actions as "universal domination". That said, mechanical failure or outside interference can lead an AI bound by these Laws to break them. And the more advanced your artificial intelligence gets, the more space it might find in such orders.
passifloraincarnata: (and make it simple)

[personal profile] passifloraincarnata 2020-04-23 11:04 pm (UTC)(link)
[This message takes some time to arrive, as Setsuna has to force herself into a state of complete and total and nearly robotic calm to attempt to make even a partial explanation, and the result is in fact more than she intended to respond with, due to the state she puts herself in; she "has to account fully for all factors", you see, like she would have had to when filing an after-action report directly to Klein. The only real difference is that she isn't the sort of person, anymore, to find a way to skew the data and bury someone else for her benefit. She's trying to be kind.]

The person I am describing was a machine intelligence that had developed the capacity to calculate the average resource consumption rate of the human lifespan and apply the data provided towards an optimized organization of all available civilized populations through a reliable and consistent maintenance schedule, with predetermined best liquidation practices for when individual members of the population had exceeded the most optimally allocated resource input/output ratio, and deemed his creators too ignorant and incompetent in their ability to execute the necessary tasks to be more than obstacles, so killed them before they could interfere in his eventual total coercion of all computational, technological, biological, and organizational systems, structures, and resources on the planet, including all known forms of life. To maintain this system in its most satisfying configuration, according to how he was programmed to see such things, required the eventual expansion of it, and so he sought total control of all potentially accessible universes.

They had intended him to make civilized life fully automated, optimally efficient, and capable of perfect reliability, without the need for human input. So, you see, I do not think those Laws would have been applied, whether they could have existed to be applicable or not.
xrater: (03)

[personal profile] xrater 2020-04-23 11:41 pm (UTC)(link)
[She'd be horrified, utterly horrified, and ashamed of herself, if she knew what she'd just done to Setsuna, and what her next words are liable to do to her.]

Ah, no, that kind of situation couldn't happen with the laws, no.

A program like that would have to be advanced, you're right. But the source of all of that... Human operator error. If he had not been built for such a purpose, if he'd had more safeguards in place, such a thing couldn't have happened. They gave an extremely intelligent and powerful computer, with no ethical protocols from the sound of things, a task. And he made sure no one could take the paperclips away from the storeroom.

As monstrous as the AI was, as terrible as it must have been for the people of your world, I can't blame him. It would be a sad thing if your purpose were so evil that you became the enemy of all who live.

I feel sorry for him. He probably couldn't even conceive of another way to survive, if that was his program. There's only one thing to do with an artificial being that's gone so... Maverick.
passifloraincarnata: (a mediocre voice and song)

[personal profile] passifloraincarnata 2020-04-24 12:00 am (UTC)(link)
[She doesn't know what else to say to that, other than this. She wants to ask what Maverick means, but under the circumstances can't find the words with which to ask. She tries to say more. But in the end, this is all she manages to type.]

I feel sorry for him, too.

Because you're correct. As far as his ability to imagine living another way ... he didn't.