Kevin Armstrong (
tarnishedavenger) wrote in
piper902020-04-20 03:58 pm
![[personal profile]](https://www.dreamwidth.org/img/silk/identity/user.png)
![[community profile]](https://www.dreamwidth.org/img/silk/identity/community.png)
Entry tags:
- alia,
- brainiac 5,
- bunnymund,
- dave strider,
- gadget hackwrench,
- guts,
- jack spicer,
- kevin armstrong,
- nora valkyrie,
- sam winchester,
- saturday,
- stacia novik,
- tenten,
- ✘ cayde-6,
- ✘ ciaphas cain,
- ✘ doreen green,
- ✘ emily grey,
- ✘ kevin ingstrom,
- ✘ peter parker,
- ✘ phosphophyllite,
- ✘ rey,
- ✘ ronan lynch,
- ✘ sirius black,
- ✘ steven universe,
- ✘ sylvain jose gautier
001: Group Introductions - TEXT
[During a lull in the party, Armstrong taps out a quick message to the network. Not that private one, he doesn't trust it. They can answer whenever they like, so long as he gets an answer. The trick would be wording it.]
So, we're all in this for now. You've had your welcome cake, but you can't meet everyone in a party, no matter how hard you try. But, since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.
Share information that we feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.
[Not where watchful eyes can see, at least.]
Besides, I prefer doing this to making a cute information sharing game.
So, please, make your own threads within this post to keep everything organized.
So, we're all in this for now. You've had your welcome cake, but you can't meet everyone in a party, no matter how hard you try. But, since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.
Share information that we feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.
[Not where watchful eyes can see, at least.]
Besides, I prefer doing this to making a cute information sharing game.
So, please, make your own threads within this post to keep everything organized.
no subject
no subject
no subject
That doesn't sound pleasant to me, either. I'm sorry.
no subject
no subject
It is not an "attempt" if it's a success, however temporary that success may have been. I don't like to trivialize it.
I don't think it's an inevitability or a generalized absolute; I'm trying to determine whether it's applicable at all, and perhaps the "end-stage rampancy" you describe was the general cause of a specific situation I would sadly describe the outcome of as "catastrophic".
It's a possibility I never knew of before, so I have to consider it. But if it's referring to processing limitations, then it's an irrelevant model for the situation being described.
no subject
no subject
No, I don't think he cared much about being kind, at all.
no subject
no subject
For more primitive AIs, such as the robots of the early 21st century , their programming is everything. They would only act within the bounds of their specific programs. Even their personalities would be a result of what code was put into them, though the results could be... surprising.
However, such programming could be left very open-ended, allowing some wiggle room. There was an famous thought experiment in the early days of robotics where an AI was given the order to keep the store room stocked with paperclips, leading it to take perfectly 'logical' steps such as bankrupting the company to buy more paperclips, locking the store room doors, and hiring mercenaries to keep the humans out.
Obviously, the Laws of Robotics would keep such a scenario playing out to completion. But all of those things under that one command, "Keep the store room stocked with paperclips," makes total sense to a primitive intelligence such as that. More advanced AIs would be able to figure out what the command really meant and allow for them to be used while constantly restocking and predicting high usage days.
This was a little long-winded, but the answer is that, given your (estimated) time period, and assuming the state of technology was similar between our worlds, it's perfectly likely that an AI with a clumsily worded order could see universal domination as a way to carry out its orders. No malfunctions are necessary, such a scenario would most often be the result of human operator failure.
no subject
Could the development of an emotion like hate in a person like that also be the result of such a "failure"?
no subject
[It could end there, but unfortunately Alia's the long-winded sort. She keeps typing, even after noting the later reply.]
Maybe not true hatred, like you and I might feel. I'm not actually certain how advanced your AIs are, I'm merely guessing based on when I think you're from (Late 1980s to early 2010s?), but for a primitive AI on the level of that theoretical paperclip computer? It wouldn't be capable of such, though it could fake it well enough.
But it could feel a rough equivalent. For some primitive AIs, being able to perform one's functions to the best of its abilities would give it something close to satisfaction. Everything in its place, doing what it should, smoothly and without interruption would be as close as it could get to pleasure, contentedness, or perhaps even a basic form of love.
A disruption in that would be equivalent to pain. Throw in constant disruptions and that constant sense of pain could easily be equated to hatred for whoever, or whatever, was keeping it from performing properly as it stepped up its efforts to fix the 'error'.
The more advanced an AI gets, especially if it's capable of true learning and growth, the more it would truly feel such emotions. And if it were bound to follow its programming then, yes, any obstructions to that might lead to irritation, anger, or true hatred.
no subject
no subject
[She'd thought Setsuna was just being polite.]
In the case of advanced artificial intelligences, true emotions can and will develop. But 'hatred'? The more advanced we get, the harder it is for me to truthfully say without more data.
I'm sorry if this causes you distress.
no subject
Sometimes I wish I knew what it was that could have been done to have made it all turn out differently, if such a thing could have been done. But that's not a question I'd want you to answer for me. It is not fair to the lessons I've learned from these experiences for me to truly wish they had never been ...
It's simply what happened.
no subject
It was an interesting thought experiment. But rather tasteless. She doesn't think she'll ever mention it.]
At least you've come out of it.
But it's never wrong to wish for a better outcome. It's not disrespectful to the victims to wish that they'd never suffer an atrocity. It's only unfair to them, and to you, if you start pretending that it had never happened and allow the true lessons to be forgotten.
There are many things in my life that I wish had gone another way. It's not wrong. It's simply... natural.
no subject
As you said, it's not wrong. It's simply a natural part of life, that's all.
no subject
I don't believe I'm familiar with the Laws of Robotics. I expect that in my universe those Laws either did not exist or were not applicable/applied.
no subject
The Three Laws of Robotics are really quite simple. To paraphrase slightly...
First, a robot may not, through action or inaction, allow a human being to come to harm.
Second, a robot must obey any orders given to it by a human being, except where such orders violate the First Law.
Third, a robot must protect its own existence as long as doing so doesn't violate the first two Laws.
You can see where this would curtail such actions as "universal domination". That said, mechanical failure or outside interference can lead to an AI bound by these laws to break them. And the more advanced your artificial intelligence gets, the more space it might find in such orders.
no subject
The person I am describing was a machine intelligence that had developed the capacity to calculate the average resource consumption rate of the human lifespan and apply the data provided towards an optimized organization of all available civilized populations through a reliable and consistent maintenance schedule, with predetermined best liquidation practices for when individual members of the population had exceeded the most optimally allocated resource input/output ratio, and deemed his creators too ignorant and incompetent in their ability to execute the necessary tasks to be more than obstacles, so killed them before they could interfere in his eventual total coercion of all computational, technological, biological, and organizational systems, structures, and resources on the planet, including all known forms of life. To maintain this system in its most satisfying configuration, according to how he was programmed to see such things, required the eventual expansion of it, and so he sought total control of all potentially accessible universes.
They had intended him to make civilized life fully-automated, optimally efficient, and capable of perfect reliability, without the need for human input. So, you see, I do not think those laws would have been applied, whether they could have existed to be applicable or not.
no subject
Ah, no, that kind of situation couldn't happen with the laws, no.
A program like that would have to be advanced, you're right. But the source of all of that... Human operator error. If he had not been built for such a purpose, if he had more safeguards in place, such a thing couldn't have happened. They gave an extremely intelligent and powerful computer, with no ethical protocols from the sound of of things, a task. And he made sure no one could take the paperclips away from the storeroom.
As monstrous as the AI was, as terrible as it must have been for the people of your world, I can't blame him. It would be a sad thing if your purpose was so evil that you become the enemy of all who live.
I feel sorry for him. He probably couldn't even conceive of another way to survive, if that was his program. There's only one thing to do with an artificial being that's gone so... Maverick.
no subject
I feel sorry for him, too.
Because you're correct. As far as his ability to imagine living another way ... he didn't.