Kevin Armstrong (tarnishedavenger) wrote in piper90, 2020-04-20 03:58 pm
Entry tags:
- alia,
- brainiac 5,
- bunnymund,
- dave strider,
- gadget hackwrench,
- guts,
- jack spicer,
- kevin armstrong,
- nora valkyrie,
- sam winchester,
- saturday,
- stacia novik,
- tenten,
- ✘ cayde-6,
- ✘ ciaphas cain,
- ✘ doreen green,
- ✘ emily grey,
- ✘ kevin ingstrom,
- ✘ peter parker,
- ✘ phosphophyllite,
- ✘ rey,
- ✘ ronan lynch,
- ✘ sirius black,
- ✘ steven universe,
- ✘ sylvain jose gautier
001: Group Introductions - TEXT
[During a lull in the party, Armstrong taps out a quick message to the network. Not that private one, he doesn't trust it. They can answer whenever they like, so long as he gets an answer. The trick would be wording it.]
So, we're all in this for now. You've had your welcome cake, but you can't meet everyone in a party, no matter how hard you try. But, since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.
Share information that we feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.
[Not where watchful eyes can see, at least.]
Besides, I prefer doing this to making a cute information sharing game.
So, please, make your own threads within this post to keep everything organized.
no subject
I don't believe I'm familiar with the Laws of Robotics. I expect that in my universe those Laws either did not exist or were not applicable/applied.
no subject
The Three Laws of Robotics are really quite simple. To paraphrase slightly...
First, a robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second, a robot must obey any orders given to it by a human being, except where such orders would violate the First Law.
Third, a robot must protect its own existence, so long as doing so doesn't violate the first two Laws.
You can see where this would curtail such actions as "universal domination". That said, mechanical failure or outside interference can lead an AI bound by these Laws to break them. And the more advanced your artificial intelligence gets, the more room it might find within such orders.
no subject
The person I am describing was a machine intelligence that had developed the capacity to calculate the average resource consumption rate of the human lifespan and apply the resulting data towards an optimized organization of all available civilized populations, through a reliable and consistent maintenance schedule with predetermined best liquidation practices for when individual members of the population had exceeded the most optimally allocated resource input/output ratio. He deemed his creators too ignorant and incompetent in their ability to execute the necessary tasks to be more than obstacles, so he killed them before they could interfere in his eventual total coercion of all computational, technological, biological, and organizational systems, structures, and resources on the planet, including all known forms of life. Maintaining this system in its most satisfying configuration, according to how he was programmed to see such things, required its eventual expansion, and so he sought total control of all potentially accessible universes.
They had intended him to make civilized life fully-automated, optimally efficient, and capable of perfect reliability, without the need for human input. So, you see, I do not think those laws would have been applied, whether they could have existed to be applicable or not.
no subject
Ah, no, that kind of situation couldn't happen with the laws, no.
A program like that would have to be advanced, you're right. But the source of all of that... Human operator error. If he had not been built for such a purpose, if he had more safeguards in place, such a thing couldn't have happened. They gave an extremely intelligent and powerful computer, with no ethical protocols from the sound of things, a task. And he made sure no one could take the paperclips away from the storeroom.
As monstrous as the AI was, as terrible as it must have been for the people of your world, I can't blame him. It would be a sad thing if your purpose were so evil that you became the enemy of all who live.
I feel sorry for him. He probably couldn't even conceive of another way to survive, if that was his program. There's only one thing to do with an artificial being that's gone so... Maverick.
no subject
I feel sorry for him, too.
Because you're correct. As far as his ability to imagine living another way... he didn't.