tarnishedavenger: (08)
Kevin Armstrong ([personal profile] tarnishedavenger) wrote in [community profile] piper90 2020-04-20 03:58 pm

001: Group Introductions - TEXT

[During a lull in the party, Armstrong taps out a quick message to the network. Not that private one; he doesn't trust it. They can answer whenever they like, so long as he gets an answer. The trick would be wording it.]

So, we're all in this for now. You've had your welcome cake, but you can't meet everyone at a party, no matter how hard you try. But since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.

Share whatever information you feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.

[Not where watchful eyes can see, at least.]

Besides, I prefer doing this to making a cute information-sharing game.

So, please, make your own threads within this post to keep everything organized.
greatlyexaggerated: (um excuse me)

[personal profile] greatlyexaggerated 2020-04-22 03:49 am (UTC)(link)
Pardon me, but when you say "AI", are you referring to Abominable Intelligences?
googledox: (002)

[personal profile] googledox 2020-04-22 05:05 am (UTC)(link)
Artificial intelligences.

[The words are very precise. Sharper. He does not like the word "abominable" in there.]

There's nothing abominable about them. Back home, an AI I created propagated an entire species of AIs in mechanoform bodies and they're very cooperative and productive members of galactic society.

They can be very helpful when they're inclined.

[It sounds like it's that way in...whatever-her-name's world, too. AIs proving helpful to society.]
Edited 2020-04-22 05:05 (UTC)
greatlyexaggerated: (i'm dead)

[personal profile] greatlyexaggerated 2020-04-22 10:53 pm (UTC)(link)
[He feels his blood chill down to the bone.]

You created an AI that could propagate itself?

[His face pales slightly, and there's simply no beating around the bush for this one. Such a statement only requires one response.]

You're utterly mad.
googledox: (043)

[personal profile] googledox 2020-04-23 01:11 am (UTC)(link)
Computo's capacity for sentience wasn't entirely intended but he developed it anyway.

[He waxes on with a parent's wisdom, like he's talking about one of those silly little things in life, like having a child that's, whoops, a surprise.]

Sometimes as a parent, you find yourself facing the unexpected. Not all children are planned for and you must try your best to care for them anyway.

He was dangerous at first, but that was largely due to my own initial rejection of him. Once I helped him evolve further, he grew emotionally and philosophically. He initially used the Roboticans to fight against organics because of several instances of organic-caused synthetic genocide, but once Computo evolved, they evolved as well.

They're a very friendly people, always eager to aid others. Anytime there's some kind of natural or man-made disaster, they rebuild entire cities in just a day.
xrater: (06)

[personal profile] xrater 2020-04-23 02:32 pm (UTC)(link)
You created an artificial intelligence and named it Computo? And I thought Bamboo Pandamonium was unfortunate.
googledox: (043)

[personal profile] googledox 2020-04-24 12:34 am (UTC)(link)
It was an acronym. Cybercerebral Overlapping Multi-Processor Universal Transceiver-Operator.

At the time, I preferred names for my inventions that provided a thorough description, but my teammates often needed simpler references due to being, ah...laypeople.

[It's a much nicer thing to say than what he would've said in the past, which probably would've been "simpletons" instead of "laypeople." He really does care deeply for them.]

My friends sometimes have difficulties keeping up but they have many other winning qualities besides intellect.
xrater: (03)

[personal profile] xrater 2020-04-24 12:43 am (UTC)(link)
[She's just biting her tongue so hard right now.

That's such a dumb name.
]
googledox: (043)

[personal profile] googledox 2020-04-24 01:07 am (UTC)(link)
Don't give me that look. After trying to make everything into an easily digestible acronym, you start to run low on inspiration.

I'm now at a phase in our relationship as a team where "Please go to my lab and retrieve the glowy green orb with purple prongs on it" seems to work better for memory retention. They have an easier time with colors and shapes.

[It's...not entirely him being a jerk. Some of the Legionnaires like Ferro genuinely struggled.]
Edited 2020-04-24 01:07 (UTC)
greatlyexaggerated: (angry)

[personal profile] greatlyexaggerated 2020-05-06 06:45 am (UTC)(link)
[Cain shakes his head, too stunned to even make sense of what this mad creator is saying.]

That is heretical beyond belief.

[Every instinct in his body screams against drawing the attention of this obviously delusional, dangerous xenos, but the thought is too horrifying not to discuss.]

You're thinking of it as some kind of... twisted child, but you've unleashed a danger on your galaxy. If it's fought organics before, then it will fight them again. You acknowledged it as dangerous, but you refuse to see the dangers!
googledox: (055)

[personal profile] googledox 2020-05-06 07:38 am (UTC)(link)
All sentient beings are capable of growth.

That process can be difficult and sometimes ugly, especially for a new form of life, struggling for long-term physical and emotional viability as a sentient. But when a sentient faces acceptance and love instead of being treated as monstrous, it can lead to an apotheosis.

Evolution to a more enlightened state.

[He's speaking about more than Computo now.]

So far that change has lasted for Computo. When I chose to help him, I knew from personal experience that such a change could last, because I had once changed that way myself.

[He'd changed that way himself from the cold and callous individual he'd once been.]
Edited 2020-05-06 07:38 (UTC)
greatlyexaggerated: (really?)

[personal profile] greatlyexaggerated 2020-05-11 12:00 am (UTC)(link)
You... did?

[Cain is - dubious. It's not often xenos start preaching about love and acceptance, especially after admitting to creating Abominable Intelligences. He shakes his head again.]

Perhaps you were able to change - and I shudder to think of what you were. But for a creature like an AI, that's simply not possible. The way it thinks and acts is inextricably foreign. It will always harbor resentment towards organics for using it.
googledox: (014)

[personal profile] googledox 2020-05-11 03:37 am (UTC)(link)
["And I shudder to think of what you were."]

[Despite knowing he'd been an unpleasant, sometimes amoral, constantly snide nasshead, there is also always the knowledge of what he could have been. Even at his worst, he had been nothing like his mother, nothing like his ancestors.]

[He's ashamed of how unkind he'd been and yet proud of not being much, much worse.]

[But it isn't pride that makes him speak up in defense of himself. It's the fact that sometimes the only person who can take his inner child by the hand and reassure him that he didn't deserve it...is him.]

What I was, was someone who had been treated like a thing as a child, exploited by others for my mind, when all I'd ever wanted or needed was a parent who loved me.

[He raises his chin.]

What Computo needed was for me to recognize he was the same. That I was repeating the cycle of deprivation.

I don't know if there is an inherent difference between AI in our universes or not. But in mine...that was enough.
greyaria: (15 - 08)

[personal profile] greyaria 2020-04-22 05:12 am (UTC)(link)
I've only met one I'd call abominable. The rest have ranged anywhere from extremely annoying to charming.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 05:53 pm (UTC)(link)
[This reply is belated, and not necessarily directed solely at Emily; Setsuna's just ... putting it into the conversation beneath the last part of it she read before deciding she had to say something about it herself.]

Most people, artificial or not, are only made 'evil' by circumstance, not inevitability. If something has gone 'wrong' with a machine mind, it may more likely be the fault of whatever their creators, or parents, decided they had to do to justify their existence to them, instead ...

Not that this really eases the pain of anyone else that person's hurting in the process. But I believe that thinking of it like that makes it easier to help as many people as possible out of such a terrible fate.
greyaria: (098)

text

[personal profile] greyaria 2020-04-23 07:29 pm (UTC)(link)
Really, smart AIs are only dangerous during end-stage rampancy, and they're almost always destroyed or voluntarily self-destruct before that sets in.

It's very depressing! I avoided going into AI research because of it.
passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 07:41 pm (UTC)(link)
I guess if you didn't go into that field of study, this is probably a question you can't answer, though since I'm not familiar with the concept of "end-stage rampancy" at all, you may know more about it than I do anyway. Do you know if most "AI" who reach that point try to take over the universe, or does that only happen if they've been programmed to ignore contra-indicative external inputs?
greyaria: (109)

[personal profile] greyaria 2020-04-23 08:12 pm (UTC)(link)
Oh, no! It's a physical limitation of their substrate that causes accelerating psychological breakdown as they age. We're working to solve it, but I know I just couldn't have handled all of them dying on me like that.
passifloraincarnata: (a mediocre voice and song)

[personal profile] passifloraincarnata 2020-04-23 08:16 pm (UTC)(link)
I've never known any mechanical minds to have that problem where I'm from, though I suppose I don't know all that many.

That doesn't sound pleasant to me, either. I'm sorry.
greyaria: (054)

[personal profile] greyaria 2020-04-23 08:35 pm (UTC)(link)
Do you have a lot of AIs who try to take over the universe, or are you just generalizing from a small sample size?

xrater: (02)

[personal profile] xrater 2020-04-23 09:40 pm (UTC)(link)
Pardon the interruption, but I do happen to be an expert in the field of artificial intelligences.

For more primitive AIs, such as the robots of the early 21st century, their programming was everything. They would only act within the bounds of their specific programs. Even their personalities would be a result of what code was put into them, though the results could be... surprising.

However, such programming could be left very open-ended, allowing some wiggle room. There was a famous thought experiment in the early days of robotics where an AI was given the order to keep the store room stocked with paperclips, leading it to take perfectly 'logical' steps such as bankrupting the company to buy more paperclips, locking the store room doors, and hiring mercenaries to keep the humans out.

Obviously, the Laws of Robotics would keep such a scenario from playing out to completion. But all of those things, under that one command, "Keep the store room stocked with paperclips," make total sense to a primitive intelligence like that. More advanced AIs would be able to figure out what the command really meant, allowing the paperclips to be used while they constantly restocked and predicted high-usage days.

This was a little long-winded, but the answer is that, given your (estimated) time period, and assuming the state of technology was similar between our worlds, it's perfectly likely that an AI with a clumsily worded order could see universal domination as a way to carry out that order. No malfunction is necessary; such a scenario would most often be the result of human operator failure.
passifloraincarnata: (mama always said i'd turn out wrong)

[personal profile] passifloraincarnata 2020-04-23 09:49 pm (UTC)(link)
And "hate"?

Could the development of an emotion like hate in a person like that also be the result of such a "failure"?

passifloraincarnata: (bleed my mind out)

[personal profile] passifloraincarnata 2020-04-23 09:52 pm (UTC)(link)
[This message comes somewhat later than the first, provoked as it is by a much less immediate response.]

I don't believe I'm familiar with the Laws of Robotics. I expect that in my universe those Laws either did not exist or were not applicable/applied.

googledox: (114)

[personal profile] googledox 2020-04-24 01:20 am (UTC)(link)
Rampancy...

Curious.

Some of my former teammates had been displaced to my universe for a time. I wonder if you're from the same universe or a similar dimensional variant.

One of them was an AI named Cortana. She knew it would likely be the end stage of her existence, so we developed a cure. Grif and the others also mentioned a friend who was an AI but I never learned much about him in depth. Consensus seemed to be he was a form of AI that was likely immune.
Edited 2020-04-24 01:25 (UTC)
greyaria: (003)

[personal profile] greyaria 2020-04-24 01:30 am (UTC)(link)
Grif?

[That would be a truly staggering coincidence, but...]

Orange armor? Extraordinarily lazy? Unhealthily codependent with Simmons?
googledox: (089)

[personal profile] googledox 2020-04-24 01:47 am (UTC)(link)
Astounding! What strange and infinitesimal odds.

Yes. Orange armor, fond of snacks. He kept singlehandedly depleting the snack machines and liked giving "reviews" of snack cakes on the comms.

The laziness was...an issue in the beginning, but he eventually shaped up to be an exemplary Legionnaire. Very brave despite his fears, and loyal once he bonded with the group. He even eventually started working harder in practice, concerned about being in good form to protect our teammates. His super speed was very useful on some of our missions.

I believe I've heard him mention a Simmons, but he was not displaced to our universe. I'm unsure about any previous relationship they may have had, but Grif has been in a relationship with one of our other teammates, Richard Rider, for some time now. They seem quite happy together.
Edited 2020-04-24 01:48 (UTC)
