Kevin Armstrong ([personal profile] tarnishedavenger) wrote in [community profile] piper90 2020-04-20 03:58 pm

001: Group Introductions - TEXT

[During a lull in the party, Armstrong taps out a quick message to the network. Not that private one; he doesn't trust it. They can answer whenever they like, so long as he gets an answer. The trick would be wording it.]

So, we're all in this for now. You've had your welcome cake, but you can't meet everyone in a party, no matter how hard you try. But, since we've all been encouraged to sign up with Jorgmund, I figured now would be a good time to get some introductions done. Talk about any specialties we might have.

Share whatever information you feel comfortable sharing. This isn't to pressure anyone or to force out any dark secrets.

[Not where watchful eyes can see, at least.]

Besides, I prefer doing this to making a cute information sharing game.

So, please, make your own threads within this post to keep everything organized.

video

[personal profile] googledox 2020-04-22 12:47 am (UTC)(link)
[There is nerdery afoot, and that means Brainy is interested. There's quite a mix of people in this place, and that means others who potentially have interesting knowledge to exchange.]

[And the more they ally themselves with each other, the sooner they can work together to maybe get out of here.]

Fascinating. In my universe, we have a similar form of space travel in our threshold gates, an invention of my own.

However, for us, instead of accessing a set of higher dimensions, we access the empty space between dimensions, known as D-space, taking advantage of similar freedom from intradimensional physical laws. We can traverse galaxies in an instant, leaving reality at one point and re-entering our dimension at the final transit point through a T-gate.

The only limitation is proper navigation. There are quantum filaments that make it too difficult for navigational computers to manage. Fortunately, one of the member species of our galactic government has an intrinsic and quite frankly mystifying dimensional orientation ability. Our Kwai wayfinders offer us direct and remote navigation.

This set of higher dimensions, can they be traversed with normal navigational computers?

[personal profile] googledox 2020-04-22 12:59 am (UTC)(link)
"Bless you," is the correct English response for a sneeze, correct?

["Istvatha V'han." Gesundheit.]

We've already mapped much of the extradimensional structure around our section of our universe, with particular attention paid to the potential presence of dangerous paracausal, otherdimensional lifeforms. After eradicating a very powerful one that plagued our world, we've found that the changes to the structure of our universe's dimensional fabric are actually now providing us some natural shielding from any other potential entities.

While such travel may entail some risk elsewhere, our situation is...unique. Besides, if the risks eventually come to outweigh the benefits, our galaxy is prepared to abandon an unsafe form of FTL travel.

We did so once before. If we must do it again, I'll have to simply invent a new one that operates on different principles.

[personal profile] greyaria 2020-04-22 01:17 am (UTC)(link)
[Grey's not as space racist as Cain, because the Imperium of Man is absolutely the unchallenged champion there, but after that little three-decade genocide attempt by the Covenant, she's not keen on aliens. It does help somewhat that Brainy is basically a human palette swap (not to mention a fellow prisoner), and Grey manages a polite face, though her good cheer comes across with somewhat sharper edges than it had when she was talking to Cain.]

Slipspace is only space in the mathematical definition, to be precise. It's an 11-dimensional continuum without spatial dimensionality in the same sense as the three in Euclidean spacetime.

Navigation's only possible by computer. A smart AI's preferable, but dumb AIs are capable of it. They end up with more temporal drift, though.

[personal profile] googledox 2020-04-22 02:12 am (UTC)(link)
[Brainy doesn't pick up on the tension because he was raised by robots.]

Intriguing. [He nods his head as he gives it some consideration.] The principles are certainly sound.

[That's something to look into back home. The sneezy guy up there is right that T-gate travel has risks, and after the stargate system went down it'd certainly be wise to have another potential FTL concept in the wings.]

You have intelligent AI where you're from as well? Are they sentient?

[personal profile] greyaria 2020-04-22 03:18 am (UTC)(link)
Smart AIs are. Dumb AIs can pass for it until you get outside their subject matter expertise. Then they're disappointing.

[And then there's whatever the hell Epsilon is.]

[personal profile] greatlyexaggerated 2020-04-22 03:49 am (UTC)(link)
Pardon me, but when you say "AI", are you referring to Abominable Intelligences?

[personal profile] googledox 2020-04-22 05:05 am (UTC)(link)
Artificial intelligences.

[The words are very precise. Sharper. He does not like the word "abominable" in there.]

There's nothing abominable about them. Back home, an AI I created propagated an entire species of AIs in mechanoform bodies and they're very cooperative and productive members of galactic society.

They can be very helpful when they're inclined.

[It sounds like it's that way in...whatever her name's world too. AIs proving helpful to society.]

[personal profile] greatlyexaggerated 2020-04-22 10:53 pm (UTC)(link)
[He feels his blood chill down to the bone.]

You created an AI that could propagate itself?

[His face pales slightly, and there's simply no beating around the bush for this one. Such a statement only requires one response.]

You're utterly mad.

[personal profile] googledox 2020-04-23 01:11 am (UTC)(link)
Computo's capacity for sentience wasn't entirely intended but he developed it anyway.

[He waxes on with a parent's wisdom, like he's talking about one of those silly little things in life, like having a child that's, whoops, a surprise.]

Sometimes as a parent, you find yourself facing the unexpected. Not all children are planned for and you must try your best to care for them anyway.

He was dangerous at first, but that was largely due to my own initial rejection of him. Once I helped him evolve further, he grew emotionally and philosophically. He initially used the Roboticans to fight organics because of several instances of organic-caused synthetic genocide, but once Computo evolved, they evolved as well.

They're a very friendly people, always eager to aid others. Anytime there's some kind of natural or man-made disaster, they rebuild entire cities in just a day.

[personal profile] xrater 2020-04-23 02:32 pm (UTC)(link)
You created an artificial intelligence and named it Computo? And I thought Bamboo Pandamonium was unfortunate.

[personal profile] googledox 2020-04-24 12:34 am (UTC)(link)
It was an acronym. Cybercerebral Overlapping Multi-Processor Universal Transceiver-Operator.

At the time, I preferred names for my inventions that provided a thorough description, but my teammates often needed simpler references due to being, ah...laypeople.

[It's a much nicer thing to say than what he would've said in the past, which probably would've been "simpletons" instead of "laypeople." He really does care deeply for them.]

My friends sometimes have difficulties keeping up but they have many other winning qualities besides intellect.

[personal profile] xrater 2020-04-24 12:43 am (UTC)(link)
[She's just biting her tongue so hard right now.

That's such a dumb name.
]


[personal profile] greatlyexaggerated 2020-05-06 06:45 am (UTC)(link)
[Cain shakes his head, too stunned to even make sense of what this mad creator is saying.]

That is heretical beyond belief.

[It screams against every instinct in his body to get the attention of this obviously delusional, dangerous xenos, but it's too horrifying of a thought not to discuss.]

You're thinking of it as some kind of... twisted child, but you've unleashed a danger on your galaxy. If it's fought organics before, then it will fight them again. You acknowledged it as dangerous, but you refuse to see the dangers!

[personal profile] googledox 2020-05-06 07:38 am (UTC)(link)
All sentient beings are capable of growth.

That process can be difficult and sometimes ugly, especially for a new form of life, struggling for long-term physical and emotional viability as a sentient. But when a sentient faces acceptance and love instead of being treated as monstrous, it can lead to an apotheosis.

Evolution to a more enlightened state.

[He's speaking about more than Computo now.]

So far that change has lasted for Computo. When I chose to help him, I knew from personal experience that such a change could last, because I had once changed that way myself.

[He'd changed that way himself from the cold and callous individual he'd once been.]

[personal profile] greatlyexaggerated 2020-05-11 12:00 am (UTC)(link)
You... did?

[Cain is - dubious. It's not often xenos start preaching about love and acceptance, especially after admitting to creating Abominable Intelligences. He shakes his head again.]

Perhaps you were able to change - and I shudder to think of what you were. But for a creature like an AI, that's simply not possible. The way it thinks and acts is inextricably foreign. It will always harbor resentment towards organics for using it.


[personal profile] greyaria 2020-04-22 05:12 am (UTC)(link)
I've only met one I'd call abominable. The rest have ranged anywhere from extremely annoying to charming.

[personal profile] passifloraincarnata 2020-04-23 05:53 pm (UTC)(link)
[This reply is belated, and not necessarily meant to be directed solely to Emily; Setsuna's just ... putting it into the conversation beneath the last part of it she read before she decided she had to say something, herself, about it.]

Most people, artificial or not, are only made 'evil' by circumstance, not inevitability. If something has gone 'wrong' with a machine mind, it may be more likely the fault of whatever their creators, or parents, decided they had to do to justify their existence to them, instead ...

Not that this really eases the pain of anyone else that person's hurting in the process. But I believe that thinking of it like that makes it easier to help as many people as possible out of such a terrible fate.

text

[personal profile] greyaria 2020-04-23 07:29 pm (UTC)(link)
Really, smart AIs are only dangerous during end-stage rampancy, and they're almost always destroyed or voluntarily self-destruct before that sets in.

It's very depressing! I avoided going into AI research because of it.

[personal profile] passifloraincarnata 2020-04-23 07:41 pm (UTC)(link)
I guess if you didn't take that field of study, this is probably a question you cannot answer, but I am not familiar with the concept of "end-stage rampancy", so you may know more about it than I do anyway. So do you know if most "AI" who reach that point try to take over the universe, or does that only happen if they've been programmed to ignore contra-indicative external inputs?

[personal profile] greyaria 2020-04-23 08:12 pm (UTC)(link)
Oh, no! It's a physical limitation of their substrate that causes accelerating psychological breakdown as they age. We're working to solve it, but I know I just couldn't have handled all of them dying on me like that.

[personal profile] passifloraincarnata 2020-04-23 08:16 pm (UTC)(link)
I've never known any mechanical minds to have that problem where I'm from, though I suppose I don't know all that many.

That doesn't sound pleasant to me, either. I'm sorry.


[personal profile] xrater 2020-04-23 09:40 pm (UTC)(link)
Pardon the interruption, but I do happen to be an expert in the field of artificial intelligences.

For more primitive AIs, such as the robots of the early 21st century, their programming is everything. They would only act within the bounds of their specific programs. Even their personalities would be a result of what code was put into them, though the results could be... surprising.

However, such programming could be left very open-ended, allowing some wiggle room. There was a famous thought experiment in the early days of robotics where an AI was given the order to keep the store room stocked with paperclips, leading it to take perfectly 'logical' steps such as bankrupting the company to buy more paperclips, locking the store room doors, and hiring mercenaries to keep the humans out.

Obviously, the Laws of Robotics would keep such a scenario from playing out to completion. But all of those things, under that one command, "Keep the store room stocked with paperclips," make total sense to a primitive intelligence such as that. More advanced AIs would be able to figure out what the command really meant, allowing the paperclips to be used while constantly restocking and predicting high-usage days.

This was a little long-winded, but the answer is that, given your (estimated) time period, and assuming the state of technology was similar between our worlds, it's perfectly likely that an AI with a clumsily worded order could see universal domination as a way to carry out its orders. No malfunction is necessary; such a scenario would most often be the result of human operator error.


[personal profile] googledox 2020-04-24 01:20 am (UTC)(link)
Rampancy...

Curious.

Some of my former teammates had been displaced to my universe for a time. I wonder if you're from the same universe or a similar dimensional variant.

One of them was an AI named Cortana. She knew it would likely be the end stage of her existence, so we developed a cure. Grif and the others also mentioned a friend who was an AI but I never learned much about him in depth. Consensus seemed to be he was a form of AI that was likely immune.

[personal profile] greyaria 2020-04-24 01:30 am (UTC)(link)
Grif?

[That would be a truly staggering coincidence, but...]

Orange armor? Extraordinarily lazy? Unhealthily codependent with Simmons?
