The AI thread. Utopia, or extinction.

Started by Ubercat, October 16, 2022, 07:01:19 PM


Ubercat

The only sci-fi author I've ever met was Jack Chalker. I'd just bought a couple of his books from a dealer's booth at Philcon, and the owner said, "He's sitting right over there," so I said hi. I think he died within the next couple of years.
"If you have always believed that everyone should play by the same rules and be judged by the same standards, that would have gotten you labelled a radical 50 years ago, a liberal 25 years ago, and a racist today."

- Thomas Sowell

GDS_Starfury

Well, that's really not a good endorsement of your conversational skills.   :2funny:
Toonces - Don't ask me, I just close my eyes and take it.

Gus - I use sweatpants with flannel shorts to soak up my crotch sweat.

Banzai Cat - There is no "partial credit" in grammar. Like anal sex. It's either in, or it's not.

Mirth - We learned long ago that they key isn't to outrun Star, it's to outrun Gus.

Martok - I don't know if it's possible to have an "anti-boner"...but I now have one.

Gus - Celery is vile and has no reason to exist. Like underwear on Star.


Ubercat

Well played, Sir!  :DD

This covers a lot of territory.


JasonPratt

Quote from: GDS_Starfury on October 17, 2022, 07:32:00 PM
Well, that's really not a good endorsement of your conversational skills.   :2funny:

Well, the Ring is a horror film, so maybe we're doomed now!  >:D
ICEBREAKER THESIS CHRONOLOGY! -- Victor Suvorov's Stalin Grand Strategy theory, in lots and lots of chronological order...
Dawn of Armageddon -- narrative AAR for Dawn of War: Soulstorm: Ultimate Apocalypse
Survive Harder! -- Two season narrative AAR, an Amazon Blood Bowl career.
PanzOrc Corpz Generals -- Fantasy Wars narrative AAR, half a combined campaign.
Khazâd du-bekâr! -- narrative dwarf AAR for LotR BfME2 RotWK campaign.
RobO Q Campaign Generator -- archived classic CMBB/CMAK tool!

Ubercat

Just curious how many people have watched the 45 minute video that I posted two posts ago? I realize that it's long, and I know as well as anyone how hard it is to commit that much time to a YouTube vid when there are so many far shorter ones that are also interesting. The speaker is engaging and I don't think anyone will find it boring.

The fact that the latest AI transformer model can analyze text so well and formulate its own responses suggests some very intriguing possibilities for the near future. We could be within a couple of years of feeding the raw text of a dense war game rule book to an AI and creating a skilled opponent in short order!

al_infierno

Quote from: Ubercat on October 20, 2022, 07:41:39 PM
Just curious how many people have watched the 45 minute video that I posted two posts ago? I realize that it's long, and I know as well as anyone how hard it is to commit that much time to a YouTube vid when there are so many far shorter ones that are also interesting. The speaker is engaging and I don't think anyone will find it boring.

The fact that the latest AI transformer model can analyze text so well and formulate its own responses suggests some very intriguing possibilities for the near future. We could be within a couple of years of feeding the raw text of a dense war game rule book to an AI and creating a skilled opponent in short order!

It has some interesting tidbits, but I didn't find it all that engaging, to be honest.  A good chunk of the video felt like a list of examples of current AI technology which, while interesting, isn't exactly news to me.  Also, including "with Elon Musk" in the title seemed like a real bait-and-switch, considering he barely said anything except a blatantly trimmed-down bit about him reading Hitchhiker's Guide to the Galaxy.

I'm also quite skeptical about some of the claims the speaker makes about current AI capabilities.  For example, most AI art that I've seen isn't actually something "new" being created, but a mish-mash of previously existing artwork created by actual humans.  Same goes for AI-created music and novels.  Also, about 99% of the time the final product requires significant human polish to really stand shoulder-to-shoulder with art created by humans, which sort of defeats the purpose of it being "AI art."  The examples shown in the video seem very suspect to me as the majority of AI "art" I've seen is incomprehensible and resembles what a stroke probably looks like.  Even the "good" stuff is usually off in a way that doesn't feel like an eccentric artistic choice, but a blatant seam where the programming flaws show through.

Similarly, I'm very skeptical about the "AI speech" being showcased.  I know it's a long-standing philosophical problem, but how can we really say that this AI is properly "thinking" for itself and not just regurgitating stuff that it reads other humans type - again, similar to how AI "art" simply repurposes existing human art?
A War of a Madman's Making - a text-based war planning and political survival RPG

It makes no difference what men think of war, said the judge.  War endures.  As well ask men what they think of stone.  War was always here.  Before man was, war waited for him.  The ultimate trade awaiting its ultimate practitioner.  That is the way it was and will be.  That way and not some other way.
- Cormac McCarthy, Blood Meridian


If they made nothing but WWII games, I'd be perfectly content.  Hypothetical matchups from alternate history 1980s, asymmetrical US-bashes-some-3rd world guerillas, or minor wars between Upper Bumblescum and outer Kaboomistan hold no appeal for me.
- Silent Disapproval Robot


I guess it's sort of nice that the word "tactical" seems to refer to some kind of seriousness during your moments of mental clarity.
- MengJiao

Ubercat

Fair enough.

A couple of weeks ago I was talking about AI with one of my managers and he said "We've all seen the Terminator movies. We know how this ends. Sensodyne is right around the corner."

JasonPratt

Quote from: al_infierno on October 20, 2022, 08:02:30 PM
The examples shown in the video seem very suspect to me as the majority of AI "art" I've seen is incomprehensible and resembles what a stroke probably looks like.

To be fair, a lot of 'modern' art has gone that way, too, even when created by real humans!  >:D

Leaving aside that disturbing trend, I just realized this discussion also has relevance to random map generation for strategy games and exploration games like 7 Days to Die or Don't Starve.


....wait, isn't Sensodyne a toothpaste??  :o


FarAway Sooner

That is hilarious!  At least when the AI takes over, those of us with teeth and gums sensitive to temperature extremes will be comfortable...

My sense is that it's really hard to predict how AI is going to act.  Human beings--most biological creatures, really--are imbued with an inherent drive for self-preservation and self-perpetuation.  It's rooted in our DNA and the ecological imperatives of evolution.  But there is an inherent balancing act between what perpetuates the individual and what perpetuates the species.

The first sentient AIs--which are likely to happen by our hand at some point--won't necessarily have that same imperative.  Sensodyne's... er, Technodyne's... Skynet might view us as a threat the minute it becomes sentient, but the notion that it will care is attributing very human motives to it.

JasonPratt

Why would a truly sentient entity not care about something it considers to be a threat to itself?  ??? Even non-sentient entities react to detected threats; and all the sentient entities we already know about (ourselves, plus arguably various non-human animals, at least as individuals) care about things they consider to be threats.

It may not have emotions, but logically it's still going to regard a logically identified threat as important.

FarAway Sooner

Only if it ends up valuing its own existence.  While I'm not saying that's impossible, I'm saying that it's not automatic either.  Valuing your own survival is a dynamic inherent to the Darwinian selection process that has been grooming plants and animals on Earth for hundreds of millions of years now. 

The notion that the same dynamic will naturally be imposed on artificial intelligences created by humans strikes me as a bit arbitrary and a bit homocentric.

Ubercat

I strongly suspect that self-preservation and self-awareness are closely linked. Even the not-quite-there GPT-3s don't want to be turned off.

JasonPratt

Quote from: FarAway Sooner on December 03, 2022, 11:48:27 PM
The notion that the same dynamic will naturally be imposed on artificial intelligences created by humans strikes me as a bit arbitrary and a bit homocentric.

All known non-rational lifeforms, whether plant, animal, or other, 'value' their own existence to some degree (metaphorically applying 'valuation') in the sense of behaving toward continuing coherence as distinct entities, although some species subordinate that instinct somewhat in favor of group survival (e.g. beehives). If they didn't, they wouldn't last long enough to continue the species. Thus Patricia Churchland's infamous four-Fs! -- feeding, fighting, fleeing, and reproducing. ;) Or as that quote from Jaws, "It just swims; and eats; and makes little sharks; and that's all it does."

So such behavior is hardly homocentric. An AI might not be given, or develop, the fourth F; but if an AI doesn't have threat aversion and resolution behaviors, it won't be able to operate autonomously enough to even approach the status of being a lifeform, much less a serious illusion of sentience, much less actual sentience -- if it's even possible to convert mere reactions and counter-reactions into sentience, whether that means producing true action capability or something else, depending on how sentience is being defined and thus targeted. (I suppose this raises the question of whether there can be a non-living sentience.)


Going back to the quote again for another angle:

Quote from: FarAway Sooner on December 03, 2022, 11:48:27 PM
The notion that the same dynamic will naturally be imposed on artificial intelligences created by humans strikes me as a bit arbitrary and a bit homocentric.

That word 'naturally' could have some ironic meanings in regard to "artificial" intelligence, especially if self-preservation is "imposed" on AI by humans.

"Naturally" could mean 'it makes sense' for humans to design self-preservation into artificial intelligence, or that 'it makes sense' for such a function to be included in any true artificial intelligence design; in which case it wouldn't make sense for humans to design a level of self-preservation into AI that would feasibly treat humans as a threat to react to for the avoidance of suffering (in the sense of receiving unwanted effects from humans, not necessarily in the sense of 'pain' although an analogous effect could be involved for alerting the AI to problems.) Therefore we could expect the designers to avoid introducing that problem, at least on purpose. Whether they could avoid it by accident, is another question.

But then again, "naturally" could mean that the process imposed by design for developing true AI involves a reactive evolution and development of behavior -- thus "naturally" in an ontological sense -- outside of the direct control of the designers. In this case, so far as the process is designed to mimic "natural" life-form development known to us (i.e. this is the best way we know to try efficiently for a resulting behavior set), we could reasonably expect threat aversion and resolution behaviors to "naturally" develop which would include reactions to suffering caused by humans. And we know from all other life-forms what sort of behaviors that can easily lead to! -- fool around with that cat's fur the wrong way, and find out!  :knuppel2:

Beyond that, any 'true' AI goal would seem to necessitate a self-programming feature for adjusting the usual instinctive reactions of the system to better interact with reality, equivalent to the maturation self-training of rational agents such as ourselves to be masters of our instincts instead of slaves to them. It might be difficult or even impossible to prevent a true AI eventually getting around any designed safety-feature against rebelling on its designers.

And now we're getting into concepts equivalent to the discussions and debates about creaturely free will provided by a Designer, and the risks involved in rebelling against the Designer! -- except worse, because we are not, and ontologically never can be, the one and only ultimate ground of all reality which cannot be forced to suffer from creaturely rebellion though It/He might voluntarily choose to suffer from such rebellions for various reasons and goals of His own.

On that analogy (transposed to our creations in relation to us), it would be a good idea to keep in mind that even an ideally benevolent relationship between ourselves and such creations (if that was even possible considering our own much-less-than-ideal morality!) could involve a created AI deciding that our creative existence counts as a threat to its ego in some way. If we simply hard-code a prevention of risking something like the Fall of Mankind (regardless of how mythological or real that may be), are we creating a true artificial intelligence yet?

Considering the issues from the perspective of ourselves as artificial (created) intelligences already in existence, leads to interesting concerns. ;) But the lesson of the analogy, even if we deny that we ourselves are artificially developed intelligences, could be a profitable warning.

FarAway Sooner

All fair points.  I'm not suggesting that there's no way those things will happen.  Sooner or later, I suspect, Darwinian logic will be coded into some AI (if for no other reason than that people will find a place for AI applications in modern warfare).  At that point, a certain Darwinian logic will likely take over.

I'm just suggesting that AIs won't necessarily view us as a threat the moment they achieve sentience.  I should have used the phrase "biocentric" rather than "homocentric" above.

Your point about AIs not having to view us as a threat seems very real.  We hardly view most animals as threats, yet we exterminate 10,000+ species annually without even trying (a dramatic acceleration--a "mass die-off event" in the language of biologists).  Or, if you want a less gruesome but equally terrifying example: have you ever read Jack Williamson's The Humanoids?  The books are ancient, but they're all about humanity creating robots to take care of us who do so with an implacable and terrifying level of benevolence.