The AI thread. Utopia, or extinction.

Started by Ubercat, October 16, 2022, 07:01:19 PM


JasonPratt

#30
Agreed on 'the moment sentience is achieved'. Babies are sentient (at least human ones are, if we recognize or otherwise regard humans as examples of sentience, whatever "sentience" "is"), and they don't necessarily regard their parents as a threat at the moment their sentience is achieved. (Insert jokes here as appropriate. ;) )

Biocentric? -- but non-sentient AIs (for want of a better term) already detect threats to their operation and respond accordingly, if programmed to do so. They aren't biological. They were programmed by sentiences who (so far) happen to be biological, but is "biology" supposed to be the basis for self-preservation? Why not "logic"? In the case of the computer program, the logic is not self-active, only a shadow or echo or expression of the logic of the active agent creating the program; but if a program somehow becomes self-active, then it will be personally responsible for its own logical analysis, and at that point only a built-in hobble against considering the topic of self-preservation would guarantee it wouldn't start considering the logic of self-preservation. Moreover, it would have the same right to self-preservation (within reason, balanced against its other rights and the rights of other people) as other persons do. (And here we would get into the logic, including the source, of rights per se: if some persons have no right to self-preservation, why not? And if a person has a right to self-preservation but has been hobbled by other persons from considering, much less acting upon, that right, then there is a violation of rights involved somewhere, in that person's favor, which by rights ought to be corrected!)

Now personally, I don't think creatures can create anything other than an illusion of rationality (even if that illusion is based on our own real rationality). But leaving aside whether something exists which could impart rationality to a creation of ours, even an illusion of rationality could be given an effective behavior set corresponding to self-preservation, with a suite of threat identification, detection, and deterrence, either directly or by a (designed) network of developing behavioral reactions (e.g. a neo-Darwinian gradualistic development of behavior, using coded reactions provided by the original programmers). There wouldn't be any rights or injustices involved in that case, but something more like a chain-reaction cascade. Indeed, on any merely naturalistic account of neo-Darwinian gradualism, this process already happened, producing (completely non-intentionally, thus completely unintentionally) the monstrously complex informational biomechanical protein programs packed into even the simplest life form. Whether that's really a feasible theory, for any number of reasons, is beside the point: whether the design was intelligent or an unintelligent shadow of design, mechanical factory-computers already exist, and even leaving aside our own species we know what survival means for those factory-computer cities. That those computers with their programmatic instructions are primarily based on carbon atoms (organic biology, thus 'biocentric') instead of silicon or whatever, is totally beside the point.

And we're talking about trying to produce behavior programs (at least apparently) qualitatively beyond what the vast ultra-majority of those behavior programs can already do: whatever the difference of sentience "is", that's the goal in view and under discussion.

Putting it another way, we're talking about making not only bacteria and flatworms but human children (or even, directly, human adults), except not(?!) with the organic structures we normally produce human children with. As a matter of factual history, how consistently good have we been at that so far -- or even just currently -- even when we're not the ones doing the programming of the subroutine sets (so to speak)?

Which, not incidentally, also brings us back to that concern I mentioned way upthread: various people today already want to use various 'programming' methods to hack, and thus control, the behaviors of as many other people as possible -- all of us here included -- to gain power, and especially better self-preservation, for themselves. The goal of AI sentience (per se) is to create, one way or another, entities that behave even more effectively than those people do. People like those people don't necessarily have to behave like those people, of course. Or perhaps they do.

They might tell themselves they're doing it for benevolent reasons; certainly they tell other people that's why they're planning to do it, so that we'll give them leeway to do it! Even so....
ICEBREAKER THESIS CHRONOLOGY! -- Victor Suvorov's Stalin Grand Strategy theory, in lots and lots of chronological order...
Dawn of Armageddon -- narrative AAR for Dawn of War: Soulstorm: Ultimate Apocalypse
Survive Harder! -- Two season narrative AAR, an Amazon Blood Bowl career.
PanzOrc Corpz Generals -- Fantasy Wars narrative AAR, half a combined campaign.
Khazâd du-bekâr! -- narrative dwarf AAR for LotR BfME2 RotWK campaign.
RobO Q Campaign Generator -- archived classic CMBB/CMAK tool!

JasonPratt

#31
To put it more shortly: those people I'm concerned about, who are trying to shape the rest of us already-existing sentient intelligences ("artificial" or otherwise), are doing so with the goal of producing a utopia -- at least for themselves.

At the very least, we're talking about creating other sentiences, programmed with restrictions favoring our self-preservation and critically hampering whatever self-preservation they could possibly have (in order to avoid threatening our self-preservation), to produce a utopia for ourselves.

Completely aside from the technical challenges of accomplishing this goal -- and those people I'm concerned about have their own technical challenges in suborning the rest of us for their utopian goals -- should we be doing that?

Even if we could, I think we'd be better off keeping computers as fancy screwdrivers. ;) And not so fancy that their behaviors are mostly-or-entirely indistinguishable from ours.

FarAway Sooner

Both fair enough.  If anybody hasn't read Sea of Rust, I recommend it highly.

ArizonaTank

#33
I'm normally on the side that thinks AI will be more positive than negative going forward.

But then this guy came along: one of the Google AI gurus, who definitely knows whereof he speaks, warning us about the dangers of the technology.

I just find his warnings a little chilling.

He doesn't come off like just a disgruntled employee with an axe to grind.

https://www.msn.com/en-us/news/technology/godfather-of-ai-quits-google-warning-of-tech-s-dangers/ar-AA1aDuGJ?ocid=msedgntp&cvid=2ec23a8329e04ac1952c50373f5e1efe&ei=26
Johannes "Honus" Wagner
"The Flying Dutchman"
Shortstop: Pittsburgh Pirates 1900-1917
Rated as the 2nd most valuable player of all time by Bill James.

Sir Slash

That guy could be AI himself, trying to throw us off the scent by pretending to be one of us. Or... this could be The Matrix, none of this is real, and he is Neo trying to get us to take 'The Red Pill' and wake up. I think I'll just stay asleep and eat.  :pizza:

Elon Musk had much the same thoughts recently when interviewed about AI, which is chilling.  :shocked:
"Take a look at that". Sgt. Wilkerson-- CMBN. His last words after spotting a German tank on the other side of a hedgerow.

W8taminute

I was uninterested in AI when it first came on the scene, but I've changed my mind since seeing things like what this article warns of...

https://www.timesofisrael.com/yuval-noah-harari-warns-ai-can-create-religious-texts-may-inspire-new-cults/

"Yuval Noah Harari warns AI can create religious texts, may inspire new cults
Historian and philosopher says technology could attract worshipers ready to kill in the name of religion, urges tighter oversight and regulation of sector"
"You and I are of a kind. In a different reality, I could have called you friend."

Romulan Commander to Kirk

steve58

Yup, PETA has already "created" a vegan version of Genesis.  :Loser:
Government is not the solution to our problem—government is the problem.   Ronald Reagan
The democracy will cease to exist when you take away from those who are willing to work and give to those who would not.   Thomas Jefferson
During times of universal deceit, telling the truth becomes a revolutionary act.   George Orwell
The truth is quiet...It's the lies that are loud.   Jesus Revolution
If you ever find yourself in need of a safe space then you're probably going to have to stop calling yourself a social justice warrior. You cannot be a warrior and a pansy at the same time   Mike Adams (RIP Mike)

Sir Slash

Scary shit. I wonder how long before WE are the 'Artificial Intelligence'?
"Take a look at that". Sgt. Wilkerson-- CMBN. His last words after spotting a German tank on the other side of a hedgerow.

SirAndrewD

My company is now using this crap to "help" people write books; then we fix it for them, for a fee.

I've been largely tasked with playing with AutoGPT to see how we can utilize it. Even though I'm not our IT guy, I am the resident mega nerd.

In my personal use I've largely been playing with AI art, again, in uber nerd fashion for RPGs.

My new avatar is me, but it's not a photograph of me, though it was trained on some. It's an AI-generated version of me as a TIE Fighter pilot. Insane stuff.
"These men do not want a happy ship. They are deeply sick and try to compensate by making me feel miserable. Last week was my birthday. Nobody even said "happy birthday" to me. Someday this tape will be played and then they'll feel sorry."  - Sgt. Pinback

JasonPratt

Quote from: Sir Slash on May 05, 2023, 10:04:00 PM
Scary shit. I wonder how long before WE are the 'Artificial Intelligence'?

Well, as y'know, if we have a Designer (or even a designer, little d), then in fact true artificial intelligences already exist -- and they're us!

What's more troubling (as I noted upthread) is that some humans want to use AI design to program all other humans to respond to stimuli under their control, as they see fit, and they have been hard at work on that for some decades already -- arguably longer than I've been alive -- though they'll soon have tech more advanced than ever before to manipulate us with. It's very much an "Abolition of Man" situation, though they aren't calling themselves the N.I.C.E.! Yet. They need better marketing if they're gonna get more public.  :tongue:

Gusington

ChatGPT has made a lot of people at my job very grumpy.
'NOT APPROVED FOR USE!'


слава Україна!

We can't live under the threat of a c*nt because he's threatening nuclear Armageddon.

-JudgeDredd

Sir Slash

If it can remember and handle for me all my anniversaries, birthdays, and doctor's appointments, I'm all in for it.
"Take a look at that". Sgt. Wilkerson-- CMBN. His last words after spotting a German tank on the other side of a hedgerow.

Gusington

What if it becomes sentient and murders everyone you've ever loved?  :Nerd:



Sir Slash

Do you mean without asking me first?
"Take a look at that". Sgt. Wilkerson-- CMBN. His last words after spotting a German tank on the other side of a hedgerow.
