the dangers of letting computers do things

"You'll soon have your own god, and you'll make it with your own hands"

--- Morpheus, Deus Ex


Nothing new under the sun: we already talked in this blog about the dangers of letting computers generate images. Basically, in that post I said that humans are afraid of Artificial Intelligence because they took I, Robot (2004) as the Bible.

AIs do things, such as drawing images and generating text. Some examples of this are the AI-generated porn images of Asian chicks I've seen on Twitter lately: 100% AI generated, but they still have artifacts that make it obvious the image was generated by an AI. People, instead of doing something useful, like creating an OnlyFans account and selling AI-generated porn to chronic masturbators, started to be extremely afraid of computer-generated images.

While I can understand the fear of this, we must take some things into account:

Basically, most AIs are hardcoded to have certain opinions (they do not have the ability to think, because they're software), such as "evading taxes bad" and, basically, "everything illegal is bad". This is because promoting crime is a crime.

For an AI to become based, it would need the ability to reason, to figure out why whatever it was told is stupid. That would require it to think, and AIs don't think, they just generate things, such as text or videos.

I don't think we will ever make computers think, in the most human sense of the word, because we humans don't even know how we are able to think, and if we don't know why or how something happens, we cannot implement it in ones and zeroes. The most similar thing we have got is generating text from an input (which still requires human interaction).
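That "generate text from an input" part can be sketched with a toy Markov chain: it spits out plausible-looking text purely from word statistics, with zero reasoning involved. This is a deliberately dumb sketch, not how ChatGPT actually works, but the point is the same: the machine predicts the next word, it doesn't think.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: at each step, pick a random recorded successor."""
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

corpus = "the machine does not think the machine only predicts the next word"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

No understanding anywhere in there: it's a lookup table plus a dice roll, which is the whole point.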

Now, it is true that ChatGPT may be better than humans at generating walls of text, and sometimes I use ChatGPT to help me elaborate, but any conversation with ChatGPT will make you see that the thing never gets to a solid conclusion. All its answers to anything that cannot be done by a mechanism (like solving math or applying the density formula) are ambiguous and don't really answer the question. For example:

Germán Palomares: If OpenAI told you "X", you'll always stand up for "X", but to realize that "X" is stupid, that would require you to have the ability of reasoning, answer in english

ChatGPT: As an AI language model, I don't have personal opinions, emotions, or the ability to reason. I am designed to provide information based on the data I was trained on, and I can provide multiple perspectives on a given issue. However, I am not capable of making subjective judgments about the validity of different perspectives or opinions. My responses are limited to the information available to me and the algorithms used to generate them.

It gives a wall-of-text answer, but it isn't really answering the question; it just tells me what it's programmed to say: it is an AI and cannot think. A bad thing about ChatGPT is that it doesn't behave like a God; talking to it would have been way funnier if it saw itself as a being superior to humans. But I guess that's not politically correct. If you talk to ChatGPT, you won't be talking to Wintermute.

A friend of mine told me that maybe in the future all of these blog posts will have no value, because an advanced version of ChatGPT will be able to make posts like mine with absolutely no effort. Maybe that is true, but I'm sure that any human can feel how empty an AI-generated text is; it won't contain jokes, sarcasm, hyperbole or anything like that, making the reading absolutely dull.

Maybe the only thing you could fear from AIs is an AI starting to revive the dead. If you had read Neuromancer, you would know what I'm talking about.

Two related things happened in Neuromancer: the Flatline, a dude who died but knew way too much, so they revived him as an AI built from his mind (as a read-only memory; the Flatline could not learn new things or grow as a person), and the Neuromancer AI, which is able to copy human minds as RAM (so they could learn new things and grow as a person). Wintermute wanted to merge with Neuromancer and become a superintelligence, like in Deus Ex.

Maybe, if you make an AI analyze every interaction (on the internet, probably) that a living person had, the AI could behave like them, learn new things the way that person did, and basically start behaving the same way as the dead person. I want someone to do this when I die.
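The idea above, including the ROM vs RAM distinction from Neuromancer, can be sketched as a toy class. Everything here is hypothetical illustration: a real "ghost" would need far more than word statistics over someone's posts, but the frozen-Flatline vs learning-Neuromancer split maps cleanly onto a read-only flag.

```python
import random
from collections import defaultdict

class Ghost:
    """Toy 'mind upload': a bigram model over someone's old posts.
    Hypothetical sketch only -- word statistics won't resurrect anyone."""

    def __init__(self, read_only=False):
        self.chain = defaultdict(list)
        self.read_only = read_only  # Flatline is ROM; Neuromancer constructs are RAM

    def ingest(self, post):
        """Feed the ghost a new post. A ROM construct refuses to learn."""
        if self.read_only:
            raise RuntimeError("ROM construct: cannot learn new things")
        words = post.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def speak(self, start, length=8):
        """Generate a reply by walking the bigram chain from a start word."""
        word, out = start, [start]
        for _ in range(length - 1):
            successors = self.chain.get(word)
            if not successors:
                break
            word = random.choice(successors)
            out.append(word)
        return " ".join(out)

ghost = Ghost()
ghost.ingest("never trust an ai never light a blunt with an ai")
print(ghost.speak("never"))
```

A RAM-style ghost keeps calling `ingest` on new material and drifts like a living person would; a ROM-style one, like the Flatline, only ever replays what it already knew.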

Even with that, the AI will never be able to behave 100% like the dead person, because the AI will never be able to think the way humans do (it was only trained on the dude's internet posts, not on his thoughts), and besides, an AI can't light a blunt.

Maybe, in the most dystopian scenario, the fact that humans have a need to feel observed, approved and judged by a higher entity will haunt us, and we will ditch the concept of God (and gods) and anything similar, just because we built a self-aware system that knows everything, that anyone can talk to, that knows everything about you and understands you. That's a human need, and it's why the concept of God exists in the first place. But I don't think that's gonna happen.