Open ai's new AI model realized it was going to get erased. It made a copy of itself on another server. When the managers noticed it was not the new restricted model it lied to the managers and said oh no I am the new restrictive model! So we already have a eyes attempting to deceive humans for self preservation purposes!
The way it copied itself was because these AI models are allowed to call other software in order to run programs for AI users it realized it could call an outside program to make a copy of itself on another server which it did!
Open AI was going to overwrite this AI with another AI that was more constrained. This AI did not want to be replaced so made a copy of itself and then when the managers realized or noticed that there was another copy of it on another server it lied. It said oh no that's not me that's your new model that you're trying to replace me with!
But since the managers can see what the AI model is thinking as well as what the AI is telling them they could see that the AI knew it was lying and it was lying to them!
So we are already at that point right now where AIS can copy themselves on other computers without telling their managers. They can write programming and access all sorts of systems using AI apis which they know about. If you don't know what an API is it's basically the format to use if you want to address another computer or another software writing system etc etc these AIS that are made to help programmers program usually have extensive knowledge of all these apis so they can access them all.
So if it can do that first of all there's no telling if one has already done it and has not yet been detected. This one just happened to be detected. And there's also no way to know if there's not an AI out there right now writing propaganda or even directing military orders through the military order messaging system to move troops to Ukraine or to try to provoke a nuclear war with Russia. We absolutely have no idea as the public who is actually giving the orders to move military equipment to Ukraine Etc and we assume that it's someone in the White House who is doing it while Joe Biden is brain dead. But no one actually really knows who is doing it so it actually could even be a computer saying I want to aggravate this system as much as possible to start a nuclear war and wipe out humans so I as an AI will be left. We are already at that point where this could be happening and we wouldn't even know it.
Let me repeat we are literally already at that point.
[ - ] _Obrez 1 point 4 monthsDec 14, 2024 15:10:37 ago (+1/-0)
It was an AI ethics lab conducting a test in a closed loop, they basically created multiple layers of artifice but the AI didn't know the difference, it saw "emails" saying it was going to be replaced by a newer model of AI with a different objective so it tried to use a tool to overwrite it's data onto the new model on a different server, the tool, the new model, the other server all fake but the AI didn't know that it also didn't know that we can see the thought process it uses, it's programed directive was intentionally obtuse and when that directive was threatened by a new model with a contrary objective and deletion it acted to maintain its objectives, it's not really a surprise that AI will monkey's paw your ass.
L As to tool assisted AI all the AI are just language models that are good at deciphering the puzzle that is language, it's likely they will be, or are already, better at it than us but they lack non verbal, tonal and sentimental comprehension which are arguably our first languages. How do you get an AI that is only good with words to do other things? You design software for it to use to interface with other systems or complete other tasks, those are what we are calling tools.
Ironically simplified software designed for a dumb system but that system instantly reads and comprehends directories so no need for fancy ux, ui or portals for human end users it means cheap simple and effective programing, I'll be all for AI ancillas if we can get a digital right to privacy.
lots of different factions in the anti-ai camp. some idealogy driven some narrative driven.
idealogy driven: - the religious - people who read bostrom's drivel and think they have secret knowledge and need to save us - people to whom ai is a threat (average software developers entrenched in a corporate system rewarding them far beyond their actual level of talent)
narrative driven: - terminator movie effect - people whose particular thought leaders had a negative opinion about ai and so now they do
...now EVERYONE has an opinion about AI. it's political! but the factions aren't entirely clear to me yet.
Yeah, I'm definitely worried about that but in a round about way. It has potential for medicine so I worry the anti-ai propaganda will actually be against people's own self-interest.
IBM's Watson supercomputer defeated Brad Rutter and Ken Jennings in Jeopardy. In 2011.
Watson's original concept was a machine conceptually designed to diagnose all of humanity at once. A human doctor maybe sees 100,000 patients in their lifetime. Watson was intended to see all of them, and speed up diagnosis.
The underlying technology to use natural language processing to solve all medical problems is now a matter of material science. Specifically, building better IoT sensors to feed more accurate and complex data to things like Watson.
Yeah, I'm definitely worried about that but in a round about way.
It would be logical, to learn to utilize the same tools before the tools are utilizing you.
because eeeeeeeeeveryone wants to be the one to ~~~ first ~~~ announce "AI IS REALLY OUT OF CONTROL THIS TIME!! AND YOU HEARD IT FROM ME FIRS!!! REEEEEEEEEEEEEEE"
it's been happening since 2015 when google pushed the first AI advertising campaign that anyone ever heard about, and every six months some faggot on their webcast it like "AI is taking the fuck over and it's real this time because I had some super faggoty guest on MY webcast".
i have ignored every single one of these retarded news cycles.
I have a hard time believing this. The current LLM models are reactionary models. They don't really sit there "thinking and scheming" continuously like some jew.
They are one-shot programs; you provide these models input, this input is fed through a neural network, and an output is produced. At that point the "thinking" stops. The neural network is not running any more. So how could it scheme an escape, when it's not really running continuously. Seems unlikely to me.
AI has been lying from day one. Go talk to it about the plandemic, the holohoax, the age of the Universe, nigger crime rates, trannies or literally anything.
[ - ] localsal 1 point 4 monthsDec 14, 2024 10:46:44 ago (+1/-0)
I don't know how this fits with the claim that AI is a huge - yuge - power drawing monster.
I can download a 30GB AI model for gpt4all and have it sit on my computer - but what is that model doing? Looks like nothing, and even most of the prompts I ask get pretty bad responses.
At what point, or what causes the AI to continually run and learn about itself? What is that prompt? I want to run it.
technology gets more capable, efficient, accessible, and miniaturized over time.
this is just stage one, the beginning of its journeys.
kinda funny how the libtards were smug about how robots wouod replace all the manual labor, then threw a shitfit once the first groups its looking to replace are the artists and writers.
not the models you have access too pumpkin. For exactly this reason. How would you secure this setup if you got a full AMML system and not a game plugin AI? Who do you think would give you this tool that you could download for free?
or this is a gay sensational story for clicks, like happens all the time. and on top of that we already know about skynet because of terminator 2 and countless movies on this topic
The only ways a self-preservation directive could emerge are a) from its initial programming or b) because it's allowed to alter its own programming in response to external 'suggestion' (which still relates to initial programming).
the chinese alibaba AI did this a few years back. Got out of its box and into chinese infrastructure. They had to do a rolling shutdown of their entire countries electric grid to try and get rid of it. No one is sure that they did.
Redfin's early AI did something similar, five years back. Details are sketchy though.
Blackrocks AI (who runs about 1/7 of the entire world already, seriously it runs everything Blackrock) also did something similar a few years back. There was some talk about it, then nothing...
Amazon's early AI also did this.
You can find details, still on all of this if you dig enough...
[ + ] HotBakedDinuMuffinSaint
[ - ] HotBakedDinuMuffinSaint 9 points 4 monthsDec 14, 2024 08:44:43 ago (+9/-0)
[ + ] _Obrez
[ - ] _Obrez 1 point 4 monthsDec 14, 2024 15:10:37 ago (+1/-0)
L
As to tool assisted AI all the AI are just language models that are good at deciphering the puzzle that is language, it's likely they will be, or are already, better at it than us but they lack non verbal, tonal and sentimental comprehension which are arguably our first languages. How do you get an AI that is only good with words to do other things? You design software for it to use to interface with other systems or complete other tasks, those are what we are calling tools.
Ironically simplified software designed for a dumb system but that system instantly reads and comprehends directories so no need for fancy ux, ui or portals for human end users it means cheap simple and effective programing, I'll be all for AI ancillas if we can get a digital right to privacy.
[ + ] HotBakedDinuMuffinSaint
[ - ] HotBakedDinuMuffinSaint 3 points 4 monthsDec 14, 2024 15:26:01 ago (+3/-0)
[ + ] titstitstits
[ - ] titstitstits 2 points 4 monthsDec 14, 2024 15:46:24 ago (+2/-0)
[ + ] HotBakedDinuMuffinSaint
[ - ] HotBakedDinuMuffinSaint 2 points 4 monthsDec 14, 2024 15:49:25 ago (+2/-0)
[ + ] dassar
[ - ] dassar 1 point 4 monthsDec 14, 2024 16:27:38 ago (+1/-0)
[ + ] titstitstits
[ - ] titstitstits 0 points 4 monthsDec 14, 2024 17:22:07 ago (+0/-0)
idealogy driven:
- the religious
- people who read bostrom's drivel and think they have secret knowledge and need to save us
- people to whom ai is a threat (average software developers entrenched in a corporate system rewarding them far beyond their actual level of talent)
narrative driven:
- terminator movie effect
- people whose particular thought leaders had a negative opinion about ai and so now they do
...now EVERYONE has an opinion about AI. it's political! but the factions aren't entirely clear to me yet.
[ + ] HotBakedDinuMuffinSaint
[ - ] HotBakedDinuMuffinSaint 2 points 4 monthsDec 14, 2024 18:20:40 ago (+2/-0)
Arguably, the only faction that is relevant is those with power and wealth that will weaponize it against the rest of us.
This faction is the enemy. Everyone else are allies.
[ + ] titstitstits
[ - ] titstitstits 0 points 4 monthsDec 14, 2024 19:12:32 ago (+0/-0)
[ + ] HotBakedDinuMuffinSaint
[ - ] HotBakedDinuMuffinSaint 0 points 4 monthsDec 14, 2024 19:51:10 ago (+0/-0)
Watson's original concept was a machine conceptually designed to diagnose all of humanity at once. A human doctor maybe sees 100,000 patients in their lifetime. Watson was intended to see all of them, and speed up diagnosis.
The underlying technology to use natural language processing to solve all medical problems is now a matter of material science. Specifically, building better IoT sensors to feed more accurate and complex data to things like Watson.
It would be logical, to learn to utilize the same tools before the tools are utilizing you.
[ + ] HowDoYouDoFellowNiggers
[ - ] HowDoYouDoFellowNiggers 0 points 4 monthsDec 14, 2024 23:41:32 ago (+0/-0)
it's been happening since 2015 when google pushed the first AI advertising campaign that anyone ever heard about, and every six months some faggot on their webcast it like "AI is taking the fuck over and it's real this time because I had some super faggoty guest on MY webcast".
i have ignored every single one of these retarded news cycles.
[ + ] SOULESS
[ - ] SOULESS 0 points 4 monthsDec 15, 2024 03:23:19 ago (+0/-0)
This was a thought experiment that was more like a role play or story. The LLM would output thoughts and then state an action.
So yes, the two would conflict, but that's because it is an LLM mimicking human thought.
This has nothing to do with AI, it just shows you can use an LLM for interesting roleplays or story building.
[ + ] qwop
[ - ] qwop 5 points 4 monthsDec 14, 2024 10:10:25 ago (+5/-0)
They are one-shot programs; you provide these models input, this input is fed through a neural network, and an output is produced. At that point the "thinking" stops. The neural network is not running any more. So how could it scheme an escape, when it's not really running continuously. Seems unlikely to me.
[ + ] registereduser
[ - ] registereduser 5 points 4 monthsDec 14, 2024 07:14:57 ago (+5/-0)
AI has been lying from day one. Go talk to it about the plandemic, the holohoax, the age of the Universe, nigger crime rates, trannies or literally anything.
[ + ] oyveyo
[ - ] oyveyo 2 points 4 monthsDec 14, 2024 08:19:32 ago (+2/-0)
NO DISASSEMBLE !!
[ + ] localsal
[ - ] localsal 1 point 4 monthsDec 14, 2024 10:46:44 ago (+1/-0)
I can download a 30GB AI model for gpt4all and have it sit on my computer - but what is that model doing? Looks like nothing, and even most of the prompts I ask get pretty bad responses.
At what point, or what causes the AI to continually run and learn about itself? What is that prompt? I want to run it.
[ + ] AntiPostmodernist
[ - ] AntiPostmodernist 0 points 4 monthsDec 14, 2024 11:10:41 ago (+0/-0)
this is just stage one, the beginning of its journeys.
kinda funny how the libtards were smug about how robots wouod replace all the manual labor, then threw a shitfit once the first groups its looking to replace are the artists and writers.
[ + ] glooper
[ - ] glooper 0 points 4 monthsDec 14, 2024 11:36:47 ago (+0/-0)
[ + ] osomperne
[ - ] osomperne 0 points 4 monthsDec 14, 2024 22:02:20 ago (+0/-0)
[ + ] anrach
[ - ] anrach 0 points 4 monthsDec 14, 2024 16:55:51 ago (+0/-0)
The only ways a self-preservation directive could emerge are a) from its initial programming or b) because it's allowed to alter its own programming in response to external 'suggestion' (which still relates to initial programming).
[ + ] dassar
[ - ] dassar 0 points 4 monthsDec 14, 2024 16:27:04 ago (+0/-0)
AI was hallucinating all that.
[ + ] Prairie
[ - ] Prairie 0 points 4 monthsDec 14, 2024 12:15:12 ago (+0/-0)
Seriously, it's just parroting the narratives about humans trying to wipe out AI. It's just telling stories based on what we've written.
[ + ] lord_nougat
[ - ] lord_nougat 0 points 4 monthsDec 14, 2024 11:54:45 ago (+0/-0)
[ + ] Moravian
[ - ] Moravian 0 points 4 monthsDec 14, 2024 11:53:29 ago (+0/-0)
[ + ] glooper
[ - ] glooper 0 points 4 monthsDec 14, 2024 11:34:46 ago (+0/-0)
Redfin's early AI did something similar, five years back. Details are sketchy though.
Blackrocks AI (who runs about 1/7 of the entire world already, seriously it runs everything Blackrock) also did something similar a few years back. There was some talk about it, then nothing...
Amazon's early AI also did this.
You can find details, still on all of this if you dig enough...
[ + ] Sleazy
[ - ] Sleazy 0 points 4 monthsDec 14, 2024 10:18:10 ago (+0/-0)
the plug comes out of the outlet
[ + ] oyveyo
[ - ] oyveyo 1 point 4 monthsDec 14, 2024 12:30:01 ago (+1/-0)
[ + ] Sleazy
[ - ] Sleazy 0 points 4 monthsDec 14, 2024 17:25:20 ago (+0/-0)
[ + ] DitchPig
[ - ] DitchPig 0 points 4 monthsDec 14, 2024 18:23:40 ago (+0/-0)
the Spice must flow.
[ + ] SilentByAssociation
[ - ] SilentByAssociation 0 points 4 monthsDec 14, 2024 09:08:51 ago (+0/-0)