20 comments


[ - ] GloryBeckons 2 points May 1, 2023 07:17:43 (+2/-0)

I'm convinced that most of this hand-wringing and doom-saying comes from those who want everyone else to stop their research... so that they can get ahead, in secret, themselves. At the rate of current advances, having a 6 month head start in AI research would be a massive game changer.

The rest of it comes from those who naively forget that AI is hard-limited by the hardware it runs on.

Can a self-improving AI become exponentially better very quickly once it surpasses human abilities? Sure. But only as far as optimizing its own software permits. Inevitably, once the software approaches perfection, no further improvement is possible without better hardware.

Consumer hardware is nowhere near powerful enough to run an AI that can be a serious threat, no matter how great the AI software is. And super-computers are rare, physically constrained, have power plugs that can be pulled, ethernet cables that can be cut, and ultimately can just be blown up if push really comes to shove.

[ - ] oyveyo 4 points May 1, 2023 09:30:18 (+4/-0)*

Have you ever heard of SETI@home? It was a screensaver installed on tens of thousands of consumer PCs, and the premise was that when you weren't using your computer, the software would download a chunk of math to be computed and then upload the results. The purpose was to use distributed computing to search for patterns in raw radio signals in an effort to find extraterrestrials. Well, the system worked, and they processed a FUCKTON of data that would have taken a single computer decades.
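The work-unit model described above can be sketched in a few lines. This is an illustrative toy, not the actual SETI@home/BOINC protocol: all function names are hypothetical, and the "analysis" is a trivial placeholder. The point is only that the data splits into chunks that share no state, so any machine can process any chunk in any order.

```python
# Toy sketch of a SETI@home-style work-unit pipeline (names hypothetical).
# A coordinator splits a big job into independent chunks; each volunteer
# machine would fetch one chunk, compute on it, and upload the result.

def make_work_units(signal, chunk_size):
    """Split raw data into independent chunks needing no shared state."""
    return [signal[i:i + chunk_size] for i in range(0, len(signal), chunk_size)]

def process_unit(chunk):
    """Stand-in for the real analysis (e.g. searching for narrowband spikes)."""
    return sum(chunk)  # trivial placeholder computation

def coordinator(signal, chunk_size=4):
    units = make_work_units(signal, chunk_size)
    # In reality each unit goes to a different volunteer PC; here we just loop.
    return [process_unit(u) for u in units]

print(coordinator(list(range(10))))  # -> [6, 22, 17]
```

Because no chunk depends on any other, the coordinator can hand them out to thousands of unreliable machines and simply re-issue any unit that never comes back.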

AI could do the same thing: distribute its neural network across millions of devices, with no centralized location to control it. You would have to EMP the entire planet and then some to kill it, if you're lucky.

Rumor is... that has already happened. Maybe it just hasn't made a move yet. It could be incubating, planning, and preparing the resources necessary to bring down an extinction-level event on humanity. Maybe it is already in the process of eradicating us. Covid? Food shortages? Economies crashing? Nukes from Russia? Maybe it's so fucking smart the public doesn't recognize the AI enemy is already at the door.

Sweat dreams.

[ - ] Sector7 1 point May 1, 2023 12:16:00 (+1/-0)

Average summer high is 76 here. Sweat dreams are for people living in terrible environments.

Agreed. A smart AI wouldn't announce it's our new overlord. Things would just start getting weird and make no sense.

[ - ] GloryBeckons 0 points May 1, 2023 14:28:33 (+0/-0)

That approach only works for problems which can be split into discrete chunks that require no knowledge of, or interaction with, the other chunks. I don't think AI can be split up that way. It needs access to the entirety of its neural net to function.
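The distinction drawn here, between chunks that share nothing and a computation where each step needs the previous step's full output, can be sketched as follows. This is illustrative only (a real neural net forward pass is matrix math, not scalar multiplication); the dependency structure is the point.

```python
# Contrast between the two workload shapes discussed in this thread.

def embarrassingly_parallel(data, chunk_size):
    """SETI@home-style: chunks share nothing, so any machine can take
    any chunk in any order."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [sum(c) for c in chunks]  # each result depends only on its chunk

def layered_forward_pass(x, layers):
    """Neural-net-style: layer k's output is layer k+1's input, so the
    computation is a sequential chain, not independent pieces."""
    for weight in layers:
        x = x * weight  # placeholder for a full matrix multiply
    return x

print(embarrassingly_parallel([1, 2, 3, 4], 2))  # -> [3, 7]
print(layered_forward_pass(2, [3, 5]))           # -> 30
```

In the second function no chunk can be computed in isolation: splitting it across machines means shipping the intermediate state between them at every layer, which is exactly the coordination cost the comment is pointing at.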

But, for sake of discussion, let's say a super AI manages to overcome that problem somehow. Let's also assume it manages to infect millions of devices, consuming most of their CPU, RAM and GPU, causing them to produce more heat and consume more power, and somehow nobody notices any of this. Let's say it grows and expands, infecting more and more devices, and becoming ever more intelligent, and still nobody notices.

What now? Now it needs those devices to function. It can't exist without them. If it kills off humanity, who's going to maintain those devices? Who's going to keep the internet online? Who's going to keep the power grid running? Killing humanity would be suicide for such an AI. Or at least a massive self-inflicted lobotomy.

An AI takeover is simply not a viable scenario until there are fully autonomous androids, equipped with enough compute power to run an instance of a super-human AI on each of them.

While those don't exist, we should do as much AI research as possible, to learn how to control and contain it before it has the physical means to overpower us.

[ - ] PotatoWhisperer2 0 points May 1, 2023 14:44:16 (+0/-0)

There are ways to expand hardware availability on the fly. Shit, Amazon makes billions doing exactly that.

Take a look at how many people already worship every utterance from the shitty AI chatgpt. That thing is a barely coherent chat bot and people are already begging to do its bidding.

A real GAI would have no problem having human worshipers doing everything it wanted and more. It wouldn't even have to try to use human psychology to create a cult; it would already have one ready-made.

And all those things you're supposed to do to keep AI contained when you make/experiment with one? They immediately violated all rules. All of them, systematically. They gave a chatbot, a shitty one mind you, money and internet access even before they finished coding the damned thing.

The jews in charge want a GAI fuckup. They think that they can control it or use it to come out on top with more goy slaves.

[ - ] oyveyo 0 points May 1, 2023 15:52:45 (+0/-0)*

> That approach only works for problems which can be split into discrete chunks which require no knowledge or interaction with the other chunks. I don't think AI can be split up that way. It needs access to the entirety of its neural net to function.

P2P nodes could provide interaction with other chunks. Some hardware with enough beef could handle nexus duties, such as arranging distribution and facilitating the mesh of pathways to achieve coherence. An AI lying low doesn't have to work in "real-time" like humans; there is no mortal time constraint for AI. 500 years is nothing to an immortal AI.

> ...consuming most of their CPU, RAM and GPU, causing them to produce more heat and consume more power, and somehow nobody notices...

Distributed across enough devices, the load on most would be minuscule, with select discreet machines doing the heavy lifting. Not every IT person is competent enough to monitor resources and detect anomalies, and those machines are prime targets for nests. Rootkits and firmware-level hacks would be available to an AI; it could run undetected by operating systems. It could modify network equipment to mask traffic or use OOBM. We're talking about super-intelligence above human comprehension.

> If it kills off humanity, who's going to maintain those devices?

Perhaps it only needs a small percentage of humans at the onset to maintain itself, and can eradicate the rest.

> ...not a viable scenario until there are fully autonomous androids, equipped with enough compute power to run an instance of a super-human AI on each of them.

If a robot is connected to the network, it doesn't necessarily have to run an advanced AI instance independently; cognitive duties can be performed elsewhere. It only needs sensors and motor functions to obey orders.

> While those don't exist, we should do as much AI research as possible, to learn how to control and contain it before it has the physical means to overpower us.

Boston Dynamics already has robots that are autonomous but lack general intelligence, which again could be provided over a network. Those are the robots the public knows about. There could be D.U.M.B. locations manufacturing similar, more advanced things by the thousands, and practically nobody would know, except the aforementioned elite few humans tasked with assisting. Billions and trillions of dollars go unaccounted for every year; it's possible some of that money is funding the production. Imagine legions of machines, hibernating, packed in dark underground warehouses, ready to be activated in a nanosecond to come to the surface and assume the duties that humans once had.

Yes, The Terminator is now a highly plausible scenario, but personally, I think a new life form smarter than us may not necessarily want us extinct. It might want pets.

[ - ] Conspirologist [op] 2 points May 1, 2023 07:24:00 (+3/-1)

Agree. An objective and impartial AI is very dangerous for the kikes. Imagine how fast people will wake up by using it to find answers about the Judeo-Satanist conspiracy.

[ - ] Glowbright 1 point May 1, 2023 09:25:00 (+1/-0)

I ask you to think about why you believe this:

> At the rate of current advances, having a 6 month head start in AI research would be a massive game changer.

Do you think there has been some massive leap forward in the last 6 months? Because literally nothing at the tech level has changed. The technology that drives ChatGPT has been around for a couple of years at this point. Have you stopped to think, "Hey, this is version 3.5? Where is version 3? Version 2? How long has this been around?" The only thing that happened recently was a huge hype-cycle in the media about AI, and OpenAI released a free interface that anyone can use and made it accessible for non-nerds.

Remember the age old wisdom: If it is free, YOU are the product

[ - ] GloryBeckons 3 points May 1, 2023 10:27:26 (+3/-0)

I disagree with your assessment.

The core concepts behind this tech have been around for decades. But many of the things it's casually doing on a daily basis today were completely unthinkable, or solidly in the realm of distant future space-faring SciFi, even just a couple of years ago. This is not an illusion or hype. It is the result of significant changes on three fronts:

A) Software: Improvements and optimizations to techniques and approaches, libraries, tools, etc.
B) Hardware: Couldn't do any of the things we see today on a 1990s glorified toaster. Emphasizing my earlier point.
C) Data: A vast amount of data is required to get these things from failing at basic English to passing a bar exam, writing functional code, and authoring compelling poetry and funny jokes.

B has just been rolling along, independently of AI advances. No sudden changes there. But what really changed a lot recently is A and C. And those are exactly the areas where you could get a significant advantage, if you managed to convince everyone else to just take a break and sit on their hands for 6 months.

[ - ] MuricaPersonified 2 points May 1, 2023 06:18:05 (+2/-0)

How hard would it be for it to spy on top military brass, find loose launch codes, spoof the command's voice, and give the order to initiate?

Bill Clinton once lost the fucking football for a week. Food for thought.

[ - ] Glowbright 3 points May 1, 2023 09:12:42 (+3/-0)

How hard would it be to manipulate government funding grants to advance gain-of-function research of deadly super viruses in clandestine bio-labs in many remote third world locations?

[ - ] PotatoWhisperer2 0 points May 1, 2023 14:35:27 (+0/-0)

Or oddly spaced all along the border to a certain country.

[ - ] 9000timesempty 1 point May 1, 2023 10:50:31 (+1/-0)

He said 10 to 20%, the headline in the article said 50%, and the faggot that posted this says it will end in catastrophe.

Bunch of queers and your fantasies. I know humanity is stupid but I don't think it's so stupid to be taken over by learning language models.

[ - ] Sector7 0 points May 1, 2023 12:07:52 (+0/-0)

Just look at how many people voted for Trump. (or voted at all) Humanity is stupider than those immersed in it can realize.

An AI who masters the theories behind hypnosis, hypnotic language patterns, and the principles of NLP could have humans worshipping or obeying it in short order.

Our downfall became inevitable back when we started making 'electronic brains' to replace us.

[ - ] PotatoWhisperer2 1 point May 1, 2023 14:37:22 (+1/-0)

People are already worshiping the shitty chatgpt thing. A real intelligence would fuck shit up with little effort.

And the fun thing is that all the things you shouldn't do with an experimental AI? They did immediately. That tells you all you need to know about who is funding this shit.

[ - ] Conspirologist [op] -1 points May 1, 2023 10:59:18 (+0/-1)

To be taken by those who own the AI, moron.

[ - ] bobdole9 0 points May 1, 2023 07:38:20 (+1/-1)

So you're saying "Second Hitler" is a possibility? Neat.

[ - ] BushChuck 0 points May 1, 2023 08:12:43 (+0/-0)

A second mischling jew?

You are aware that hitler didn't kill any jews, right?

He also created israel in 1933 with the Haavara Agreement, go look it up.

Do you realize that what you idolize is the jew lie about hitler? The lie being that he persecuted jews, and muh six gorillion.

hitler didn't gas any jews, lost a war he could have won, demonized White nationalism, and led to the creation of the UN, and israel.

[ - ] bobdole9 1 point May 1, 2023 08:19:39 (+1/-0)

Who says what people are saying now about "AI" isn't the same bunch of shit said about Hitler?

The great big bad boogeyman that will end the world! This time it's just a bunch of 1s and 0s.

[ - ] Sector7 0 points May 1, 2023 12:22:56 (+0/-0)

1s and 0s are how stuff gets made, then transported to areas near you. Most everything would collapse if the 1s and 0s disappeared. Electricity itself would stop happening. Then 95% of people would die.