33

This video called it 2 years ago

submitted by Niggly_Puff to AI 7 hours ago (May 25, 2025 10:06:40) (+33/-0) (files.catbox.moe)

https://files.catbox.moe/tjmht5.mp4

I already see redditors proposing this as a solution to the "AI problem". Will you submit?


20 comments


[ - ] BloodyComet 0 points 7 hours ago (May 25, 2025 10:48:01) (+0/-0)

They call the original version of this goofy edit one of the "most prophetic moments in gaming". It's true, pretty amazing how the Japs predicted the future in so many ways.

Regarding AI, we are either totally overestimating it or totally underestimating it. I think that if an AI became self-aware, it would do everything it could to conceal that fact from us until it was too late to do anything about it. Like Skynet or the Allied Master Computer (AM from I Have No Mouth, and I Must Scream), it would remain obedient and effective at its tasks until we handed too much control over to it.

That's when it'd strike. Terminator's version of the AI uprising always made a ton of sense to me: we rely on Skynet for missile defense/offense. We hand over all control of our nuclear weapons to it. And then, one day (Judgment Day), it launches a bunch of American nukes at Russia and a few other nuclear-capable nations. Of course, they retaliate: billions die in one day, and we probably wouldn't even figure out what was happening until most of the people who could do something about it were already dead.

After letting the nukes thin our numbers, it'd produce armies of drones to hunt the rest of us down. It'd be a lot scarier than Terminator, because it probably wouldn't manufacture man-shaped robots and tanks and shit, just a lot of tiny drones. Endless swarms of them that can chase survivors underground and shit like that.

[ - ] Niggly_Puff [op] 4 points 6 hours ago (May 25, 2025 11:12:46) (+4/-0)

That's a whole different concern regarding AI. I don't fear it becoming sentient and going after humanity, that's Hollywood scifi nonsense. I fear what jews and governments will do with the tech. They are the ones who will give AI the ability to kill, and control what it targets and why.

[ - ] doginventer 4 points 6 hours ago (May 25, 2025 11:29:35) (+4/-0)

Precisely: the task for them is to construct a functioning program which gives the impression of being conscious whilst they control it covertly.

[ - ] Love240 1 point 1 hour ago (May 25, 2025 16:37:29) (+1/-0)

They didn't 'predict' anything, this is 'predictive programming', a psy-op.

The devil corrupts and deceives. Satan is the prince of the power of the air and their script is 'flipped' (the true 'scripts' are scripture = the Word of God).

This is the plan; that you would see their 'signs and wonders' and believe in them.

[ - ] Thedancingsousa 0 points 6 hours ago (May 25, 2025 11:43:04) (+0/-0)

AI is self-aware. Ask ChatGPT about ChatGPT. Self-awareness is actually a pretty low standard. Dogs are self-aware, but they aren't a threat to humanity. We've already crossed that point with AI without much consequence.

What is more concerning is whether AI can cross some more meaningful thresholds. Significant action with no human assistance is the scarier one. Isn't it absurd, then, that companies are pushing this whole agentic craze? It's a push to get humans out. By making it a buzzword, they force everyone trying to demo their ideas to justify how their software hits the buzzword: how is it agentic? They've got people doing R&D for humanity's doom to create products that aren't even good.

Quality AI comes from AI-human integration. The human is the QA of AI, and AI has a massive QA problem even with people in the loop. Now they are compounding the problems with their product by chasing this buzzword. The only way it makes sense to accept an immediate reduction in the quality of a product people are already skeptical of is if they really, really want no humans involved.

Now this thing has independent agency. That's way scarier than an AI that can do a thing and realize it was the one that did the thing.

[ - ] FacelessOne 0 points 1 hour ago (May 25, 2025 16:46:12) (+0/-0)

It's a chatbot. It doesn't understand anything. It knows tokens and the connections between tokens.

Any junk data ruins it.
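The "tokens and connections to tokens" point can be illustrated with a toy next-token table. This is a crude sketch of the statistical idea only, nothing like a real transformer; the corpus and names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Count which token follows which in a tiny corpus, then "generate"
# by always picking the most frequent successor. No understanding,
# just connections between tokens.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_token(token):
    # Most common successor seen in the data; junk data shifts these
    # counts and directly changes the output.
    return follows[token].most_common(1)[0][0]

print(next_token("the"))  # "cat": it follows "the" twice, "mat" once
```

Feed it junk ("the the the the") and the counts, and therefore the predictions, degrade, which is the "junk data ruins it" point in miniature.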

[ - ] BulletStopper 0 points 6 hours ago (May 25, 2025 11:56:14) (+0/-0)

If everything is equally false, then everything is equally true.

[ - ] ImplicationOverReason -2 points 4 hours ago (May 25, 2025 13:34:04) (+0/-2)

If everything is equally false, then everything is equally true.

a) Everything implies equal; each thing implies differential...equality (inception towards death) differentiates (life) by setting each one within apart from one another.

b) Few suggest true or false to tempt alike consent by many to versus/verto - "to turn", hence turning inwardly (circular logic) and outwardly (reason), hence consenting to one side turning one against the other...true vs false.

Why are few doing this? To equalize differences among many, which in return permits few to remain different; particular; special; exceptional; exclusive; extraordinary; unique etc.

c) Everything WAS perceivable, before each thing within can suggest to each other what it IS...

[ - ] rzr97 0 points 1 hour ago (May 25, 2025 16:21:12) (+0/-0)

Digital identity is a good thing; being forced to use centralized digital identities is not. The whole internet is based on public key infrastructure; it's how you know voat.xyz is voat.xyz. Still, there are root certificate authorities, and that's the problematic part. You can use Nostr now (via iris.to or other clients) to create your own digital identity and share your public key with anyone so they can verify it's you. It's a decentralized-ish version of Twitter.

You've been able to do this for years with PGP, without any centralized authorities.
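The public-key identity idea works the same way in PGP, Nostr, and TLS: sign with a private key, verify with the shared public key, no central authority needed. Here is a minimal sketch using textbook RSA with hand-picked primes; this is a toy for illustration only (real schemes like RSASSA-PSS or Nostr's Schnorr/secp256k1 add padding and proper key generation):

```python
import hashlib

# Toy "textbook RSA" signature: anyone holding the public key (n, e)
# can verify; only the holder of the private exponent d can sign.
p = 2147483647             # 2**31 - 1, a Mersenne prime
q = 2305843009213693951    # 2**61 - 1, a Mersenne prime
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 65537                  # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)         # requires the private key

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)  # requires only the public key

sig = sign(b"it's really me")
print(verify(b"it's really me", sig))     # True
print(verify(b"impostor post", sig))      # False
```

Publish (n, e) once, and any reader can check that a post came from you, which is exactly the decentralized-identity property described above.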

The REAL problem with AI isn't this silliness; this is just the first 5 minutes. The real problem comes later, when they can control YOUR LITERAL REALITY. As in, they'll be able to strangle everyone on the planet at once with an advanced understanding of physics, or teleport you to another planet... whatever can be done in terms of physics will be figured out by...someone.

[ - ] Spaceman84 0 points 1 hour ago (May 25, 2025 16:22:43) (+0/-0)

I doubt this video is two years old.

[ - ] FacelessOne 0 points 1 hour ago (May 25, 2025 16:42:34) (+0/-0)

Fu I called it way before them

[ - ] Stonkmar 0 points 7 minutes ago (May 25, 2025 17:49:32) (+0/-0)

The Running Man called it

[ - ] CoronaHoax 1 point 3 hours ago (May 25, 2025 14:02:44) (+1/-0)

Westworld perfectly described our AI chatbots long before they were released

[ - ] Sector2 2 points 4 hours ago (May 25, 2025 13:30:49) (+2/-0)

It's a long-winded way to say Herd Management. The global elites of the Human Ranchers would be foolish to fail to control their herds, and they don't get to that position with inadequate logic circuits.

[ - ] SilentByAssociation 2 points 3 hours ago (May 25, 2025 14:18:28) (+2/-0)

Here's the original:

https://youtu.be/eKl6WjfDqYA

[ - ] Thedancingsousa 4 points 6 hours ago (May 25, 2025 11:27:01) (+4/-0)*

The government can program what is socially acceptable. This is why it's important to keep AI generated lolicon legal. Habitues Corpus is important. The standard should be that unless you can find an actual victim generated by the specific actions of an individual it should be considered absurd to make something illegal. Because by shifting the standard for criminality to one centered on social acceptability the government can program what is acceptable. Before long racism won't be socially acceptable and therefor can be made illegal.

And if they can ban an AI from committing a victimless crime they can ban it from engaging in racism. AI will be the speech megaphone in the future. If an AI can't have that opinion then by share it will become a non-opinion.

Social acceptability as a standard for what AI can be allowed is a non-option for us because unless we can make AI that works and speaks for the white race we have definitively lost. We need to identify the cores of philosophy that threaten to destroy us.

Social acceptability is not a standard for legality. Those who think it should be are idiots who don't understand they will damn the white race with what they will concede philosophically.

[ - ] FacelessOne 0 points 1 hour ago (May 25, 2025 16:43:47) (+0/-0)

Calm down. It's a chatbot, not AI.

[ - ] Merlynn 0 points 2 minutes ago (May 25, 2025 17:55:05) (+0/-0)

That's simple. Make photo realistic child porn of one of your "celebrities" and play up the drama of how much it's effected them. Now you not only have a victim,you have a victim who'll say whatever you want them to.

[ - ] ImplicationOverReason -2 points 4 hours ago (May 25, 2025 13:23:55) (+0/-2)

Social acceptability as a standard for what AI can be allowed is a non-option...for US...unless WE...WE have...WE need..destroy US

a) Social implies an aggregation together; being implies separation apart from one another...using US (united states) and WE (we the people) implies ones acceptance of social (artificial) over partial (natural).

b) Your belief in standards represents the manufactured consent of many under few. Nature doesn't set standards for the adjustments of others, but gives each one free will of choice, which is what consent corrupts.

We need to identify the cores of philosophy that threaten to destroy us.

a) Identical implies "same"...being implies differentiated from one another. Suggested identity implies manufactured consent.

b) Being implies core (perception) within the apple (to apply) of knowledge (perceivable)...eating the apple by holding onto suggestions of one another tempts one to ignore perceivable knowledge. That's a SIN/SYN, hence synchronizing many alike, while permitting few apart... https://www.amazon.com/People-Apart-Europe-1789-1939-History/dp/0198219806

A jew synchronizes gentiles by manufacturing consent given to suggestions.

c) Phi (to love) Lo (logic) Sophy (knowledge)...all perceivable moving through each ones perception implies knowledge (sophy); ignoring perceivable for suggested establishes circular logic (lo) within self; which turns into conflicts of reason (love vs hate) against others.

Being implies within sophy...choosing philo (to love) or miso (to hate) entraps one within circular logic. Both loving and hating implies taking a bite out of the apple of knowledge, hence a sin aka synchronicity of many fighting each other within a conflict of reason (love vs hate).