(news) 🚨📢Hot new LOCAL OpenGPT (ChatGPT GPT4All) fully on your computer, no hassle! 🚨📢 Amazing pre-compiled small MOSTLY UNCENSORED (if configured). Yes, UNCENSORED full chat! Trivial to add to any web site! Get it soon before it is taken off-line!

submitted by root to news 1 year ago, Mar 31, 2023 15:20:04 (+10/-3) (news)


New this week! FUN, FAST, AMAZING, 100% free! Runs on all computers, and especially well on macs.

The UNFILTERED 4.21 GB model file is 'gpt4all-lora-unfiltered-quantized.bin' (it still lacks any 4chan- or voat-sized forum dumps, though).

'gpt4all-lora-unfiltered-quantized.bin' is huge. Put it in the 'chat' directory of the install, then point the chat binary at it with the -m flag:

mac (m1):
./gpt4all-lora-quantized-OSX-m1 -m gpt4all-lora-unfiltered-quantized.bin
windows:
.\gpt4all-lora-quantized-win64.exe -m gpt4all-lora-unfiltered-quantized.bin
linux:
./gpt4all-lora-quantized-linux-x86 -m gpt4all-lora-unfiltered-quantized.bin
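The three platform commands above can be folded into one small helper. This is a minimal sketch, assuming only the binary names as listed; it just picks the name, you still launch it yourself with -m and the model file:

```shell
# Pick the chat binary name for the current platform (names as shipped
# in the repo's chat/ directory). Prints "unsupported" on anything else.
pick_binary() {
  case "$(uname -s)" in
    Darwin)               echo "gpt4all-lora-quantized-OSX-m1" ;;
    Linux)                echo "gpt4all-lora-quantized-linux-x86" ;;
    MINGW*|MSYS*|CYGWIN*) echo "gpt4all-lora-quantized-win64.exe" ;;
    *)                    echo "unsupported" ;;
  esac
}

# usage: ./"$(pick_binary)" -m gpt4all-lora-unfiltered-quantized.bin
pick_binary
```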

The mac m1 version takes advantage of the Apple Silicon chip built into all the cheap macs, and on a machine with 16 GB of RAM total it is so fast that it responds in real time.
[b]As soon as you hit return, it answers![/b]

To install on your machine without remote tracking of your usage, here is every step (from https://github.com/nomic-ai/gpt4all ):

- step 1 - create a directory named something like 'mygpts', then cd into it in a terminal
- step 2 - git clone https://github.com/nomic-ai/gpt4all.git
this downloads a small program and minimal files almost instantly, but it then needs a giant pre-trained model. Two are available: a filtered one and an unfiltered one.
- step 3 - get the unfiltered file! Load one of these URLs in a browser; it will usually ask where to save. Save the file, check the checksum if desired, then move it into place:

https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-unfiltered-quantized.bin
the filtered version (if you want to compare the two):
https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized.bin


Optional SHA512 integrity checksums for the two files, from the GPT4All support thread at https://github.com/nomic-ai/gpt4all :
gpt4all-lora-quantized.bin: fd79a62e2c4568e66dbc19bbfab3399c72557cf50f73b854c9f53e6179931e41d9f9815055565fc89474ccc77f596da1c878868397e93726427805abd132c885
gpt4all-lora-unfiltered-quantized.bin: 807831a85e2e2234c20bbe45b7c90b6680eb3e0d2c2f932f74aa61a516bb0bea245a7149c6fb02a312be3e1f5cf35288d790d269f83eb167c397c039f99cef7d
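The sums above can be verified with `sha512sum -c`, which reads lines of the form `<hash>  <filename>` (two spaces). Demonstrated here on a small throwaway file; substitute the real model filename and the matching hash from the list above:

```shell
# sha512sum -c reads "<hash>  <filename>" lines and reports OK or FAILED.
# Demo on a dummy file; swap in gpt4all-lora-unfiltered-quantized.bin
# and the posted hash for the real check.
printf 'not the real model' > example.bin
sha512sum example.bin > SHA512SUMS   # records "<hash>  example.bin"
sha512sum -c SHA512SUMS              # prints "example.bin: OK"
rm -f example.bin SHA512SUMS
```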

- step 4 - navigate to `chat` and place the downloaded file there, next to the PRE-BUILT BINARIES (they come precompiled; if you would rather not trust them, build from source as noted below)

- step 5 - Run the appropriate command for your OS! It loads unbelievably fast (about one second) and from then on responds just as fast. Node npm wrappers by others let you feed it into your own node.js projects

- step 6 - It uses very little CPU/GPU! If you also want to fire it up and serve it from your javascript projects, there is a little node.js npm helper: https://github.com/realrasengan/gpt4all-wrapper-js
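Put together, steps 1 through 5 look like this as one terminal session (URLs and filenames as posted above). The clone and the ~4.2 GB model download are left commented out here, since they need a live connection and a long wait; uncomment them to run for real:

```shell
# Steps 1-5 as a single session. Uncomment the clone/download/run lines
# once you are ready for the ~4.2 GB fetch.
mkdir -p mygpts && cd mygpts
# git clone https://github.com/nomic-ai/gpt4all.git
# cd gpt4all/chat
# curl -L -O https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-unfiltered-quantized.bin
# ./gpt4all-lora-quantized-OSX-m1 -m gpt4all-lora-unfiltered-quantized.bin   # mac m1
cd - > /dev/null
```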

If you are wary of mystery executable files, you can get the source and build your own binaries for every machine.

= = - = =

Input prompt too short (on web sites it is kept small on purpose)?

The stdin scanf buffer is hardcoded to 255 characters (not tokens) and needs changing; see antimatter15/alpaca.cpp#119:
https://github.com/antimatter15/alpaca.cpp/issues/119

[b]RELATED SOURCE CODE to build your own from full source:[/b]
= = - = =

https://github.com/PotatoSpudowski/fastLLaMa
and
https://github.com/antimatter15/alpaca.cpp

All three projects are under ACTIVE development right now!


This combines Facebook's LLaMA, Stanford Alpaca, alpaca-lora and corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting-started guide for LLaMA, and by Andy Matuschak's thread on adapting this to 13B using fine-tuning weights by Sam Witteveen.

All of these ship heavily filtered (via the Stanford Alpaca instruction data) unless you either train your own modifications or download https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-unfiltered-quantized.bin

= = - = = - = =

The answers are night and day different when using the uncensored model.

refer to the examples in the thread

= = - = =



As you can see, the normal pre-trained models are heavily filtered, but this exciting, tiny, lightweight, VERY FAST unfiltered model lets a cheap 700-dollar mac m1 handle a high load of INSTRUCTIONAL chat prompts per minute.


The whole point of the work on GPTs over the last 8 months has been adding layers of INSTRUCTIONAL training, for dialogue and for following complex tasks.

Get this unfiltered free open source model soon... It will probably be taken down in mere days, since it drops the strict content filtering applied to the pre-filtered OpenAI-era research training files that other models inherit.

= = - = =

TL/DR: [b]This brand new, unfiltered, FAST AS HELL open source ChatGPT clone changes everything! Get it soon before it is taken down.[/b]


15 comments

Don't be dense, no one said it was the closed-source product OpenAI renamed to GPT-4 from GPT-3.5.

This uses some parts derived from OpenAI, but it is mostly new alpaca/llama work.

It's unrivaled.


This is open source and far, far better; it's the best unfiltered open-source GPT.

Look at the ACTUAL RESULTS in this message:


https://www.talk.lol/viewpost?postid=64273264776a3&commentid=6427378b4bb2d

see?

you can't ever get the closed-source gpt4 to answer like THAT