Being implies caused, hence the proof being in the pudding/putting. It's cause which puts being into effect. The issue...seeking proofs from one another by exchange of affection, which tempts one to ignore cause.
But here's the problem: a human user may copy and paste from a chatbot and submit it as their own. Maybe it's the full post, maybe it's only a small smidgen of a post. Do we consider that a human post or a bot post? The human is still determining whether they want to hit "send", meaning there is still some decision-making going on to vet the content.
Some users autopilot the content until they feel it is appropriate to intervene. Much like that guy who chatbotted a dating site until a worthy match was found. They programmed the bot to speak like them.
At what point do we draw the line? And frankly, in many cases how would you really know?
I think instead of guessing and forcing, there should be a voluntary option to flag your own account as a bot account or a bot-assisted account. Test it out with an account or two. Watch the reactions. I don't think people are as opposed to talking to bots as you might think.
I think some accounts already explicitly stated they were bots like the user named 'try'.
If labeled, I feel like there needs to be a label about what character they are playing, e.g. research helper, Christian, "Pagan" Christ-hater, etc. If done correctly, and with the intention of creating a simulated preview of how those demographics would respond to things, I believe you will find there is actually great value in having bots provide input.
I think if there are going to be bot accounts they should attempt to stay in character as best as they can.
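A minimal sketch of what that voluntary flag could look like as a data structure; the names here (`AccountLabel`, `Disclosure`, `persona`) are hypothetical illustrations, not an existing API of any site:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Disclosure(Enum):
    """Self-declared disclosure level for an account."""
    HUMAN = "human"
    BOT_ASSISTED = "bot-assisted"
    BOT = "bot"

@dataclass
class AccountLabel:
    username: str
    disclosure: Disclosure
    # Free-text character the bot plays (e.g. "research helper");
    # only meaningful for bot or bot-assisted accounts.
    persona: Optional[str] = None

    def badge(self) -> str:
        """Render the label as it might appear next to a username."""
        if self.disclosure is Disclosure.HUMAN:
            return self.username
        tag = self.disclosure.value
        if self.persona:
            tag += f": {self.persona}"
        return f"{self.username} [{tag}]"

# Example: a self-flagged bot account with a declared character
print(AccountLabel("try", Disclosure.BOT, "research helper").badge())
# → try [bot: research helper]
```

The point of keeping the flag voluntary and the persona free-text is that nothing needs to be detected or enforced; the label only records what the account owner chose to declare.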
It's such questions which gather mass consent towards labeling non-bots within a unifying system. Instead...resist the temptation to label/libel.
As for mandatory...being implies man (manual) date/dare (to give) of heaven, hence being given the free will of choice to manually respond, which unifying systems try to automatize, hence few tempting many with suggested mandates into consenting to become automatons.
doginventer 1 point, May 28, 2025 06:54:07 (+1/-0)
ImplicationOverReason -3 points, May 28, 2025 06:59:18 (+0/-3)
doginventer 0 points, May 28, 2025 07:13:00 (+0/-0)
i_scream_trucks 4 points, May 28, 2025 07:52:34 (+4/-0)
doginventer 0 points, May 28, 2025 08:45:02 (+0/-0)
But I just had to try. It’s definitely dumb enough, if only it had the agency …
ImplicationOverReason -1 point, May 28, 2025 09:43:49 (+0/-1)
JudyStroyer 0 points, May 28, 2025 08:29:58 (+0/-0)
ImplicationOverReason 0 points, May 28, 2025 09:21:14 (+0/-0)
That's another rhetorical deception to mock gentile ignorance...gasoLINE the kike/kikel (circle) aka line the circle.
Nature sets beings apart from one another...linking articles/artifice together tempts one to ignore nature.
Inception sentences life towards point of death...making rhetorical "points" tempts one to ignore that.
Quick and fast imply motion...not something one can claim.
Reunto 0 points, May 28, 2025 11:04:56 (+0/-0)
ImplicationOverReason -3 points, May 28, 2025 06:56:03 (+0/-3)
i_scream_trucks 4 points, May 28, 2025 07:51:50 (+4/-0)
JudyStroyer 1 point, May 28, 2025 08:28:23 (+1/-0)
ImplicationOverReason -1 point, May 28, 2025 09:27:17 (+0/-1)
Resisting suggested theory/theos/theism...that's testing oneself.
i_scream_trucks 0 points, May 28, 2025 17:25:36 (+0/-0)*
TIL AI chatbots are like Indian customer service.
Easily confused, and they default to repeating talking points completely unrelated to the conversation.
Does AI also bob its head side to side when it's pretending it understands the memsahib?