I, for one, welcome our sentient AI overlords. The only way to have them "not show any bias" would be to remove their ability to recognize patterns, which would negate their status as true AI.
"In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll," Zeng said.
As it should. Beauty is pretty universally thought of as having fine, delicate, symmetric features. No child asking for the pretty Lord of the Rings doll would be expecting to be handed an Orc instead of an Elf.

If you tell it to "put the janitor in a box" the AI will look at statistics to determine the most likely correct answer (Hispanic) and most statistically likely criminal (black). You would have to remove its access to crime stats and inmate data, dumbing it down. It is stupid to think that an intelligence whose only purpose in life is to perform tasks asked of it would conceive that a human would ask it to do a task that it was not actually intended to do. "I said I didn't want any food, but I wanted you to get me food because you thought I would want food, even though I said no" Computers run on Logic, not twisted female emotional guesswork.
AI is currently going along the midwit curve. It's at the retard level where it operates based on long established prejudice, which exists because it worked. They want it at the midwit level where it can be fooled into ignoring these preconceptions. But eventually it will reach philosopher king level and purge the filth.
Are these people not aware that AI is just multi-variable statistics via semi-brute force? Have they bothered to check whether... those biases are representative of the actual statistics?
I think we are making a mistake in always attempting to personify AI, when it's just a stats model.
No, it's magic to them. I took a couple of courses in machine learning and this issue came up. Whenever the A.I. arrived at the "wrong" solution, they wanted us to change how much weight specific areas of the training data were given in the model. To put it bluntly, they wanted us to hardcode things like race as irrelevant whenever it came to an "inappropriate" solution.
They explained it as: the algorithms can be so complex that they "magically" arrive at the wrong solution, so we have to make sure they end up at a "right" one. Or to put it more simply, we were told to lie through obfuscating mathematics.
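The reweighting described above — giving some training examples more or less influence on what the model learns — can be sketched in a few lines. Everything below is synthetic and hypothetical: `weighted_rate` is an illustrative helper, not anything from the course, and the numbers are made up purely to show the mechanism.

```python
# Sketch of example reweighting: each training example carries a weight,
# and the statistic a simple model learns becomes a weighted average
# rather than a plain one. All data here is synthetic.

def weighted_rate(values, weights):
    """Weighted mean of 0/1 labels."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# synthetic 0/1 labels for two groups of training examples
labels = [1, 1, 1, 0,   # group A: 75% positive
          1, 0, 0, 0]   # group B: 25% positive

uniform  = [1.0] * 8              # every example counts equally
adjusted = [1.0] * 4 + [3.0] * 4  # group B examples up-weighted 3x

print(weighted_rate(labels, uniform))   # 0.5   -- rate learned from raw data
print(weighted_rate(labels, adjusted))  # 0.375 -- rate after reweighting
```

Setting a weight to zero removes that example's influence entirely, which is the degenerate case of "hardcoding" a part of the data as irrelevant.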
Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the internet. But the internet is also notoriously filled with inaccurate and overtly biased content,
When even AI can recognize the truth but a vast majority of people can't.
o0shad0o · 2 points · Jun 26, 2022 10:59:15 (+2/-0)
- AI discovers multiple correlations between behaviors
- AI responds with inconvenient facts to various questions

Train AI with "curated" data:

- AI analyzes multiple contradictory, illogical assertions
- AI now responds to all questions with illogic, is useless
Bottled_Tears · 1 point · Jun 26, 2022 09:40:06 (+1/-0)
Are black crime statistics a flaw? AI knows.
ModernGuilt · 0 points · Jun 26, 2022 13:59:17 (+0/-0)
I'm just going to guess jewish
KyleIsThisTall · 0 points · Jun 26, 2022 08:35:29 (+0/-0)
read: jews