by Karl Denninger, Market Ticker:
Google and others have tried to make a wildly false claim about “generative” AIs in the general case: They can “hallucinate.”
One thing we’ve learned is generative AIs can hallucinate, meaning they come up with totally false information that bears no resemblance to reality. Don’t always believe what you read. And do be mindful of what you enter as generative AIs keep the information you type in. In fact, some major tech corporations, like Samsung, have banned their employees from using ChatGPT after some sensitive information was leaked.
The second part is correct.
The first part is a lie.
Alleged “generative AI” is in fact nothing more than an inference engine. That is, it assigns “weights” to what it “knows” (i.e. whatever it is taught or fed, from whatever source) and then runs a correlation analysis of sorts across it.
Computers are very good at this, and the more information they have, the better their guesses become. As processing power increases, the amount of data required to draw said inferences shrinks. This is where attention is being paid today.
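As a rough sketch of that weighting-and-correlation mechanic, consider the toy Python below. Every statement and weight in it is invented for illustration, and no real model works at the level of word overlap (real systems operate on learned numerical representations); the point is only that the output is the best-weighted correlation, not a reasoned judgment.

```python
import math

# Toy "inference engine" in the sense described above: stored statements
# carry weights, and the engine returns whichever statement correlates
# best with the prompt. All statements and weights are invented.

def score(prompt_terms: set[str], candidate_terms: set[str], weight: float) -> float:
    """Weighted word overlap: a crude stand-in for correlation."""
    return weight * len(prompt_terms & candidate_terms)

def infer(prompt: str, knowledge: list[tuple[str, float]]) -> str:
    prompt_terms = set(prompt.lower().split())
    scored = [(score(prompt_terms, set(text.lower().split()), w), text)
              for text, w in knowledge]
    # Softmax over the scores: the engine deals in relative likelihoods,
    # never in truth, and simply emits the top-weighted candidate.
    total = sum(math.exp(s) for s, _ in scored)
    best = max((math.exp(s) / total, text) for s, text in scored)
    return best[1]

knowledge = [
    ("the sky is blue on clear days", 1.0),
    ("the sky is green according to one source", 0.2),
]
print(infer("what color is the sky", knowledge))  # the heavier-weighted claim wins
```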
The problem is the weighting and how that resolves when there is a conflict.
Let’s take a rather general instance: You ask an AI for a list of men in some profession who have been accused of some impropriety.
Note three things about this:
- You pre-selected the sex and profession of the result set, because that is what you are trying to study or determine. There’s nothing wrong with that; it is what you would otherwise type into, for example, a general search engine that simply indexes existing material and passes no judgment on it.
- You did not qualify the request to require a legal judgment of guilt; you asked only for an allegation.
- You are presuming that the computer program you asked has an unbiased, fact-based set of data, and only that, with which to evaluate the question and return a response.
You’d expect the AI to return a list of persons and factual references for each of the allegations it allegedly found. Presuming the AI has only factual information and no selection bias in its programming or data set, that’s what you’re going to get. Why?
Because it is a machine; it cannot think “out of scope” and cannot ask questions of its own, whether of itself or of its references and other input sources.
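To see how that cashes out when inputs conflict, here is a deliberately crude sketch in the same vein. Every name, source, and weight in it is hypothetical; it only illustrates that the “resolution” of a conflict is arithmetic over weights someone else assigned, which the program has no faculty for questioning.

```python
# Toy illustration of the conflict problem: two records disagree, and
# the "resolution" is nothing but arithmetic over weights someone chose.
# Every name, source, and weight here is hypothetical.

records = [
    {"claim": "Person A was accused", "source": "court_filing", "supports": True},
    {"claim": "Person A was accused", "source": "opinion_blog", "supports": False},
]

# Whoever assigns these weights decides what the machine will report.
source_weight = {"court_filing": 0.9, "opinion_blog": 0.4}

def resolve(records: list[dict]) -> str:
    tally = 0.0
    for r in records:
        w = source_weight[r["source"]]
        tally += w if r["supports"] else -w
    # The program cannot ask "are these weights fair?" or "is the
    # underlying record accurate?" It only tallies what it was given.
    return "include in result set" if tally > 0 else "omit from result set"

print(resolve(records))  # -> include in result set
```

Raise the opinion blog’s weight above 0.9 and the identical records yield the opposite answer; the machine neither notices nor objects, which is exactly the weighting problem described above.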