by Brian Shilhavy, Health Impact News:
The hype over AI Chat searches, and over the claim that “AI is going to take over the world and replace humans,” continues unabated, and investment in AI is now the only thing left propping up the U.S. economy.
I suppose this is the result of a generation of adults who grew up in the “computer age” that began in the early 1980s, and who are now running the economy with their beliefs in AI and technology.
Old school technologists like me, who watched all of this technology develop and know better, are having little to no effect in trying to dispel these false beliefs. I have earned my living and built my career on this technology for over 25 years now, but that doesn’t seem to matter to this generation.
If you want to avoid the catastrophe that is coming as a result of over-investing in this new tech fad, start by reading this historical article on the failures and losses from investing in AI over the past 75 years:
The 75-Year History of Failures with “Artificial Intelligence” and $BILLIONS Lost Investing in Science Fiction for the Real World
My views on the hype over this “new” AI fad are not unique at all; many others share them. But it is much more interesting to claim that AI is going to take over the world and replace humans, and that is the view that gets clicks and traffic today, which can obviously be monetized as well.
Therefore, the “AI is going to take over the world” view is the predominant view not because it is true, but because it is more popular and sells better.
So I am going to highlight some of the other dissenting voices in this article, and then show what this “new” AI Chat software is actually doing today, now that it has been available to the public for about five months.
But if you want the spoiler as to what it is actually doing today with hundreds of millions of users, here it is: It is a disinformation and data collection tool.
NYU Professor and Meta Platforms Chief AI Scientist Yann LeCun: “ChatGPT Isn’t Remarkable”
The first dissenting voice on the current AI hype comes from Yann LeCun, an NYU professor who also serves as Meta’s chief AI scientist.
The ‘Godfather of AI’ Says Doomsayers Are Wrong and ChatGPT Isn’t Remarkable
The excitement over advances in generative artificial intelligence has reached a fever pitch, bringing with it an extreme set of worries.
The fearmongers fit into two camps: either AI will soon enable a vast dystopian future or it will unleash an existential threat to humanity. Last month, a group of technology executives, including Elon Musk and some AI luminaries, added fuel to the fire when they called for a six-month pause on developing advanced AI systems so that the industry could build safeguards against harmful outcomes.
The call from tech executives to pause innovation is both unprecedented and unnecessary.
Barron’s Tech recently talked to Meta Platforms chief AI scientist Yann LeCun about the current state of AI, the rise of ChatGPT, and his views on why asking for a moratorium on AI research is misguided.
LeCun is one of the AI industry’s most prominent scientists and has been an outspoken critic of those who have exaggerated the capabilities of the underlying technology used by AI chatbots such as ChatGPT.
He’s a professor at New York University and joined Facebook—now Meta—in 2013. Along with Geoffrey Hinton and Yoshua Bengio, LeCun received the 2018 ACM Turing Award—known as the Nobel Prize of computing—for his research around deep learning techniques that have become foundational for modern AI technologies.
The three scientists have frequently been called the “Godfathers of AI” for their work in the space.
Here are the edited highlights from our conversation with LeCun.
Barron’s: Can you explain how ChatGPT and the technology behind large language models (LLMs) work?
LeCun: You can think of it as a super powerful predictive keyboard. Large language models are first trained on an enormous amount of words. We show the model a window of words and ask it what the next word is. It is going to predict the next word, inject the word and then ask itself what the next word is.
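To make the “predictive keyboard” loop LeCun describes concrete, here is a minimal, hypothetical sketch in Python. It uses a toy bigram table counted from a few words of text, not a neural network, and every name and the tiny “training text” in it are invented for illustration; real LLMs predict over subword tokens with far richer models, but the generate-one-word-inject-it-and-ask-again loop has the same shape:

```python
from collections import Counter, defaultdict

# Toy "training data" -- purely illustrative, not a real corpus.
training_text = (
    "the model predicts the next word and the model appends the word "
    "and the model predicts the next word again"
)

# Build a bigram table: for each word, count which words follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower seen in training, or None."""
    options = follows.get(word)
    if not options:
        return None  # dead end: this word was never followed by anything
    return options.most_common(1)[0][0]

# The autoregressive loop: predict a word, inject it, then ask again.
sequence = ["the"]
for _ in range(8):
    nxt = predict_next(sequence[-1])
    if nxt is None:
        break
    sequence.append(nxt)
print(" ".join(sequence))
```

Run it and the output quickly falls into a repetitive loop, which is exactly what a model this crude does: it continues text according to the statistics it saw, with no notion of whether the result is true.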
What are the models good for and not so good for?
They are good as writing aids. They can help you formulate things in a grammatically correct style. But answering factual questions? They are not so good. The model is either regurgitating what’s stored in its memory or regurgitating some approximate thing that is a mix or interpolation of various things it has read in the training data. That means it can be factually wrong, or it is just making stuff up that sounds good.
Why do AI chatbots have such large problems with accuracy at times?
When you have a system like this that basically predicts one word after another, they are difficult to control or steer because what they produce depends entirely on the statistics they trained on and the given prompt.
Mathematically, there is a good chance that it will diverge exponentially from the path of correct answers. The longer the answer that is produced, the more likely you are to end up producing complete garbage.
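To put rough numbers on that exponential divergence: if each generated token independently has even a small chance of stepping off the correct path, the probability that an entire answer stays correct shrinks exponentially with its length. The 2% per-token error rate below is an assumed figure chosen for illustration, not a measurement of any real model:

```python
# Illustrative assumption: each generated token has a 2% chance of going
# wrong, independently. The chance an n-token answer stays fully correct
# is then (1 - e)**n, which decays exponentially with length.
error_per_token = 0.02
for n in (10, 50, 200, 1000):
    p_correct = (1 - error_per_token) ** n
    print(f"{n:>5} tokens: {p_correct:.1%} chance of a fully correct answer")
```

With these assumed numbers, a 10-token answer stays correct about 82% of the time, a 200-token answer under 2% of the time, and a 1,000-token answer essentially never.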
Are we near AGI, or artificial general intelligence, when machines are able to learn and think for themselves?
There are claims that by scaling out those [LLM] systems we will reach human level intelligence. My opinion on this is that it is completely false. There are a lot of things we do not understand that we do not know how to reproduce with machines yet—what some people call AGI.
We’re not going to be able to use a technology like ChatGPT or GPT4 to train a robot to clear a table or fill up the dishwasher, even though this is a trivial task for a child. We still can’t do it. We still don’t have level five [fully] autonomous driving. That requires a completely different skill set that you can’t learn by reading text. (Full article.)
Read More @ HealthImpactNews.com