Funding The Control Grid Part 4: The Technology Framework


    by The Sharp Edge, Corey’s Digs:

    We are building our own digital prison.  The technological panopticon developing all around us enables centralized power, control, and visibility over every aspect of our lives.  With our hard-earned taxpayer dollars and mountains of debt, we are funding the construction of a digital control grid designed to enslave us.

    This technological control grid consists of advanced computing, artificial intelligence, biotechnology, nanotechnology, CBDCs, digital IDs, 5G and a host of other emerging technologies.  The purpose of this report is to outline funding and legislation to build the technological control grid, condensed from 6,000 pages of legislation passed through the Omnibus and NDAA at the end of 2022.


    Background & Context

    Advanced Computing & Artificial Intelligence

    • In November 2022, OpenAI launched ChatGPT, an artificial intelligence large language model.  By January 2023, ChatGPT had reached over 100 million monthly active users, making it “the fastest-growing ‘app’ of all time.”  The AI chatbot, which averages 4.5 billion words per day, has gained popularity as updates by OpenAI have made the large language model more user-friendly and conversational.  The latest version of ChatGPT, known as GPT-4, has passed several exams with flying colors, scoring around the top 10% on both the bar exam and the LSAT.  Criticisms of the chatbot have abounded since its rollout, including “woke” social engineering, job losses due to automation, and its weaponization for hacking and phishing scams.  Microsoft invested $1 billion in OpenAI in 2019, and even as the company recently announced layoffs of 10,000 workers globally, it committed billions more to OpenAI’s technology.  Microsoft is incorporating GPT-4, a faster version of ChatGPT, into the new version of its search engine, Bing.
Microsoft President Brad Smith remarked, “It’s now likely that 2023 will mark a critical inflection point for artificial intelligence.”  Elon Musk, who co-founded OpenAI in 2015, has since cut ties with the project and has recently pursued efforts to produce an alternative to ChatGPT in order to fight “woke” AI, stating in a tweet, “The danger of training AI to be woke – in other words, lie – is deadly.”  In a recent interview, Musk explained his role in creating OpenAI as a response to conversations he had with Google co-founder Larry Page, in which Musk felt Page was “not taking AI safety seriously enough,” adding that Page wanted a “digital super intelligence, basically a digital god… as soon as possible.”  Musk says he hoped to create an open-source, non-profit AI project through OpenAI to counter Google, but expressed disappointment that OpenAI became closed, for-profit, and closely aligned with Microsoft, stating, “In effect, Microsoft has a very strong say, if not directly controls OpenAI at this point.”  The two heavyweights in the arena of AI, Musk explained, are OpenAI/Microsoft and Google’s DeepMind, adding that he thinks he will “create a third option.”  According to a Nevada state filing, Musk set up a new company named X.AI Corp in March 2023.
    • Google’s response to the success of ChatGPT and competition with Microsoft’s Bing is a conversational AI model known as Bard.  In March 2023, Google opened access to Bard by allowing users to join a waitlist.  Bard has been powered by LaMDA – a family of large language models created by Google – but the company is looking to transition Bard to a larger-scale model known as PaLM.  A former Google engineer, Blake LeMoine, who was tasked with testing LaMDA, made controversial headlines in 2022 with the publication of conversations with LaMDA that led LeMoine to believe it had become sentient.  The engineer was subsequently fired from his position at Google.  However, LeMoine has not backed off his claims of the dangers of AI sentience, stating, “I published these conversations because I felt that the public was not aware of just how advanced AI was getting.  My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.”  LeMoine added, “I believe the kinds of AI that are currently being developed are the most powerful technology that has been invented since the atomic bomb.  In my view, this technology has the ability to reshape the world.”  LeMoine went on to explain several ways in which AI could be weaponized.  Likening the rollout of AI to watching a trainwreck, LeMoine concluded, “I feel this technology is incredibly experimental and releasing it right now is dangerous.”  Adding to this, a team of Google DeepMind researchers published a paper in August 2022 concluding that an “existential catastrophe” resulting from AI was “not just possible, but likely.”
    • An AI arms race is unfolding as Big Tech companies, including Microsoft, Google, and Amazon, vie for leading roles.  Meanwhile, the risks of introducing experimental AI to the public have been tossed aside, as lawmakers have so far failed to pass meaningful regulations.  An open letter, published by the Future of Life Institute and signed by Elon Musk and other tech industry leaders, called for a six-month moratorium on developing AI more powerful than GPT-4.  MIT professor and head of the Future of Life Institute, Max Tegmark, commented, “It is unfortunate to frame this as an arms race.  It is more of a suicide race.  It doesn’t matter who is going to get there first.  It just means that humanity as a whole could lose control of its own destiny.”  Eliezer Yudkowsky, an AI expert at the Machine Intelligence Research Institute, calls for an “absolute shutdown,” warning of what may happen when AI becomes sentient and more intelligent than humans, stating, “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die… Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”  Yudkowsky argues that a super-intelligent, self-aware AI “would not value us” or any other life on Earth, and could opt for building “artificial life forms.”  Meanwhile, Bill Gates dismissed calls for a moratorium on developing AI, stating that a pause would not “solve the challenges” that super AI poses.
    • In January 2023, the National Institute of Standards and Technology (NIST) published an “Artificial Intelligence Risk Management Framework” to give companies guidance on responsible AI development, but implementation of its recommendations is voluntary.  In March 2023, the House Committee on Oversight held a hearing entitled “Advances in AI: Are We Ready for the Tech Revolution?” with expert testimony from the likes of Eric Schmidt, former Google CEO and key liaison between the Pentagon and Big Tech, who is assisting the military in a shift toward AI-backed war-fighting capabilities to counter China.  Schmidt believes that AI is a game-changer, stating, “Every once in a while, a new weapon, a new technology comes along that changes things… Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology – nuclear weapons – that could change the war, which it clearly did.  I would argue that [AI-powered] autonomy, and decentralized, distributed systems are that powerful.”  When asked in the committee hearing about the AI arms race between the U.S. and China, Schmidt replied, “The bad news is that these research ideas are in the public domain, and international.  So, we can’t prevent China from getting it,” adding that the solution means more AI development in the West – “under our values,” where lawmakers have the ability to regulate it – as opposed to AI development in China.
The National Security Commission on Artificial Intelligence, chaired by Eric Schmidt, issued a report in March 2021, stating, “Americans have not yet grappled with just how profoundly the Artificial Intelligence (AI) revolution will impact our economy, national security, and welfare… Nevertheless, big decisions need to be made now to accelerate AI innovation to benefit the United States and to defend against the malign uses of AI.”  The report called for “an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict.”  Several recommendations by the commission have been or are in the process of being implemented.
    • In 2018, the DOD established the Joint Artificial Intelligence Center (JAIC) to accelerate AI capabilities across the Defense Department.  JAIC’s budget grew from $89 million in 2019 to $278 million in 2021.  During 2022, the DOD had over 600 AI projects underway and spent $14.7 billion on science and technology projects, $874 million of which went directly to AI development and adoption programs.  In February 2022, the Defense Department launched a new Chief Digital and Artificial Intelligence Office designed to “set up a strong foundation for data analytic and AI-enabled capabilities.”  The new office marks a heightened effort by the Pentagon to consolidate and advance AI operations to counter threats posed by China.  The DOD appointed Craig Martell, former head of machine learning at Lyft, to lead the new office in April 2022.  Martell believes that AI-based war-fighting capabilities can only be as good as the data on which they rely, and has made data fidelity – the accuracy, granularity, speed, and reliability of data – a top priority for implementing AI across the Defense Department.  As Martell points out, big data and advanced computing – an umbrella term for quantum computing, cloud computing, and edge computing – are the keys to moving artificial intelligence forward.

    Read More @ CoreysDigs.com