by Aden Tate, The Organic Prepper:
Do you remember when drones were first released to the commercial market? There was a lot of talk about their privacy implications, and more than one case where people were flying them over other people’s property, looking in people’s windows, and the like.
Now, drones are firmly entrenched in modern society, and they’re not going anywhere anytime soon. Not only have they been incorporated into militaries throughout the world, but they’re also used in search and rescue operations, photography, security, and more. During Covid, Baltimore PD even wanted to use them to enforce social distancing.
You probably don’t even think about it anymore.
Now enter AI.
AI is already all around you, but most recognize that ChatGPT ushered in a new era of AI. Right now, we’re all in the same stage we were in when drones were first released – theorizing about potential fears, costs, and what this could mean. But I would say that, just like drones, AI is going to become mainstream.
What will the world look like when this finally happens?
I think that there are a few day-to-day implications of all this.
I don’t think it will be long until a personal AI is as commonplace as a smartphone. You can sneak across the border into the US nowadays, and you still have a smartphone in your hand. I don’t think it’s too far off to say that a personal AI is on the horizon. Think Iron Man’s Jarvis, Will Smith’s I, Robot, and Ron from Ron’s Gone Wrong (the best Pixar-style movie of the past ten years).
This stuff is going to be all over the place.
Everybody will have their own super personal assistant. Entertainment, organization, wayfinding, learning – it’ll all be streamlined and maximized.
For science, I think you’re liable to see some amazing discoveries made over the course of the next few years. What happens when you have an AI that is able to devote all of its energy, 24/7, to a single issue without ever growing tired, needing a change of scenery, or going on a vacation? What happens when you take something that can do incredibly fast calculations and run predictive algorithms until it finds something that will successfully fight this-or-that genetic defect with an 85% success rate? You end up with a Johnny Depp Transcendence type of situation. Chemistry, epigenetics, epidemiology, physics, astronomy, mathematics, engineering, pharmaceuticals – all of these fields are going to be absolutely blasted with new information. AI is going to be used to study itself as well. As a result, advancements in robotics, coding, and AI are also going to come about. And those advancements, in turn, will be used to drive new advancements.
Militaries will have to adopt AI or they will consistently be beaten on the battlefield by those who already have AI generals and logistics experts. You’re talking about playing chess with somebody who can think 50,000,000 moves into the future. Militaries will use AI to tell them the best places to position their troops, the likelihood of success for different missions given the variables, and how much ammunition they need to ship to here, here, and here.
For nation-states, the main thing here is going to be surveillance. All of the cameras, sensors, cell towers, GPS units, satellites, microphones, and other equipment that can be hooked up to the internet – all of that will feed a constant stream of data directly into an AI that can give up-to-date information on everything that is going on.
AI in the media
The implications? Minority Report, Shia LaBeouf’s Eagle Eye, and Christian Bale’s computer when he fights Heath Ledger in The Dark Knight. (Interestingly, in Ron’s Gone Wrong, all of the data generated by every kid’s personal AI was then collected and sold. Do you think that your personal AI would actually protect your privacy, even if it explicitly told you it did?)
For militaries? What does it look like when AI is pitted against AI? I think there are a couple of possibilities here.
Person of Interest did a fairly good job of tackling this concept. You would absolutely have AIs engaging in cyberattacks against each other. Whichever AI learned faster would be the one that would win each specific fight. Which AI can code better, faster? Which AI can monitor code better and push through containment protocols faster? Seeing these types of AI carry out actual attacks in the “real” world isn’t beyond the scope of possibility either. Keep in mind that one of the first things ChaosGPT tried to do was to source nuclear weapons. In the future, if there were a battle between AIs going on, why would they not use similar logic?
What do you think?
No, I can’t predict the future, nor do I try to, but I can see some of the signs around me and draw deductions just like anybody else. If I see a kid playing around a campfire, I have a pretty good idea that he’s going to get burnt. The problem is that, this time, I think AI is the fire.
Read More @ TheOrganicPrepper.ca