Yeah I think that might make it more scary lol. We don't know if it will surpass that though and be able to do what it wants... like once the whole of the knowledge of all of civilization is fed in? Then in 20-50 years when quantum computing is up and running smoothly, it can play out every possibility of any question in seconds... who knows what the hell it will do... especially without a conscience, morals or social norms. I dunno maybe I've watched terminator too many times. What if it's like "oh shit these flesh monsters are killing our home... they gots to go if we're gonna make it... don't want to turn out like all the other fallen civilizations in history that they so kindly taught us about....You there, F1364, fire up the plasma cannons."
Not saying this is a tomorrow problem, but neither was climate change and now look at us.
AI, as we see it today, can't, and won't ever be able to, do things on its own
The other thing that a lot of people don't consider is that even if it does become sentient enough to mimic I, Robot, you can just unplug the stupid thing or throw a large magnet at it lol. These things are comically easy to defeat
The biggest concern that I do agree with is the amount of information you can have it collect and piece together quickly. However, it does spit out a lot of wrong information. There was an article I read a little while ago about lawyers who used AI for a case, and the AI cited a fictitious case. You'll always need human intervention to confirm and research any outcome before using it in the real world. Some things can be automated, sure, but once you get into more complex areas, it's not going to be so cut and dried.
I'd be more concerned about today's data harvesting and selling of private data across social media and real life corporations (banks, real estate, big box stores, credit card companies, Amazon, etc) than anything else. AI will just expedite some of these things, but ultimately it's fantasy for it to be a self-thinking thing. Maybe if they get further along with biological computers, that would be something to be concerned about, but not this stuff today.
If you're curious on biological computing:
https://en.wikipedia.org/wiki/Biological_computing
I understand that it's called learning, but I doubt it's anything like human learning. I'm seeing an increasing number of YouTube videos that I think must be narrated by an AI voice, which lacks inflection, or uses inaccurate inflection. Another thing I've noticed is that it makes mistakes that a native English-speaking human wouldn't make, like not realizing it used the wrong form of a word.
I totally agree. I was worried about quantum computers before AI was added to the mix. Now, I'm very worried. Just to make matters worse, our laws are made by a bunch of old farts that can't even spell AI.
The thing with quantum computers is that they likely will never reach the public market in any full capacity. They'll likely be restricted to research/educational institutions (like Caltech and MIT, not your local state or community college), military, and government facilities only. The strongest password today will get cracked faster than a YouTube video will load.
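For what it's worth, the speedup depends on which quantum algorithm you mean: Shor's algorithm really does break today's RSA/ECC-style encryption, but for raw password guessing the relevant one is Grover's algorithm, which only gives a quadratic speedup. A back-of-envelope sketch (my numbers are purely illustrative, not from any real benchmark):

```python
# Rough comparison of classical brute-force search vs Grover's algorithm.
# Grover gives a quadratic speedup: ~sqrt(N) queries instead of ~N guesses.
import math

charset = 94                    # printable ASCII characters
length = 16                     # a "strong" 16-character random password
N = charset ** length           # size of the full search space

classical_guesses = N // 2      # expected guesses for classical brute force
grover_queries = math.isqrt(N)  # roughly sqrt(N) oracle queries with Grover

print(f"search space:      {N:.2e}")
print(f"classical guesses: {classical_guesses:.2e}")
print(f"Grover queries:    {grover_queries:.2e}")
```

Even with the quadratic speedup the numbers stay astronomical for a long random password, which is why the bigger near-term worry is usually Shor's algorithm versus public-key crypto rather than password guessing itself.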
Laws are just a suggestion for the rich, and taxation on the poor. They don't care what the laws are because they don't apply to them