We’re All Going to be Killed by Robots

Or: the Existential Threat of Artificial Intelligence, the Difference Between AI and Machine Learning, and a Quick Look at How Little Most People Know About Data Governance.

If you listen to some very smart people, they'll tell you we're all going to get killed by robots. Ask Elon Musk. He knows what's up.

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”


Generally, when somebody as smart as Elon says something like that, people listen. However, most governments are still stuck in the mid-'90s when it comes to legislative control of data: how it's managed, how it's used, and who understands it. For example, here is the privacy policy of Data.gov:

“When you visit Data.gov, we may store some or all of the following: the Internet address from which you access Data.gov, date and time, the Internet address of the website from which you linked to Data.gov, the name of the file or words you searched, items clicked on a page, and the browser and operating system used.

This information is used to measure the number of visitors to the various sections of our site and identify system performance or problem areas. We also use this information to help us develop the site, analyze patterns of usage, and to make the site more useful. This information is not used for associating search terms or patterns of site navigation with individual users. Data.gov periodically deletes its Web logs. On occasion, Data.gov may provide this information to third party entities it contracts with for the purposes of research analysis.”

According to this privacy policy, Data.gov retains enough data to do almost anything. With an IP address and the right big-data services, you can infer virtually anything about a person with reasonable certainty. The privacy here is an illusion, created by the reassuring second paragraph and its focus on internal site improvement. But how data is used doesn't dictate whether it's retained. If privacy were truly the focus, retention would be dictated by what the data contains.

It’s silly to think NOBODY is going to use data like this to look people up, figure out where they live, evaluate their personal lives, trace out their schedules, and generally violate their privacy in the worst possible way. But most people don’t. Why not? Cost. It takes too much time to do this right now; therefore, it’s generally not worth doing.

But it won't always be that way. The cost of computer processing cycles keeps falling, as do all costs of computing. Artificial intelligence is advancing in parallel, making it possible to analyze this data without a human in the loop. Once that happens, data collection and storage policy will need to be far more explicit to protect the populace. Organizations that don't understand their data will effectively be unable to store it safely: if they do, breaches will occur as a matter of course. As artificial intelligence improves, breaches will multiply and privacy will be threatened.

What is AI, or artificial intelligence? It's a set of algorithms designed to recreate the functionality of the human brain and optimize decision-making processes, and it draws heavily on machine learning. Machine learning is the broad application of algorithms that "get smarter" with time. It's commonly used today to solve business problems, ranging from predicting student dropouts and projecting fantasy sports performance to improving business processes, as we did with ARCA Continental in this case: Why am I selling? Even though a "true AI" has yet to be built, we are moving in this direction. The promise and the danger of machine learning are worrisome for anyone considering security and privacy in the near future, even without the development of a sentient machine. The problem is compounded by the government's and the public's lack of awareness on the subject.
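To make "gets smarter with time" concrete, here is a minimal, self-contained sketch (plain Python, no particular ML library, with a made-up toy dataset) of the core idea: a model whose predictions improve as it sees more examples. Here, stochastic gradient descent fits a line y = w·x, and the learned weight approaches the true value as more data arrives.

```python
# Toy illustration of machine learning: a model that "gets smarter"
# as it sees more data. We fit y = w * x by stochastic gradient
# descent; the true relationship in our toy data is y = 3x.

def train(examples, lr=0.01, w=0.0):
    """Update weight w with one gradient step per (x, y) example."""
    for x, y in examples:
        error = w * x - y       # prediction error on this example
        w -= lr * error * x     # gradient step on squared error
    return w

# Toy training data drawn from the true relationship y = 3x.
data = [(x, 3.0 * x) for x in range(1, 11)]

w_few = train(data[:2])    # model after seeing only 2 examples
w_many = train(data * 20)  # model after seeing many examples

# The more data the model sees, the closer w gets to 3.0 —
# the algorithm "learns" the relationship from examples alone.
print(w_few, w_many)
```

The same principle, scaled up to millions of parameters and examples, is what powers the business applications above: no one hand-codes the answer; the algorithm extracts it from data.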

What do you get when you combine people who don't understand data governance, modern advances in artificial intelligence, and ever-cheaper computer processing cycles? A recipe for disaster. Worse, if someone does manage to build a truly intelligent machine and its programmer removes the safety constraints, that one person could theoretically bring the world to its knees. It only takes one, and Elon thinks this is a huge threat.

So, we’re all going to be killed by robots.
