Artificial intelligence and the "singularity"

One headline-grabbing take on artificial intelligence is the warning of its potential dangers, in particular the idea of the "singularity" -- that is, that an uncontrolled human-like, or even superhuman, intelligence may outstrip human capabilities or even pose a direct, intentional threat to humanity's existence.

The idea of a mechanical takeover is not a new one, dating back at least to Karel Čapek's 1920 science-fiction play R.U.R. With the growth of the machine learning field, though, there are those who feel that a new "race" of "thinking machines" is a real possibility in the near term.

Does machine learning lead to artificial general intelligence?

Current technology -- even Adkodas technology -- is a highly directed machine learning tool. A computer learns to find the rules in data, but it responds only to patterns it detects in the data it is given, and only by reporting what those patterns are. For most machine learning, reporting those patterns is all it does: a tool about as threatening, and about as useful, as a (much) better calculator.
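A minimal sketch of this "report-only" behavior, using an ordinary least-squares line fit in plain Python (a generic illustration with invented numbers, not Adkodas code):

```python
# A least-squares line fit: the "learning" finds the pattern (slope and
# intercept) hidden in the data, and reporting that pattern is all it does.

def fit_line(points):
    """Return (slope, intercept) minimizing squared error over (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Invented measurements that roughly follow y = 2x:
data = [(1, 2.1), (2, 3.9), (3, 6.0), (4, 8.1)]
slope, intercept = fit_line(data)
print(f"pattern found: y ~ {slope:.2f}x + {intercept:.2f}")
```

The program discovers the rule in the data and reports it; it takes no action beyond that report.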

As with any technology, it is the application that determines its potential use and danger. When a piece of machine learning software is connected directly to real-world hardware and allowed to determine its response independently, then its response is, of course, of more immediate concern.

The important thing about machine learning is that it is user-directed: it has access only to the input we supply, whether manually as a data set or automatically through connections to sources such as video feeds and GPS data (e.g. in the case of a self-driving car). Furthermore, it pursues only the objectives we give it, whether that is classifying data against a known output (e.g. whether a credit applicant is likely to pay back a loan) or achieving a set of pre-programmed conditions (e.g. reaching a user-input destination while obeying the traffic laws programmed into the system and not colliding with any vehicle, person, or object along the way).

This is a critical point about machine learning: computers "learn" what we tell them to learn. They do not set their own output objectives; they are driven by the goals programmed into them or supplied with the input data they take in. One of the powers of machine learning is the ability to generalize -- to predict the outcome of a new case given related, but not identical, input cases -- but this is far different from allowing a machine to determine its own desired output, much less somehow imparting the agency or desire to make decisions entirely independent of its own programming, or to modify that programming.
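Both points -- a user-supplied objective and generalization to new cases -- can be seen in a tiny nearest-neighbor classifier. The loan-applicant features and figures below are invented for illustration:

```python
# Generalization with a 1-nearest-neighbor classifier: the model labels a
# NEW applicant from related, but not identical, past cases. The objective
# (predict "repaid" vs. "defaulted") is supplied by the user, not the machine.

def nearest_neighbor(train, query):
    """train: list of (features, label) pairs; query: a feature tuple."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda case: dist(case[0], query))[1]

# (annual income in $k, debt-to-income ratio) -> loan outcome (invented data)
history = [
    ((95, 0.10), "repaid"),
    ((80, 0.20), "repaid"),
    ((30, 0.60), "defaulted"),
    ((25, 0.55), "defaulted"),
]

# A new applicant who appears in no training example:
print(nearest_neighbor(history, (85, 0.15)))  # prints "repaid"
```

The model never chooses what to predict; it only interpolates the objective we gave it across new inputs. (A real system would also scale the features so no one feature dominates the distance; that is omitted here for brevity.)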

It is this component, more than any other, that defines a human-like "artificial general intelligence" -- and it is possibly the greatest weakness in predictions of a singularity. First, there is no encoding for "free will" that we know of; second, predictions of this agency typically depend on it arising spontaneously, due to the "complexity of the system" (e.g. the number of processors or processing cycles). Yet the Internet as a whole already rivals, if not significantly surpasses, the human brain in some measures of raw processing power and connectivity. Such comparisons mean little, though, because the nature of the processing and connections of one is not much like the other -- the analogy of computers being "like a brain" is often pushed too far.

But what if a singularity is possible in a way we don't yet understand?

With its more biologically based neurons, Adkodas technology offers, in addition to superior machine learning capability, the ability to study more closely what "like a brain" would even mean for computers. If finding, understanding, or even preventing a singularity is a significant concern, then this type of neural network will be key to doing so.

Even more important, however, is Adkodas rule extraction technology. As an analytical tool, Adkodas neural networks offer the advantage of not having to trust oracular predictions without knowing how the machine makes them -- and as a safeguard against a singularity, the ability to see what a machine is actually "thinking" is invaluable. Adkodas technology does not need a "translator" to open up the black box and ask questions; our networks are designed to let you learn what the machine learns, not just to check, but to learn for yourself.
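To illustrate the general idea of rule extraction -- reading a learned model back out as a human-readable rule -- here is a sketch using a one-split decision stump. This is a generic teaching example with invented data, not Adkodas's actual rule-extraction method:

```python
# Rule extraction sketch: train a one-split decision stump on labeled data,
# then print the learned rule in plain language so a human can inspect it.

def majority(labels):
    """Most common label in a list, or None if the list is empty."""
    return max(set(labels), key=labels.count) if labels else None

def train_stump(samples):
    """samples: list of (value, label). Find the threshold that best splits
    the labels; return (threshold, label_if_below, label_if_at_or_above)."""
    best = None
    for threshold in sorted(v for v, _ in samples):
        lo = majority([lab for v, lab in samples if v < threshold])
        hi = majority([lab for v, lab in samples if v >= threshold])
        correct = sum(1 for v, lab in samples
                      if lab == (lo if v < threshold else hi))
        if best is None or correct > best[0]:
            best = (correct, threshold, lo, hi)
    _, threshold, lo, hi = best
    return threshold, lo, hi

# (credit score, lending decision) -- invented data
data = [(550, "deny"), (580, "deny"), (610, "deny"),
        (660, "approve"), (700, "approve"), (720, "approve")]

threshold, lo, hi = train_stump(data)
print(f"learned rule: if score < {threshold} then {lo}, else {hi}")
# prints: learned rule: if score < 660 then deny, else approve
```

The point is the last line: instead of a black-box score, the model's behavior is stated as a rule a person can read, audit, and challenge.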

If a machine were ever somehow able to form its own outputs -- its own desires -- to train upon, Adkodas rule extraction would be the ideal tool for recognizing that this is happening. By contrast, if such spontaneous machine agency ever proved to be more than a science-fiction myth, existing technologies would invite the possibility of a "runaway AI" simply because no one knows what is actually happening within those networks!

If your goal, concern, or worry is the possibility of a true singularity, Adkodas technology is a key factor in understanding, predicting, or preventing it.