Naomi Assaraf – There Is A Lot More Here Than You Would Think.

As you might imagine, crunching through enormous datasets to extract patterns requires a LOT of computer processing power. In the 1960s, machines simply were not powerful enough to do it, which is why that boom failed. By the 1980s computers were powerful enough, but researchers had learned that machines only learn effectively when the amount of data fed to them is big enough, and they were unable to source large enough quantities of data to give the machines.

Then came the web. Not only did it solve the computing problem for good through the innovation of cloud computing – which essentially lets us access as much processing power as we need at the touch of a button – but people on the internet are now generating more data every day than was ever produced in the entire prior history of planet Earth. The quantity of data being produced on a constant basis is completely mind-boggling.

What this means for machine learning is significant: we now have more than enough data to really start training our machines. Think of the number of photos on Facebook and you begin to understand why their facial recognition technology is so accurate. There is no major barrier (that we are currently aware of) preventing A.I. from achieving its potential. We are only just starting to discover what we are capable of doing with it.

When the computers think for themselves. There is a famous scene from the movie 2001: A Space Odyssey where Dave, the main character, slowly disables the artificial intelligence mainframe (called “Hal”) after the latter has malfunctioned and decided to try to kill all of the humans on the spacecraft it was meant to be running. Hal, the A.I., protests Dave’s actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general, namely what will happen when computers start to think for themselves rather than being controlled by humans. The fear is understandable: we are already using machine learning constructs called neural networks, whose structures are based on the neurons in the human brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected points that build connections between concepts, in much the same way as associative human memory does. As a result, computers are slowly starting to build up a library of not just patterns, but also concepts, which ultimately form the building blocks of understanding rather than mere recognition.
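The idea of "interconnected points that build connections" can be made concrete with a toy example. The sketch below is a minimal two-layer neural network written from scratch with NumPy, trained by backpropagation on the classic XOR pattern; it is purely illustrative and bears no resemblance in scale to the networks real systems use, and all names and sizes here are my own choices, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: XOR, a pattern a single neuron famously cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 "interconnected points" (neurons).
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

losses = []
lr = 1.0
for _ in range(5000):
    # Forward pass: activations flow through the connections.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: push the error back to adjust connection strengths.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "learning" is nothing mystical: the connection weights are nudged, over and over, in whatever direction reduces the error on the training data.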

Imagine you are looking at an image of somebody’s face. When you first see the photo, several things happen in your brain: first, you recognise that it is a human face. Next, you might recognise whether it is male or female, young or old, black or white, and so on. You will also make a quick decision about whether you recognise the face, though sometimes the recognition requires deeper thinking, depending on how often you have come across that face (the experience of recognising a person but not knowing straight away where from). This all happens virtually instantly, and computers are already able to do all of this too, at almost the same speed. For instance, Facebook can not only detect faces, but can also tell you who a face belongs to, if that person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based on just a photograph of their face. We have come a long way since the 1950s.
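The "who does this face belong to" step is commonly done by comparing numeric fingerprints (embeddings) of faces. The sketch below shows only that final comparison step, in plain Python; the embeddings here are made up by hand, whereas in a real system a deep network would produce them from the photo, and the names, vectors and threshold are all hypothetical.

```python
import math

# Hypothetical precomputed face embeddings. In a real pipeline these
# vectors come out of a trained neural network, one per known person.
KNOWN_FACES = {
    "alice": (0.9, 0.1, 0.3),
    "bob":   (0.2, 0.8, 0.5),
}

def cosine_similarity(a, b):
    """How closely two embedding vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def identify(embedding, threshold=0.9):
    """Return the best-matching known identity, or None if nothing is close enough."""
    best_name, best_score = None, -1.0
    for name, known in KNOWN_FACES.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A new photo whose (hypothetical) embedding lands very close to Alice's.
print(identify((0.88, 0.12, 0.28)))
```

The threshold is what separates "I recognise this person" from "this face is new to me" – too low and strangers get matched, too high and real matches get missed.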

But true A.I. – what is referred to as Artificial General Intelligence (AGI), where a machine is as advanced as a human brain – is a long way off. Machines can recognise faces, but they still don’t truly know what a face is. For instance, you can look at a human face and infer many things drawn from a hugely complicated mesh of different memories, learnings and feelings. You might look at a photo of a woman and guess she is a mother, which in turn may lead you to assume she is selfless, or indeed the exact opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo, find the woman attractive, and make positive assumptions about her personality (confirmation bias again), or conversely find that she resembles a crazy ex-girlfriend, which will irrationally make him feel negative towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – good and bad – that characterise our race. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to really be dangerous, they would need some of these emotional compulsions, but these form a very rich, complex and multi-layered tapestry of concepts that is hard to train a computer on, no matter how advanced neural networks may be. We will get there some day, but there is plenty of time to make sure that when computers do achieve AGI, we will still be able to switch them off if needed.

In the meantime, the advances being made are finding ever more useful applications in the human world. Driverless cars, instant translations, A.I. phone assistants, websites that design themselves! All of these advancements are meant to make our lives better, so we should not be afraid of, but rather excited about, our artificially intelligent future.