Programming the Future
Good programming is about being both logical and literal. Literally.
There’s a good joke floating around (not literally) about a programmer who gets sent to the shop by his wife with the instructions “get a loaf of bread, oh, and if you see eggs – get a dozen!” So naturally, he arrives home some time later with 12 loaves of bread.*
You know what else is annoyingly literal and logical? A computer – because it was set up by a programmer. So why can’t a computer do the same job as a programmer and write code? Well, turns out it can. Computers are pretty ok at replicating, duplicating and putting together small bits of code that already relate to each other in a pre-defined way. But what about going one step further? What about getting a computer to write its own code? What happens when the challenge presented is no longer a finite one? Now that, my friends, is a challenge!
What we have now entered is the realm of Artificial Intelligence! Welcome to the dark side…
We’re essentially trying to get a computer to learn! We’re now asking it to do a task, remember what worked, remember what didn’t work, and apply those lessons to itself so that it can get more efficient at that task while applying this ‘learning’ to other new tasks. It’s some twilight-zone stuff, people. This is the kind of thinking that sci-fi says will lead to a robot vs human Armageddon showdown! (In all seriousness, my money is on Batman – he alone will save us).
But companies like IBM, Qualcomm and Intel are pouring money into researching the very concept of how you can get a computer (and therefore a robot) to learn. The major focus is on processing times, memory storage and increasing efficiency to provide a software ‘learning’ feature for these robots. However, the main flaw – and interestingly, what separates AI from us in terms of learning – is how it all starts. So far, AI needs an initial command. It needs a starting point for learning. For example, imagine a robot being told to stand. The first time it ain’t so crash-hot at the task, but after doing it repeatedly it gets pretty good. But it had to be told to stand; it needs instruction because it has no initiative or curiosity.
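That “try, remember what worked, discard what didn’t” loop can be sketched as a toy hill-climbing program. Everything here is invented for illustration – the `attempt_stand` scoring function, the parameter values, the trial count – it just stands in for real sensor feedback. Note that nothing happens until *we* issue the initial command by calling the function, which is exactly the limitation described above:

```python
import random

def attempt_stand(params):
    """Hypothetical feedback for a 'stand up' attempt: the closer the
    posture parameters are to some unknown ideal balance, the higher
    the score. (A stand-in for real sensor readings -- pure assumption.)"""
    ideal = [0.5, -0.2, 0.8]  # the balance the robot must discover
    error = sum((p - i) ** 2 for p, i in zip(params, ideal))
    return -error  # higher score = steadier stand

def learn_to_stand(trials=2000, seed=42):
    """Trial and error: randomly tweak the posture, keep what worked,
    discard what didn't. The loop only runs because WE called it --
    the 'initial command' the robot cannot supply for itself."""
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]            # starting posture (it was told where to begin)
    best_score = attempt_stand(best)
    for _ in range(trials):
        candidate = [p + rng.gauss(0, 0.1) for p in best]
        score = attempt_stand(candidate)
        if score > best_score:        # remember what worked...
            best, best_score = candidate, score
        # ...and silently forget what didn't
    return best_score
```

After a couple of thousand attempts the score climbs from hopeless toward near-perfect – efficient, yes, but the program will never decide on its own that standing is worth attempting in the first place.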
What happens when I leave my 10-month-old alone for a few minutes while I try to use the bathroom in peace? She works out (for no real reason) how to roll sideways across the floor, figures out how to take off her nappy, and also works out how to open the DVD player. Did I instruct her to do any of this? Absolutely NOT! My God – what kind of parent do you think I am?! No, what she has (unfortunately for me and my DVD player) is an abundance of natural curiosity and initiative. So what we provide AI with, really, is not so much ‘learning’ but more a pathway for accelerating process efficiency. So for now, I’m fairly comfortable that my job is secure. The contents of my living room, however – turns out they aren’t so secure. :(
*if you’re reading an engineering news blog, I really shouldn’t have to explain that joke to you.
Inventor, philanthropist, entrepreneur, crusader of justice, defender of Gotham city and…no wait…sorry, that’s Batman. ok really: Freelancing engineer and writer.