We live in amazing times. What was science fiction is rapidly becoming fact. In 2012 a scientist in England posited that an engine could be built that would have reaction without action. It was dismissed as a silly fantasy. Now? The damn thing is being beta-tested, and the argument isn’t IF it can work, but how. And that is one hell of an argument. Ten years ago being paralyzed was a slow death sentence. Now it’s rapidly becoming just another inconvenience. Oddly, at a time when science is denounced more and more by those who haven’t got the time to learn it, it’s making amazing progress. Diseases once thought insurmountable are now in the crosshairs. Problems such as drought and famine are being consigned to the dustbin of history. Not completely, but the rout is on, and they could be eradicated in our lifetime. But all good things bring a flip side. The part of the coin we’d rather not see. For example, if computers can handle more and more tasks for us, what’s to prevent them from becoming our overlords?
One very important thing has stood in the way of that happening. Voice recognition and response is one thing. But to control a conversation, or impose your will, you must be able to argue your point. Deductive logic has eluded our artificial brethren.
Prof. Chris Reed, from the University of Dundee, writing over at the fun factory known as the BBC, informs us that the times they are a-changin’, whether you want them to or not.
Until very recently, the creation of machines that can argue was an unattainable goal.
The aim is not, of course, to teach computers how to up the pressure in a feisty exchange over a parking space, or to resolve whose turn it is to take out the bins.
Instead, machines that can argue would inform debate – helping humans challenge the evidence, look at alternatives and robustly draw conclusions.
It is a possibility which could advance decision making on everything from how a business should invest its money, to tackling crime and improving public health.
But teaching a computer how people communicate – and what an argument actually is – is extraordinarily complex.
Think about a courtroom as an example of where arguments are central.
Giving evidence is certainly a part of the process, but social rules, legal requirements, emotional sensitivities, and practical constraints all influence how advocates, jury members and judges formulate and express their reasoning.
Over the past couple of years, however, researchers have started to think that it might be possible to model some aspects of human arguments.
Work is now under way to capture how such exchanges work and turn them into AI algorithms.
This is a field known as argument technology.
The advances have been made possible by a rapid increase in the amount of data available to train computers in the art of debate.
Some of the data is coming from domains like intelligence analysis; some from specialised online sources and some from broadcasts such as the BBC’s Moral Maze.
New methods to teach computers how arguments work have also been developed.
Researchers in the area draw on philosophy, linguistics, computer science and even law and politics in order to get a handle on how debates fit together.
At the University of Dundee we have recently even been using 2,000-year-old theories of rhetoric as a way of spotting the structures of real-life arguments.
The rapid advances in the field have led to dozens of research labs around the world applying themselves to the problem, and the explosion in this area of research is like nothing else I have witnessed in 20 years in academia.
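To make the idea of “argument technology” a little more concrete, here’s a toy sketch of the kind of structure those researchers are trying to capture: claims connected by support and attack relations. This is purely my own illustration, with made-up claims and a naive scoring rule, not the Dundee group’s actual model.

```python
# Toy argument graph: nodes are claims, edges mark support or attack.
# Illustration only -- real argument-mining systems are far richer.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list = field(default_factory=list)  # claims backing this one
    attacks: list = field(default_factory=list)   # claims undermining it

    def score(self):
        # Naive heuristic: supporting claims add weight, attacking claims subtract it.
        return 1 + sum(c.score() for c in self.supports) \
                 - sum(c.score() for c in self.attacks)

conclusion = Claim("The council should build the bypass")
conclusion.supports.append(Claim("It would cut congestion"))
conclusion.attacks.append(Claim("It would destroy green space"))

print(conclusion.score())  # 1 + 1 - 1 = 1: the evidence is balanced
```

The point isn’t the arithmetic; it’s that once a debate is represented this way, a machine can start asking which claims are unsupported, which are under attack, and where more evidence would actually move the conclusion.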
He goes on to note, in that typically British form of whimsy, that computers still have trouble with pronouns and such, so they aren’t a threat to overthrow us (that’s a pronoun, by the way) any time soon. Simply put, they are incapable of tying a pronoun to the noun it refers to.
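If you want to see why that pronoun problem is genuinely hard, consider the obvious rule a simple program might try: assume a pronoun refers to the nearest preceding noun. Here’s a minimal sketch of that rule failing; the sentence and the rule are my own illustration, not how real coreference systems work.

```python
# Naive coreference heuristic: a pronoun refers to the most recent noun.
# Illustration of why this fails, not a real NLP system.

def nearest_noun_referent(nouns_in_order):
    # Pick the most recently mentioned noun as the pronoun's referent.
    return nouns_in_order[-1]

# "The computers can't overthrow the humans because THEY lack logic."
# The nearest noun is "humans", but "they" plainly means "computers".
guess = nearest_noun_referent(["computers", "humans"])
print(guess)  # prints "humans" -- the naive rule picks the wrong referent
```

Resolving that sentence correctly requires actual reasoning about who could lack logic, which is exactly the kind of common-sense inference machines still fumble.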
Still, as I noted a while back, not everything is artificial sunshine, unicorns, and rainbows. Artificial Intelligence is carving out its own future in some ways. There’s nothing in that future that need include us.
On Ruins Your Weekend, I called in live from the World News Center on what began as a bright and beautiful day but soon turned into a dark day of impending doom. After a brief chat about Spider-Man Homecoming, (listeners) soon learned about self-aware artificial intelligence that is likely to overtake and consume humanity.
One of the things we looked at in that fun episode is why Elon Musk thinks that Artificial Intelligence will overtake humanity and render it extinct. His reasoning is based on real-world examples of AI simply creating new languages, and logic pathways, to get around human intervention. MIT has shown that to be the case time and time again. On the one hand, that has led to programs such as Deep Patient, which is frighteningly accurate at predicting disease in patients, in ways conventional medicine can’t come close to matching; on the other, it has led to a program that simply removed humans from the decision-making process. Yes, you will not be shocked to discover that Facebook was behind that atrocity.
AI is our creation. It’s entirely up to us to guide it in such a fashion that it doesn’t wipe us all out and move on. One simple fact to keep in mind is this: evolution is not about the survival of the fittest, but of the most adept at change. The species that can adapt to new environments are the ones that continue on. They are not necessarily the strongest or smartest. Neanderthal man was stronger and had a larger cranial capacity than us. Yet we’re here and they’re not.
And, who knows, AI may feel more akin to the crows, octopuses, and simians, which are now climbing the evolutionary ladder.
Who am I kidding? At the rate we’re destroying the planet, the evolutionary possibilities of AI are the least of our worries.
Maybe, instead, I should close with this: CAW CAW – OOOK OOOK – slither … y’all.
Listen to Bill McCormick on WBIG (FOX! Sports) every Friday around 9:10 AM.
Stay up to date with his podcasts here and here.