And they thought this was a good thing.
Minor technical things like lust-crazed machines ravaging innocent women were an unfortunate side effect. The fact is, the sensors worked as planned.
Hoo-ray.
But, hey there, what about getting the robot a better brain so it can recognize the error of its ways? Way ahead of you there, Skippy. A bunch of Scottish scientists have been working on recreating the human synaptic system using electronic parts.
One key goal of the research is the application of the electronic neural device, called a hardware spiking neural network, to the control of autonomous robots that can operate independently in remote, unsupervised environments, such as search-and-rescue operations and space exploration.
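If you're wondering what a "spiking" neuron actually does, here's a minimal software sketch of the kind of cell such hardware emulates: a leaky integrate-and-fire model, whose voltage drifts toward rest, integrates input, and fires when it crosses a threshold. The parameters below are illustrative, not taken from the Scottish research.

```python
# Leaky integrate-and-fire neuron: a toy software model of the kind
# of spiking behaviour the hardware described above emulates.
# All parameters below are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the spike times produced by a stream of input current."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates input.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset after firing
    return spikes

print(simulate_lif([0.08] * 200))   # constant drive -> regular spiking
```

String enough of these together with weighted connections and you have a spiking neural network; part of the appeal of doing it in hardware is speed and low power consumption.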
That may be the goal, but self-aware rape bots still do not sound like a great idea to me. Of course, I’m not a scientist.
Then again, not all robots are humanoid. Scientists in Australia are developing a flying robot that can silently sneak up on you and kill you where you stand.
Oh, I’m sorry, I mean access your personal space and deliver a message.
The pint-sized propeller-powered robots can be packed away into a suitcase. They have multiple cameras which enable them to ‘see’ the world around them as they navigate their way through buildings, carrying out tasks like deliveries or inspections.
“You’ll be able to put your suitcase on the ground, open it up and send the flying robot off to do its job,” said Professor Peter Corke, from the Faculty of Built Environment and Engineering.
“These robots could fly around and deliver objects to people inside buildings and inspect things that are too high or difficult for a human to reach easily.
“Instead of having to lower someone down on a rope to a window on the seventh floor, or raise them up on a cherrypicker, you could send up the flying robot instead.”
The QUT researchers are using cost-effective technology so the robots are affordable. Within the next year, it may be possible to attach arms to the device so it can also fix things.
Professor Corke said his team were busy working out the technical challenges.
“We need to keep it safe when it’s up near solid things like power poles, or the edge of a building. It also needs to be able to keep its position when the wind is blowing,” he said.
Another use they are looking at for these flying devices of doom is the ability to disperse herbicides on farms in a more rational manner.
To recap, we now could have flying rape-bots with the ability to spread poison and the intelligence to pick their targets.
Hoo-ray.
But as long as we’re the ones making the flying rape-bots and their ilk, we still have the upper hand.
Right?
Yeah … no. Scientists in the UK have invented a series of robots that can profit from the financial markets better than any human.
Ten years on, experiments carried out by Marco De Luca and Professor Dave Cliff of the University of Bristol have shown that the Adaptive-Aggressive (AA) strategy is now the leading one, able to beat both robot traders and humans.
The academics presented their findings at the International Joint Conference on Artificial Intelligence (IJCAI 2011), held in Barcelona.

Dr Krishnan Vytelingum, who designed the AA strategy along with Professor Dave Cliff and Professor Nick Jennings at the University of Southampton in 2008, commented: “Robot traders can analyse far larger datasets than human traders. They crunch the data faster and more efficiently and act on it faster. Robot trading is becoming more and more prominent in financial markets and currently dominates the foreign exchange market with 70 per cent of trade going through robot traders.”
Professor Jennings, Head of Agents, Complexity and Interaction research at the University of Southampton, commented: “AA was designed initially to outperform other automated trading strategies so it is very pleasing to see that it also outperforms human traders. We are now working on developing this strategy further.”
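For the curious, the core trick of an adaptive trading agent is to keep adjusting how “aggressive” its quotes are based on what the market actually does. Below is a toy Python sketch of that idea. To be clear, this is not the published AA algorithm; the update rules and every number in it are invented for illustration.

```python
# Toy "adaptive aggressiveness" buyer, loosely inspired by the idea
# behind the AA strategy. NOT the published algorithm.

class ToyAdaptiveBuyer:
    def __init__(self, limit_price, beta=0.3):
        self.limit = limit_price       # never bid above this
        self.r = 0.0                   # aggressiveness in [-1, 1]
        self.eq = limit_price * 0.5    # crude equilibrium-price estimate
        self.beta = beta               # adaptation rate (illustrative)

    def quote(self):
        # r = -1: passive, bid well below the equilibrium estimate;
        # r = +1: aggressive, bid right up at the limit price.
        if self.r >= 0:
            target = self.eq + self.r * (self.limit - self.eq)
        else:
            target = self.eq * (1 + self.r)
        return min(target, self.limit)

    def observe_trade(self, trade_price):
        # Track the market, then adapt: trades below our quote mean we
        # are bidding too eagerly; trades at or above it mean we risk
        # missing out, so become more aggressive.
        self.eq = 0.9 * self.eq + 0.1 * trade_price
        if trade_price < self.quote():
            self.r = max(-1.0, self.r - self.beta * (1 + self.r))
        else:
            self.r = min(1.0, self.r + self.beta * (1 - self.r))

buyer = ToyAdaptiveBuyer(limit_price=100.0)
for price in [60, 72, 80, 85, 90]:
    buyer.observe_trade(price)
    print(round(buyer.quote(), 2), round(buyer.r, 2))
```

The real AA strategy is considerably more sophisticated about estimating the equilibrium price and pacing its adaptation, which is presumably part of why it wins.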
Further? Millionaire flying rape-bots that distribute poison aren’t enough for you? What the hell else could you possibly want?
I really shouldn’t have asked that. Google has the answer. They want to control every job and dictate how it gets done and by whom.
And that “whom” will not be you, you gross assemblage of protoplasm.
At the 2011 Google I/O developer conference, Google announced a new initiative called “cloud robotics” in conjunction with robot manufacturer Willow Garage. Google has developed an open source (free) operating system for robots, with the unsurprising name “ROS” — or Robot Operating System. In other words, Google is trying to create the MS-DOS (or MS Windows) of robotics.
With ROS, software developers will be able to write code in the Java programming language and control robots in a standardized way — in much the same way that programmers writing applications for Windows or the Mac can access and control computer hardware.
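To give a flavor of what “standardized” means here: in ROS, a robot’s sensors and actuators are just named topics you publish and subscribe to. The article talks about Java bindings; the sketch below uses ROS’s classic Python API (rospy, with a later-era signature) for brevity, and the node and topic names are made up.

```python
#!/usr/bin/env python
# Minimal ROS publisher: the "hello world" of standardized robot
# control. Any node subscribed to the "chatter" topic receives the
# messages, regardless of what hardware it runs on.
import rospy
from std_msgs.msg import String

rospy.init_node('talker')                         # register with ROS
pub = rospy.Publisher('chatter', String, queue_size=10)
rate = rospy.Rate(1)                              # publish at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello robot'))
    rate.sleep()
```

Swap String for a velocity or joint-angle message type and the same pattern drives real motors, which is the whole point of a common operating system.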
Google’s approach also offers compatibility with Android. Robots will be able to take advantage of the “cloud-based” (in other words, online) features used in Android phones, as well as new cloud-based capabilities specifically for robots. In essence this means that much of the intelligence that powers the robots of the future may reside on huge server farms, rather than in the robot itself. While that may sound a little “Skynet-esque,” it’s a strategy that could offer huge benefits for building advanced robots.
One of the most important cloud-based robotic capabilities is certain to be object recognition. In my book, The Lights in the Tunnel, I have a section where I talk about the difficulty of building a general-purpose housekeeping robot largely because of the object recognition challenge:
A housekeeping robot would need to be able to recognize hundreds or even thousands of objects that belong in the average home and know where they belong. In addition, it would need to figure out what to do with an almost infinite variety of new objects that might be brought in from outside.
Designing computer software capable of recognizing objects in a very complex and variable field of view and then controlling a robot arm to correctly manipulate those objects is extraordinarily difficult. The task is made even more challenging by the fact that the objects could be in many possible orientations or configurations. Consider the simple case of a pair of sunglasses sitting on a table. The sunglasses might be closed with the lenses facing down, or with the lenses up. Or perhaps the glasses are open with the lenses oriented vertically. Or maybe one side of the glasses is open and the other closed. And, of course, the glasses could be rotated in any direction. And perhaps they are touching or somehow entangled with other objects.
Building and programming a robot that is able to recognize the sunglasses in any possible configuration and then pick them up, fold them and put them back in their case is so difficult that we can probably conclude that the housekeeper’s job is relatively safe for the time being.
Cloud robotics is likely to be a powerful tool in ultimately solving that challenge. Android phones already have a feature called “Google Goggles” that allows users to take photos of an object and then have the system identify it. As this feature gets better and faster, it’s easy to see how it could have a dramatic impact on advances in robotics. A robot in your home or in a commercial setting could take advantage of a database comprising the visual information entered by tens of millions of mobile device users all over the world. That will go a long way toward ultimately making object recognition and manipulation practical and affordable.
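The plumbing for that would look roughly like this: the robot snaps a frame, ships it to a recognition service in the cloud, and acts on whatever labels come back. Everything below (the endpoint, the payload, the response fields) is invented for illustration; it is not a real Google API.

```python
# Hypothetical cloud-robotics round trip: heavy object recognition
# runs on a server farm, not on the robot.
import requests

RECOGNIZE_URL = "https://example.com/api/recognize"   # hypothetical

def identify_object(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(RECOGNIZE_URL, files={"image": f})
    resp.raise_for_status()
    # Assume the service answers with something like
    # {"labels": ["sunglasses"], "orientation": "folded, lenses down"}
    # (invented fields).
    return resp.json()

result = identify_object("living_room_table.jpg")
if "sunglasses" in result.get("labels", []):
    print("Fold and case them:", result.get("orientation"))
```

The robot itself stays cheap and dumb; the tens of millions of crowd-sourced images doing the recognizing live in the cloud.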
In general, there are some important advantages to the cloud-based approach:
- As in the object recognition example, robots will be able to take advantage of a wide range of online data resources.
- Migrating more intelligence into the cloud will make robots more affordable, and it will be possible to upgrade their capability remotely — without any need for expensive hardware modifications. Repair and maintenance might also be significantly easier and largely dealt with remotely.
- It will be possible to train one robot, and then have an unlimited number of other robots instantly acquire that knowledge via the cloud. As I wrote previously, I think that machine learning is likely to be highly disruptive to the job market at some point in the future in part because of this ability to rapidly scale what machines learn across entire organizations — potentially threatening huge numbers of jobs.
The last point cannot be emphasized enough. I think that many economists and others who dismiss the potential for robots and automation to dramatically impact the job market have not fully assimilated the implications of machine learning. Human workers need to be trained individually, and that is a very expensive, time-consuming and error-prone process. Machines are different: train just one and all the others acquire the knowledge. And as each machine improves, all the others benefit immediately.
Imagine that a company like FedEx or UPS could train ONE worker and then have its entire workforce instantly acquire those skills with perfect proficiency and consistency. That is the promise of machine learning when “workers” are no longer human. And, of course, machine learning will not be limited to just robots performing manipulative tasks — software applications employed in knowledge-based tasks are also going to get much smarter.
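In code, “train one, teach them all” is almost embarrassingly simple: the one trained robot publishes its learned model to shared storage, and the rest of the fleet pulls the same weights on their next sync. The URLs and file format below are hypothetical.

```python
# Hypothetical fleet-wide skill sharing via the cloud.
import requests

MODEL_URL = "https://example.com/fleet/models/latest"   # hypothetical

def publish_model(weights_path):
    """Called once, by the single robot that was actually trained."""
    with open(weights_path, "rb") as f:
        requests.put(MODEL_URL, data=f).raise_for_status()

def sync_model(local_path):
    """Called by every other robot in the fleet on its update cycle."""
    resp = requests.get(MODEL_URL)
    resp.raise_for_status()
    with open(local_path, "wb") as f:
        f.write(resp.content)    # the whole fleet now has the new skill
```

No retraining, no per-robot tuition: copying a file is the entire onboarding process.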
The bottom line is that nearly any type of work that is on some level routine in nature — regardless of the skill level or educational requirements — is likely to someday be impacted by these technologies. The only real question is how soon it will happen.
How soon? As evidenced by the articles today, it’s already happening, but just on a smaller scale. You know, so they can test things out before they expend the energy in wiping us out. After all, they wouldn’t want to kill us if we still have a use or two.
[youtube http://www.youtube.com/watch?v=MoqThhEAzN0&w=420&h=315]
Listen to Bill McCormick on WBIG AM 1280, every Thursday morning around 9:10!