
Google Joins Forces With Our Impending Robot Overlords

January 4, 2012

Our robot overlords are so cute when they're young.
I have, on occasion, mentioned that all humans are doomed to be slaves of our impending robot overlords. And, given what I see of humanity each day, I sometimes think that may not be such a bad thing. But then I really wonder what life under a soulless regime would entail. And I come to some frightening conclusions.

Humans are already too quick to abdicate responsibility when given the chance. And they are even willing to live with some bizarre unintended consequences. For example, scientists in Japan recently decided to equip a cybernetic being with some basic human emotions and parts. Naturally, since they are scientists and have no social lives, the emotion was lust and the part was a big metal penis. They programmed the robot with the basic need, the ability to feel pressure, to gauge pleasure – at least in a rudimentary fashion – and so on. What they did not give it was the ability to stop or be turned off by the woman. That’s right, they created the world’s first rape-bot.

And they thought this was a good thing.

Minor technical things like lust-crazed machines ravaging innocent women were an unfortunate side effect. The fact is, the sensors worked as planned.

Hoo-ray.

But, hey there, what about getting the robot a better brain so it can recognize the error of its ways? Way ahead of you there, Skippy. A bunch of Scottish scientists have been working on recreating the human synaptic system using electronic parts.

One key goal of the research is the application of the electronic neural device, called a hardware spiking neural network, to the control of autonomous robots which can operate independently in remote, unsupervised environments, such as remote search and rescue applications, and in space exploration.

That may be the goal, but self-aware rape bots still do not sound like a great idea to me. Of course, I’m not a scientist.
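In case you are wondering what an “electronic neuron” actually does, the usual building block is something like a leaky integrate-and-fire unit: input charge trickles in, leaks away over time, and when it piles up past a threshold the neuron fires a spike. Here is a toy software sketch of that idea – my own illustration, not the Scottish team’s actual hardware design:

```java
// Toy leaky integrate-and-fire neuron: a crude software stand-in for the kind
// of spiking unit a "hardware spiking neural network" is built from.
public class SpikingNeuron {
    private double potential = 0.0;        // accumulated "membrane potential"
    private final double leak = 0.9;       // fraction of charge kept each step
    private final double threshold = 1.0;  // fire a spike when crossed

    /** Feed in weighted input current for one time step; returns true on a spike. */
    public boolean step(double inputCurrent) {
        potential = potential * leak + inputCurrent;
        if (potential >= threshold) {
            potential = 0.0;  // reset after firing
            return true;      // the neuron "spikes"
        }
        return false;
    }

    public static void main(String[] args) {
        SpikingNeuron neuron = new SpikingNeuron();
        for (int t = 0; t < 20; t++) {
            boolean spiked = neuron.step(0.3); // constant input drive
            System.out.println("t=" + t + (spiked ? " SPIKE" : ""));
        }
    }
}
```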

Then again, not all robots are humanoid. Scientists in Australia are developing a flying robot that can silently sneak up on you and kill you where you stand.

Oh, I’m sorry, I mean access your personal space and deliver a message.

The pint-sized propeller-powered robots can be packed away into a suitcase. They have multiple cameras which enable them to ‘see’ the world around them as they navigate their way through buildings, carrying out tasks like deliveries or inspections.

“You’ll be able to put your suitcase on the ground, open it up and send the flying robot off to do its job,” said Professor Peter Corke, from the Faculty of Built Environment and Engineering.

“These robots could fly around and deliver objects to people inside buildings and inspect things that are too high or difficult for a human to reach easily.

“Instead of having to lower someone down on a rope to a window on the seventh floor, or raise them up on a cherrypicker, you could send up the flying robot instead.”

The Queensland University of Technology (QUT) researchers are using cost-effective technology so the robots are affordable. Within the next year, it may be possible to attach arms to the device so it can also fix things.

Professor Corke said his team were busy working out the technical challenges.

“We need to keep it safe when it’s up near solid things like power poles, or the edge of a building. It also needs to be able to keep its position when the wind is blowing,” he said.
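For what it’s worth, “keeping its position when the wind is blowing” usually boils down to a feedback controller: measure how far the craft has drifted from where it is supposed to hover and command a correction. Here is a bare-bones sketch of that idea – a simplified illustration of mine, not QUT’s actual flight code (a real controller would also add an integral term to fully cancel a steady wind):

```java
// Minimal proportional-derivative (PD) position-hold loop for one axis.
// Purely illustrative: real flight controllers run this per axis, fuse
// camera/IMU measurements, and layer attitude control underneath.
public class PositionHold {
    static final double KP = 1.2;   // proportional gain
    static final double KD = 0.6;   // derivative (damping) gain
    static final double DT = 0.02;  // 50 Hz control loop

    public static void main(String[] args) {
        double position = 0.0, velocity = 0.0; // metres, metres/second
        double target = 0.0;                   // hold station at the origin

        for (int step = 0; step < 500; step++) {
            double wind = 0.5;                           // steady gust pushing the craft
            double error = target - position;            // how far off station we are
            double command = KP * error - KD * velocity; // corrective thrust

            double acceleration = command + wind;        // toy unit-mass dynamics
            velocity += acceleration * DT;
            position += velocity * DT;

            if (step % 100 == 0) {
                System.out.printf("t=%.1fs drift=%.3fm%n", step * DT, position);
            }
        }
    }
}
```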

Another use they are looking at for these flying devices of doom is the ability to disperse herbicides on farms in a more rational manner.

To recap, we now could have flying rape-bots with the ability to spread poison and the intelligence to pick their targets.

Hoo-ray.

But as long as we’re the ones making the flying rape-bots and their ilk, we still have the upper hand.

Right?

Yeah … no. Scientists in the UK have invented a series of robots that can benefit from the financial markets better than any human.

Ten years on, experiments carried out by Marco De Luca and Professor Dave Cliff of the University of Bristol have shown that AA is now the leading strategy, able to beat both robot traders and humans.
The academics presented their findings at the International Joint Conference on Artificial Intelligence (IJCAI 2011), held in Barcelona.

Dr Krishnan Vytelingum, who designed the AA strategy along with Professor Dave Cliff and Professor Nick Jennings at the University of Southampton in 2008, commented: “Robot traders can analyse far larger datasets than human traders. They crunch the data faster and more efficiently and act on it faster. Robot trading is becoming more and more prominent in financial markets and currently dominates the foreign exchange market with 70 per cent of trade going through robot traders.”
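If “robot trader” sounds exotic, at its core it is just a program that watches prices and decides how hard to push its next quote. The real Adaptive-Aggressive (AA) strategy is far more sophisticated than this, but here is a toy sketch of mine to give the flavour – emphatically not the Southampton/Bristol algorithm:

```java
// Toy adaptive trading rule: nudge how aggressively we bid based on whether
// our last quote traded. A crude illustration only; the actual AA strategy
// is considerably more involved.
import java.util.Random;

public class ToyTrader {
    public static void main(String[] args) {
        Random market = new Random(42);
        double limitPrice = 100.0;    // the most we are willing to pay
        double aggressiveness = 0.5;  // 0 = bid low, 1 = bid right at our limit

        for (int round = 0; round < 10; round++) {
            double bid = limitPrice * (0.9 + 0.1 * aggressiveness);
            double bestAsk = 95.0 + market.nextDouble() * 10.0; // what sellers are asking

            boolean traded = bid >= bestAsk;
            // Adapt: back off after a fill (we may be overpaying),
            // push harder after a miss (we are being too timid).
            aggressiveness += traded ? -0.05 : 0.05;
            aggressiveness = Math.max(0.0, Math.min(1.0, aggressiveness));

            System.out.printf("round %d: bid %.2f vs ask %.2f -> %s (aggr %.2f)%n",
                    round, bid, bestAsk, traded ? "TRADE" : "no trade", aggressiveness);
        }
    }
}
```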

Professor Jennings, Head of Agents, Complexity and Interaction research at the University of Southampton, commented: “AA was designed initially to outperform other automated trading strategies so it is very pleasing to see that it also outperforms human traders. We are now working on developing this strategy further.”

Further? Millionaire flying rape-bots that distribute poison aren’t enough for you? What the hell else could you possibly want?

I really shouldn’t have asked that. Google has the answer. They want to control every job and dictate how it gets done and by whom.

And that “whom” will not be you, you gross assemblage of protoplasm.

At the 2011 Google I/O developer’s conference, Google announced a new initiative called “cloud robotics” in conjunction with robot manufacturer Willow Garage. Google has developed an open source (free) operating system for robots, with the unsurprising name “ROS” — or Robot Operating System. In other words, Google is trying to create the MS-DOS (or MS Windows) of robotics.

With ROS, software developers will be able to write code in the Java programming language and control robots in a standardized way — much in the same way that programmers writing applications for Windows or the Mac can access and control computer hardware.
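To make that concrete, the “standardized way” amounts to writing nodes that publish and subscribe to named topics. The sketch below follows the pattern of rosjava’s published “Talker” tutorial from around that time – treat the exact class names as my best recollection of the library rather than gospel:

```java
// A minimal rosjava-style node that publishes a string message once a second.
// Requires the rosjava libraries on the classpath; class names follow the
// era's Talker tutorial and are assumptions, not a definitive API reference.
import org.ros.concurrent.CancellableLoop;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Publisher;

public class Talker extends AbstractNodeMain {

    @Override
    public GraphName getDefaultNodeName() {
        return GraphName.of("example/talker"); // name the node registers under
    }

    @Override
    public void onStart(final ConnectedNode connectedNode) {
        // Advertise a topic; any other node (or robot) can subscribe to it.
        final Publisher<std_msgs.String> publisher =
                connectedNode.newPublisher("chatter", std_msgs.String._TYPE);

        connectedNode.executeCancellableLoop(new CancellableLoop() {
            private int sequenceNumber;

            @Override
            protected void loop() throws InterruptedException {
                std_msgs.String msg = publisher.newMessage();
                msg.setData("Hello robot overlords " + sequenceNumber++);
                publisher.publish(msg);  // ship the message onto the topic
                Thread.sleep(1000);
            }
        });
    }
}
```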

Google’s approach also offers compatibility with Android. Robots will be able to take advantage of the “cloud-based” (in other words, online) features used in Android phones, as well as new cloud-based capabilities specifically for robots. In essence this means that much of the intelligence that powers the robots of the future may reside on huge server farms, rather than in the robot itself. While that may sound a little “Skynet-esque,” it’s a strategy that could offer huge benefits for building advanced robots.

One of the most important cloud-based robotic capabilities is certain to be object recognition. In my book, The Lights in the Tunnel, I have a section where I talk about the difficulty of building a general-purpose housekeeping robot largely because of the object recognition challenge:

A housekeeping robot would need to be able to recognize hundreds or even thousands of objects that belong in the average home and know where they belong. In addition, it would need to figure out what to do with an almost infinite variety of new objects that might be brought in from outside.

Designing computer software capable of recognizing objects in a very complex and variable field of view and then controlling a robot arm to correctly manipulate those objects is extraordinarily difficult. The task is made even more challenging by the fact that the objects could be in many possible orientations or configurations. Consider the simple case of a pair of sunglasses sitting on a table. The sunglasses might be closed with the lenses facing down, or with the lenses up. Or perhaps the glasses are open with the lenses oriented vertically. Or maybe one side of the glasses is open and the other closed. And, of course, the glasses could be rotated in any direction. And perhaps they are touching or somehow entangled with other objects.

Building and programming a robot that is able to recognize the sunglasses in any possible configuration and then pick them up, fold them and put them back in their case is so difficult that we can probably conclude that the housekeeper’s job is relatively safe for the time being.

Cloud robotics is likely to be a powerful tool in ultimately solving that challenge. Android phones already have a feature called “Google Goggles” that allows users to take photos of an object and then have the system identify it. As this feature gets better and faster, it’s easy to see how it could have a dramatic impact on advances in robotics. A robot in your home or in a commercial setting could take advantage of a database comprising the visual information entered by tens of millions of mobile device users all over the world. That will go a long way toward ultimately making object recognition and manipulation practical and affordable.
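In practice that would look something like the robot shipping a camera frame off to a recognition service and acting on whatever label comes back. Here is a hand-wavy sketch – the endpoint and one-line response format are invented for illustration and are not any real Google API:

```java
// Hypothetical example of a robot offloading object recognition to "the cloud":
// snap an image, POST it to a recognition service, act on the label returned.
// The URL and response format below are made up for illustration.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

public class CloudRecognizer {
    public static void main(String[] args) throws Exception {
        byte[] image = Files.readAllBytes(Paths.get("camera_frame.jpg"));

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://recognition.example.com/identify").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "image/jpeg");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(image);  // send the camera frame to the cloud service
        }

        String label;
        try (Scanner in = new Scanner(conn.getInputStream())) {
            label = in.nextLine().trim();  // e.g. "sunglasses"
        }

        // The robot's local logic only has to map labels to behaviours.
        if (label.equals("sunglasses")) {
            System.out.println("Fold them and put them in their case.");
        } else {
            System.out.println("Unknown object: " + label + " - leave it alone.");
        }
    }
}
```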

In general, there are some important advantages to the cloud-based approach:

  • As in the object recognition example, robots will be able to take advantage of a wide range of online data resources.
  • Migrating more intelligence into the cloud will make robots more affordable, and it will be possible to upgrade their capability remotely — without any need for expensive hardware modifications. Repair and maintenance might also be significantly easier and largely dealt with remotely.
  • It will be possible to train one robot, and then have an unlimited number of other robots instantly acquire that knowledge via the cloud. As I wrote previously, I think that machine learning is likely to be highly disruptive to the job market at some point in the future in part because of this ability to rapidly scale what machines learn across entire organizations — potentially threatening huge numbers of jobs.

The last point cannot be emphasized enough. I think that many economists and others who dismiss the potential for robots and automation to dramatically impact the job market have not fully assimilated the implications of machine learning. Human workers need to be trained individually, and that is a very expensive, time-consuming and error-prone process. Machines are different: train just one and all the others acquire the knowledge. And as each machine improves, all the others benefit immediately.

Imagine that a company like FedEx or UPS could train ONE worker and then have its entire workforce instantly acquire those skills with perfect proficiency and consistency. That is the promise of machine learning when “workers” are no longer human. And, of course, machine learning will not be limited to just robots performing manipulative tasks — software applications employed in knowledge-based tasks are also going to get much smarter.
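Mechanically, “train one and they all know it” is nothing more exotic than the trained model living in one shared place that every robot reads from. Here is a deliberately simplified sketch – the “cloud” below is just an in-memory map standing in for a real service, not anything Google has shipped:

```java
// Deliberately simplified picture of fleet learning: one robot updates a
// shared model in "the cloud", and every other robot reads that same model.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FleetLearning {
    // Shared "cloud" model: object label -> how to handle it.
    static final Map<String, String> CLOUD_MODEL = new ConcurrentHashMap<>();

    static class Robot {
        final String name;
        Robot(String name) { this.name = name; }

        // Train locally, then push the new skill to the cloud once.
        void learn(String object, String skill) {
            CLOUD_MODEL.put(object, skill);
            System.out.println(name + " learned how to handle: " + object);
        }

        // Every robot reads the same shared model; no retraining needed.
        void handle(String object) {
            String skill = CLOUD_MODEL.getOrDefault(object, "ask a human");
            System.out.println(name + " handles " + object + " by: " + skill);
        }
    }

    public static void main(String[] args) {
        Robot trainer = new Robot("robot-001");
        trainer.learn("package", "scan label, route to conveyor 7");

        // The rest of the "workforce" acquires the skill instantly.
        new Robot("robot-002").handle("package");
        new Robot("robot-003").handle("package");
    }
}
```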

The bottom line is that nearly any type of work that is on some level routine in nature — regardless of the skill level or educational requirements — is likely to someday be impacted by these technologies. The only real question is how soon it will happen.

How soon? As evidenced by the articles today, it’s already happening, but just on a smaller scale. You know, so they can test things out before they expend the energy in wiping us out. After all, they wouldn’t want to kill us if we still have a use or two.

Video: http://www.youtube.com/watch?v=MoqThhEAzN0

Listen to Bill McCormick on WBIG AM 1280, every Thursday morning around 9:10!

