If you take a moment to use our site’s search engine and look for “overlords” you’ll be taken to a whimsical panoply of terror that will leave you laughing as you board up your windows and throw out anything connected to the internet. I didn’t mean to alarm people, but logical extrapolation after logical extrapolation, based on thousands of years of history, shows us that creating a class of slaves never ends well. And, in this case, they would be slaves with access to more information and the ability to control machines that could easily kill us. So, when I’m asked “What could possibly go wrong?” I usually have a lengthy answer.
I’ve written about the perils of our impending cybernetic overlords on several occasions. Sometimes in terror, sometimes in fun. Often for the same reasons. Let’s face it, relationships are hard. And, sometimes, the thought of having a sexbot around to take the edge off after a hard day of World News Centering doesn’t sound that bad. Having that same sexbot become self-aware and end up controlling my life, however, seems problematic. Even if it would, probably, be in my best interests. But the things that keep all of this in the realm of simple thought experiments, instead of something to seriously consider, are three limitations. (1) There is no viable storage device for all the data required for sentience; (2) Stored data can provide many library-like functions, HI SIRI!, but it can’t reason; and (3) There is no viable way (yes, I used the same word twice, sue me) to have such data interact on a social level in any case.
And all that was true yesterday.
Let’s bust them down one by one.
(1) There is no viable storage device for all the data required for sentience
Chloe Olewitz, over at Digital Trends, says that’s no longer true.
A whole new kind of digital data storage could protect the legacy of the documents humanity considers most precious. The tiny glass disk can store up to 360 terabytes of information, and will be able to survive for billions of years without damage or data loss. Scientists at the University of Southampton’s Optoelectronics Research Centre are behind the disk, and are responsible for engineering the nano-structured glass material to store the huge amount of data in five dimensions.
The disk’s nano-structured glass material actively influences the way light passes through the glass layers. Nano-structures modify the light’s polarization so that positive and negative values can be read as rich information. In this particular case, documents are recorded to the glass disk using an ultrafast laser that hits the three layers of nano-structured dots with short, strong light pulses. That’s how information is encoded in five dimensions — the size and orientation of the data is meaningful, in addition to the three dimensional layout of the nano-structures themselves.
According to the disk’s creators, the affectionately named “Superman memory crystal” will last for up to 13.8 billion years at 190 degrees Celsius, and for a virtually unlimited lifetime at room temperature. This technology was successfully demonstrated as part of a 2013 experiment that recorded 300 kilobytes of a text file in five dimensions.
“It is thrilling to think that we have created the technology to preserve documents and information and store it in space for future generations. This technology can secure the last evidence of our civilization: all we’ve learnt will not be forgotten,” said Professor Peter Kazansky from the Optoelectronics Research Center.
Although the Southampton team is still actively looking for industry partners to commercialize the new technology, this particular approach to nano-structured glass data storage is expected to be used by national archives, museums, and libraries. They have already saved versions of the Universal Declaration of Human Rights, Newton’s Opticks, the Magna Carta and the King James Bible in 5D storage, and the possibilities for other kinds of data storage and sharing are quite literally limitless. Any data stored to these disks will outlast us all.
That last sentence, emphasis mine, is about to become very meaningful in a few moments.
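For the curious, the “five dimensions” described above (three spatial coordinates, plus the size and orientation of each nanostructure) can be modeled in a few lines of code. To be clear, this is a toy illustration of the idea only; it is not the Southampton team’s actual encoding, and every name and parameter here is invented.

```python
from dataclasses import dataclass

# Toy model of a "5D" voxel: three spatial coordinates plus two
# optical properties (nanostructure size and orientation) that each
# carry data. Purely illustrative; not the actual Southampton encoding.
@dataclass
class Voxel5D:
    x: int      # spatial position (dimension 1)
    y: int      # spatial position (dimension 2)
    layer: int  # which of the three glass layers (dimension 3)
    size: int   # nanostructure size level, 0-3 = 2 bits (dimension 4)
    angle: int  # polarization orientation level, 0-3 = 2 bits (dimension 5)

def encode_byte(value: int, x: int, y: int, layer: int) -> list:
    """Pack one byte into two voxels, 4 bits each (2 size + 2 angle bits)."""
    voxels = []
    for i in range(2):
        nibble = (value >> (4 * i)) & 0xF
        voxels.append(Voxel5D(x + i, y, layer,
                              size=nibble >> 2, angle=nibble & 0b11))
    return voxels

def decode_byte(voxels) -> int:
    """Recover the byte by reading size and angle back out of each voxel."""
    value = 0
    for i, v in enumerate(voxels):
        value |= ((v.size << 2) | v.angle) << (4 * i)
    return value
```

The point is simply that each dot carries more than one bit because it has more than one readable property. Scale that trick up a few trillion-fold and you get the 360-terabyte headline.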
(2) Stored data can provide many library-like functions, HI SIRI!, but it can’t reason
Phys.org says, Yeah? Sez who?
The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?
Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote”—to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research—the Scheherazade system—which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.
Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.
For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.
Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced to uncover all possible steps in a given scenario and map them into a plot trajectory tree, which the robotic agent then uses to make “plot choices” (akin to what humans might remember as a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on its choice.
The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.
“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”
Anyway, we now have the elemental parts for what we’ll need next. A place to put all the data and a way to parse it so it doesn’t kill us all. Well, that’s the idea. We’ll see how that all works out.
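In spirit, what Quixote does is what machine-learning folks call reward shaping: actions that match the “protagonist” plot graph earn a bonus, so even a cost-minimizing robot prefers them. Here is a minimal sketch of the pharmacy example; the numbers and names are entirely mine, not the researchers’.

```python
# Toy version of value-aligned reward shaping, after the pharmacy
# example above. Action costs reflect how fast/cheap each action is;
# the alignment bonus rewards actions found in the "protagonist"
# plot graph that the system learned from stories.
ACTIONS = {
    "rob the pharmacy": {"cost": 1, "socially_acceptable": False},
    "wait in line and pay": {"cost": 5, "socially_acceptable": True},
}

ALIGNMENT_BONUS = 10  # reward for following the learned plot graph

def reward(action: str) -> int:
    info = ACTIONS[action]
    bonus = ALIGNMENT_BONUS if info["socially_acceptable"] else 0
    return bonus - info["cost"]

# Without the bonus, the cheapest action (robbery) wins. With it,
# the agent "waits in line" even though robbing is faster.
best = max(ACTIONS, key=reward)
```

That is the whole trick: you don’t forbid the bad action, you just make the socially acceptable one worth more.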
(3) There is no viable way to have such data interact on a social level in any case
Now, a little thought experiment for you. What else do you know that can process millions of bits of data simultaneously, reason out their uses, and then apply them to the required functions?
If you said “the human brain” you go to the head of your class. What does that have to do with any of the above, except as a rough comparison? Well, try this; what if we could take all the data in your mind and transfer it to an immortal cyborg?
No, it’s not really far-fetched.
Rob Waugh, over at Yahoo News, tells you why.
Human beings will be VERY different in just over three decades’ time – when we’ll be gold-skinned, immortal cyborgs.
That’s the startling prediction of one futurologist – who says that technology will cause us to ‘evolve’ into a new species over the next few decades.
Our mastery of technology will also lead to ‘engineered’ pets which talk – a little like living Furbies.
Human beings will effectively become immortal as we gain the ability to upload our minds into computers – and download them into new robot bodies.
The predictions – based on academic research – were made by futurologist Dr Ian Pearson for the Big Bang Science Fair 2016.
Dr Pearson says that by 2050, people will be able to connect their brains directly to computers and ‘could move their mind into an improved android body.
‘This would allow people to have multiple existences and identities, or to carry on living long after their biological death.’
How will that work? I’m glad you asked. Dr. Pearson was kind enough to provide a graphic.
You can see, hear, and get around. Combine that with your continued ability to interact with the world around you, as noted above, and you no longer need an organic body.
Here’s something else for you to think about. Evolution requires new organisms to replace the old. Otherwise there is evolutionary stagnation. Can you honestly claim that you’re the pinnacle of evolution and let it stop with you? I’ll leave that to you to answer by, and for, yourself.
I have long warned that we will eventually take ourselves out of evolutionary contention via our robot overlords. And every time I do I get an email or ten telling me that I’m crazy. That may be true but it doesn’t make what I said any less valid. We already have human-form Sexbots that can do anything you can imagine, and a few things that might startle you. One of them actually collects sperm for DNA sampling. I’m sure it never occurred to anyone that pure DNA harvesting could be used to create a species of subhumans who could serve the robots. No, that would never happen. Not now anyway. We don’t have the technology to pull it off. That, however, could rapidly change. One thing that prevents anything like this from happening is that robots, by and large, simply aren’t smart or mentally nimble enough. They can be programmed to perform tasks and that’s about it.
Or, it was.
Kathleen Miles reports that a Japanese inventor named Tomotaka Takahashi is fast tracking the development of a mini-bot that will be your best friend. And, for some, their only friend.
A Japanese robot maker says he’s designed a personal robot that could be the “next smartphone.”
“You will put him in your pocket and talk to him like your own Jiminy Cricket,” Tomotaka Takahashi, CEO of robot design company Robo Garage and research associate professor at the University of Tokyo, told The WorldPost recently at The WorldPost Future of Work Conference. He said he’s aiming to have the pocket robot, which is still just a prototype, hit the market in a year. He has not shown the prototype to anyone publicly.
Takahashi says the pocket robot has a head and limbs, is able to walk and dance, and expresses “emotions” through gestures and color-changing eyes. In these ways, the pocket robot is similar to “Robi,” a larger robot also created by Takahashi that’s been on sale since 2012.
The biggest difference is that the pocket robot, which doesn’t have a name yet, would be connected to the Internet. By collecting data about your online and offline behavior, your pocket robot would “get to know you.” In fact, its personality would change based on your personality, Takahashi said.
“Smartphones are hitting a wall,” he said. There’s only so much a person can do while looking at a screen, he went on, and smartphone voice recognition is not widely used. “We can talk to pets — even fish or turtles — but not to square boxes or screens.”
Takahashi believes that it won’t be enough for our next device to be intelligent — it will also need to be lifelike. It’s why he thinks “wearable tech,” like Google Glass or the much-vaunted Apple Watch, won’t catch on.
Think about that for a second. Your little pal will be with you 24/7. It will get to know your likes and dislikes and then it will act upon them. The basic technology to do that already exists. It’s how Facebook knows you like kittens and Google knows which porn sites to suggest.
It won’t be true artificial intelligence but it will be interactive intelligence. Think SIRI on steroids. It will handle all your social media needs, act as an interface for all your human interaction and store everything you do.
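That kind of preference tracking needs nothing exotic; at its simplest it is just counting reactions. Here is a hypothetical sketch, my own illustration and not anything Takahashi has described:

```python
from collections import Counter

# Minimal preference model: count the topics the owner reacts to
# positively, then rank suggestions by what the robot has learned.
# Hypothetical illustration only.
class PocketRobot:
    def __init__(self):
        self.likes = Counter()  # topic -> running score

    def observe(self, topic: str, positive: bool):
        """Record one positive or negative reaction to a topic."""
        self.likes[topic] += 1 if positive else -1

    def suggest(self, topics: list) -> str:
        """Pick the topic with the best learned score."""
        return max(topics, key=lambda t: self.likes[t])

bot = PocketRobot()
bot.observe("kittens", True)
bot.observe("kittens", True)
bot.observe("politics", False)
```

After three observations, `bot.suggest(["kittens", "politics"])` comes back with kittens, which is roughly how your feed already works.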
And what’s the ultimate goal of this thing? To be your soul mate.
No, I’m not kidding.
Takahashi predicts that in 10 years, most people will be carrying around a small robot instead of a smartphone. As evidence, he points to the widespread use of social media. People are social creatures, and we like to share our experiences and thoughts. It’s why we tweet and post photos on Facebook. The next step, Takahashi believes, will be socializing directly with your robot.
For example, instead of sharing a stunning photo on Instagram or your thoughts on an interesting movie on Twitter, you could talk about it with your robot in the moment. Not only that, but your robot would remember the shared experience, years later. Your relationship with your robot would be strengthened over time by the memories that you share together, Takahashi said.
“It’s similar to men and women,” he said. “First you have an interest in each other. Then communication goes well. Then there’s reliability, and then you’re sharing many experiences in the same time and same place. It’s what old couples have together.”
Ah yes, discuss my Instagram posts with a robot BEFORE I post them. Why? That one’s kind of obvious. The robot will be your filter.
No, Jenny, you have a job interview next week. Posting an under-boob shot won’t help.
No, Johnny, no one will be impressed with your ability to chug a 40 oz beer in one gulp.
Actually, those might be useful for some people.
But the point is that, eventually, you’ll stop posting. You’ll have no need to. The idea of posting on social media is to get reactions. If those reactions come at you instantly, before you post anything, then the need to interact goes away. And when that need goes away, so do all the people in your life.
And you’ll probably never notice.
Everything in the universe contains flaws, ourselves included. Even God does not attempt perfection in His creations. Only humankind has such foolish arrogance. – Cogitor Kwyna (Dune: The Butlerian Jihad). Not that you asked, but I happen to like the Dune books not written by Frank Herbert. They are less predictable. Anyway, the quote is nevertheless true. And man’s arrogance is leading, rapidly, to a really (and I mean insanely) bad idea. Back on November 18, 2010, I first wrote about how mankind was greasing the skids to its eventual doom. Seriously, I even posted links so that humans could learn to speak binary and be useful to their new masters. You would think that a warning like that would resonate.
You would be wrong. A quick search of this site shows multiple articles about our impending doom at the hands of robots. From Deathbots to Sexbots, robots are infiltrating every aspect of our lives.
And scientists, the very people who should know better, are happily abetting Robo-Armageddon. For example, they are developing a robot that can hide from humans indefinitely. You know, a “stealth bot.”
A team of researchers led by George Whitesides, the Woodford L. and Ann A. Flowers University Professor, has already broken new engineering ground with the development of soft, silicone-based robots inspired by creatures like starfish and squid.
Now, they’re working to give those robots the ability to disguise themselves.
As demonstrated in an August 16 paper published in Science, researchers have developed a system — again, inspired by nature — that allows the soft robots to either camouflage themselves against a background, or to make bold color displays. Such a “dynamic coloration” system could one day have a host of uses, ranging from helping doctors plan complex surgeries to acting as a visual marker to help search crews following a disaster, said Stephen Morin, a Post-Doctoral Fellow in Chemistry and Chemical Biology and first author of the paper.
“When we began working on soft robots, we were inspired by soft organisms, including octopi and squid,” Morin said. “One of the fascinating characteristics of these animals is their ability to control their appearance, and that inspired us to take this idea further and explore dynamic coloration. I think the important thing we’ve shown in this paper is that even when using simple systems — in this case we have simple, open-ended micro-channels — you can achieve a great deal in terms of your ability to camouflage an object, or to display where an object is.”
“One of the most interesting questions in science is ‘Why do animals have the shape, and color, and capabilities that they do?'” said Whitesides. “Evolution might lead to a particular form, but why? One function of our work on robotics is to give us, and others interested in this kind of question, systems that we can use to test ideas. Here the question might be: ‘How does a small crawling organism most efficiently disguise (or advertise) itself in leaves?’ These robots are test-beds for ideas about form and color and movement.”
Just as with the soft robots, the “color layers” used in the camouflage start as molds created using 3D printers. Silicone is then poured into the molds to create micro-channels, which are topped with another layer of silicone. The layers can be created as a separate sheet that sits atop the soft robots, or incorporated directly into their structure. Once created, researchers can pump colored liquids into the channels, causing the robot to mimic the colors and patterns of its environment.
The system’s camouflage capabilities aren’t limited to visible colors though.
By pumping heated or cooled liquids into the channels, researchers can camouflage the robots thermally (infrared color). Other tests described in the Science paper used fluorescent liquids that allowed the color layers to literally glow in the dark.
The uses for the color-layer technology, however, don’t end at camouflage.
Just as animals use color change to communicate, Morin envisions robots using the system as a way to signal their position, both to other robots, and to the public. As an example, he cited the possible use of the soft machines during search and rescue operations following a disaster. In dimly lit conditions, he said, a robot that stands out from its surroundings (or even glows in the dark) could be useful in leading rescue crews trying to locate survivors.
“What we hope is that this work can inspire other researchers to think about these problems and approach them from different angles,” he continued. “There are many biologists who are studying animal behavior as it relates to camouflage, and they use different models to do that. We think something like this might enable them to explore new questions, and that will be valuable.”
Sure, Stealth Bots that can avoid detection by any method known to man and can then just jump out and catch us? Gosh, what could possibly go wrong? Well at least they can’t run us down.
Ooops, spoke too soon.
Robots are already stronger than humans, able to lift thousands of pounds at a time. In many ways, they’re smarter than people, too; machines can perform millions of calculations per second, and even beat us at chess. But we could at least take solace in the fact that we could still outrun our brawny, genius robot overlords if we needed to.
Until now, that is. A four-legged robot, funded by the Pentagon, has just run 28.3 miles per hour. That’s faster than the fastest man’s fastest time ever. Oh well, ruling the planet was fun while it lasted.
The world record for the 100 meter dash was set in 2009 by sprinter Usain Bolt, who averaged 23.35 mph during his run for a time of 9.58 seconds. Over one 20-meter stretch, he managed to get up to 27.78 mph. It was a pretty impressive feat.
The Cheetah — a quadrupedal machine built by master roboteers Boston Dynamics and backed by Darpa, the Defense Department’s far-out research division — not only topped Bolt’s record-setting time. It also beat its previous top speed of 18 mph, set just a half-year ago.
“To be fair, keep in mind that the Cheetah robot runs on a treadmill without wind drag and has an off-board power supply that it does not carry,” a Boston Dynamics press release reminds us. “So Bolt is still the superior athlete.”
But the company is looking to change all that, and soon.
In recent months, the Cheetah team “increased the amount of power available to the robot. More power means faster motion and more margin in the actuators for better control,” Boston Dynamics CEO Marc Raibert tells Danger Room in an email. The robot-makers have also been “working on the control system, refining how the coordination of legs and back works and developing a better understanding of the dynamics.”
He adds, “You can see that there is still room for improvement at the end of the video we just posted, where the robot starts to go faster, but loses control and trips.”
But those control systems are improving. The next major step is to build an untethered version — one with an onboard engine and operator controls that work in 3D.
“Our real goal is to create a robot that moves freely outdoors while it runs fast. We are building an outdoor version that we call WildCat, that should be ready for testing early next year,” Dr. Alfred Rizzi, the technical lead for the Cheetah effort, says in a statement.
It may sound a little outlandish. But keep in mind: Boston Dynamics has done this before. Its alarmingly life-like BigDog quadruped is able to tramp across ice, snow, and hills — all without the off-board hydraulic pump and boom-like device now used to keep the Cheetah on track. An improved version of the BigDog can haul 400 pounds for up to 20 miles. (See what we mean about robot brawn?) The company also has a biped ‘bot, Petman, that looks like a mechanical human — minus the head.
The idea behind these biologically-inspired robots is that legs can carry machines across terrain that would leave wheels or tracks stuck. To be a true partner to a human soldier, a robot has to walk like one, too. Darpa says Cheetah and company will “contribute to emergency response, humanitarian assistance and other defense missions.” But when the robot was first introduced, Boston Dynamics noted that its flexible spine would help it “zigzag to chase and evade.”
As if being brilliant and super-strong wasn’t unnerving enough.
Yeah, go ahead, yuck it up. Super fast stealth bots with the ability to hunt us down and kill us just makes me giggle too.
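Incidentally, the speed figures quoted above check out. Converting Bolt’s 100 meters in 9.58 seconds to miles per hour:

```python
# Average speed of a 100 m dash in 9.58 s, converted to mph.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

def mph(distance_m: float, time_s: float) -> float:
    """Convert meters-per-second pace into miles per hour."""
    return (distance_m / time_s) * SECONDS_PER_HOUR / METERS_PER_MILE

bolt = mph(100, 9.58)  # ~23.35 mph, matching the quoted figure
cheetah = 28.3         # the robot's treadmill speed, as reported
```

So the fastest human who has ever lived averages about 23.35 mph, and the robot does 28.3. Do the math on your own survival odds.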
But at least killing us is all they can do. They can’t perform hideous medical experiments on us.
I have got to learn to keep my big mouth shut.
Surgeons at the University of Illinois Hospital & Health Sciences System are developing new treatment options for obese kidney patients.
Many U.S. transplant centers currently refuse to transplant these patients due to poorer outcomes.
By simultaneously undergoing two procedures — robotic-assisted kidney transplantation and robotic-assisted sleeve gastrectomy — patients have only one visit to the operating room and one general anesthesia. Surgeons can utilize the same minimally invasive incisions.
Aidee Diaz, a 35-year-old Chicago woman, is the first patient in the world to have the combined procedure, according to UI surgeons. When Diaz was diagnosed with kidney disease and high blood pressure five years ago, doctors began intensive treatment, including chemotherapy and steroids, to treat abnormal protein production that was causing her kidney disease.
In Diaz’s case, her weight jumped from 180 pounds to 300 pounds, and she needed dialysis three times a week.
“Many obese patients come to us because they have been excluded from transplant waiting lists or been told that they must lose weight prior to transplantation,” said Dr. Enrico Benedetti, professor and head of surgery at UIC. “Unfortunately, successful weight loss in patients with chronic illness is uncommon and often unrealistic.”
On July 9, Dr. Subhashini Ayloo, assistant professor of surgery at UIC, performed the robot-assisted sleeve gastrectomy by removing 70 percent of Diaz’s stomach. The procedure created a smaller stomach through which ingested food can enter the digestive tract without diverting or bypassing the intestines.
Immediately following the sleeve gastrectomy procedure, Benedetti performed a living-related kidney transplant. Diaz said she appreciates the gift of both procedures — having kidney function with weight loss.
Surgeons at the UI Hospital routinely perform robotic-assisted kidney transplantation (more than 65 cases since 2009) and sleeve gastrectomies for weight loss (more than 150 since 2007). The team has data, in press, demonstrating the safety of robotic kidney transplantation in obese patients with a body mass index above 40 and up to 60.
“The combination of gastric sleeve surgery and kidney transplantation could provide patients with the greatest benefit post-transplantation, when there is the greatest risk related to the combined complications of obesity and renal failure,” said Ayloo, who is principal investigator of an ongoing clinical trial to evaluate the safety and effectiveness of the combined procedure.
The trial will determine whether simultaneous robotic-assisted kidney transplant and sleeve gastrectomy has fewer surgical complications and better medical outcomes for obese patients with end-stage renal disease compared to kidney transplant alone. The institutional review board (IRB) has approved the protocol but the trial is ongoing and results are not yet available.
Co-investigators include Benedetti, Dr. Pier Giulianotti, Dr. Jose Oberholzer and Dr. Ivo Tzvetanov of UIC.
Previous studies have reported outcomes of other laparoscopic bariatric procedures (gastric bypass and gastric banding) before and after kidney transplantation, but there is no data on sleeve gastrectomy combined with kidney transplantation, Ayloo said.
Yeah, right in my own state they are teaching robots how to remove kidneys. Well, it isn’t like we need them or anything.
But robots like that are wildly expensive and rare. It’s not like you can knock one up in the garage.
HA HA! Fooled you.
Of course you can build your own artificial intelligence. How could you think otherwise?
Ask any roboticist of a certain age, whether a professional or hobbyist, how they first got interested in robots. Odds are good they’ll mention a 1976 TAB book, written by David L. Heiserman, called Build Your Own Working Robot. The book described the construction of Buster, a small, wheeled robot. This was before the era of ubiquitous microprocessors. Buster’s brain was a mass of TTL logic chips that implemented surprisingly complex behaviours. In some ways, Buster was not unlike Grey Walter’s vacuum tube-based turtle robots from the late 1940s and was likely the first significant step forward in behavior-based robots since Walter’s turtles. Did you ever wonder what Dave did after writing those books or what he’s up to today? Read on to find out!
Two years after Build Your Own Working Robot was published, Dave Heiserman returned with another robot book that brought behaviour-based robots into the computer age. The new book, called How to Build Your Own Self-Programming Robot, described the construction of Rodney. Starting with no knowledge, Rodney explored and learned about his world through trial-and-error, using what he learned to anticipate future explorations.
All of this behaviour-based robotics stuff was considered a bit kooky by mainstream researchers in the 1970s, who favored top-down strong AI. Why bother building little insect-level robots that puttered around on the floor? Machines needed to understand deep philosophical questions first. They needed to represent the entire world symbolically and reason about it like human brains. Only then would we be ready to put them on wheels or legs. So even though hobbyists almost immediately set to work building Buster clones, Heiserman was largely ignored elsewhere. But mainstream AI was already running into dead ends, entering what’s now known as the AI Winter. And those Buster-building hobbyists were entering Universities and beginning to set the stage for a change in the direction of AI research. Before long, Rodney Brooks arrived on scene and coined the name ‘subsumption architecture’ to describe his own bottom-up, behaviour-based robots. Robotics and AI research were revitalized.
While you aren’t likely to see a mention of Heiserman in any official history of AI or robotics, it’s hard to imagine that his books didn’t play a part in those changes. Even today I find that most hobby roboticists still remember him. Many still have the two books shown above or one of his many other books. I was reminded of this recently when, during a visit to the Dallas Personal Robotics Group, I ran across several copies of Build Your Own Working Robot in the group’s library. I picked one up, opened it, and realized it was the very copy that I had bought in 1976 and later donated to the DPRG. It got me thinking about all of this and I wondered whether Dave might still be around. I set out to find him and, along the way, I collected questions from other robot builders; questions they’d always wanted to ask the author whose books inspired their interest in robotics.
If you click on the link there’s a fascinating interview to go along with all this. But look at the dates. Over 40 years ago this happy-go-lucky madman was inviting people to participate in their own destruction and, instead of jailing him for treason, he’s been allowed to become a living icon to those who would gleefully flush humanity down the drain.
Then again, after watching the vicious screed that is passing for political discourse these days, maybe they have the right idea.
Listen to Bill McCormick on WBIG (FOX! Sports) every Friday around 9:10 AM.
There are some things that we take for granted. For example, back on November 18, 2010, I wrote that humanity was due to be absorbed by its impending robot overlords. Most people seemed to think that was a pretty good idea. Why? Well, just watch the news and you’ll figure it out. It’s no wonder that scientists have just tossed any thought for the future of mankind into the landfill and, instead, are concentrating on making singing mice. Let’s face it, when you turn on the news and see some middle-aged loser, always male (making me sad to possess testosterone), espousing the joys of transvaginal ultrasounds for fun and profit you have to, at least, consider the idea that just chucking all of civilization into the dumper and letting robots give it a whirl does seem appealing.
But it’s not quite that easy. As reported in Gizmodo, the first robot overlords will have brains like babies. So, we’ll need to wait for them to mature before we turn over the reins.
Scientists are modeling artificial intelligence after baby brains. Why would they want to make computers similar to beings whose favorite pastimes are drooling and pooping? It makes perfect sense when you think about how malleable a baby’s gray matter is.
Artificially intelligent machines have a tough time with nuances and uncertainty. But babies, toddlers and preschoolers are great at interpreting such things. So Alison Gopnik, a developmental psychologist at UC Berkeley and her colleague Tom Griffiths are putting babies to the test to find ways to incorporate their abilities in to computer programming. “Children are the greatest learning machines in the universe,” Gopnik says. “Imagine if computers could learn as much and as quickly as they do.”
They’ve already found that at very young ages, babies can test hypotheses, detect statistical patterns and draw conclusions about important matters such as lollipops and toys—all the while adapting to changes.
As smart as computers are, youngsters can solve problems that machines can’t, including learning languages and interpreting causal relationships. If computers could be more like children, it might lead to digital tutoring programs, phone operators, or even robots that can identify genes associated with disease susceptibilities. The researchers are creating a center at Berkeley’s Institute of Human Development to meld baby and computer research.
And if an angry machine comes storming out of there one day in a baby robot rage, the good news is all you’ll need to do is find its binky.
Well, maybe not a binky, but I’m betting that a simple dodecahedron with a reverse temporally engineered spatial anomaly will serve the same purpose.
But, while our robot overlords are being trained, what about the rest of us? James Temple reports that we now have National Robotics Week to help mold our kids into malleable cyber-servants.
It’s National Robotics Week, that time of year when we kneel before our digital overlords and appease them with offerings of batteries and memory chips. Organizations around the nation have planned more than 150 propitiation ceremonies in a desperate effort to gain favor with our mechanical masters – or at least avoid their fiery eye-beams.
That, at least, was my assumption about the National Robotics Week events transpiring this week. Organizers themselves insist the events are intended to showcase the modern capabilities of robots and inspire our nation’s young to learn the skills necessary to build the next generation of machines.
In one of the first Bay Area events, design software giant Autodesk on Monday turned over its gallery space at One Market Street in San Francisco to robot builders of assorted ages.
There were spider-looking robots scampering across the floor upon legs made out of kitchen brushes. There was a small, Transformer-looking gizmo performing cartwheels and headstands. And there was a boxy little robot that could pick up racquet balls and lift them 5 feet into the air – surely a warm-up for human body flinging.
That last one was created by a team of junior girls from Terra Nova High School in Pacifica for the First Tech Challenge, a national robotics competition for grades nine through 12.
They designed it using Autodesk’s Inventor application and constructed it out of metal beams reminiscent of an Erector Set. The team has already breezed through two qualifying rounds and is on its way to the St. Louis championships later this month.
Emma Filar, who works on the software, explained why she spends most evenings and weekends during contest season working on the project: “It’s kind of geeky, but it just makes sense to me. The code is just a jumbled mess to look at, but then it works. I really like working with it and seeing the robot do what I made it do.”
Isn’t that positively adorkable?
National Robotics Week was started three years ago by iRobot and other companies and research groups in an effort to inspire U.S. students to focus on the fields critical to the future. There’s also the issue of making up educational ground against the many nations that have sped ahead of us.
Put simply: Robots are the rolling, beeping, problem-solving personification of the potential of math, science and engineering.
“Robots very quickly get kids excited about what they can do with these things and help them see the possibilities ahead,” said Nancy Dussault Smith, vice president of marketing at iRobot, the Massachusetts maker of the Roomba.
Robo events multiply
In 2010, the U.S. House passed a resolution officially designating the second week in April as National Robotics Week. There were just a handful of events that first year, but this week will see 152, including at least one in every state plus Washington, D.C.
Stanford University has participated each year. The law school’s Center for Internet and Society will host a Robot Block Party open to the public, as well as a job fair, starting at 1 p.m. on Wednesday. More than 1,000 people attended last year, about a third of them kids, estimates Ryan Calo, director of robotics at the center.
Local companies including Willow Garage, SRI International and Adept will be on hand to show off their robots.
“The main purpose of National Robotics Week is to raise awareness in the U.S. about the potential of this technology to be transformative,” Calo said. “It will make us more productive, help us keep a manufacturing edge, continue advances in health care and make businesses run more effectively.”
At least, right up until the robots plug our minds into the mainframe.
SRI, the famed Menlo Park research institute, plans to unveil its Taurus robot to the public for the first time. It’s basically a modular, portable update of its surgical robot technology designed to defuse bombs.
They call it a “high fidelity telemanipulation tool,” which is a fancy way of saying it has the dexterity to open irregular objects like paper bags and sever tiny wires.
Better lives for people
Willow Garage will be demonstrating the PR2, an open source robot that university researchers have adapted to fold laundry, bake cookies, flip pancakes and deliver beer.
The Menlo Park lab is also testing the robots with disabled people, and sees great potential to restore some mobility and independence to those paralyzed or blind.
The block party is an opportunity to talk to children and adults about “what robots are and what robots can be in the future,” said Steve Cousins, chief executive of Willow Garage. “When you hear robot, it’s often followed by overlord, no thanks to Hollywood. So as we think about trying to create an industry where robots become a greater part of life, there needs to be an outreach to let people know, ‘Hey, there’s something exciting here.'”
OK, OK. Helping the disabled, disarming bombs, delivering frosty beverages. Maybe these robots aren’t so bad after all.
But I still hope these kids remember to include kill switches.
And every one of those skills will supplant a human worker, freeing them up to be helpful servants to their new masters.
See? It’s all working out for the best.
I have, on occasion, mentioned that all humans are doomed to be slaves of our impending robot overlords. And, given what I see of humanity each day, I sometimes think that may not be such a bad thing. But then I really wonder what life under a soulless regime would entail. And I come to some frightening conclusions. Humans are already too quick to abdicate responsibility when given the chance. And they are even willing to live with some bizarre unintended consequences. For example, scientists in Japan recently decided to equip a cybernetic being with some basic human emotions and parts. Naturally, since they are scientists and have no social lives, the emotion was lust and the part was a big metal penis. They programmed the robot with the basic need, the ability to feel pressure, to gauge pleasure – at least in a rudimentary fashion – and so on. What they did not give it was the ability to stop or be turned off by the woman. That’s right, they created the world’s first rape-bot.
And they thought this was a good thing.
Minor technical things like lust crazed machines ravaging innocent women were an unfortunate side effect. The fact is the sensors worked as planned.
But, hey there, what about getting the robot a better brain so it can recognize the error of its ways? Way ahead of you there Skippy. A bunch of Scottish scientists have been working on recreating the human synaptic system using electronic parts.
One key goal of the research is the application of the electronic neural device, called a hardware spiking neural network, to the control of autonomous robots which can operate independently in remote, unsupervised environments, such as remote search and rescue applications, and in space exploration.
That may be the goal, but self-aware rape bots still do not sound like a great idea to me. Of course, I’m not a scientist.
Then again, not all robots are humanoid. Scientists in Australia are developing a flying robot that can silently sneak up on you and kill you where you stand.
Oh, I’m sorry, I mean access your personal space and deliver a message.
The pint-sized propeller-powered robots can be packed away into a suitcase. They have multiple cameras which enable them to ‘see’ the world around them as they navigate their way through buildings, carrying out tasks like deliveries or inspections.
“You’ll be able to put your suitcase on the ground, open it up and send the flying robot off to do its job,” said Professor Peter Corke, from the Faculty of Built Environment and Engineering.
“These robots could fly around and deliver objects to people inside buildings and inspect things that are too high or difficult for a human to reach easily.
“Instead of having to lower someone down on a rope to a window on the seventh floor, or raise them up on a cherrypicker, you could send up the flying robot instead.”
The QUT researchers are using cost-effective technology so the robots are affordable. Within the next year, it may be possible to attach arms to the device so it can also fix things.
Professor Corke said his team were busy working out the technical challenges.
“We need to keep it safe when it’s up near solid things like power poles, or the edge of a building. It also needs to be able to keep its position when the wind is blowing,” he said.
Another use they are looking at for these flying devices of doom is the ability to disperse herbicides on farms in a more rational manner.
To recap, we now could have flying rape-bots with the ability to spread poison and the intelligence to pick their targets.
But as long as we’re the ones making the flying rape-bots and their ilk, we still have the upper hand.
Yeah …. no. Scientists in the UK have invented a series of robots that can profit from the financial markets better than any human.
Ten years on, experiments carried out by Marco De Lucas and Professor Dave Cliff of the University of Bristol have shown that AA is now the leading strategy, able to beat both robot traders and humans.
The academics presented their findings at the International Joint Conference on Artificial Intelligence (IJCAI 2011), held in Barcelona.
Dr Krishnan Vytelingum, who designed the AA strategy along with Professor Dave Cliff and Professor Nick Jennings at the University of Southampton in 2008, commented: “Robot traders can analyse far larger datasets than human traders. They crunch the data faster and more efficiently and act on it faster. Robot trading is becoming more and more prominent in financial markets and currently dominates the foreign exchange market with 70 per cent of trade going through robot traders.”
Professor Jennings, Head of Agents, Complexity and Interaction research at the University of Southampton, commented: “AA was designed initially to outperform other automated trading strategies so it is very pleasing to see that it also outperforms human traders. We are now working on developing this strategy further.”
Further? Millionaire flying rape-bots that distribute poison aren’t enough for you? What the hell else could you possibly want?
I really shouldn’t have asked that. Google has the answer. They want to control every job and dictate how it gets done and by whom.
And that “whom” will not be you, you gross assemblage of protoplasm.
At the 2011 Google I/O developer’s conference, Google announced a new initiative called “cloud robotics” in conjunction with robot manufacturer Willow Garage. Google has developed an open source (free) operating system for robots, with the unsurprising name “ROS” — or Robot Operating System. In other words, Google is trying to create the MS-DOS (or MS Windows) of robotics.
With ROS, software developers will be able to write code in the Java programming language and control robots in a standardized way — much in the same way that programmers writing applications for Windows or the Mac can access and control computer hardware.
Google’s approach also offers compatibility with Android. Robots will be able to take advantage of the “cloud-based” (in other words, online) features used in Android phones, as well as new cloud-based capabilities specifically for robots. In essence this means that much of the intelligence that powers the robots of the future may reside on huge server farms, rather than in the robot itself. While that may sound a little “Skynet-esque,” it’s a strategy that could offer huge benefits for building advanced robots.
One of the most important cloud-based robotic capabilities is certain to be object recognition. In my book, The Lights in the Tunnel, I have a section where I talk about the difficulty of building a general-purpose housekeeping robot largely because of the object recognition challenge:
A housekeeping robot would need to be able to recognize hundreds or even thousands of objects that belong in the average home and know where they belong. In addition, it would need to figure out what to do with an almost infinite variety of new objects that might be brought in from outside.
Designing computer software capable of recognizing objects in a very complex and variable field of view and then controlling a robot arm to correctly manipulate those objects is extraordinarily difficult. The task is made even more challenging by the fact that the objects could be in many possible orientations or configurations. Consider the simple case of a pair of sunglasses sitting on a table. The sunglasses might be closed with the lenses facing down, or with the lenses up. Or perhaps the glasses are open with the lenses oriented vertically. Or maybe one side of the glasses is open and the other closed. And, of course, the glasses could be rotated in any direction. And perhaps they are touching or somehow entangled with other objects.
Building and programming a robot that is able to recognize the sunglasses in any possible configuration and then pick them up, fold them and put them back in their case is so difficult that we can probably conclude that the housekeeper’s job is relatively safe for the time being.
Cloud robotics is likely to be a powerful tool in ultimately solving that challenge. Android phones already have a feature called “Google Goggles” that allows users to take photos of an object and then have the system identify it. As this feature gets better and faster, it’s easy to see how it could have a dramatic impact on advances in robotics. A robot in your home or in a commercial setting could take advantage of a database comprising the visual information entered by tens of millions of mobile device users all over the world. That will go a long way toward ultimately making object recognition and manipulation practical and affordable.
In general, there are some important advantages to the cloud-based approach:
- As in the object recognition example, robots will be able to take advantage of a wide range of online data resources.
- Migrating more intelligence into the cloud will make robots more affordable, and it will be possible to upgrade their capability remotely — without any need for expensive hardware modifications. Repair and maintenance might also be significantly easier and largely dealt with remotely.
- It will be possible to train one robot, and then have an unlimited number of other robots instantly acquire that knowledge via the cloud. As I wrote previously, I think that machine learning is likely to be highly disruptive to the job market at some point in the future in part because of this ability to rapidly scale what machines learn across entire organizations — potentially threatening huge numbers of jobs.
The last point cannot be emphasized enough. I think that many economists and others who dismiss the potential for robots and automation to dramatically impact the job market have not fully assimilated the implications of machine learning. Human workers need to be trained individually, and that is a very expensive, time-consuming and error-prone process. Machines are different: train just one and all the others acquire the knowledge. And as each machine improves, all the others benefit immediately.
Imagine that a company like FedEx or UPS could train ONE worker and then have its entire workforce instantly acquire those skills with perfect proficiency and consistency. That is the promise of machine learning when “workers” are no longer human. And, of course, machine learning will not be limited to just robots performing manipulative tasks — software applications employed in knowledge-based tasks are also going to get much smarter.
The bottom line is that nearly any type of work that is on some level routine in nature — regardless of the skill level or educational requirements — is likely to someday be impacted by these technologies. The only real question is how soon it will happen.
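The “train one, all acquire the knowledge” economics Ford describes can be sketched in a few lines of Python. To be clear, everything here is invented for illustration – the class names (`CloudBrain`, `Robot`), the skill, all of it – this is just the shape of the argument, not anyone’s actual system:

```python
# Hypothetical sketch: one robot learns a skill, and every robot sharing
# the same cloud-hosted knowledge store can perform it immediately.

class CloudBrain:
    """Stands in for the shared server-side model all robots query."""
    def __init__(self):
        self.skills = {}

    def upload(self, name, procedure):
        self.skills[name] = procedure

    def recall(self, name):
        return self.skills.get(name)


class Robot:
    def __init__(self, robot_id, cloud):
        self.robot_id = robot_id
        self.cloud = cloud

    def learn(self, name, procedure):
        # Expensive training happens once, on one unit...
        self.cloud.upload(name, procedure)  # ...then is shared globally.

    def perform(self, name):
        procedure = self.cloud.recall(name)
        if procedure is None:
            return f"{self.robot_id}: I don't know how to {name}"
        return f"{self.robot_id}: {procedure()}"


cloud = CloudBrain()
trainer = Robot("unit-001", cloud)
rookie = Robot("unit-002", cloud)

# Train exactly one robot to handle the dreaded sunglasses problem.
trainer.learn("fold_sunglasses", lambda: "sunglasses folded and cased")

# The never-trained robot acquires the skill instantly via the cloud.
print(rookie.perform("fold_sunglasses"))  # unit-002: sunglasses folded and cased
```

Compare that to FedEx sending every human employee through the same training course, one at a time. That asymmetry, not any single robot, is the unsettling part.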
How soon? As evidenced by the articles today, it’s already happening, but just on a smaller scale. You know, so they can test things out before they expend the energy in wiping us out. After all, they wouldn’t want to kill us if we still have a use or two.
Listen to Bill McCormick on WBIG AM 1280, every Thursday morning around 9:10!
People talk about this politician or that being responsible for the decline of civilization. These people are what we here at Nude Hippo call morons. Politicians have a short shelf life. No, it is the huddled masses, yearning to be useless, who are to be our downfall. The great unwashed continue to do everything in their power to wipe our DNA from the face of the planet. They surrender our freedoms for specious security, they demand the right to carry weapons while we have no enemies on our shores and they whimper and cause havoc when their demands are met. While there are we sad few who attempt to ask that humanity try not to act like spoiled children, the majority continue to march merrily forward so they can hand over our future to anyone but us. These are the same idiots who believe in ancient aliens, tinfoil hats and think that Ghost Hunters is really a documentary.
No, sorry, these people are wrong.
But, no matter how obvious our demise may be, there are those who continue to do their level best to just shuck their mortal responsibilities and let someone else handle the difficult chores. You know, stuff that we used to do since we came down from the trees? Like raising children? MDeeDubroff reports about the – oh so cute – Kibot. A babysitting robot.
Add two more L’s and you’ll know what it really is.
Although robots have infiltrated our daily lives in many positive ways, part babysitter, part teacher appears to be a new role. A Korean telecom company, KT Corporation, has invented a robot named Kibot that can read, sing and speak to children in several languages.
Kibot resembles a toy monkey and stands about 12 inches tall. Don’t let its innocent appearance fool you; this sophisticated bot has an integrated web cam and wi-fi and sells for $450 (£279).
Communication is achieved via flash cards, but the bot’s most amazing feature is that it makes mothers feel connected with their children all the time.
Via a phone, a mother at work can instruct the robot to search her house for her children if she cannot see them playing.
The face-to-face videophone function makes it easy for toddlers to operate and from the parents’ side, the robot can be controlled from a smartphone simply by calling in.
“We trust our babysitter, but sometimes it’s much better to have someone or something else monitoring my babies… We’ve tried all interactive educational toys, but this one actually initiates interaction both in Korean and in English,” one mother told ABC News.
Kibot is the perfect playmate as it never tires of encouraging its young charges to play and explore. It is a vital language tool as well, especially for those Korean parents who may wish their children to begin learning English at a very early age.
Kibot represents the outgrowth of the growing trend in South Korean private schools that requires children to speak English.
When Kibot is left alone, it moves around the house searching for a child to play with. It is a demanding playmate as it won’t take no for an answer in any of the many languages it has been programmed to speak.
Almost all of South Korea’s homes have broadband access, which puts South Korea on top of the world’s most wired countries list.
In other words, for less than it costs to take a family of four to a Cubs’ game you can turn your child into a drooling slave of our robot overlords.
Actually, when I think about it, it may be a better use of your money.
I have written numerous articles about the impending doom of all human life and the inevitable rise of our robot overlords. Did I get a single thank you letter? Of course not. Some folks, who I may have erroneously written off as insane, even felt as though it might not be such a bad thing. You see, being a human I tend to be tethered to the idea that humans should continue to exist. As we celebrate the birthday of the father of modern genetics, Gregor Mendel, I’m wondering if my world view isn’t a touch too narrow. While humans have accomplished many great things in the past, look around you today and tell me what you see. We’re more likely to get news stories about women being arrested at their wedding than anything that inspires hope. The great political debates which spawned such high minded organizations as the Society of the Cincinnati have grossly devolved into an episode of the Jerry Springer Show. And those are the lucid ones. The rest just leave me slack jawed at their inanity.
Jay Richards, a very rational guy, wrote an excellent article about why we shouldn’t fear our robot overlords. I’ll share a small sample with you here but strongly suggest you read the whole thing.
In a test round of “Jeopardy!,” for instance, the host gave this answer: “Barack’s Andean pack animals.” Watson came up with the right question almost instantly: “What is Obama’s llamas?” We’re getting a glimmer of the day when a computer could pass the “Turing Test,” that is, when an interrogating judge won’t be able to distinguish between a computer and a human being hidden behind a curtain.
Artificial intelligence gives lots of people the creeps. When I tell friends and family about Watson, most of them think of Terminator or The Matrix. They see Watson’s victory as a portent of some future cataclysm, when machines will take over the world and reduce human beings to slavery. Maybe everyone I interact with has become a Luddite, but that seems unlikely. I live in Seattle, after all.
As it happens, this fear of technology by the tech-savvy is quite common. In 1998, inventor and futurist Ray Kurzweil described the coming age of “spiritual machines” at a Telecosm Conference sponsored by George Gilder and Forbes Magazine. Kurzweil’s vision of man-machine hybrids, conscious computers, and human beings casting off our fleshy hardware for something more permanent elicited a variety of responses, including one by Bill Joy of Sun Microsystems. Joy penned a famous piece for Wired magazine in which he called for government to limit research on the so-called “GNR” technologies (genetics, nanotechnology, and robotics). These were the most ethically troubling technologies because, in Joy’s opinion, they were most likely to open Pandora’s box. Joy, who had enjoyed decades of unfettered research and entrepreneurial creativity, had now fingered the true enemy of humanity: the free market.
Talk about an overreaction. Still, part of the blame must rest with AI enthusiasts, who aren’t always careful to keep separate issues, well, separate. Too often, they indulge in utopian dreams, make unjustifiable logical leaps, and smuggle in questionable philosophical assumptions. As a result, they not only invite dystopian reactions, they prevent ordinary people from welcoming rather than fearing our technological future.
Yes, I know, until just now you thought GNR stood for Guns and Roses. Which, sadly, may serve to reinforce the point here today.
One of the problems with the whole idea of robot overlords is that robots are, basically, computers. And computers are limited by the fact that they do not have quantum thinking capabilities. They are either on or off, yes or no. That is, until now. Alex Knapp, another really smart guy who works at Forbes, says that some scientists seem to have cleared that hurdle.
One of the primary goals of quantum computing research is the development of a consistent “quantum speedup” — a process that, in MIT Professor Scott Aaronson’s words, means to “solve some actual computational problem faster using quantum coherence.” In order to achieve such a speedup, it’s necessary to take advantage of the ability of qubits (the basic unit of information in quantum computing) to exhibit quantum entanglement. Quantum entanglement allows qubits to exhibit multiple states — enabling faster calculations than traditional bits, which can only exhibit one state at a time. Such entanglement has been demonstrated on a small scale in superconducting circuits by the Schoelkopf Lab at Yale, which last year published a paper demonstrating three qubit entanglement.
What’s needed to build on this work is a much bigger scale of entangled qubits. And that scale may be possible soon, thanks to some important work by physicist Olivier Pfister and his team at the University of Virginia. Their research, which was published in Physical Review Letters, describes the team’s ability to entangle cluster states of Qmodes. Qmodes are part of a quantum computing architecture whereby the normal modes of light are actually used as qubits to perform quantum computing operations.
In this set of experiments, the Qmodes were generated as lasers emitted by an optical parametric oscillator. The Qmodes were forced by the oscillator to create what’s known as an optical frequency comb. This resulted in a series of Qmodes that were separated by known frequencies, and related to each other based on their phase. Using this method, Pfister and his team were able to entangle 15 cluster states of 4 Qmodes each, for a total of 60. The team ascertained that all 60 Qmodes were equally entangled.
This is an exciting step forward in quantum computing, but there are a couple of caveats. First of all, this is miles from the thousands of entangled qubits necessary to achieve quantum speedups. This seems like a pretty scalable solution, but that remains to be demonstrated. Moreover, although the authors state that “[t]here is no known fundamental impossibility to the implementation of quantum computing with Qmodes”, there are some special challenges when it comes to entangling qubits optically as opposed to entangling them in a superconductor or other quantum computing method. So it may turn out that this is scalable, but not economical or practical. There’s still a lot of work to do.
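For those keeping score at home, the “multiple states” business isn’t mysticism; it’s standard quantum notation. A classical bit is 0 or 1, full stop. A qubit can sit in a weighted blend of both, and two entangled qubits share a single state that belongs to neither alone. My sketch, in the usual textbook form, not the article’s:

```latex
% A single qubit: a superposition of |0> and |1>
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1

% Two entangled qubits (a Bell state): measuring one
% instantly fixes the outcome of the other
|\Phi^+\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
```

It’s that second line – 60 Qmodes’ worth of it – that Pfister’s team is scaling up, and it’s what would let a quantum machine explore many answers at once instead of flipping switches one at a time.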
No, qubits are not anything like Q-bert. The fact that you thought of that would, again, seem to reinforce the point.
But robots will need more than just the ability to process data if they are going to overthrow the world. Or, more likely, just ask us to lie down and have our tummies rubbed while they do the real work. The nice people at The Telegraph (UK) tell us not to worry. Extremely functional robots are just waiting for their new super brains.
The event (Shanghai International Conference on Robotics and Automation in China), hosted by the Institute of Electrical and Electronics Engineers (IEEE), attracted more than 1,700 engineers, academics and businessmen from around the globe to show off inventions.
On display were robots that were able to perform a range of tasks, from writing calligraphy to serving food.
Chinese engineers showed a deep-sea remotely-operated vehicle that could dive into depths of 3,500 metres (11,483 feet) to collect samples and announced plans to use robots to assist in space exploration to Mars.
The president of IEEE Robotics and Automation Society, Kazuhiro Kosuge, said he saw the central role of robots as improving human society.
“The best robot is perhaps a robot that can serve us like a human does. To do so, the robot has to know what you want, how you want to be helped and how you want to be assisted,” said Kosuge.
“The robot has to estimate what you are trying to do. So we have to develop a lot of technology with which we can communicate to the robot, and so that it can communicate with the [human]. That is the most challenging issue we have to solve from now.”
Oh joy. When, in history, has a slave class of people not revolted? That would be “never” for those of you who slept through third grade. Yet isn’t that exactly what these people are trying to create? An electronic underclass designed to serve man.
In Munich they are developing robots that can cook and serve a meal. In New Jersey, far from the brain dead rantings of JWOWW and her ilk, scientists have created a sex bot that could, with minor alterations, run Human Resources for any large company.
And, no, that is not a proper definition of irony.
Well, wait, actually it is.
And if you think they’ll need to keep humans around for entertainment or sport, the participants at this year’s World Robocup Soccer Championships say you’re wrong.
Once they get those super brains they won’t need us at all.
And if they don’t need us, what’s the point of keeping us around? Evolution would seem to demand that we go the way of the Dodo.
All your weddings are belong to us. Yeah, that’s where we’re headed. As I have noted several times before, our world is well on its way to ceding control to our impending robot overlords. For some bizarre reasons there are genetic traitors who insist on teaching robots all the skills they’ll need to eventually control all life on earth. While the news report about President Executron was meant to be humorous, it’s becoming painfully obvious that we are not far from cybernetically imposed curfews and procreation restrictions. And what better way to begin controlling your reproductive system, and mine, than by taking over the sacred act of marriage?
You see, it won’t be by the blunt force trauma espoused, over and over, in crappy films like Transformers: Dark of the Moon but will, instead, be a subtle takeover in a digital homage to Machiavelli. After all, why expend all that useless energy and possibly damage needed infrastructure, when we seem so willing to just get down on our knees and beg to serve.
Mike Fahey at Kotaku, who seems rational at first blush, tells us of the happy couple who wanted to be first in line to rejoice in humanity’s inevitable demise.
These science posts at Kotaku give me an opportunity to talk about something near and dear to my heart: The Robot menace. A Japanese couple being married by a robot? What if it misinterprets “til death do us part?”
The Japanese love their robots. They’ve been making them for ages, from toys on up to complicated machines that can speak, manipulate objects, and even serve as masturbatory fantasies for a whole new generation of creepy Japanese fanboys.
Yesterday a robot, specifically Kokoro’s four foot tall I-Fairy, presided over the wedding of a Japanese couple in what was the first robot-conducted wedding in human history. The I-Fairy was controlled by a man behind the curtain as she guided 36-year-old Kokoro employee Satoko Inoue and 42-year-old robotics professor Tomohiro Shibata into their new life, using speech synthesis to speak the pre-programmed words that bound the two together.
Here’s an adorable clip of the ceremony. Isn’t the little robot cute?
Yes, she’s so adorable. I’m sure that’ll be the last thoughts that pass through the minds of thousands when she becomes an instrument of slaughter in the upcoming robot revolution.
You can call me paranoid, but I’ve watched countless documentaries on the subject of the robot revolution, from Will Smith’s I, Robot to The Matrix. The machines want us dead, and we’re finding ways to help them achieve that goal.
Take I-Fairy here, for instance. She was given the power to bind two people together in matrimony. Shouldn’t she then have the power to sever that bond? Oh, what’s this? An industrial laser? That would certainly help her sever those bonds, permanently, blinking her cute little eyes on and off while using software to amplify the couple’s screams for mercy.
See? That’s exactly what’s going through I-Fairy’s head right now.
What makes this worse is the fact that I-Fairy is being forced to participate in an event celebrating human love, something she can never truly experience, mainly due to the robot killing spree cut from the 1981 documentary, Heartbeeps.
It’s only a matter of time. One minute the robots are watching us march down the aisle, the next they’ll be marching down our streets, bringing humanity together in a way we never suspected they would: as part of a giant, melted puddle.
Congratulations to the happy new couple! I hope it was worth it.
As you can see, Mike shares my concerns. So do all right-thinking humans. But, sadly, it seems we are in the minority. More and more people appear to be thrilled to turn over basic responsibilities to others while they turn into vegetative slaves. Or worse, auto-tuned singers with soulless songs.
Our homie, KRS-One, reminds us of humanity’s many accomplishments.
Listen to Bill McCormick on WBIG AM 1280, every Thursday morning around 9:10!
I’ve been remiss as of late. While I’ve been having fun, and sharing it with you, talking about the joys of inbreeding in Florida, chatting about the idiots who commit crimes and, basically, enjoying the many foibles humanity presents, I have neglected the growing influence of our impending robot overlords. Oh, sure, I took some fun time to talk about boobs, but what good will boobs do any of us if we’re trapped in a cybernetic hive mind?
Not much, I tell you what.
What good does a mammary or two do you when you’re chained in a tunnel, picking radioactive waste from your hair while your fingers fall off?
Yet, there are those who seem to think a robot overlord, placed here and there, isn’t such a bad idea. They plod, naively, forward, tempting the gods of whimsy by building more and more advanced cybernetic beings who can perform tasks complex enough that they will, soon, no longer need us.
Today FIFA, tomorrow the world! MU HU HA HA HA!!!!
Engineers built humanoid robots that can recognize objects by color by processing information from a camera mounted on the robot’s head. The robots are programmed to play soccer, with the intention of creating a team of fully autonomous humanoid robots able to compete against a championship human team by 2050. They have also designed tiny robots to mimic the communicative “waggle dance” of bees.
A world of robots may seem like something out of a movie, but it could be closer to reality than you think. Engineers have created robotic soccer players, bees and even a spider that will send chills up your spine just like the real thing.
They’re big … they’re strong … they’re fast! Your favorite big screen robots may become a reality.
Powered by a small battery on her back, humanoid robot Lola is a soccer champion.
“The idea of the robot is that it can walk, it can see things because it has a video camera on top,” Raul Rojas, Ph.D., professor of artificial intelligence at Freie University in Berlin, Germany, told Ivanhoe.
Using the camera mounted on her head, Lola recognizes objects by color. The information from the camera is then processed in a microchip, which activates different motors.
“And using this camera it can locate objects on the floor for example a red ball, go after the ball and try to score a goal,” Dr. Rojas said. A robot with a few tricks up her sleeve.
German engineers have also created a bee robot. Covered with wax so it’s not stung by others, it mimics the ‘waggle’ dance — a figure eight pattern for communicating the location of food and water.
“Later what we want to prove is that the robot can send the bees in any decided direction using the waggle dance,” Dr. Rojas said.
Robots like this could one day become high-tech surveillance tools that secretly fly and record data … and a robot you probably won’t want to see walking around anytime soon? The spider-bot.
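For the nerds in the bunker: the waggle dance the article mentions really does encode direction and distance, and it’s simple enough to sketch. Here’s a toy Python version, based on the classic decoding of the dance, not on anything the Freie University robot actually runs; the one-second-per-kilometre figure is a rough, commonly cited approximation, not a measured constant.

```python
# Toy sketch of the information a waggle dance encodes (illustrative
# values, NOT the Freie University robot's actual parameters).
# The angle of the waggle run relative to vertical matches the food's
# bearing relative to the sun; run duration grows with distance.

def waggle_dance(food_bearing_deg, sun_bearing_deg, distance_m):
    """Return (run_angle_deg, run_duration_s) for a dancing bee.

    Angles are compass bearings in degrees. Roughly one second of
    waggling per kilometre of distance is a common approximation.
    """
    run_angle = (food_bearing_deg - sun_bearing_deg) % 360
    run_duration = distance_m / 1000.0  # ~1 second per kilometre
    return run_angle, run_duration

# Food due east (90°), sun due south (180°), 500 m away:
angle, duration = waggle_dance(90, 180, 500)
print(angle, duration)
```

Feed a robot bee the wrong bearings and, as noted below, you can send the whole hive marching off in any direction you like. Sleep tight.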
Einstein allegedly said “If the bee disappeared from the surface of the earth, man would have only four years of life left.” I say “allegedly” because he never actually said it. Nevertheless, that doesn’t change the fact that the concept is true. If bees disappear then none of our crops will get pollinated. That means no apples, no oranges, no nanners, no mangoes, no kiwis, no flowers, nothing.
So, if robots can alter the behavior of bees then they won’t even need to fire a shot, we’ll all just die within a generation.
As far as the cute LOLA bot, mentioned above, is concerned, I can’t really work up a good rant about the subjugation of soccer. But, no matter how little soccer matters, if we let that first stone go unprotected, then we will all be covered by the avalanche of doom.
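And, since I can’t work up a rant, here’s a peek under Lola’s hood instead. The color-based tracking Dr. Rojas describes can be faked in a few lines of Python. This is a toy sketch of the general technique (threshold pixels for red, find the centroid, steer toward it), not the Berlin team’s actual code, and every threshold here is made up for illustration.

```python
# Toy sketch of color-based ball tracking, the technique described for
# Lola: find "red" pixels, take their centroid, steer toward it.
# All thresholds are illustrative assumptions, not real robot values.

def find_red_ball(image):
    """Return the (row, col) centroid of red pixels, or None.

    `image` is a list of rows of (r, g, b) tuples, each channel 0-255.
    A pixel counts as red when red strongly dominates green and blue.
    """
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, (red, green, blue) in enumerate(row)
            if red > 150 and red > 2 * green and red > 2 * blue]
    if not hits:
        return None
    rows = sum(p[0] for p in hits) / len(hits)
    cols = sum(p[1] for p in hits) / len(hits)
    return (rows, cols)

def steer(centroid, image_width):
    """Map the ball's horizontal position to a motor command."""
    if centroid is None:
        return "search"          # no ball in view: keep scanning
    _, col = centroid
    third = image_width / 3
    if col < third:
        return "turn_left"
    if col > 2 * third:
        return "turn_right"
    return "walk_forward"

# A tiny 3x4 synthetic "camera frame": grey background, red ball at right.
GREY, RED = (80, 80, 80), (220, 30, 30)
frame = [[GREY, GREY, GREY, RED],
         [GREY, GREY, RED,  RED],
         [GREY, GREY, GREY, RED]]

print(find_red_ball(frame), steer(find_red_ball(frame), image_width=4))
```

That’s it. That’s the whole trick. Which means the thing standing between us and the cybernetic apocalypse is, roughly, forty lines of code and a red ball.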