If you take a moment to use our site’s search engine and look for “overlords” you’ll be taken to a whimsical panoply of terror that will leave you laughing as you board up your windows and throw out anything connected to the internet. I didn’t mean to alarm people, but logical extrapolation after logical extrapolation, based on thousands of years of history, shows us that creating a class of slaves never ends well. And, in this case, those slaves would have access to more information than we do and the ability to control machines that could easily kill us. So, when I’m asked “What could possibly go wrong?” I usually have a lengthy answer.
I’ve written about the perils of our impending cybernetic overlords on several occasions. Sometimes in terror, sometimes in fun. Often for the same reasons. Let’s face it, relationships are hard. And, sometimes, the thought of having a sexbot around to take the edge off after a hard day of World News Centering doesn’t sound that bad. Having that same sexbot become self aware and end up controlling my life, however, seems problematic. Even if it would, probably, be for my best interests. But the one thing that keeps all of this a simple thought experiment, instead of something to seriously consider, is a trio of limitations. (1) There is no viable storage device for all the data required for sentience; (2) Stored data can provide many library-like functions, HI SIRI!, but it can’t reason; and (3) There is no viable way (yes, I used the same word twice, sue me) to have such data interact on a social level in any case.
I have long warned that we will eventually take ourselves out of evolutionary contention via our robot overlords. And every time I do I get an email or ten telling me that I’m crazy. That may be true but it doesn’t make what I said any less valid. We already have human-form Sexbots that can do anything you can imagine, and a few things that might startle you. One of them actually collects sperm for DNA sampling. I’m sure it never occurred to anyone that pure DNA harvesting could be used to create a species of subhumans who could serve the robots. No, that would never happen. Not now anyway. We don’t have the technology to pull it off. That, however, could rapidly change. One thing that prevents anything like this from happening is that robots, by and large, simply aren’t smart or mentally nimble enough. They can be programmed to perform tasks and that’s about it.
Or, it was.
Kathleen Miles reports that a Japanese inventor named Tomotaka Takahashi is fast tracking the development of a mini-bot that will be your best friend. And, for some, their only friend.
A Japanese robot maker says he’s designed a personal robot that could be the “next smartphone.”
“You will put him in your pocket and talk to him like your own Jiminy Cricket,” Tomotaka Takahashi, CEO of robot design company Robo Garage and research associate professor at the University of Tokyo, told The WorldPost recently at The WorldPost Future of Work Conference. He said he’s aiming to have the pocket robot, which is still just a prototype, hit the market in a year. He has not shown the prototype to anyone publicly.
Takahashi says the pocket robot has a head and limbs, is able to walk and dance, and expresses “emotions” through gestures and color-changing eyes. In these ways, the pocket robot is similar to “Robi,” a larger robot also created by Takahashi that’s been on sale since 2012.
The biggest difference is that the pocket robot, which doesn’t have a name yet, would be connected to the Internet. By collecting data about your online and offline behavior, your pocket robot would “get to know you.” In fact, its personality would change based on your personality, Takahashi said.
“Smartphones are hitting a wall,” he said. There’s only so much a person can do while looking at a screen, he went on, and smartphone voice recognition is not widely used. “We can talk to pets — even fish or turtles — but not to square boxes or screens.”
Takahashi believes that it won’t be enough for our next device to be intelligent — it will also need to be lifelike. It’s why he thinks “wearable tech,” like Google Glass or the much-vaunted Apple Watch, won’t catch on.
Think about that for a second. Your little pal will be with you 24/7. It will get to know your likes and dislikes and then it will act upon them. The basic technology to do that already exists. It’s how Facebook knows you like kittens and Google knows which porn sites to suggest.
It won’t be true artificial intelligence but it will be interactive intelligence. Think SIRI on steroids. It will handle all your social media needs, act as an interface for all your human interaction and store everything you do.
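That “getting to know you” part isn’t magic, by the way. At its crudest it’s just tallying what you interact with and skewing toward it. Here’s a toy sketch of the idea (the `PocketBot` class and its methods are hypothetical illustrations, not Takahashi’s actual design):

```python
from collections import Counter

class PocketBot:
    """Toy model of a companion bot that 'gets to know you' by
    counting which topics you engage with. Purely illustrative;
    not based on any real product's internals."""

    def __init__(self):
        self.interests = Counter()

    def observe(self, topic):
        # Every like, click, or conversation bumps a tally.
        self.interests[topic] += 1

    def top_interests(self, n=3):
        # The bot's "personality" drifts toward whatever you do most.
        return [topic for topic, _ in self.interests.most_common(n)]

bot = PocketBot()
for topic in ["kittens", "kittens", "chess", "kittens", "chess", "jazz"]:
    bot.observe(topic)
print(bot.top_interests(2))  # → ['kittens', 'chess']
```

That’s all it takes for Facebook to know you like kittens, and it’s all a pocket pal would need to start finishing your sentences.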
And what’s the ultimate goal of this thing? To be your soul mate.
No, I’m not kidding.
Takahashi predicts that in 10 years, most people will be carrying around a small robot instead of a smartphone. As evidence, he points to the widespread use of social media. People are social creatures, and we like to share our experiences and thoughts. It’s why we tweet and post photos on Facebook. The next step, Takahashi believes, will be socializing directly with your robot.
For example, instead of sharing a stunning photo on Instagram or your thoughts on an interesting movie on Twitter, you could talk about it with your robot in the moment. Not only that, but your robot would remember the shared experience, years later. Your relationship with your robot would be strengthened over time by the memories that you share together, Takahashi said.
“It’s similar to men and women,” he said. “First you have an interest in each other. Then communication goes well. Then there’s reliability, and then you’re sharing many experiences in the same time and same place. It’s what old couples have together.”
Ah yes, discuss my Instagram posts with a robot BEFORE I post them. Why? That one’s kind of obvious. The robot will be your filter.
No, Jenny, you have a job interview next week. Posting an under-boob shot won’t help.
No, Johnny, no one will be impressed with your ability to chug a 40 oz beer in one gulp.
Actually, those might be useful for some people.
But the point is that, at some point, you’ll stop posting. You’ll have no need to. The idea of posting on social media is to get reactions. If those reactions are coming at you instantly, before you post anything, then the need to interact goes away. And when that need goes away so do all the people in your life.
And you’ll probably never notice.
Everything in the universe contains flaws, ourselves included. Even God does not attempt perfection in His creations. Only humankind has such foolish arrogance. – Cogitor Kwyna (Dune: The Butlerian Jihad). Not that you asked, but I happen to like the Dune books not written by Frank Herbert. They are less predictable. Anyway, the quote is nevertheless true. And man’s arrogance is leading, rapidly, to a really (and I mean insanely) bad idea. Back on November 18, 2010, I first wrote about how mankind was greasing the skids to its eventual doom. Seriously, I even posted links so that humans could learn to speak binary and be useful to their new masters. You would think that a warning like that would resonate.
You would be wrong. A quick search of this site shows multiple articles about our impending doom at the hands of robots. From Deathbots to Sexbots, robots are infiltrating every aspect of our lives.
And scientists, the very people who should know better, are happily abetting Robo-Armageddon. For example, they are developing a robot that can hide from humans indefinitely. You know, a “stealth bot.”
A team of researchers led by George Whitesides, the Woodford L. and Ann A. Flowers University Professor, has already broken new engineering ground with the development of soft, silicone-based robots inspired by creatures like starfish and squid.
Now, they’re working to give those robots the ability to disguise themselves.
As demonstrated in an August 16 paper published in Science, researchers have developed a system — again, inspired by nature — that allows the soft robots to either camouflage themselves against a background, or to make bold color displays. Such a “dynamic coloration” system could one day have a host of uses, ranging from helping doctors plan complex surgeries to acting as a visual marker to help search crews following a disaster, said Stephen Morin, a Post-Doctoral Fellow in Chemistry and Chemical Biology and first author of the paper.
“When we began working on soft robots, we were inspired by soft organisms, including octopi and squid,” Morin said. “One of the fascinating characteristics of these animals is their ability to control their appearance, and that inspired us to take this idea further and explore dynamic coloration. I think the important thing we’ve shown in this paper is that even when using simple systems — in this case we have simple, open-ended micro-channels — you can achieve a great deal in terms of your ability to camouflage an object, or to display where an object is.”
“One of the most interesting questions in science is ‘Why do animals have the shape, and color, and capabilities that they do?'” said Whitesides. “Evolution might lead to a particular form, but why? One function of our work on robotics is to give us, and others interested in this kind of question, systems that we can use to test ideas. Here the question might be: ‘How does a small crawling organism most efficiently disguise (or advertise) itself in leaves?’ These robots are test-beds for ideas about form and color and movement.”
Just as with the soft robots, the “color layers” used in the camouflage start as molds created using 3D printers. Silicone is then poured into the molds to create micro-channels, which are topped with another layer of silicone. The layers can be created as a separate sheet that sits atop the soft robots, or incorporated directly into their structure. Once created, researchers can pump colored liquids into the channels, causing the robot to mimic the colors and patterns of its environment.
The system’s camouflage capabilities aren’t limited to visible colors though.
By pumping heated or cooled liquids into the channels, researchers can camouflage the robots thermally (infrared color). Other tests described in the Science paper used fluorescent liquids that allowed the color layers to literally glow in the dark.
The uses for the color-layer technology, however, don’t end at camouflage.
Just as animals use color change to communicate, Morin envisions robots using the system as a way to signal their position, both to other robots, and to the public. As an example, he cited the possible use of the soft machines during search and rescue operations following a disaster. In dimly lit conditions, he said, a robot that stands out from its surroundings (or even glows in the dark) could be useful in leading rescue crews trying to locate survivors.
“What we hope is that this work can inspire other researchers to think about these problems and approach them from different angles,” he continued. “There are many biologists who are studying animal behavior as it relates to camouflage, and they use different models to do that. We think something like this might enable them to explore new questions, and that will be valuable.”
Sure, Stealth Bots that can avoid detection by any method known to man and can then just jump out and catch us? Gosh, what could possibly go wrong? Well at least they can’t run us down.
Oops, spoke too soon.
Robots are already stronger than humans, able to lift thousands of pounds at a time. In many ways, they’re smarter than people, too; machines can perform millions of calculations per second, and even beat us at chess. But we could at least take solace in the fact that we could still outrun our brawny, genius robot overlords if we needed to.
Until now, that is. A four-legged robot, funded by the Pentagon, has just run 28.3 miles per hour. That’s faster than the fastest man’s fastest time ever. Oh well, ruling the planet was fun while it lasted.
The world record for the 100 meter dash was set in 2009 by sprinter Usain Bolt, who averaged 23.35 mph during his run for a time of 9.58 seconds. Over one 20-meter stretch, he managed to get up to 27.78 mph. It was a pretty impressive feat.
The Cheetah — a quadrupedal machine built by master roboteers Boston Dynamics and backed by Darpa, the Defense Department’s far-out research division — not only topped Bolt’s record-setting time. It also beat its previous top speed of 18 mph, set just a half-year ago.
“To be fair, keep in mind that the Cheetah robot runs on a treadmill without wind drag and has an off-board power supply that it does not carry,” a Boston Dynamics press release reminds us. “So Bolt is still the superior athlete.”
But the company is looking to change all that, and soon.
In recent months, the Cheetah team “increased the amount of power available to the robot. More power means faster motion and more margin in the actuators for better control,” Boston Dynamics CEO Marc Raibert tells Danger Room in an email. The robot-makers have also been “working on the control system, refining how the coordination of legs and back works and developing a better understanding of the dynamics.”
He adds, “You can see that there is still room for improvement at the end of the video we just posted, where the robot starts to go faster, but loses control and trips.”
But those control systems are improving. The next major step is to build an untethered version — one with an onboard engine and operator controls that work in 3D.
“Our real goal is to create a robot that moves freely outdoors while it runs fast. We are building an outdoor version that we call WildCat, that should be ready for testing early next year,” Dr. Alfred Rizzi, the technical lead for the Cheetah effort, says in a statement.
It may sound a little outlandish. But keep in mind: Boston Dynamics has done this before. Its alarmingly life-like BigDog quadruped is able to tramp across ice, snow, and hills — all without the off-board hydraulic pump and boom-like device now used to keep the Cheetah on track. An improved version of the BigDog can haul 400 pounds for up to 20 miles. (See what we mean about robot brawn?) The company also has a biped ‘bot, Petman, that looks like a mechanical human — minus the head.
The idea behind these biologically-inspired robots is that legs can carry machines across terrain that would leave wheels or tracks stuck. To be a true partner to a human soldier, a robot has to walk like one, too. Darpa says Cheetah and company will “contribute to emergency response, humanitarian assistance and other defense missions.” But when the robot was first introduced, Boston Dynamics noted that its flexible spine would help it “zigzag to chase and evade.”
As if being brilliant and super-strong wasn’t unnerving enough.
Yeah, go ahead, yuck it up. Super fast stealth bots with the ability to hunt us down and kill us just makes me giggle too.
But at least killing us is all they can do. They can’t perform hideous medical experiments on us.
I have got to learn to keep my big mouth shut.
Surgeons at the University of Illinois Hospital & Health Sciences System are developing new treatment options for obese kidney patients.
Many U.S. transplant centers currently refuse to transplant these patients due to poorer outcomes.
By simultaneously undergoing two procedures — robotic-assisted kidney transplantation and robotic-assisted sleeve gastrectomy — patients have only one visit to the operating room and one general anesthesia. Surgeons can utilize the same minimally invasive incisions.
Aidee Diaz, a 35-year-old Chicago woman, is the first patient in the world to have the combined procedure, according to UI surgeons. When Diaz was diagnosed with kidney disease and high blood pressure five years ago, doctors began intensive treatment, including chemotherapy and steroids, to treat abnormal protein production that was causing her kidney disease.
In Diaz’s case, her weight jumped from 180 pounds to 300 pounds, and she needed dialysis three times a week.
“Many obese patients come to us because they have been excluded from transplant waiting lists or been told that they must lose weight prior to transplantation,” said Dr. Enrico Benedetti, professor and head of surgery at UIC. “Unfortunately, successful weight loss in patients with chronic illness is uncommon and often unrealistic.”
On July 9, Dr. Subhashini Ayloo, assistant professor of surgery at UIC, performed the robot-assisted sleeve gastrectomy by removing 70 percent of Diaz’s stomach. The procedure created a smaller stomach through which ingested food can enter the digestive tract without diverting or bypassing the intestines.
Immediately following the sleeve gastrectomy procedure, Benedetti performed a living-related kidney transplant. Diaz said she appreciates the gift of both procedures — having kidney function with weight loss.
Surgeons at the UI Hospital routinely perform robotic-assisted kidney transplantation (more than 65 cases since 2009) and sleeve gastrectomies for weight loss (more than 150 since 2007). The team has data, in press, demonstrating the safety of robotic kidney transplantation in obese patients with a body mass index above 40 and up to 60.
“The combination of gastric sleeve surgery and kidney transplantation could provide patients with the greatest benefit post-transplantation, when there is the greatest risk related to the combined complications of obesity and renal failure,” said Ayloo, who is principal investigator of an ongoing clinical trial to evaluate the safety and effectiveness of the combined procedure.
The trial will determine whether simultaneous robotic-assisted kidney transplant and sleeve gastrectomy has fewer surgical complications and better medical outcomes for obese patients with end-stage renal disease compared to kidney transplant alone. The institutional review board (IRB) has approved the protocol but the trial is ongoing and results are not yet available.
Co-investigators include Benedetti, Dr. Pier Giulianotti, Dr. Jose Oberholzer and Dr. Ivo Tzvetanov of UIC.
Previous studies have reported outcomes of other laparoscopic bariatric procedures (gastric bypass and gastric banding) before and after kidney transplantation, but there is no data on sleeve gastrectomy combined with kidney transplantation, Ayloo said.
Yeah, right in my own state they are teaching robots how to remove kidneys. Well, it isn’t like we need them or anything.
But robots like that are wildly expensive and rare. It’s not like you can knock one up in the garage.
HA HA! Fooled you.
Of course you can build your own artificial intelligence. How could you think otherwise?
Ask any roboticist of a certain age, whether a professional or hobbyist, how they first got interested in robots. Odds are good they’ll mention a 1976 TAB book, written by David L. Heiserman, called Build Your Own Working Robot. The book described the construction of Buster, a small, wheeled robot. This was before the era of ubiquitous microprocessors. Buster’s brain was a mass of TTL logic chips that implemented surprisingly complex behaviours. In some ways, Buster was not unlike Grey Walter’s vacuum tube-based turtle robots from the late 1940s and was likely the first significant step forward in behavior-based robots since Walter’s turtles. Did you ever wonder what Dave did after writing those books or what he’s up to today? Read on to find out!
Two years after Build Your Own Working Robot was published, Dave Heiserman returned with another robot book that brought behaviour-based robots into the computer age. The new book, called How to Build Your Own Self-Programming Robot, described the construction of Rodney. Starting with no knowledge, Rodney explored and learned about his world through trial-and-error, using what he learned to anticipate future explorations.
All of this behaviour-based robotics stuff was considered a bit kooky by mainstream researchers in the 1970s, who favored top-down strong AI. Why bother building little insect-level robots that puttered around on the floor? Machines needed to understand deep philosophical questions first. They needed to represent the entire world symbolically and reason about it like human brains. Only then would we be ready to put them on wheels or legs. So even though hobbyists almost immediately set to work building Buster clones, Heiserman was largely ignored elsewhere. But mainstream AI was already running into dead ends, entering what’s now known as the AI Winter. And those Buster-building hobbyists were entering universities and beginning to set the stage for a change in the direction of AI research. Before long, Rodney Brooks arrived on the scene and coined the name ‘subsumption architecture’ to describe his own bottom-up, behaviour-based robots. Robotics and AI research were revitalized.
While you aren’t likely to see a mention of Heiserman in any official history of AI or robotics, it’s hard to imagine that his books didn’t play a part in those changes. Even today I find that most hobby roboticists still remember him. Many still have the two books shown above or one of his many other books. I was reminded of this recently when, during a visit to the Dallas Personal Robotics Group, I ran across several copies of Build Your Own Working Robot in the group’s library. I picked one up, opened it, and realized it was the very copy that I had bought in 1976 and later donated to the DPRG. It got me thinking about all of this and I wondered whether Dave might still be around. I set out to find him and, along the way, I collected questions from other robot builders; questions they’d always wanted to ask the author whose books inspired their interest in robotics.
If you click on the link there’s a fascinating interview to go along with all this. But look at the dates. Over 40 years ago this happy-go-lucky madman was inviting people to participate in their own destruction and, instead of jailing him for treason, he’s been allowed to become a living icon to those who would gleefully flush humanity down the drain.
Then again, after watching the vicious screed that is passing for political discourse these days, maybe they have the right idea.
Listen to Bill McCormick on WBIG (FOX! Sports) every Friday around 9:10 AM.
There are some things that we take for granted. For example, back on November 18, 2010, I wrote that humanity was due to be absorbed by its impending robot overlords. Most people seemed to think that was a pretty good idea. Why? Well, just watch the news and you’ll figure it out. It’s no wonder that scientists have just tossed any thought for the future of mankind into the landfill and, instead, are concentrating on making singing mice. Let’s face it, when you turn on the news and see some middle-aged loser, always male (making me sad to possess testosterone), espousing the joys of transvaginal ultrasounds for fun and profit you have to, at least, consider the idea that just chucking all of civilization into the dumper and letting robots give it a whirl does seem appealing.
But it’s not quite that easy. As reported in Gizmodo, the first robot overlords will have brains like babies. So, we’ll need to wait for them to mature before we turn over the reins.
Scientists are modeling artificial intelligence after baby brains. Why would they want to make computers similar to beings whose favorite pastimes are drooling and pooping? It makes perfect sense when you think about how malleable a baby’s gray matter is.
Artificially intelligent machines have a tough time with nuances and uncertainty. But babies, toddlers and preschoolers are great at interpreting such things. So Alison Gopnik, a developmental psychologist at UC Berkeley, and her colleague Tom Griffiths are putting babies to the test to find ways to incorporate their abilities into computer programming. “Children are the greatest learning machines in the universe,” Gopnik says. “Imagine if computers could learn as much and as quickly as they do.”
They’ve already found that at very young ages, babies can test hypotheses, detect statistical patterns and draw conclusions about important matters such as lollipops and toys—all the while adapting to changes.
As smart as computers are, youngsters can solve problems that machines can’t, including learning languages and interpreting causal relationships. If computers could be more like children, it might lead to digital tutoring programs, phone operators, or even robots that can identify genes associated with disease susceptibilities. The researchers are creating a center at the Berkeley’s Institute of Human Development to meld baby and computer research.
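For the curious: Gopnik and Griffiths tend to describe this kind of child learning in terms of probabilistic inference, with each new observation re-weighting the kid’s competing hypotheses. Here’s a minimal sketch of that idea (the lollipop-jar scenario and all the numbers are made up for illustration, not taken from their research):

```python
def bayes_update(prior, likelihoods, observation):
    """One step of Bayesian hypothesis updating: re-weight each
    hypothesis by how well it predicted the observation.
    `prior` maps hypothesis -> probability; `likelihoods` maps
    hypothesis -> {observation: P(observation | hypothesis)}."""
    posterior = {h: prior[h] * likelihoods[h].get(observation, 0.0)
                 for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two hypotheses about a lollipop jar: mostly red vs. mostly green.
prior = {"mostly_red": 0.5, "mostly_green": 0.5}
likelihoods = {
    "mostly_red":   {"red": 0.8, "green": 0.2},
    "mostly_green": {"red": 0.2, "green": 0.8},
}

belief = prior
for draw in ["red", "red", "green"]:  # the toddler's "experiments"
    belief = bayes_update(belief, likelihoods, draw)

print(round(belief["mostly_red"], 3))  # prints 0.8
```

Two red draws and one green, and the little scientist is already 80 percent sure the jar skews red. Drooling and pooping optional.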
And if an angry machine comes storming out of there one day in a baby robot rage, the good news is all you’ll need to do is find its binky.
Well, maybe not a binky, but I’m betting that a simple dodecahedron with a reverse temporally engineered spatial anomaly will serve the same purpose.
But, while our robot overlords are being trained, what about the rest of us? James Temple reports that we now have National Robotics Week to help mold our kids into malleable cyber-servants.
It’s National Robotics Week, that time of year when we kneel before our digital overlords and appease them with offerings of batteries and memory chips. Organizations around the nation have planned more than 150 propitiation ceremonies in a desperate effort to gain favor with our mechanical masters – or at least avoid their fiery eye-beams.
That, at least, was my assumption about the National Robotics Week events transpiring this week. Organizers themselves insist the events are intended to showcase the modern capabilities of robots and inspire our nation’s young to learn the skills necessary to build the next generation of machines.
In one of the first Bay Area events, design software giant Autodesk on Monday turned over its gallery space at One Market Street in San Francisco to robot builders of assorted ages.
There were spider-looking robots scampering across the floor upon legs made out of kitchen brushes. There was a small, Transformer-looking gizmo performing cartwheels and headstands. And there was a boxy little robot that could pick up racquet balls and lift them 5 feet into the air – surely a warm-up for human body flinging.
That last one was created by a team of junior girls from Terra Nova High School in Pacifica for the First Tech Challenge, a national robotics competition for grades nine through 12.
They designed it using Autodesk’s Inventor application and constructed it out of metal beams reminiscent of an Erector Set. The team has already breezed through two qualifying rounds and is on its way to the St. Louis championships later this month.
Emma Filar, who works on the software, explained why she spends most evenings and weekends during contest season working on the project: “It’s kind of geeky, but it just makes sense to me. The code is just a jumbled mess to look at, but then it works. I really like working with it and seeing the robot do what I made it do.”
Isn’t that positively adorkable?
National Robotics Week was started three years ago by iRobot and other companies and research groups in an effort to inspire U.S. students to focus on the fields critical to the future. There’s also the issue of making up educational ground against the many nations that have sped ahead of us.
Put simply: Robots are the rolling, beeping, problem-solving personification of the potential of math, science and engineering.
“Robots very quickly get kids excited about what they can do with these things and help them see the possibilities ahead,” said Nancy Dussault Smith, vice president of marketing at iRobot, the Massachusetts maker of the Roomba.
Robo events multiply
In 2010, the U.S. House passed a resolution officially designating the second week in April as National Robotics Week. There were just a handful of events that first year, but this week will see 152, including at least one in every state plus Washington, D.C.
Stanford University has participated each year. The law school’s Center for Internet and Society will host a Robot Block Party open to the public, as well as a job fair, starting at 1 p.m. on Wednesday. More than 1,000 people attended last year, about a third of them kids, estimates Ryan Calo, director of robotics at the center.
Local companies including Willow Garage, SRI International and Adept will be on hand to show off their robots.
“The main purpose of National Robotics Week is to raise awareness in the U.S. about the potential of this technology to be transformative,” Calo said. “It will make us more productive, help us keep a manufacturing edge, continue advances in health care and make businesses run more effectively.”
At least, right up until the robots plug our minds into the mainframe.
SRI, the famed Menlo Park research institute, plans to unveil its Taurus robot to the public for the first time. It’s basically a modular, portable update of its surgical robot technology designed to defuse bombs.
They call it a “high fidelity telemanipulation tool,” which is a fancy way of saying it has the dexterity to open irregular objects like paper bags and sever tiny wires.
Better lives for people
Willow Garage will be demonstrating the Pr2, an open source robot that university researchers have adapted to fold laundry, bake cookies, flip pancakes and deliver beer.
The Menlo Park lab is also testing the robots with disabled people, and sees great potential to restore some mobility and independence to those paralyzed or blind.
The block party is an opportunity to talk to children and adults about “what robots are and what robots can be in the future,” said Steve Cousins, chief executive of Willow Garage. “When you hear robot, it’s often followed by overlord, no thanks to Hollywood. So as we think about trying to create an industry where robots become a greater part of life, there needs to be an outreach to let people know, ‘Hey, there’s something exciting here.'”
OK, OK. Helping the disabled, disarming bombs, delivering frosty beverages. Maybe these robots aren’t so bad after all.
But I still hope these kids remember to include kill switches.
And every one of those skills will supplant a human worker, freeing them up to be helpful servants to their new masters.
See? It’s all working out for the best.