But still, you figure if you’re delivering Mexican food, you don’t plan on doing hard time. Yet that’s exactly what happened to a young man in Asheville, N.C., the other night.
Yes, the nice man did tell the cops it was Mexican food and not kilos of cocaine. The shrimp would have been my first clue, but I’m not a cop in North Carolina.
An enzyme found in cheese triggered false drug test results that led North Carolina deputies to think a man with 91 pounds of tortilla dough was actually carrying that much cocaine, the sheriff said.
Antonio Hernandez spent four days in an Asheville jail this month before tests by a state lab showed he was carrying food, not drugs.
A Buncombe County deputy stopped Hernandez on May 1 and found what turned out to be a mix of cheese, shrimp and tortilla and tamale dough in his truck. A portable kit used by deputies changed colors, indicating the mixture was illegal drugs.
Sheriff Van Duncan told The Asheville Citizen-Times he didn’t know until this case that some foods, like cheese, can give false positives on field drug tests. He plans to have deputies talk to the company that makes the tests.
Have these guys ever seen cocaine before? It’s not chunky, I can promise you that. And if it is, then you’re doing it wrong.
But what about lying? If we penalize people for telling the truth, then we must, WE MUST, I TELL YOU, give rewards to those who benefit from lying. And, just to help out our impending robot overlords, a scientist is hitting the trifecta by teaching robots to lie.
Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology.
Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light.
To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source.
The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire.
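The loop the article describes is a classic genetic algorithm: score each genome by how well it foraged, recombine the fittest, sprinkle in random mutation, repeat. Here is a minimal toy sketch of that loop in Python. The gene encoding, fitness function, and all the constants besides the 30 genes and the 50 generations are invented for illustration; the real robots earned fitness from battery charge, not from anything this simple.

```python
import random

random.seed(0)       # deterministic run for reproducibility

GENES = 30           # per the article: 30 software "genes" per robot
POP_SIZE = 20        # population size is an assumption, not from the article
MUTATION_RATE = 0.05 # mutation probability per gene, also an assumption

def random_genome():
    # Each gene is a weight in [-1, 1]. In the experiment, genes tuned
    # light sensing and motor responses; this encoding is made up.
    return [random.uniform(-1, 1) for _ in range(GENES)]

def fitness(genome):
    # Stand-in for "charge gained at the food source": here we just
    # reward genomes whose weights sum high.
    return sum(genome)

def crossover(a, b):
    # Recombine two fit parents at a random cut point.
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome):
    # "A little mutation added in the form of a random change."
    return [random.uniform(-1, 1) if random.random() < MUTATION_RATE else g
            for g in genome]

def next_generation(population):
    # Keep the fittest half as parents, breed a full new population.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    return [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):  # the article reports behavior by generation 50
    population = next_generation(population)
```

What makes the real experiment interesting is that signaling behavior (lighting up near food or poison) was never rewarded directly; it emerged, or was cheated on, as a side effect of selection on foraging.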
By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink.
Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”
So there you go: the squeaky wheel doesn’t get the grease, it gets replaced.
Seriously, teaching robots to lie is a good thing? What’s next? Your GPS saying, “Ignore that cliff, Bob, and just turn left to get your McNuggets.”
On the other hand, they might help us clear out a lot of stupid people.
Hmmmmm.
Listen to Bill McCormick on WBIG AM 1280, every Thursday morning around 9:10!