As impressed as we all were by Petman, DARPA and Boston Dynamics’ remarkably agile and nimble humanoid, it’s about to be upstaged by the company’s latest and greatest robot creation: ATLAS. Designed to compete in DARPA’s upcoming Robotics Challenge, ATLAS actually gave us a glimpse of its impressive skills back when it was just a prototype, but as it nears completion we’re now seeing just how damn impressive it really is.
Not only does it keep its balance and remain standing after being hit by a 20-pound wrecking ball, ATLAS also tackles a treadmill with ease, staying on two feet while two-by-fours and other obstacles are tossed in its path. Is it time to be really impressed, or really scared?
DARPA and the US Army have taken the wraps off ARGUS-IS, a 1.8-gigapixel video surveillance platform that can resolve details as small as six inches from an altitude of 20,000 feet (6km). ARGUS is by far the highest-resolution surveillance platform in the world, and probably the highest-resolution camera in the world, period.
ARGUS, which would be attached to a UAV (such as the Predator) and flown at an altitude of around 20,000 feet, can observe an area of 25 square kilometers (10sqmi) at any one time. If ARGUS were hovering over New York City, it could observe half of Manhattan. With two ARGUS-equipped drones, the US could keep an eye on the entirety of Manhattan, 24/7.
It is the definition of “observe” in this case that will blow your mind, though. With an imaging unit that totals 1.8 billion pixels, ARGUS captures video (12 fps) that is detailed enough to pick out birds flying through the sky, or a lost toddler wandering around. These 1.8 gigapixels are provided via 368 smaller sensors, which DARPA/BAE says are just 5-megapixel smartphone camera sensors. These 368 sensors are focused on the ground via four image-stabilized telescopic lenses.
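The quoted figures hold up to a quick back-of-the-envelope check. The 368 sensors, 5-megapixel figure, 6-inch resolution, and 25-square-kilometer coverage are from the text; the rest is simple arithmetic:

```python
# Back-of-the-envelope check of the ARGUS-IS numbers quoted above.
sensors = 368
pixels_per_sensor = 5_000_000          # 5-megapixel smartphone camera sensors
total_pixels = sensors * pixels_per_sensor
print(f"{total_pixels / 1e9:.2f} gigapixels")  # -> 1.84 gigapixels

# Pixels needed to cover 25 km^2 at 6 inches (~0.15 m) per pixel:
pixel_edge_m = 6 * 0.0254              # 6 inches in metres
coverage_m2 = 25 * 1_000_000           # 25 km^2 in m^2
pixels_needed = coverage_m2 / pixel_edge_m**2
print(f"{pixels_needed / 1e9:.2f} gigapixels needed")  # -> ~1.08
```

So roughly 1.08 gigapixels would be needed for six-inch detail over the full footprint, comfortably within the 1.84 gigapixels the sensor array actually provides.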
Dogs do it all for the military: sniff for bombs, detect narcotics and rescue hapless humans. But to recruit the best canine squadmates, the Pentagon’s blue-sky researchers are working on a plan to scan their brains — and figure out how dogs think. Belly rubs won’t cut it anymore.
According to a new research solicitation from Darpa, the project — adorably called FIDOS, for “Functional Imaging to Develop Outstanding Service-Dogs” — touts the idea of using magnetic resonance imaging (MRI) to “optimize the selection of ideal service dogs” by scanning their brains to find the smartest candidates. “Real-time neural feedback” will optimize canine training. That adds up to military pooches trained better, faster and — in theory — at a lower cost than the roughly $20,000 it currently takes to train a dog with old-fashioned discipline-and-reward methods.
Though it’s still very much in the research stage, the plan owes many of its underpinnings to several recent discoveries about the brains of our canine friends.
Video highlighting successes released before next solicitation
Inserting new capabilities into a satellite is no simple task. Doing so as that satellite hurtles through space 22,000 miles above the Earth is a bit more challenging still. DARPA’s Phoenix program, which hopes to repurpose retired satellites while they remain in orbit, seeks to fundamentally change how space systems could be designed here on Earth and then sustained once in space.
This video illustrates some of the program’s technical progress since it began in July 2012. As performers demonstrate the progress of their work in the lab, an artist’s simulation of a fully realized Phoenix demonstration scenario runs in the background to help illustrate how the technology would be applied. Demonstrations include flight-capable robotic arm manipulation with simulated space contact dynamics, tool development for the robotic arm with unique gripping and adhesion capabilities, autonomous robotic control software, and hyperdexterous conformable robot modules in operation, among others.
“Today, satellites are not built to be modified or repaired in space,” said Dave Barnhart, DARPA program manager. “Therefore, to enable an architecture that can re-use or re-purpose on-orbit components requires us to create new technologies and new capabilities. This progress report gives the community a better sense of how we are doing on the challenges we may face and the technologies needed to help us meet our goals.”
It’s a digital world. Or is it?
NASA technologist Jonathan Pellish isn’t convinced. In fact, he believes a computing technology of yesteryear could potentially revolutionize everything from autonomous rendezvous and docking to remotely correcting wavefront errors on large, deployable space telescope mirrors like those to fly on the James Webb Space Telescope.
“It’s fast forward to the past,” Pellish said, referring to an emerging processing technology developed by a Cambridge, Mass.-based company, Analog Devices Lyric Labs.
So convinced is Pellish of its potential that he is meeting with scientists and engineers to explain the technology’s capabilities, and is using fiscal year 2013 NASA Center Innovation Fund resources to build printed circuit boards that researchers can use to test the technology’s performance for a range of scientific applications. Pellish works at NASA’s Goddard Space Flight Center in Greenbelt, Md. He also has carried out preliminary radiation-effects studies to see how the technology’s architecture holds up under the extreme environment encountered in space.
“I wouldn’t do it unless I really believed in it,” Pellish added. “This is one of the few things I’ve seen that is really different than what others are trying to do. I think this technology could fundamentally change the way we carry out onboard processing.”
The new technology is an analog-based microchip developed with significant support from the Defense Advanced Research Projects Agency (DARPA). Instead of relying on tiny switches or transistors that turn on and off, producing streams of ones and zeroes that computing systems then translate into something meaningful to users, the company’s new microchip is more like a dimmer switch. It can accept inputs and calculate outputs that are between zero and one, directly representing probabilities, or levels of certainty.
“The technology is fundamentally different from standard digital-signal processing, recognizing values between zero and one to accomplish what would otherwise be cost prohibitive or impossible with traditional digital circuits,” Pellish said.
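DARPA and Analog Devices have not published the chip’s internals, but a loosely related hardware idea, stochastic computing, gives a feel for how circuits can operate directly on values between zero and one: a probability is encoded as the density of ones in a random bitstream, and a single AND gate then multiplies two independent probabilities. This is only an illustrative analogy, not the Lyric Labs design; a minimal software sketch of the encoding:

```python
import random

def to_bitstream(p, n, rng):
    # Encode a value p in [0, 1] as a random bitstream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    # Decode: the fraction of ones estimates the encoded value.
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a = to_bitstream(0.8, n, rng)
b = to_bitstream(0.5, n, rng)

# Bitwise AND of two independent streams multiplies their probabilities:
product = from_bitstream([x & y for x, y in zip(a, b)])
print(product)  # close to 0.8 * 0.5 = 0.4
```

The appeal in hardware is that one gate replaces an entire digital multiplier, at the cost of a small, statistically bounded error, which matches the trade-off the article describes.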
How would non-digital chips work? Darpa paints a picture. Image: Darpa
By definition, a computer is a machine that processes and stores data as ones and zeroes. But the U.S. Department of Defense wants to tear up that definition and start from scratch.
Through its Defense Advanced Research Projects Agency (Darpa), the DoD is funding a new program called UPSIDE, short for Unconventional Processing of Signals for Intelligent Data Exploitation. Basically, the program will investigate a brand-new way of doing computing without the digital processors that have come to define computing as we know it.
The aim is to build computer chips that are a whole lot more power-efficient than today’s processors — even if they make mistakes every now and then.
The way Darpa sees it, today’s computers — especially those used by mobile spy cameras in drones and helicopters that have to do a lot of image processing — are starting to hit a dead end. The problem isn’t processing. It’s power, says Daniel Hammerstrom, the Darpa program manager behind UPSIDE. And it’s been brewing for more than a decade.
Researchers in the Department of Biological Engineering at MIT will receive up to $32 million over the next five years from the Defense Advanced Research Projects Agency (DARPA) and the National Institutes of Health (NIH) to develop a technology platform that will mimic human physiological systems in the laboratory, using an array of integrated, interchangeable engineered human tissue constructs.
A cooperative agreement between MIT and DARPA worth up to $26.3 million will be used to establish a new program titled “Barrier-Immune-Organ: MIcrophysiology, Microenvironment Engineered TIssue Construct Systems” (BIO-MIMETICS) at MIT, in collaboration with researchers at the Charles Stark Draper Laboratory, MatTek Corp. and Zyoxel Ltd. The BIO-MIMETICS proposal was one of two award winners selected as part of the Microphysiological Systems (MPS) program at DARPA, and will be led by MIT professor Linda Griffith in collaboration with MIT professors Steven Tannenbaum, Darrell Irvine, Paula Hammond, Eric Alm and Douglas Lauffenburger. Jeffrey Borenstein and Shankar Sundaram will lead the work at Draper Laboratory, Patrick Hayden will lead the work at MatTek, and David Hughes will lead the work at Zyoxel.
The BIO-MIMETICS program will combine technologies developed at MIT, Draper Laboratory, MatTek and Zyoxel to create a versatile microfluidic platform that can incorporate up to 10 individual engineered human microphysiological organ system modules in an interacting circuit. The modules will be designed to mimic the functions of specific organ systems representing a broad spectrum of human tissues, including the circulatory, endocrine, gastrointestinal, immune, integumentary, musculoskeletal, nervous, reproductive, respiratory and urinary systems. The goal of the program is to create a versatile platform capable of accurately predicting drug and vaccine efficacy, toxicity, and pharmacokinetics in preclinical testing. The BIO-MIMETICS team anticipates that the platform will be suitable for use in regulatory review, amenable to rapid translation to the biopharmaceutical research community, and adaptable for integration of future technologies (such as advances in stem cell technologies and personalized medicine).
Depending on whom you are talking to, there were several very different reasons why the Internet was created: to enable military command and control (Curtis LeMay told me that), to create a new communication and commerce infrastructure (Al Gore), or simply to advance the science of digital communications (lots of people). But Bob Taylor says the Internet was created to [save] money. And since Bob Taylor was, more than anyone, the guy who caused the Internet to be created, well, I’ll believe him.
Taylor, probably best known for building and managing the Computer Systems Laboratory at Xerox PARC, from which emerged advances including Ethernet, laser printing, and Smalltalk, was before that the DARPA program manager who commissioned the ARPANet, predecessor to the Internet. Taylor was followed in that DARPA position by Larry Roberts, Bob Kahn, and Vint Cerf — all huge names in Internet lore — but someone had to pull the trigger and that someone was Bob Taylor, who was tired of buying mainframes for universities.
This was all covered in my PBS series Nerds 2.01: A Brief History of the Internet, by the way, which appears to be illegally available on YouTube if you bother to look a bit.
As DARPA’s point man for digital technology, Taylor supported research at many universities, all of which asked for expensive mainframe computers as part of the deal. With money running short one budget cycle, Taylor wondered why universities couldn’t share computing resources. And so the ARPANet was born as a digital network to support remote login. And that was it — no command and control, no eCommerce, no advancing science, just sharing expensive resources.
The people who built the ARPANet, including the boys and girls of BBN in Boston and Len Kleinrock at UCLA, loved the experience and turned it into a great technical adventure. But the people who mainly used the ARPANet, which is to say all those universities that didn’t get shiny new mainframes, hated it for exactly that reason.
In fact I’d hazard a guess that thwarting the remote login intent of DARPA may have been the inspiration for many of the non-rlogin uses we have for the Internet today.
It’s typical in an election year to see an administration spend money on new initiatives. Yet a new cost-cutting initiative unveiled back in March has gone largely unnoticed by the mainstream technology media. Called the “Big Data Research and Development Initiative,” the program is focused on improving the U.S. federal government’s ability to extract knowledge and insights from large and complex collections of digital data, and it promises to help solve some of the nation’s most pressing challenges.
The program spans several federal agencies, including NSF, HHS/NIH, DOE, DOD, DARPA and USGS, which have pledged more than $200 million in new commitments that they promise will greatly improve the tools and techniques needed to access, organize, and glean discoveries from huge volumes of digital data.