The U.S. government is developing new computer weapons and driving a black market in “zero-day” bugs. The result could be a more dangerous Web for everyone.
Every summer, computer security experts get together in Las Vegas for Black Hat and DEFCON, conferences that have earned notoriety for presentations demonstrating critical security holes discovered in widely used software. But while the conferences continue to draw big crowds, regular attendees say the bugs unveiled haven’t been quite so dramatic in recent years.
One reason is that a freshly discovered weakness in a popular piece of software, known in the trade as a “zero-day” vulnerability because the software makers have had no time to develop a fix, can be cashed in for much more than a reputation boost and some free drinks at the bar. Information about such flaws can command prices in the hundreds of thousands of dollars from defense contractors, security agencies and governments.
This trade in zero-day exploits is poorly documented, but it is perhaps the most visible part of a new industry that in the years to come is likely to swallow growing portions of the U.S. national defense budget, reshape international relations, and perhaps make the Web less safe for everyone.
Zero-day exploits are valuable because they can be used to sneak software onto a computer system without detection by conventional computer security measures, such as antivirus packages or firewalls. Criminals might do that to intercept credit card numbers. An intelligence agency or military force might steal diplomatic communications or even shut down a power plant.
Officers in Mildura, Victoria, said they had had to assist drivers stranded after following the software’s directions.
Some of the drivers had had no food or water for 24 hours.
“Tests on the mapping system by police confirm the mapping system lists Mildura in the middle of the Murray Sunset National Park, approximately 70km [45 miles] away from the actual location of Mildura,” she said.
“Police are extremely concerned as there is no water supply within the park and temperatures can reach as high as 46[C], making this a potentially life-threatening issue.”
The force advised travellers to use an alternative mapping service until the issues had been fixed.
Columbo would have hated the latest trend in crime-fighting. And it definitely would have made Dirty Harry even more unhinged.
But Sherlock Holmes, now he would have been impressed. The logic, the science, the compilation of data: all the stuff of Holmesian detective work.
I’m talking about something known as predictive policing: gathering loads of data and applying algorithms to deduce where and when crimes are most likely to occur. Late last month, the Los Angeles Police Department announced that it will be expanding its use of software created by a California startup named PredPol.
For the past six months, police in that city’s Foothill precinct have been following the advice of a computer, and the result, according to the LAPD, is a 25 percent drop in reported burglaries in the neighborhoods to which they were directed. Now the LAPD has started using algorithm-driven policing in five more precincts covering more than 1 million people.
PredPol’s software, which previously had been tested in Santa Cruz, where burglaries dropped by 19 percent, actually evolved from a program used to predict earthquakes. Now it crunches years of crime data, particularly location and time, and refines it with what’s known about criminal behavior, such as the tendency of burglars to work the neighborhoods they know best.
Before each shift, officers are given maps marked with red boxes of likely hot spots for property crimes, in some cases zeroing in on areas as small as 500 feet wide. They’re told that whenever they’re not on calls, they should spend time in one of the boxes, preferably at least 15 minutes of every two hours. The focus is less on solving crimes, and more on preventing them by establishing a high profile in crime zones the computer has targeted.
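PredPol’s actual model is proprietary, so the details aren’t public. But the basic idea the article describes, scoring small grid cells by recent nearby crime so patrols can be directed to the highest-risk “boxes,” can be sketched in a few lines. Everything below (cell size, half-life, the incident format) is an illustrative assumption, not PredPol’s method:

```python
from collections import defaultdict
from math import exp

def hotspot_boxes(incidents, cell_size=500, halflife_days=30.0, top_k=3):
    """Toy hotspot scorer: rank grid cells by recency-weighted incident counts.

    incidents: list of (x_feet, y_feet, days_ago) tuples.
    Recent incidents count more, reflecting the "near-repeat" idea
    that risk is elevated near recent crimes.
    """
    decay = 0.693 / halflife_days  # ln(2) / half-life, so weight halves every half-life
    scores = defaultdict(float)
    for x, y, days_ago in incidents:
        cell = (int(x // cell_size), int(y // cell_size))  # 500-ft grid cells
        scores[cell] += exp(-decay * days_ago)
    # Return the highest-scoring cells: the "red boxes" on the shift map.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

For example, three recent burglaries clustered in one 500-foot cell will outrank a single three-month-old incident elsewhere, so that cell comes back first.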
What is Apple at heart: a software company, or hardware company?
This is a perennial question. The truth, of course, is that Apple is neither. Apple is an experience company. That they create both hardware and software is part of creating the entire product experience.
But, as a thought experiment, which is more important to you? What phone would you rather carry? An iPhone 4S modified to run Android or Windows Phone 7? Or a top-of-the-line HTC, Samsung, or Nokia handset running iOS 5?
What computer would you rather use? A MacBook running Windows 7, or, say, a Lenovo ThinkPad running Mac OS X 10.7?
For me, the answers are easy. It’s the software that matters most to me. I’d pick a Nokia Lumia running iOS 5 over an iPhone 4S running any other OS, and I’d pick the ThinkPad running Mac OS X over a Mac running Windows. No hesitation.
What do you think Steve Jobs would have chosen, facing the same choices?
Well worth reading for its insightful analysis of the creative thinking that drives Apple.
Symantec Corp took the rare step of advising customers to stop using one of its products, saying its pcAnywhere software for accessing remote PCs is at increased risk of getting hacked after blueprints of that software were stolen.
The announcement is the company’s most direct acknowledgement to date that a 2006 theft of its source code put customers at risk of attack.
Symantec said it was only asking customers to temporarily stop using the product, until it releases an update to the software that will mitigate the risk of an attack.
It acknowledged that some customers would need to continue using the software for “business critical purposes,” saying they should make sure they were using the most recent version of the product and “understand the current risks,” which include the possibility that hackers could steal data or credentials.
Still, it is highly unusual for a software maker to advise customers to disable a product completely while engineers develop an update to fix bugs. Companies typically recommend mitigations that reduce the risk of an attack.
“That’s crazy. That’s pretty much unheard of to just say ‘Stop using it.’ Especially a vendor as large as Symantec,” said H.D. Moore, chief architect of Metasploit, a platform that security experts use to test whether computer systems are vulnerable to attack.
PcAnywhere is also bundled with some titles in Symantec’s Altiris line of software for managing corporate PCs, the company said in a white paper and a note to customers posted to its website overnight, in which it disclosed the warning.
Company spokesman Cris Paden said that Symantec has fewer than 50,000 customers using the stand-alone version of pcAnywhere, which was available for sale on its website for $100 and $200 as of early Wednesday afternoon.
The company last week warned customers of the 2006 theft of the source code, or blueprints, to pcAnywhere and several other titles: Norton Antivirus Corporate Edition, Norton Internet Security, Norton Utilities and Norton GoBack.
It made the announcement after a hacker who goes by the name YamaTough released the source code to its Norton Utilities PC software and threatened to publish the source code of its widely used anti-virus programs. Authorities have yet to apprehend that hacker.
At the time, Paden said that the theft of the code posed no threat as long as customers were using the most recent versions of Symantec’s software, with one exception: users of pcAnywhere might face “a slightly increased security risk.”
In the white paper published early on Wednesday morning, the company indicated the situation was more serious.
“At this time, Symantec recommends disabling the product until Symantec releases a final set of software updates that resolve currently known vulnerability risks,” it said in the white paper. (bit.ly/wPzX7v)
The company also reiterated its previous guidance that users of its other software titles were not at heightened risk because of the breach in 2006.
“The code that has been exposed is so old that current out-of-the-box security settings will suffice against any possible threats that might materialize as a result of this incident,” it said on its website. (bit.ly/wqtxTI)
This is great news; I’ve been taking advantage of free online/open courseware for a few years.
A decade after MIT began to put its teaching materials and lectures online via the OpenCourseWare platform, the university has announced that it will leverage these materials to provide an online certification program, currently termed MITx. Although these certificates won’t have the same weight as an MIT degree, they will indicate mastery of specific subject areas. The whole system will be built on top of an open-source software platform, which may enable other universities to follow in MIT’s footsteps.
The most common way today of preventing users and unauthorised individuals from obtaining information in computer programs that they should not have access to is to have code reviewers check the code manually, looking for potential weaknesses. Niklas Broberg of the University of Gothenburg has developed a new programming language that automatically identifies potential information leaks while the program is being written.
The most common causes of security issues in today’s software are not inadequate network security, poor security protocols or weak encryption mechanisms. In most cases, they are the result of imperfectly written software that contains the potential for information leaks. Users are able to exploit leaks and loopholes that are unintentionally introduced during programming to obtain more information than they should have access to.
Unauthorised users may also be able to manipulate sensitive information in the system, such as that contained in a database. Currently, the most common method of preventing leaks, loopholes and manipulation is to rely on so-called code reviewers, who “proof-read” the code manually in order to identify errors and deficiencies once the programmers are finished with the code.
Paragon identifies potential information leaks while the program is being written
As a solution to these problems, Niklas Broberg has developed the programming language Paragon. The methodology is presented in his thesis “Practical, Flexible Programming with Information Flow Control” which was written in August 2011.
“The main strength of Paragon is its ability to automatically identify potential information leaks while the program is being developed,” says Niklas Broberg. “Paragon is an extension of the commonly-used programming language Java and has been designed to be easy to use. A programmer will easily be able to add my specifications to his or her Java program, thus benefiting from the strong security guarantees that the language provides.”
Two-stage security process
Niklas Broberg’s method has two stages. The first stage specifies how information in the software may be used, who should be allowed access to it and under what conditions. Stage two of the security process takes place during compilation, where the program’s use of information is analysed in depth. If the analysis identifies a risk for sensitive information leaking or being manipulated, the compiler reports an error, enabling the programmer to resolve the issue immediately. The analysis is proven to provide better guarantees than all previous attempts in this field.
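Paragon itself extends Java, and its actual policy language is not shown here. But the two-stage idea, declaring security labels for data up front and then having the compiler reject any flow from secret to public, can be illustrated with a deliberately tiny checker. This Python sketch is an assumption-laden analogy, not Paragon: it tracks only two levels and treats a program as a list of assignments:

```python
# Toy two-level information-flow checker (an analogy, NOT Paragon itself).
# Stage 1: each variable is declared "low" (public) or "high" (secret).
# Stage 2: walk the program's assignments and reject any flow from high to low.

LEVELS = {"low": 0, "high": 1}

def check_flows(labels, assignments):
    """labels: dict mapping variable name -> "low" or "high".
    assignments: list of (target, [source variables]) in program order.
    Returns a list of error strings; an empty list means the program passes."""
    errors = []
    for target, sources in assignments:
        # The label of the assigned value is the join (max) of its sources' labels.
        value_level = max((LEVELS[labels[s]] for s in sources), default=0)
        if value_level > LEVELS[labels[target]]:
            errors.append(f"illegal flow into {target!r} from {sources}")
    return errors
```

With `password` labeled high and `log_line` labeled low, an assignment of `password` into another high variable passes, while writing it into `log_line` is flagged at “compile time,” before the program ever runs, which is the property the thesis describes.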
“Achieving information security in a system requires a chain of different measures, with the system only being as secure as its weakest link,” says Niklas Broberg. “We can have completely effective methods for guaranteeing the authentication of users or encryption of data, but which can be circumvented in practice due to information leaks. Security loopholes in software are currently the most common source of vulnerabilities in our computer systems and it is high time we take these problems seriously.”
A data-logging software company is seeking to squash an Android developer’s critical research into its software that is secretly installed on millions of phones, but Trevor Eckhart is refusing to publicly apologize for his research and remove the company’s training manuals from his website.
Though the software is installed on millions of Android, BlackBerry and Nokia phones, Carrier IQ was virtually unknown until the 25-year-old Eckhart analyzed its workings, recently revealing that the software secretly chronicles a user’s phone experience: apps, battery life and texts. Some carriers prevent users who actually find the software from controlling what information is sent.
Eckhart called the software a “rootkit,” a security term for software installed at a low level on a device, without the user’s consent or knowledge, in order to secretly intercept the device’s workings. Keyloggers and trojans are two examples of such malware.
He also mirrored the Mountain View, Calif. company’s training manuals he’d found on Carrier IQ’s publicly available website. The manuals provide a limited roadmap for how Carrier IQ works, Eckhart said in a telephone interview.
When Carrier IQ discovered Eckhart’s recent research and his posting of those manuals, Carrier IQ sent him a cease-and-desist notice, saying Eckhart was in breach of copyright law and could face damages of as much as $150,000, the maximum allowed under U.S. copyright law per violation. The company removed the manuals from its own website, as well.
On Monday, the Electronic Frontier Foundation announced it had come to the assistance of Eckhart, of Connecticut, who Carrier IQ claims breached copyright law by reposting the manuals.
“I’m mirroring the stuff so other people are able to read this and verify my research,” he said. “I’m just a little guy. I’m not doing anything malicious.”
Fascinating article on how the complex Stuxnet virus was discovered and decoded.
When you’ve seen as many viruses and worms as O Murchu has, you can glance at a piece of malware and know instantly what it does — this one is a keystroke logger, that one is a banking Trojan — and whether it was slapped together sloppily, or carefully crafted and organized. Stuxnet was the latter. It contained multiple components, all compartmentalized into different locations to make it easy to swap out functions and modify the malware as needed.
What most stood out, though, was the way the malware hid those functions. Normally, Windows functions are loaded as needed from a DLL file stored on the hard drive. Doing the same with malicious files, however, would be a giveaway to antivirus software. Instead, Stuxnet stored its decrypted malicious DLL file only in memory as a kind of virtual file with a specially crafted name.
It then reprogrammed the Windows API — the interface between the operating system and the programs that run on top of it — so that every time a program tried to load a function from a library with that specially crafted name, it would pull it from memory instead of the hard drive. Stuxnet was essentially creating an entirely new breed of ghost file that would not be stored on the hard drive at all, and hence would be almost impossible to find.
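Stuxnet’s actual trick involved hooking low-level Windows loader APIs, which there is no portable way to reproduce here. As a rough cross-platform analogy (and only an analogy), Python’s import machinery lets you register a finder that serves a module from memory under a crafted name, so the import succeeds even though no corresponding file exists on disk. The module name and source below are made up for illustration:

```python
# Rough analogy to the "ghost file" technique (not the actual Windows
# mechanism): an import hook that serves a module whose source lives
# only in memory, so nothing corresponding to it is stored on disk.
import importlib.abc
import importlib.util
import sys

GHOST_SOURCE = "def payload():\n    return 'loaded from memory'\n"

class InMemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path=None, target=None):
        if name == "ghost_module":  # intercept only the specially crafted name
            return importlib.util.spec_from_loader(name, self)
        return None  # every other import proceeds normally, from disk

    def create_module(self, spec):
        return None  # fall back to default module creation

    def exec_module(self, module):
        exec(GHOST_SOURCE, module.__dict__)  # "load" the module from memory

sys.meta_path.insert(0, InMemoryFinder())
```

After the finder is registered, `import ghost_module` works and `ghost_module.payload()` returns its string, yet no `ghost_module.py` exists anywhere, so a scanner that only inspects files on disk never sees it. That is the shape of the evasion the Symantec researchers describe, transplanted to a much friendlier setting.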
O Murchu had never seen this technique in all his years of analyzing malware. “Even the complex threats that we see, the advanced threats we see, don’t do this,” he mused during a recent interview at Symantec’s office.
Clues were piling up that Stuxnet was highly professional, and O Murchu had examined only the first 5 kilobytes of the 500-kilobyte code. It was clear it was going to take a team to tackle it. The question was, should they tackle it?
“Everything in it just made your hair stand up and go, this is something we need to look into,” he said.
Four months after launching the alpha version, CERN has today issued version 1.1 of the Open Hardware Licence (OHL), a legal framework to facilitate knowledge exchange across the electronic design community.
In the spirit of knowledge and technology dissemination, the CERN OHL was created to govern the use, copying, modification and distribution of hardware design documentation, and the manufacture and distribution of products. Hardware design documentation includes schematic diagrams, designs, circuit or circuit-board layouts, mechanical drawings, flow charts and descriptive texts, as well as other explanatory material.
Version 1.0 of the CERN OHL was published in March 2011 on the Open Hardware Repository (OHR), the creation of electronic designers working in experimental-physics laboratories who felt the need to enable knowledge exchange across a wide community, in line with the ideals of “open science” being fostered by organizations such as CERN.