We live in a Web 2.0 world, but when it comes to fighting Internet-borne security threats, many enterprises are armed with only Web 1.0 weapons, according to a study released this week by San Jose, Calif.-based Secure Computing.
Web 2.0 usage is prevalent in enterprises, the study found, and as a result “the traditional boundary of external versus internal is quickly disappearing.”
“Internet-borne threats that threaten individual consumers are now posing the same threat to enterprise users,” contended the study, which was conducted by Forrester Research.
Perception Gap
The report maintained that there is a gap between perception and reality when it comes to many enterprises’ ability to combat Web 2.0 threats.
“Nearly 97 percent of those we surveyed consider themselves prepared for Web-borne threats, with 68 percent conceding room for improvement,” the researchers found.
“However,” they continued, “when asked how often they experience malware attacks, 79 percent reported more than infrequent occurrences of malware, with viruses and spyware leading the pack.”
“Perhaps more astoundingly,” the researchers added, “46 percent of the organizations we surveyed reported that they spent more than (US)$25,000 in the last fiscal year for malware cleanup exclusively.”
Transient Mischief
The mainstays of enterprise security in the past — anti-virus software and the firewall — are inadequate to deal with the new wave of Web threats, maintained Secure Computing Vice President of Product Marketing Ken Rutsky.
“These attacks are using pretty sophisticated social engineering and short-lived Web sites,” he told the E-Commerce Times.
One technique commonly used to thwart mischief spread by malicious Web sites is URL filtering. It identifies the Web address of a malevolent site and blocks access to it. That, however, takes time — too much time in the Web 2.0 world.
“These Web sites can pop up, create attacks and disappear so quickly now that traditional URL filtering classification just can’t keep up with the threat,” Rutsky asserted.
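For readers unfamiliar with the mechanism, a minimal sketch of the blocklist check at the heart of URL filtering might look like the following (the host names and helper function are hypothetical; real products query vendor-maintained, categorized URL databases with millions of entries):

```typescript
// Minimal sketch of blocklist-based URL filtering.
// The hosts below are hypothetical placeholders.
const blockedHosts = new Set<string>([
  "malicious-example.test",
  "phishing-example.test",
]);

function isBlocked(rawUrl: string): boolean {
  // Normalize the URL and compare its hostname against the blocklist.
  const host = new URL(rawUrl).hostname.toLowerCase();
  return blockedHosts.has(host);
}

// A gateway consults the check before fetching the page:
console.log(isBlocked("http://malicious-example.test/login")); // true
console.log(isBlocked("https://example.com/"));                // false
```

The lag Rutsky describes is structural: a hostile site has to be discovered and classified before it ever enters the set, and a site that lives for only a few hours can finish its work first.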
What enterprises need is a real-time ability to scan and evaluate Web traffic, said Adam Swidler, senior marketing manager for Postini in San Carlos, Calif.
“Today, most enterprises have a real-time ability to scan e-mail traffic that’s coming in, but do not have an ability to scan, in real time, the Web traffic that’s coming in, particularly for malicious code that’s seeking to infect PCs,” he told the E-Commerce Times.
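A minimal sketch of what such inline scanning might look like, using a few hypothetical signatures (real gateways layer signatures with heuristics and reputation data):

```typescript
// Minimal sketch of scanning inbound Web content in real time for
// known malicious patterns; the signatures below are hypothetical.
const signatures: RegExp[] = [
  /eval\(unescape\(/i,                 // common script-obfuscation idiom
  /document\.write\(\s*['"]<iframe/i,  // injected hidden iframe
];

function looksMalicious(body: string): boolean {
  return signatures.some((sig) => sig.test(body));
}

// An inline gateway runs the check on every response before it
// reaches the browser, regardless of the site's reputation.
const page = "<script>eval(unescape('%68%69'))</script>";
console.log(looksMalicious(page)); // true
```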
Poking Holes in Firewalls
Old security approaches that depend exclusively on keeping intruders out of the enterprise are insufficient to counter attacks originating in the electronic ether, maintained Roger Thompson, chief technology officer for Exploit Prevention Labs in Atlanta, Ga.
“When you start a Web browser, it creates an instant tunnel through your firewall, whether it’s a personal firewall or corporate firewall or both, because you started from a trusted place,” he told the E-Commerce Times.
“The old firewall, which is great at keeping out worms and hackers, is instantly breached,” he continued, “and if you go to a Web site of hostile intent, it’s able to return malicious code through the firewall straight to the desktop.”
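Thompson’s “tunnel” is a consequence of how stateful firewalls treat connections the inside initiates. A toy model of that rule (the types and rules are illustrative, not a real firewall API):

```typescript
// Toy model of why the "instant tunnel" exists: stateful firewalls
// allow return traffic for connections the inside initiated.
type Direction = "inbound" | "outbound";
interface Packet { direction: Direction; connectionId: string; }

const established = new Set<string>();

function allow(p: Packet): boolean {
  if (p.direction === "outbound") {
    // The browser opened the connection from a trusted place...
    established.add(p.connectionId);
    return true;
  }
  // ...so whatever the server sends back rides in unchallenged,
  // malicious payload included.
  return established.has(p.connectionId);
}

console.log(allow({ direction: "outbound", connectionId: "pc->site:80" })); // true
console.log(allow({ direction: "inbound",  connectionId: "pc->site:80" })); // true
```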
Where Is the Line?
At the core of Web 2.0 are applications built on JavaScript and Ajax techniques, which make them behave in ways that traditional security technologies can’t cope with, explained Alfred Huger, vice president of engineering for Symantec in Santa Monica, Calif.
“Ajax blurs the line between the Web and my applications on the desktop,” he told the E-Commerce Times. “I’m starting to run applications on the Web versus on my desktop and sometimes, it’s a little bit of both.”
“When we start to blur those lines,” he continued, “security products are not always calibrated in such a way that they’ll be able to catch threats as well as they might have before.”
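To make the blur concrete: an Ajax-style application ships a thin page shell, then pulls its working content over asynchronous requests after any check on the initial URL has already passed. A minimal browser-side sketch, with a hypothetical endpoint and element:

```typescript
// Minimal sketch of the Ajax pattern: the page updates itself with
// content fetched at runtime. The endpoint and element ID are
// hypothetical.
async function loadWidget(): Promise<void> {
  const res = await fetch("https://widgets.example/data");
  const data: { html: string } = await res.json();
  // Desktop-like behavior: the running app rewrites part of itself
  // using freshly fetched remote content.
  document.getElementById("widget")!.innerHTML = data.html;
}

loadWidget();
```

A product that vetted only the original page never inspects what arrives over these follow-on requests, which is the calibration gap Huger points to.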
More Sophistication
Making matters worse is that black hat hackers are surfing closer to the top of the wave of technology than in the past, he added.
The buffer overflow, for instance, is one of the most common ways to compromise a PC, he explained. The technique was first used in 1988, but it wasn’t commonly used by hackers until 1994.
“There’s a significant gap there of six years,” he noted. “On the other hand, as soon as we started to see Ajax deployed, we began to see it exploited.”
“Every new generation of Internet user that appears is more technically apt than the one before it,” he said. “So the types of skills to perform these attacks aren’t rare any more.”