What Happened in 2014
2014 will be remembered for many things; it was the year HTML5 was given the green light and the year JavaScript was used to deliver dynamic content more than ever before. We also saw major version releases of important technologies such as WordPress, Google Web Toolkit, and the leading web servers.
Looking back, we can all agree that, in terms of security, 2014 was a disaster. SSL, the protocol designed to protect and encrypt data as it passes through networks, including the public internet, was hit by two major vulnerabilities. Heartbleed (strictly a flaw in the OpenSSL implementation) and POODLE (a design flaw in SSL 3.0 itself) were each big enough to earn a fancy name, a fancy logo, and a worldwide scramble by administrators to patch their servers before attackers could exploit them.
As if that wasn’t enough, a vulnerability that had lain hidden for over 20 years was discovered in the Unix Bash shell. Nicknamed Shellshock, it was another heavy blow to IT security and the open source community.
2014 was also plagued by the exposure of DOM-based cross-site scripting (XSS) vulnerabilities on the websites of large corporations such as Yahoo, PayPal and LinkedIn.
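As a refresher, DOM-based XSS arises when client-side JavaScript writes attacker-controllable input, such as the URL fragment, into the page without encoding it. A minimal illustrative sketch follows; the results element and the q= fragment parameter are invented for illustration, not taken from any of the affected sites:

```typescript
// A crafted link such as /search#q=<img src=x onerror=alert(1)>
// puts attacker-controlled markup into location.hash.
const query: string = decodeURIComponent(
  location.hash.slice(1).replace(/^q=/, "")
);

// Vulnerable: untrusted input flows straight into innerHTML,
// a classic DOM XSS sink, so the injected markup executes.
document.getElementById("results")!.innerHTML = "Results for: " + query;

// Safer: textContent renders the same input as plain text, not markup.
document.getElementById("results")!.textContent = "Results for: " + query;
```

The key point is that the vulnerable data flow never touches the server, which is exactly why server-side filtering alone does not catch this class of bug.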
Closing off the year, Sony Pictures had its confidential data disclosed following a hack on its systems. While American officials concluded that North Korea was behind the attack, Sony Pictures first cancelled the release of the movie “The Interview” but eventually released it both online and in theatres, so the win for the hackers was short-lived.
Taking all this into consideration, we can only hope for a better 2015.
Corporate Challenges in Website Security
Administrators face new challenges; they need to be constantly on the ball to keep systems and servers up to date with the latest releases, and quick to patch any newly discovered vulnerability. Once a vulnerability is disclosed, hackers are quick to start exploiting it, so the speed with which administrators apply a fix is crucial.
Contrary to popular belief, hackers are not simply after credit card details; they are after the personal data needed for identity theft: names, addresses, ID numbers, Social Security numbers and medical histories, all of which can be used to create fake IDs. Credit card details are of course an added bonus, but with financial institutions deploying increasingly sophisticated security measures, they are no longer as valuable as they once were. Identity theft is a lucrative business in the criminal underworld, and a trend we expect to grow in 2015.
We’ve also recently seen cases where hacking attempts appeared to be politically motivated, although so far their impact has mostly been public leaks of data on Pastebin. Attacks attributed to Iran and North Korea both made headlines this year. The financial losses, disclosure of information and damage to reputation can all have severe consequences.
Corporations therefore face two core challenges: speedy detection of exploitable vulnerabilities, and effective vulnerability management.
Security and development teams should first focus on finding vulnerabilities and then on tracking them efficiently: vulnerability detection followed by vulnerability management. Corporate security staff need to ensure that their vulnerability scanner supports the technologies they will be using in 2015, while having the confidence that it can detect new vulnerabilities as soon as they are disclosed.
Corporations with a large online presence who ignore this process run a huge risk of being attacked, with all the negative press and data loss that comes with it. We’ve seen this happen again and again this year, to companies as big as Sony Pictures and the American retailer Target.
A system administrator cannot, however, prevent zero-day attacks. A zero-day attack exploits a previously unknown vulnerability in an application or operating system, so called because developers have had zero days to address and patch the flaw. It is common for individuals or companies who discover zero-day vulnerabilities to sell them to government agencies for use in cyber warfare, a trend we predict will increase in 2015.
Another challenge for information security this year is the looming PCI DSS 3.0 deadline, which will drive strong demand for regular penetration testing and scanning of public-facing web applications, network perimeter servers, and network platforms. The compliance requirements have been updated, and applications and security processes will need to be updated accordingly.
Such vulnerabilities, which can lead to security breaches, are best detected with penetration testing tools, including a web application vulnerability scanner, combined with regular scanning of the perimeter servers.
The Technical Challenge for Website Security Scanners
We asked Bogdan Calin, Acunetix Chief Technical Officer, why he thought effective vulnerability detection is becoming such a challenge.
The biggest issue for vulnerability detection in 2015 and beyond is the difficulty of scanning modern web applications that are heavily JavaScript-based. To scan such applications you need to combine classic crawling and scanning with a web browser engine. Most web scanners cannot do this or, if they can, only in a very limited way.
Another big issue is achieving good coverage of such web applications. Essentially, you need to know (from the browser) which web elements to click on and in what order to click them. If you click an ‘edit’ link after you’ve already deleted the element by clicking its ‘delete’ icon, it’s too late. It sounds simple, but it’s very complicated to automate.
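A rough sketch of why ordering matters, under the assumption of a browser-driven crawler; the BrowserPage interface here is hypothetical, standing in for whatever engine a real scanner embeds:

```typescript
// Hypothetical minimal interface over an embedded browser engine.
interface BrowserPage {
  clickables(): string[];                  // selectors for elements in the live DOM
  click(selector: string): Promise<void>;  // dispatch a click, wait for the DOM to settle
  snapshot(): string;                      // serialized DOM state, for deduplication
  restore(state: string): Promise<void>;   // rewind to a previously captured state
}

// Without the restore() call, clicking 'delete' before 'edit' would destroy
// the row, and the form behind 'edit' would never be discovered or tested.
// Rewinding between clicks is one way to survive destructive actions.
async function crawl(page: BrowserPage, seen: Set<string>): Promise<void> {
  const start = page.snapshot();
  for (const selector of page.clickables()) {
    await page.restore(start);             // undo side effects of the previous click
    await page.click(selector);
    const state = page.snapshot();
    if (!seen.has(state)) {
      seen.add(state);                     // a new application state was reached
      await crawl(page, seen);             // explore it before moving on
    }
  }
}
```

Even this toy version hints at the cost: every destructive click forces a state rewind, and the state space grows with every dynamic element on the page.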
When I talk about modern web applications, I’m referring to two technologies that are rapidly gaining popularity: SPAs (single-page applications) and the REST interfaces they call, typically via AJAX. These are web applications where everything fits on a single web page; resources are dynamically loaded and added to the page as necessary, usually in response to user actions. A classic web scanner will completely fail when scanning such web apps: it will see an HTML page without seeing any inputs. REST interfaces are likewise invisible to classic scanners unless the scanner has a modern web browser engine and good technology for crawling modern web applications.
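To make this concrete, here is a minimal sketch of the kind of page a classic crawler fails on; the /api/items endpoint, the Item shape and the markup are invented for illustration:

```typescript
// What a classic crawler downloads: static HTML with no links or form inputs.
//   <html><body><div id="app"></div><script src="app.js"></script></body></html>
//
// app.js (sketched in TypeScript): everything interesting exists only at runtime.
interface Item {
  id: number;
  name: string;
}

async function render(): Promise<void> {
  // The REST endpoint never appears in the static HTML, so a crawler that
  // does not execute JavaScript will never discover /api/items or its
  // query parameters, and therefore has nothing to test for injection.
  const res = await fetch("/api/items?sort=name");
  const items: Item[] = await res.json();

  // Links are generated client-side, again invisible without a browser engine.
  const app = document.getElementById("app")!;
  app.innerHTML = items
    .map(i => `<a href="#" data-id="${i.id}">${i.name}</a>`)
    .join("");
}

render();
```

A scanner that only parses the downloaded HTML sees an empty div; one that executes the page sees the REST call, its parameters, and the generated links.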
We are getting better and better at scanning SPAs and the REST interfaces between the front-end client and the back-end server with which single-page applications are usually built, so we feel we are ready for the challenge.