Tuesday 22 July 2008

Why Security Bugs Are Different

There are a couple of good reasons why security bugs are worse than the 'boring normal' (non-security) ones.
  • Security bugs are profitable, casual bugs are not. Nobody needs to reproduce 'a random spectacular crash due to bad locking' intentionally; that would not make any sense. Functional and reliability issues may happen occasionally, and often they happen predictably, but none of them happen with intention (unless you're a software tester). So, whenever a casual bug appears, some portion of the users is affected (depending on the feature's popularity). Whenever a security hole exists, the chances are high that most of the users are under threat.
  • Casual bugs are visible, security bugs are not. When a casual bug appears, it affects how the system works; otherwise, nobody would report the bug. It breaks the user's explicit expectations. With security, the expectations are usually implicit or are tied entirely to what people call 'security features' (authentication, authorization, cryptography). Nobody complains about security bugs; the system continues to work.
Well, that's it.

Friday 18 July 2008

Torvalds' Plans Revealed

It is now being widely discussed how Torvalds called OpenBSD developers "a bunch of masturbating monkeys". Yesterday he also called Digg users a bunch of "wanking walruses".
Besides that, we know that a new kernel version naming system is coming.

Now, do you see the pattern?
  • Masturbating Monkeys
  • Wanking Walruses
Not very original after Ubuntu, but nice anyway.

Thursday 17 April 2008

OWASP supports malware

Besides what I do for a living, I'm a proud contributor to a nice open source websec scanner, w3af. The guys recently applied to the OWASP Summer of Code 2008 to improve the GUI, and they were selected! Well done!

There is a bizarre thing about it, though. OWASP still lists w3af as malware (see the corresponding section). The only reasonable explanation is that w3af is evil, but its GUI is not.

Monday 24 March 2008

Another domain for the OOB confirmation

In my recent post about CSRF I suggested introducing an additional "Approve" button to the form, which would play the role of an out-of-band confirmation mechanism. Now I'll try to improve that slightly.

First, we have a page with the original form. We also add an IFRAME, hidden for now, which is served from another domain. The trick is in what happens at the moment of form submission. After the "Submit" button is pressed, two things happen one after the other:

  1. The form data is submitted asynchronously and the form is made invisible.
  2. The previously hidden IFRAME appears in place of the form, on top of the other content. This frame displays a confirmation warning and asks the user to click somewhere inside it. Then the user clicks, the confirmation token is sent to the server, and the transaction is committed.
To prevent the frame from being relayed to an adversary's site, I'd suggest using a watermark logo in the background of the frame.
Also, I think that confirming the transaction after the data has been sent should work better.
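
To make the flow concrete, here is a minimal sketch of the main page, assuming a made-up /transfer endpoint and a confirm.example.com domain for the approval frame; none of these names come from the original proposal.

  <!-- Page on http://site.com: the original form plus an initially
       hidden confirmation IFRAME served from another domain. -->
  <form id="payment" action="/transfer" method="post">
    <input name="amount">
    <button type="button" onclick="submitForm()">Submit</button>
  </form>
  <iframe id="confirm" src="http://confirm.example.com/approve"
          style="display:none"></iframe>
  <script>
    function submitForm() {
      var form = document.getElementById('payment');
      // 1. Submit the form data asynchronously and hide the form.
      var xhr = new XMLHttpRequest();
      xhr.open('POST', form.action);
      xhr.send(new FormData(form));
      form.style.display = 'none';
      // 2. Show the previously hidden IFRAME in place of the form; the
      //    user's click inside it sends the out-of-band confirmation
      //    token and commits the transaction.
      document.getElementById('confirm').style.display = 'block';
    }
  </script>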

Meta refresh vs. HTTP Redirect

There is well-known advice not to link directly to external resources if there is a chance that URL-based sessions are used: the session ID would simply leak in the Referer header.
The well-known alternative is to use a jump page: you publish a link to your own site (without a session ID in it) and then redirect the user outwards. I've suddenly found out (I had never thought about it before) that when you use the standard HTTP redirection mechanism, the original Referer is retained. That is, if you're on the page http://site1.com/a and click a link to http://site1.com/b which then redirects you to http://site2.com/, the Referer that site2 receives is http://site1.com/a.

However, if you use <meta http-equiv="refresh" content="0;url=http://site2.com">, the Referer is not sent. Strange, I could not find that mentioned anywhere on the web...
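
For reference, such a jump page might look like the sketch below; the /out path and the target URL are placeholders of mine, not anything from a real site.

  <!-- A jump page published at, say, http://site1.com/out (the path is
       my own example). The meta refresh forwards the visitor onwards
       without passing the original Referer along. -->
  <html>
    <head>
      <meta http-equiv="refresh" content="0;url=http://site2.com/">
    </head>
    <body>
      If you are not redirected automatically,
      <a href="http://site2.com/">continue to site2.com</a>.
    </body>
  </html>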

Friday 21 March 2008

No CSRF in the presence of an XSS

Jeremiah Grossman published a list of some interesting unsolved websec problems.
Among them is how to protect a site against CSRF without having to deal with the XSS stuff. That reminds me of an interesting paper by Martin Johns which considers a deferred JavaScript loading mechanism. The session cookie is stored on a separate domain and is thus unavailable to an XSS payload. There were some unsolved issues with that approach discussed by kuzza55 which I don't remember now (see the discussion).

My 5 cents

Last night I got an idea which I'll explain now. Night ideas are often silly, so this one needs a review. Let's go step by step.

The first approximation does not consider usability issues; the possible improvements (as well as threats) are discussed later (in another post).

Let us have a form on a site hosted on the domain http://site.com and vulnerable to XSS. Let's now split the form into two parts. The first part is the form itself with a good old token. The second part is hosted on another domain, say, http://a.site.com. It consists of a button saying "Approve" and a hidden field with another token (which is coupled with the first one on the server). This approval form is injected into the first one using an IFRAME.

The user enters the data, then clicks "Approve". The token is sent (possibly asynchronously) to the server, and the form identified by the coupled token is marked as trusted. Then the user clicks "Submit" and the request is accepted.
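
A rough sketch of how the two coupled parts might be laid out; the field names, token values, and URLs are my own placeholders, not a reference implementation.

  <!-- Main form on http://site.com, which may be exposed to XSS. -->
  <form action="http://site.com/transfer" method="post">
    <input name="amount">
    <input type="hidden" name="form_token" value="TOKEN_A">
    <input type="submit" value="Submit">
  </form>
  <!-- Approval part, injected from another domain, so an XSS payload
       running on site.com cannot read or script its contents. -->
  <iframe src="http://a.site.com/approve?form=TOKEN_A"></iframe>

  <!-- Page served inside the IFRAME by http://a.site.com/approve; on
       the server, TOKEN_B is coupled with TOKEN_A. Clicking "Approve"
       marks the main form as trusted. -->
  <form action="http://a.site.com/approve" method="post">
    <input type="hidden" name="approve_token" value="TOKEN_B">
    <input type="submit" value="Approve">
  </form>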

Hello, world!

Subj.