TL;DR

In this post, I present an XSLeak technique that allows an active network attacker to observe, from an insecure Web origin, the presence or absence of some Secure cookie that may have been set by the origin’s secure counterpart.

Cookies’ crumbly beginnings

Netscape (Lou Montulli, more precisely) invented cookies in 1994 in order to introduce persistent client state in the otherwise stateless Hypertext Transfer Protocol (HTTP). Back in the day, the Web was much more static than it is today. But with the advent of scripting capabilities in browsers shortly afterwards, new rules, which became collectively known as the Same-Origin Policy (SOP), had to be built into browsers in order to protect Web origins from one another.


Fabian Fäßler (a.k.a. LiveOverflow) recently released a video retrospective about the SOP, which is well worth a watch.


However, because cookies predated the SOP and were already in common use, they never played by the SOP’s rules. More specifically, cookies are not (yet?) origin-bound: a cookie jar keys cookies by their (name, domain, path) triple, but a Web origin is a (scheme, host, port) triple. The IETF HTTP Working Group has since been hard at work to incrementally improve cookies’ security model and bridge the gap to the SOP, careful to minimise breakage of existing Web applications in the process.

[Image: a plate of cookies]

Strict Secure Cookies

One such incremental change, nicknamed “Strict Secure Cookies” by the Chromium team, addressed some issues related to the integrity of cookies marked Secure. The Secure cookie attribute, from its inception, prevented insecure Web origins (e.g. origins whose scheme is http) from accessing cookies that were set with that attribute. However, there was a time when insecure origins could still create, delete, or indirectly evict Secure cookies; and the browser would send cookies created by an insecure context to secure origins, which had no way of determining whether those cookies were created in a secure or an insecure context. Therefore, attacks like session fixation from an active network attacker remained a concern.

The Strict Secure Cookies proposal (Mike West, 2015) remedied the situation by forbidding insecure origins from

  1. creating cookies marked Secure (illustrated in the snippet below), and
  2. overlaying (i.e. masking or shadowing) an existing Secure cookie with a non-Secure cookie.
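
As a quick illustration of the first restriction, here is a snippet you could run in the browser console on an insecure origin such as http://example.com (the cookie name bar is purely illustrative):

    // On insecure origin http://example.com, attempts to create a
    // cookie marked Secure are silently ignored.
    document.cookie = 'bar=1; Secure';
    console.log(/(^|; )bar=1(;|$)/.test(document.cookie)); // false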

The second restriction, which I’ll call the overlaying restriction, is pivotal in the XSLeak technique discussed in the rest of this post. Because its mechanics are a bit technical, I’m including the relevant step of the storage algorithm from RFC 6265 bis (version 01) here for completeness:

If the cookie’s secure-only-flag is not set, and the scheme component of request-uri does not denote a “secure” protocol, then abort these steps and ignore the cookie entirely if the cookie store contains one or more cookies that meet all of the following criteria:

  1. Their name matches the name of the newly-created cookie.
  2. Their secure-only-flag is true.
  3. Their domain domain-matches the domain of the newly-created cookie, or vice-versa.
  4. The path of the newly-created cookie path-matches the path of the existing cookie.

Nowadays, this behaviour is supported in all modern browsers, and the change was undoubtedly welcome, because it solved some of cookies’ integrity issues. However, while perusing the whole series of cookie-related RFCs, I realised that Strict Secure Cookies’ overlaying restriction sacrificed some confidentiality for integrity.

Consider a cookie

  • named foo,
  • whose domain field is example.com,
  • whose path field is /, and
  • marked Secure.

The overlaying restriction indeed allows insecure origin http://example.com to determine whether such a cookie exists in the browser’s cookie jar. How? The insecure origin can attempt to set such a cookie (albeit without a Secure attribute) and then immediately test whether it’s present in the cookie string returned by document.cookie.
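
Here is a minimal sketch of such a probe, meant to run in a script served by (or spoofed for) http://example.com; the probe value is arbitrary:

    // Attempt to overlay the (hypothetical) Secure cookie named "foo".
    document.cookie = 'foo=probe; Domain=example.com; Path=/';

    // If the write was silently rejected, a matching Secure cookie
    // must already be present in the cookie jar.
    const secureCookieExists = !/(^|; )foo=probe(;|$)/.test(document.cookie);

    if (!secureCookieExists) {
      // Our probe did get stored: expire it to cover our tracks.
      document.cookie = 'foo=; Domain=example.com; Path=/; Max-Age=-1';
    }

    console.log(secureCookieExists); // true if a Secure "foo" cookie exists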

Therefore, an active network attacker, despite being unable to decrypt their victim’s traffic, may be able to observe the existence of a Secure cookie in their victim’s browser.

When I asked Mike West himself about this on Twitter, he confirmed that the overlaying restriction poses a confidentiality problem but hinted that we shouldn’t expect a proper fix any time soon.


Provided that the target website is vulnerable to cross-site scripting, a similar technique can actually be used to detect the existence of an HttpOnly cookie, because browsers ignore attempts to update an HttpOnly cookie via a “non-HTTP” API (e.g. document.cookie). For more details, refer to step 22 in section 5.5 of RFC 6265 bis (version 10).
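
For instance, an XSS payload running on the target origin could probe for an HttpOnly session cookie along the following lines; the cookie name session is hypothetical, and I’m assuming the existing cookie’s path is /:

    // Try to overwrite the cookie through the "non-HTTP" API.
    // (Note: this clobbers any non-HttpOnly cookie of the same name.)
    document.cookie = 'session=probe; Path=/';

    // HttpOnly cookies never show up in document.cookie, and non-HTTP
    // writes that would replace them are ignored; so if the probe value
    // is absent, an HttpOnly cookie named "session" must exist.
    const httpOnlyCookieExists = !/(^|; )session=probe(;|$)/.test(document.cookie);

    if (!httpOnlyCookieExists) {
      // Our probe did get stored: expire it again.
      document.cookie = 'session=; Path=/; Max-Age=-1';
    }

    console.log(httpOnlyCookieExists);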


Proof of concept

A proof of concept is worth a thousand words. For this example, I’ll use the actual https://example.com website and a cookie named “foo”. In order to simulate an active network attacker, I’ll use Burp Suite Community’s intercepting Web proxy.

  1. Start Burp and the associated proxied browser.
  2. On Burp’s Proxy tab, make sure the intercept functionality is turned off.
  3. Visit secure origin https://example.com.
  4. Optional: Open your browser’s Console and run the following JavaScript code:
    document.cookie='foo=; Secure';
    
    There should now be a cookie whose triple is (foo, example.com, /) and that is marked Secure in your browser.
  5. On Burp’s Proxy tab, turn the intercept functionality on.
  6. Visit insecure origin http://example.com.
  7. On Burp’s Proxy tab, right-click on the request resulting from step 6 and select Do intercept » Response to this request.
  8. Forward the request unaltered.
  9. As the active network attacker, replace the entire response to that request with the following:
    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Set-Cookie: foo=
    Connection: close
    Content-Length: 156
    
    <script>
      const found = /(^|; )foo=/.test(document.cookie);
      if (found) {
        document.cookie="foo=; Max-Age=-1";
      }
      alert(!found);
    </script>
  10. Forward the response spoofed at step 9.

An alert modal will pop up and indicate the truth of the following statement:

A Secure cookie named foo exists in the victim’s browser.

If you created a Secure cookie named “foo” at the optional fourth step, the creation of a non-Secure cookie named “foo” (attempted by the Set-Cookie header of the spoofed response) fails, thanks to the overlaying restriction. Therefore, no cookie named “foo” can be found in the cookie string produced for the insecure origin, and the alert modal shows “true”.

Now clear the Secure cookie named “foo” and repeat all those steps (bar step 4). This time, unencumbered by a Secure counterpart, a non-Secure cookie named “foo” can be created. Therefore, it is present in the cookie string produced for the insecure origin, is immediately expired (by the Max-Age=-1 assignment in the spoofed script) for maximum stealth, and the alert modal shows “false”.


Edit: As @Haxatron reminded me on Twitter, Strict Secure Cookies didn’t actually solve all of Secure cookies’ integrity problems; more specifically, the overlaying restriction doesn’t prevent an insecure origin from beating its secure counterpart in the race to create a specific cookie (albeit without a Secure attribute), as this PoC demonstrates. Cookie name prefixes do help address this concern, though; for more on this topic, see this Security Stack Exchange Q&A.


Impact

Active network attackers (such as shady coffee-shop owners or even ethically challenged ISPs) may be able to weaponise this technique as a login oracle. By targeting a website that relies on a Secure cookie for identifying sessions, attackers may indeed be able to determine whether their victims are logged in to that website.

Even if victims only ever willingly interact with the target over HTTPS, attackers need only spoof the response to their victims’ first request to any insecure origin and redirect them to the target’s insecure origin before spoofing the response to the resulting request (as described in steps 7 to 10 of my PoC).

In the grand scheme—no pun intended—of things, this technique could be abused by nefarious actors in order to profile people on the basis of which websites (news outlets, dating sites, etc.) they are logged in to.

Defences

The best defence against this privacy attack (and many more!) is to set up HTTP Strict Transport Security (HSTS) on your website and, while you’re at it, submit your domain name for HSTS preload. HSTS, if preloaded, effectively prevents browser access to resources on your domain over an insecure channel.
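
Concretely, deploying HSTS boils down to serving a response header like the following over HTTPS; at the time of writing, the preload list requires a max-age of at least one year (31536000 seconds) as well as the includeSubDomains and preload directives:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

However, in case you need (for some questionable reason) to maintain browser access to insecure origins on that domain, HSTS is clearly not an option for you.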

An alternative, more granular defence consists in renaming the cookie(s) that you want to protect to use a cookie name prefix. That does the trick because browsers do not allow insecure origins to set prefixed cookies. The __Secure- cookie name prefix would be enough, but you may also opt for the confusingly named yet more secure __Host- cookie name prefix, if you don’t find it too restrictive.
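
As a sketch, here is how the secure origin would then set such a renamed cookie; the name and value are illustrative, and note that the __Host- prefix additionally requires the Secure attribute, a Path of /, and the absence of a Domain attribute:

    Set-Cookie: __Host-foo=bar; Path=/; Secure

However, if the cookie in question is used for authentication, bear in mind that changing its name will effectively log all your users out.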

Conclusion

A trope of information security is the CIA (Confidentiality, Integrity, Availability) triad. Striking a balance between the three aspects is a difficult exercise, as none of them can typically be changed without detrimental effects on the other two. Strict Secure Cookies is no exception: it traded some confidentiality for integrity. Until a distant future where cookies have truly become origin-bound, the Web will likely remain haunted by cookies’ infelicities.

Acknowledgments

Thanks to Ankur Sundara, who kindly agreed to review a draft of this post before publication.