Can you use StackOverflow to find bugs at scale?

Initial publication: August 4th, 2022

There are some questions that almost every security engineer seems to independently think up and ponder at some point.

Of course there's the classic: "does dropping 0days have a material effect on stock prices?".

But another such question is: "can you find bugs at scale by identifying a vulnerable StackOverflow answer?". Since I recently stumbled upon an interesting lead for this, I decided to actually investigate. The goal was just to see if by starting from a vulnerable answer I could find at least one vulnerable server.


Rationale

Black box targets, like game servers, can be very challenging targets to try to find bugs in.

Even when they implement open standards/protocols which we have documentation and RFCs for, these describe only how a theoretical, fully compliant implementation would behave, which can differ significantly from the average implementation in the wild. For example, according to standards, "()<>[]:,;@\\"!#$%&'-/=?^_{}| ~.a"@[IPv6:2001:db8::1], should be a valid email address, but in practice most servers would not accept it.

In contrast, code snippets from StackOverflow are often copied directly into codebases. Being able to perform even limited source-level analysis on a codebase is tremendously more efficient than the inherently guesswork-driven "black box testing" alternative.

Put another way: the identification of vulnerable StackOverflow code reduces the problem of black box testing game servers down to just finding affected servers, which may be significantly easier, especially if the snippet has been widely copied.


The candidate

For this to work, you'd need to find a topic about a very specific, attacker-facing feature: one that can be implemented in relatively little code (so it isn't just offloaded to a library), and niche enough that the answers haven't been reviewed by too many people.

The thing that caused me to look into this was the atrocity of "login with Game Center" advice.


How should a server handle Game Center login requests?

Here's a snippet from some old Apple documentation describing the steps a game server should take to handle login with Game Center requests (as I understand it, this flow still exists in many old games to provide a way for users to migrate old accounts to the current Apple authentication flow):

2. (Receive) the publicKeyURL, signature, salt, and timestamp parameters (...)

3. Use the publicKeyURL on the third party server to download the public key.

4. Verify with the appropriate signing authority that the public key is signed by Apple.

5. Retrieve the player’s playerID and bundleID.

6. Concatenate into a data buffer the following information, in the order listed:

    The playerID parameter in UTF-8 format

    The bundleID parameter in UTF-8 format

    The timestamp parameter in Big-Endian UInt-64 format

    The salt parameter

7. Generate a SHA-256 hash value for the buffer.

8. Using the public key downloaded in step 3, verify that the hash value generated in step 7 matches the signature parameter provided by the API.

If the generated and retrieved signatures match, the local player has been authenticated.
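The buffer construction from steps 6 and 7 can be sketched in Python (the parameter values here are illustrative placeholders, not real Game Center data):

```python
import hashlib
import struct

def signed_payload(player_id: str, bundle_id: str, timestamp: int, salt: bytes) -> bytes:
    """Concatenate the fields in the documented order (step 6)."""
    return (
        player_id.encode("utf-8")
        + bundle_id.encode("utf-8")
        + struct.pack(">Q", timestamp)  # big-endian UInt64 (step 6)
        + salt
    )

# Step 7: the server hashes this buffer; in step 8 it checks the signature
# parameter against this hash using the public key downloaded in step 3.
payload = signed_payload("G:123456789", "com.example.game", 1660000000000, b"\x01\x02\x03\x04")
digest = hashlib.sha256(payload).digest()
```

Note that the signature only covers these four fields; nothing in the buffer binds the request to a particular public key, which is exactly why verifying the key's provenance in step 4 matters.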



What does StackOverflow say?

There are three relevant threads on Game Center authentication (1, 2, 3).

Step 4 is a common point of deviation from the documentation, with the vast majority of code examples failing to include it entirely.

Authors who bother to mention that they've skipped this step at all present it as wholly unnecessary unless you have "extra-paranoia", or, in one case, as just a "minor flaw".

"It has minor flaw - Apple certificate is not verified"


"note that verification of the public certificate after downloading is not done here, similar to the python answer above using OpenSSL. Is this step really necessary anyway?"


"Ideally, we would also check that the certificate came from a trusted source. Extra-paranoia to check that it is Apple's Game Center certificate would be good, too."


The last comment proposes an extra step of checking the requested certificate URL, a check that is commonly treated as a mitigation and used to justify skipping step 4:

"What are the security implications of not verifying the Apple certificate before using? I'm wondering if it would be enough to check that the domain of the public key url is coming from Apple (sandbox.gc.apple.com or production server)"


"If we check that the public key URL is going to an apple domain and it's using ssl (https), doesn't that mean that it's protected from man-in-the-middle attacks?"



Implications of missing step 4

Step 4 is a crucial step, and skipping it is not just a "minor flaw". Without it, there is essentially no authentication: anyone can sign Game Center login requests for any Apple player ID with their own certificate, and those requests will be accepted.

These player IDs are not secret and can easily be found through leaderboard / friend list APIs (this is partly what motivated the current authentication system, which scopes player IDs per game, and then provides further context-scoped IDs for public leaderboards, etc.).

Checking the URL before fetching the certificate does mitigate the missing authentication step, but it leaves your authentication exploitable via any open redirect on any Apple subdomain, and that is the best-case scenario, assuming the URL check itself is implemented correctly in the first place.


Bad URL checks

Two of the samples which rely on the URL-checking mitigation for their incomplete implementations were bypassable:

    #Verify the url is https and is pointing to an apple domain.
    parts = urlparse.urlparse(gc_public_key_url)
    domainName = "apple.com"
    domainLocation = len(parts[1]) - len(domainName)
    actualLocation = parts[1].find(domainName)
    if parts[0] != "https" or domainName not in parts[1] or domainLocation != actualLocation:
        logging.warning("Public Key Url is invalid.")
        raise Exception


func IsValidCertUrl(fullUrl string) bool {
    //https://sandbox.gc.apple.com/public-key/gc-sb-2.cer
    uri, err := url.Parse(fullUrl)
    if err != nil {
        log.Printf("not a valid url %s", fullUrl)
        return false
    }

    if !strings.HasSuffix(uri.Host, "apple.com") {
        log.Printf("not a valid host %s", fullUrl)
        return false
    }

    if path.Ext(fullUrl) != ".cer" {
        log.Printf("not a valid ext %s, %s", fullUrl, path.Ext(fullUrl))
        return false
    }
    return true
}


In each case, the logic is bypassable by registering domains like endswithapple.com.
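The flaw shared by both samples can be reproduced in a few lines. This sketch mirrors their suffix logic and contrasts it with a check anchored at a label boundary (the function names are mine, and the stricter version is one common fix, not the actual patch any of these projects shipped):

```python
from urllib.parse import urlparse

def vulnerable_check(url: str) -> bool:
    # Mirrors the flawed samples: any host merely *ending* in "apple.com" passes.
    parts = urlparse(url)
    return parts.scheme == "https" and (parts.hostname or "").endswith("apple.com")

def stricter_check(url: str) -> bool:
    # Anchors the match at a label boundary, so "endswithapple.com" fails.
    parts = urlparse(url)
    host = parts.hostname or ""
    return parts.scheme == "https" and (host == "apple.com" or host.endswith(".apple.com"))

print(vulnerable_check("https://endswithapple.com/c.cer"))  # True: the bypass
print(stricter_check("https://endswithapple.com/c.cer"))    # False
print(stricter_check("https://sandbox.gc.apple.com/public-key/gc-sb-2.cer"))  # True
```

Even the stricter version still trusts every host under apple.com, so the open-redirect exposure described above remains; only pinning the exact known certificate hosts (or doing step 4 properly) closes that.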


Finding a vulnerable server

Given that the only public example code of Game Center authentication written in Go was vulnerable, I figured there was a pretty good chance I could use this knowledge to find a vulnerability against a real game server.

By searching the App Store for multiplayer games, and cross-referencing job postings at those companies to determine which languages were in use, I found a company using Go, Lockwood Publishing, whose game Avakin has a server old enough that it should still support "login with Game Center".


Proof-Of-Concept

We can verify the presence of the URL check mitigation because a Game Center request pointing to a certificate at https://wikipedia.org/cert.cer is rejected instantly, whilst one pointing to https://apple.com/cert.cer takes about half a second longer due to the additional request the server has to make.
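This timing probe can be sketched as follows. The request sender is a stand-in for whatever client code actually submits the login request, and the 0.25-second margin is a guess, not a measured threshold:

```python
import time

def time_attempt(send_login_request, public_key_url):
    """Time one login attempt whose publicKeyURL is set to public_key_url.
    send_login_request is a stand-in for the game's real login client."""
    start = time.monotonic()
    send_login_request(public_key_url)
    return time.monotonic() - start

def url_check_present(t_rejected_domain, t_apple_domain, margin=0.25):
    """If the Apple-domain attempt takes noticeably longer, the server is
    likely fetching the certificate only after its URL check passes."""
    return t_apple_domain - t_rejected_domain > margin
```

In practice you'd average several attempts per URL to smooth out network jitter before trusting the comparison.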

To test the bypass, I then registered the domain endswithapple.com, configured some free web hosting, and uploaded a rudimentary IP address logging PHP script to www.endswithapple.com/c.cer/index.php (so it would trigger on requests to www.endswithapple.com/c.cer):

<?php

$myfile = fopen("/tmp/test.txt", "a");
fwrite($myfile, $_SERVER['REMOTE_ADDR']);
fwrite($myfile, "<br>");
fclose($myfile);

$connections = file_get_contents('/tmp/test.txt');
echo $connections;

?>

Sure enough, the bypass worked: I was seeing requests from an IP address belonging to Amazon Data Services and resolving to the game company's hostnames (gw1.lkwd.net, gw2.lkwd.net).

The final step would be to verify the original bug (the missing Apple certificate check), but since I didn't actually want to log in as other users, the rest of the PoC is theoretical from this point. Exploiting it would just require generating a certificate, and then signing the request with its associated private key:

openssl req -x509 -sha256 -nodes -newkey rsa:4096 -keyout example.com.key -days 730 -out example.com.pem
openssl x509 -inform PEM -in example.com.pem -outform DER -out certificate.cer

The full PoC script to generate signed login requests to this game is provided here.

I emailed Lockwood Publishing, and they fixed the vulnerability. They did not offer a bug bounty.


Conclusion

Since web/mobile games don't typically award bug bounties (my last web vulnerability didn't get one either), I didn't cover a large sample of games, so we can't draw broad conclusions.

But by finding at least one affected game server, I've at least anecdotally demonstrated that you can indeed identify vulnerabilities in StackOverflow answers, and then use that knowledge to successfully find instances of those vulnerabilities in black box servers.


What could StackOverflow do?

In my opinion, there needs to be some way to mark a comment or revision as being security related, to distinguish it from regular chatter.

These security-related updates are important enough that, by default, you should be automatically subscribed to them whenever you copy code from an answer (by registering the browser's oncopy event).

Under this system, any logged in user who copied the first vulnerable domain check would have been notified both when the comment pointing out the vulnerability was made, and when the revision that fixed it was made, but not from the other chatter under the answer.

For the second vulnerable domain check, I did not have enough reputation to leave a comment as a new account, but could propose a suggested edit. I never heard back on its approval/rejection.