GitHub, and Git in general, are designed for text: text deltas, text compression, and so on.
A distributed version control system where everyone gets a copy of everything in history works well when the typical repository is a few megabytes of compressed text.
Large Git repositories reach many megabytes. It's rare, but sometimes a big system hits a gigabyte, tripping safety precautions and needing manual approval, and clones take ages.
This model is horrible for games.
Games are almost all data. The entire code base can be smaller than a single image. The entire code history of a AAA game is likely smaller than the source of the game's startup movie clips.
Not even considering history, source data for large games is often better measured in terabytes than gigabytes. On my last project we had a 10 TB volume for the working copy of the latest built (not source) data. Artists' working files alone, their Photoshop docs, can sometimes hit hundreds of megabytes apiece when layers are thrown around. Source audio uses lossless compression, and audio streams reach gigabytes quickly. One small change produces tremendous data churn, especially when the only diff tool assumes diffs are text rather than images, compressed audio, or compressed video.
A distributed version control system where everyone gets a copy of everything in history is awful for that.
People have hacked together workarounds where version control systems like Git or Mercurial keep no actual history of the data, just a history of pointers to where the data (hopefully) resides. If people are amazing with storage and data persistence you might even be able to reproduce an old build, like one generated a few days or weeks back.
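As a concrete example of that kind of workaround, here is a minimal sketch assuming Git LFS (the post doesn't name a specific tool, and the file patterns and asset path below are made up). The repository versions only small pointer files; the real binaries live on a separate server that has to stay alive and backed up if old builds are ever going to come back:

    # Minimal Git LFS sketch: the repo stores pointer files, the binaries live elsewhere.
    git lfs install
    git lfs track "*.psd"            # illustrative patterns; use whatever your assets need
    git lfs track "*.wav"
    git add .gitattributes
    git add Art/HeroPortrait.psd     # hypothetical asset path
    git commit -m "Track large binaries as LFS pointers"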
Subversion is sometimes used by novice groups who do not realize that SVN keeps both a pristine copy and a huge pile of metadata on each machine, and that much of the work is done by the clients rather than the server. So even a few gigabytes of assets ends up meaning new disk drives for everyone on the team.
The industry mostly uses Perforce. For small projects there is a free license, I think up to five people, though it was up to 20 at one point. It was designed from the beginning around large data assets. It is more server intensive, but it lets you pull the entire history of the entire project if you want, it allows limited views rather than the "everyone gets everything" model, and it can be configured to hold tremendous volumes of data in archive, as many terabytes or petabytes as the company wants to maintain. Access control also gives contractors and third-party developers as big or small a view as needed.
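To make the limited-view idea concrete, here is a minimal sketch of a Perforce client workspace view; the depot paths, workspace name, and root are made up for illustration. An artist's workspace maps only the directories they actually need, and the leading minus excludes a subtree they don't:

    # Edited via `p4 client`; only the relevant fields are shown,
    # and the depot paths and workspace name are hypothetical.
    Client: artist_ws
    Root:   D:\work\artist_ws
    View:
        //depot/Game/Art/Characters/...    //artist_ws/Art/Characters/...
        //depot/Game/Art/Environments/...  //artist_ws/Art/Environments/...
        -//depot/Game/Audio/Source/...     //artist_ws/Audio/Source/...

Syncing that workspace pulls down only those paths; the server keeps the full history and the full archive.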
If you want, it has an interface that lets programmers map the tiny sections of code (tiny compared to the assets) and access the mapping via Git. I have only worked with a few people who actually wanted to do that, or who had tools that needed it, but they are the exception.
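For what it's worth, day-to-day use on the Git side looks like ordinary Git. This is a rough sketch assuming a Perforce Git Fusion style gateway, with a made-up host and repo name:

    # Clone the code-only mapping an admin exposed through the gateway.
    # Host and repo names are hypothetical.
    git clone git@gitfusion.example.com:game-code
    cd game-code
    # From here it is normal Git; pushes are translated back into
    # Perforce changelists on the server side.
    git checkout -b fix-load-crash
    git commit -am "Fix crash when loading old save games"
    git push origin fix-load-crash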