What are sprites?
That really depends on the hardware you are talking about. On old systems, like the NES or the Sega Genesis, sprites were a hardware thing. The video processor constructed the image by tracing the scanlines of the screen. It had a list in memory containing the information about all the sprites on screen (usually a fairly low number, like 64 on the NES). On each scanline it scanned this list to find out which sprites touched that line, loaded those into special hardware registers, and produced the colors as the cathode ray moved across the scanline. Sprites were very limited in dimensions, colors, and total number, both per screen AND per scanline: the NES, for example, only allowed 8 sprites per line; the 9th was simply invisible. Usually sprites had one or two possible sizes and could not be rotated or scaled (though sometimes they could be flipped vertically or horizontally, as on the NES).
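Roughly, that per-scanline evaluation works like the following C sketch. The names, the 8-pixel sprite height, and the table layout are made up for illustration; real hardware did this with counters and comparators rather than code:

    #include <stdint.h>

    /* One entry in the hardware sprite table (the NES's OAM is laid
       out similarly: y, tile, attributes, x). Names are hypothetical. */
    struct SpriteEntry {
        uint8_t y, tile, attr, x;
    };

    #define NUM_SPRITES   64  /* total sprites in the table  */
    #define MAX_PER_LINE   8  /* hardware limit per scanline */
    #define SPRITE_HEIGHT  8

    /* For each scanline, scan the whole table and keep the first
       sprites that overlap this line; anything past the limit is
       simply dropped (the "9th sprite is invisible" effect). */
    void evaluate_scanline(const struct SpriteEntry table[NUM_SPRITES],
                           int scanline,
                           const struct SpriteEntry *hits[MAX_PER_LINE],
                           int *num_hits)
    {
        *num_hits = 0;
        for (int i = 0; i < NUM_SPRITES; i++) {
            int dy = scanline - table[i].y;
            if (dy >= 0 && dy < SPRITE_HEIGHT) {
                if (*num_hits == MAX_PER_LINE)
                    break;            /* sprite overflow: not drawn */
                hits[(*num_hits)++] = &table[i];
            }
        }
    }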
Later, framebuffers were "invented", or rather became usable for mainstream games. Here, the whole image is constructed in memory before being sent to the monitor. The basic principle is the same: the graphics processor reads the memory for the current pixel location on screen and outputs the signal needed to produce the right color. With this, it became the programmer's job to define what a sprite really is and what you can do with it. In the early days, sprites were just "blitted" onto the background by copying memory around, so they usually didn't support scaling or rotation either. That was mostly because all the calculations (mixing, blending, scaling, or whatever effects you allowed on your sprites) were done on the CPU.
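A minimal software blit is really just a row-by-row memory copy. Here's a sketch, assuming a 32-bit framebuffer and with made-up parameter names (clipping against the screen edges is omitted):

    #include <stdint.h>
    #include <string.h>

    /* Copy a sprite's pixels into the framebuffer, row by row.
       No scaling, no rotation, no blending: a plain memory copy. */
    void blit(uint32_t *fb, int fb_w,
              const uint32_t *sprite, int sp_w, int sp_h,
              int dst_x, int dst_y)
    {
        for (int row = 0; row < sp_h; row++) {
            memcpy(&fb[(dst_y + row) * fb_w + dst_x],
                   &sprite[row * sp_w],
                   sp_w * sizeof(uint32_t));
        }
    }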
As CPUs became faster, people started adding more features, like rotation, scaling, transparency and all that nice stuff.
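Transparency, for example, means the blit can no longer be a plain copy: every sprite pixel has to be mixed with the background pixel already in the framebuffer. A sketch of the classic per-channel blend, out = src*a + dst*(1-a), assuming 32-bit ARGB pixels:

    #include <stdint.h>

    /* Blend one source pixel over a destination pixel using the
       source's 8-bit alpha: out = (src*a + dst*(255-a)) / 255,
       applied to each of the three color channels. */
    uint32_t blend(uint32_t src, uint32_t dst)
    {
        uint32_t a = (src >> 24) & 0xFF;
        uint32_t out = 0;
        for (int shift = 0; shift < 24; shift += 8) {
            uint32_t s = (src >> shift) & 0xFF;
            uint32_t d = (dst >> shift) & 0xFF;
            out |= ((s * a + d * (255 - a)) / 255) << shift;
        }
        return out | 0xFF000000;  /* result is opaque */
    }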
Then, GPUs happened. These are good at drawing textured triangles, and as it turns out, you can represent almost everything the old sprite systems did by just drawing two textured triangles. So instead of writing sprite functions that do all the memory operations themselves, people wrote them so that the GPU did the job for them. The basic principle is still the same, though: in the end, you fill a big memory area with the byte representations of the pixels.
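Concretely, one sprite becomes a quad made of two triangles: six vertices, each carrying a position and a UV coordinate. The 32x32 size and the interleaved layout here are just for illustration:

    /* x, y = position; u, v = texture coordinate.
       Two triangles covering the sprite's quad. */
    static const float quad[] = {
        /*  x,     y,     u,    v  */
         0.0f,  0.0f,  0.0f, 0.0f,   /* triangle 1: top-left     */
         0.0f, 32.0f,  0.0f, 1.0f,   /*             bottom-left  */
        32.0f, 32.0f,  1.0f, 1.0f,   /*             bottom-right */
         0.0f,  0.0f,  0.0f, 0.0f,   /* triangle 2: top-left     */
        32.0f, 32.0f,  1.0f, 1.0f,   /*             bottom-right */
        32.0f,  0.0f,  1.0f, 0.0f,   /*             top-right    */
    };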
In today's games we mostly see the last two options: either you use software blitting routines or you let the GPU fill the memory for you.
In GL 3.3, you create a texture and upload your sprite data to it. Then you create a vertex buffer containing the corners of your sprite and their UV coordinates (which tell your shader how to map the texture onto the triangles), and a vertex array that defines the layout of the vertex buffer (i.e. where in the data to find the world positions of the sprite's corners and where to find the UV coordinates). Finally, you create a vertex and a fragment shader, which are probably exactly what you'd find in any basic textured-quad shader tutorial.
Once you've uploaded all this data to GL, you can draw. It's a bit much to explain in full here, but every beginner-level GL 3.3 core tutorial should cover these topics.
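For a rough idea, here is a condensed sketch of those steps in C. Everything is simplified: error checking, window/context creation, and the projection matrix are omitted, the vertex positions are assumed to already be in clip space, and the helper names are made up:

    #include <glad/glad.h>  /* or whichever GL 3.3 core loader you use */

    static const char *vs_src =
        "#version 330 core\n"
        "layout(location = 0) in vec2 pos;\n"
        "layout(location = 1) in vec2 uv;\n"
        "out vec2 frag_uv;\n"
        "void main() { frag_uv = uv; gl_Position = vec4(pos, 0.0, 1.0); }\n";

    static const char *fs_src =
        "#version 330 core\n"
        "in vec2 frag_uv;\n"
        "uniform sampler2D tex;\n"
        "out vec4 color;\n"
        "void main() { color = texture(tex, frag_uv); }\n";

    GLuint texture, vbo, vao, program;

    void setup_sprite(const void *pixels, int w, int h,
                      const float *verts, int n_floats)
    {
        /* 1. Texture: upload the sprite's pixel data. */
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* 2. Vertex buffer: the quad's corners and UVs. */
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, n_floats * sizeof(float),
                     verts, GL_STATIC_DRAW);

        /* 3. Vertex array: describe the buffer's layout
              (position at offset 0, UV after it, 4 floats per vertex). */
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE,
                              4 * sizeof(float), (void *)0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                              4 * sizeof(float), (void *)(2 * sizeof(float)));

        /* 4. Shaders: compile and link (error checks omitted). */
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL);
        glCompileShader(vs);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, NULL);
        glCompileShader(fs);
        program = glCreateProgram();
        glAttachShader(program, vs);
        glAttachShader(program, fs);
        glLinkProgram(program);
    }

    void draw_sprite(void)
    {
        glUseProgram(program);
        glBindTexture(GL_TEXTURE_2D, texture);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 6);  /* the quad's two triangles */
    }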
I hope this helps!
Say you're hired by Nintendo to make the next 2D Mario game.
Your artist sits down and creates an image file that represents a goomba as you'd like it to appear in the game. In terms of rendering, this image is referred to as a "texture". You load the image into memory so that you can use it in your program.
Now you run into an issue: you want to have more than one goomba on the screen at once. Should you load the goomba into memory in two places? That doesn't really make sense. It's the same image, you just want to draw it to more than one location at a time. You don't need to load it twice in order to draw it twice, you can just refer to the same loaded texture for each draw command.
So you issue one command to draw the texture in the first position and another to draw it in the second position. You realize that both commands are pretty similar, but they have different information that they need to keep tabs on, such as where on the screen to draw the texture, the sorting order of the texture (behind/in front), which frame of animation to draw, maybe some rotation or scaling, and which texture you're drawing from... You really ought to store all of this information in a struct or class. You could modify each instance conveniently and then just iterate over them and draw all the instances in one pass.
Name that class "Sprite".
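A sketch of what that might look like in C (the field names and the Texture type are made up; the point is that many Sprite instances reference one shared Texture):

    /* One loaded image, shared by every sprite that uses it. */
    typedef struct Texture {
        int width, height;
        unsigned int handle;  /* GL texture id, or a pixel pointer
                                 for a software blitter */
    } Texture;

    /* One on-screen instance: per-instance state plus a reference
       to the shared texture. Two goombas = two Sprites, one Texture. */
    typedef struct Sprite {
        const Texture *texture;  /* shared, loaded once               */
        float x, y;              /* where on screen to draw           */
        int layer;               /* sorting order (behind / in front) */
        int frame;               /* current frame of animation        */
        float rotation, scale;
    } Sprite;

    void draw_sprite_instance(const Sprite *s);  /* defined elsewhere */

    /* Update each instance as needed, then draw them all in one pass. */
    void draw_all(const Sprite *sprites, int count)
    {
        for (int i = 0; i < count; i++)
            draw_sprite_instance(&sprites[i]);
    }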
Note that this is a somewhat loosely used term. Sometimes people will refer to a single frame of 2D character animation as a sprite. Sometimes people will refer to a character on the screen as a sprite. Often people will refer to a texture that contains several frames of 2D character animation as a "sprite sheet". It depends on the context of the discussion and the whim of the speaker, so if their meaning is unclear just ask for clarification.
Thanks! This actually clarifies things for me quite a bit. I had some vague notion of sprites from the NES/Sega days and knew they were a big deal, but I didn't really learn how to work with them until the modern OGL age, so I never really understood how the definition had changed along with the hardware.