I'd definitely recommend starting with D3D11. IMHO it really is the best all-around graphics API. All the concepts that you learn in pretty much any GPU API will translate to every other API, so learning the "wrong one" is not a waste of time. GL would be my second choice, with Vulkan/D3D12 tied for third place.
My main points would be something like:
|                      | D3D9 | D3D11 | D3D12  | Vulkan | GL   |
|----------------------|------|-------|--------|--------|------|
| Easily draw a cube   | Yes  | No    | No     | No     | Yes  |
| Validation layer     | No   | Yes   | Yes    | Yes    | No*  |
| Validated drivers    | MS   | MS    | MS     | Open   | No   |
| Legacy APIs mixed in | Yes  | No    | No     | No     | Yes  |
| Vendor extensions    | No^  | No^   | No^    | Yes    | Yes  |
| CPU/GPU concurrency  | Auto | Auto  | Manual | Manual | Auto |
| Can crash the GPU    | No   | No    | YES    | YES    | No*  |
| HLSL                 | Yes  | Yes   | Yes    | Yes#   | No$  |
| GLSL                 | No$  | No$   | No$    | Yes    | Yes  |
| SPIR-V               | No   | No$   | No$    | Yes    | No*  |
| Windows              | Yes  | Yes   | Yes    | Yes    | Yes  |
| Linux                | No$  | No    | No     | Yes    | Yes  |
| MacOS                | No$  | No    | No     | No$    | Yes@ |
* = available with vendor extensions
^ = not officially, but vendors hacked them in anyway
# = work in progress support
$ = DIY/Open Source/Middleware can get you there...
@ = always a version of the spec that's 5 years old...
D3D10 is useless now -- D3D11 lets you support D3D10-era hardware and do all the same things -- so we'll ignore it.
The one good thing about ancient APIs (e.g. GL v1.1, D3D9) is that very simple apps are very simple. In comparison, modern APIs make you do a lot of legwork just to get started. When I was starting out, writing simple GL apps with glBegin, glVertex, etc. was great fun.
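For a taste of what that looked like: here's roughly a whole "draw a coloured triangle" in immediate-mode GL 1.x. It's deprecated everywhere (and gone from Mac's core profiles), and it assumes you've already created a window and GL context with something like GLUT or SDL, but it shows why old GL was such a friendly starting point:

```c
/* Old-school immediate-mode GL (v1.x). Assumes a window and GL
 * context already exist (e.g. created via GLUT or SDL). */
#include <GL/gl.h>

void draw_frame(void) {
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);           /* start submitting vertices */
    glColor3f(1.0f, 0.0f, 0.0f);     /* red */
    glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f);     /* green */
    glVertex2f(0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f);     /* blue */
    glVertex2f(0.0f, 0.5f);
    glEnd();                         /* one triangle, a dozen lines */
}
```

In D3D11 or Vulkan, the same triangle needs a swap chain, shaders, a pipeline/input layout, and a vertex buffer before you can draw anything at all.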
If you came across any readable tutorials or books for these old API versions, they could still be a fun learning exercise.
Having a validation layer built into the API is really useful for catching incorrect code. Of course you should check all of your function calls for errors, but having the debugger halt execution with a several-sentence error message describing your coding mistake is invaluable. D3D does a great job here.
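To make the idea concrete, here's a toy C analogy (all names invented -- this is not real D3D or Vulkan code) of what a validation layer buys you over a raw driver: the raw call silently does the wrong thing, while the validated call tells you exactly what you did wrong.

```c
#include <stddef.h>

/* Toy analogy of a validation layer. g_shader_bound stands in for
 * any piece of state a draw call depends on. */
int g_shader_bound = 0;

void bind_shader(void) { g_shader_bound = 1; }

/* The raw "driver": garbage in, garbage out, no diagnostics --
 * it just silently draws nothing if no shader is bound. */
int driver_draw(void) { return g_shader_bound; }

/* The "validation layer": same call, but it explains your mistake. */
int validated_draw(const char **msg) {
    if (!g_shader_bound) {
        *msg = "Draw called with no shader bound. Bind a shader "
               "before issuing draw calls.";
        return 0;
    }
    *msg = NULL;
    return driver_draw();
}
```

With the raw driver you'd just stare at a black screen; with the layer you get the "several-sentence error message" described above.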
D3D9 used to have a validation layer, but MS has broken it on modern Windows (got a WinXP machine handy?).
GL 2/3/4 tries to clean up the API with every version and officially throws out all the old ways of doing things... but unofficially, all the old ways still hang around (except on Mac!), making it possible to end up with a horrible mixture of three different APIs. It can also make tutorials a bit suspect, because you're never quite sure whether you're learning core features from the version you want :|
D3D9 also suffers from this: it supports both an ancient fixed-function drawing API and a modern shader-based one...
Vendor extensions are great -- they let you access the latest features of each GPU before those features become standard -- but for a beginner they just add confusion. D3D chose to ban them. They're actually still there, but you have to download extra vendor-specific SDKs to hack around the official D3D restrictions :D
D3D12 and Vulkan code has to be perfect. If you've got any mistakes in it, you could straight up crash your GPU. This isn't too bad, as Windows will just turn it off and on again... but it can be a nightmare to debug these things. That doesn't make for a good learning environment. This would make them unusable, except that they've got the great validation layers to help guide you!
D3D9/D3D11/GL present an abstraction where it looks like your code is running in serial with the GPU -- i.e. you say to draw something, then the GPU draws it immediately. In reality, the GPU is often buffering up several frames of commands and acting on them asynchronously (in order to achieve better throughput), however, D3D11/GL do a great job of hiding all the awful details that make this possible. This makes them much easier to use.
In D3D12/Vulkan, it's your job to implement this yourself. To do that, you need to be competent at multi-threaded programming, because you're trying to schedule two independent processors and keep them both busy without either one ever stalling/locking the other. If you mess this up, you can either instantly halve your performance or, worse, introduce memory-corruption bugs that only occur sporadically and seem impossible to fix :(
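Here's a rough single-threaded C sketch of the kind of bookkeeping that's being asked of you -- a simulated fence protecting a small ring of per-frame resource slots. The names and the two-frames-in-flight number are mine for illustration, not from any real API; in real D3D12/Vulkan the "GPU" is a separate processor and the wait is a real fence/semaphore, not a loop:

```c
/* Simulated frames-in-flight pattern. The "GPU" is just a counter
 * that lags behind the CPU by up to MAX_FRAMES_IN_FLIGHT frames. */
#define MAX_FRAMES_IN_FLIGHT 2

unsigned gpu_completed = 0;   /* fence value: frames the "GPU" has finished */

/* Pretend the GPU finishes one queued frame. */
void gpu_tick(void) { gpu_completed++; }

/* CPU side: before reusing the per-frame resources for frame n, wait
 * until the GPU has finished frame n - MAX_FRAMES_IN_FLIGHT. */
void wait_for_fence(unsigned frame) {
    if (frame < MAX_FRAMES_IN_FLIGHT)
        return;                            /* ring not full yet, no wait */
    while (gpu_completed < frame - MAX_FRAMES_IN_FLIGHT + 1)
        gpu_tick();                        /* CPU stalls until GPU catches up */
}

/* Returns which resource slot frame n may now safely overwrite. */
unsigned begin_frame(unsigned frame) {
    wait_for_fence(frame);
    return frame % MAX_FRAMES_IN_FLIGHT;
}
```

Skip the wait and you overwrite a buffer the GPU is still reading -- that's the sporadic memory corruption mentioned above. Wait too conservatively (e.g. after every frame) and you serialize the two processors, halving your throughput.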
D3D is a middle layer built by Microsoft -- there's your app, then the D3D runtime, then your D3D driver (Intel/NVidia/AMD's code). Microsoft validates that the runtime is correct and that the drivers are interacting with it properly. Finding out that your code runs differently on different GPUs is exceedingly rare.
GL is the wild west -- your app talks directly to the GL driver (Intel/NVidia/AMD's code), and there's no authority making sure that they're implementing GL correctly. Finding out that your code runs differently on different GPUs is common.
Vulkan is much better -- your app still talks directly to the Vulkan driver (Intel/NVidia/AMD's code), but there's an open-source suite of conformance tests to make sure that they're implementing Vulkan correctly, plus the common validation layer written by Khronos.
For shading languages, GLSL and HLSL are both valid choices; I just have a personal preference for HLSL. There are also a lot of open-source projects aimed at converting HLSL to GLSL, but not as many going the other way.
Also note that the above choices apply to desktop PCs. For browsers you have to use WebGL. On Android you have to use GL|ES, and on iOS you can use GL|ES or Metal. On Mac you can use Metal too. On game consoles, there's almost always a custom API for each console. If you end up doing graphics programming as a job, you will learn a lot of different APIs!