I think this is one of those cases where analogies just aren't the best way to answer. Better to get your hands dirty. Please bear with me through the parts you are already familiar with.
Create three C files like so:
[source lang="C"]
// fun1.c
#include <stdio.h>

void fun1(void) {
    puts("Running fun1.");
}

// fun2.c
#include <stdio.h>

void fun2(void) {
    puts("Running fun2.");
}

// main.c
extern void fun1(void);
extern void fun2(void);

int main(int argc, char **argv) {
    fun1();
    fun2();
    return 0;
}
[/source]
You could put the prototypes of fun1 and fun2 in a header file and include it in main.c. Were you to do so, the C preprocessor would just replace the #include directive with the content of the header file anyway. The preprocessor is just an elaborate text substitution machine. Headers allow you to reuse the same declarations across multiple files without typing them over and over and over. Anyway, no need for one here.
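Purely for illustration, though, here's roughly what such a header might look like (funs.h is just a name I've made up); main.c would then say #include "funs.h" in place of the two extern lines:
[source lang="C"]
// funs.h -- hypothetical header collecting the prototypes
#ifndef FUNS_H
#define FUNS_H

void fun1(void);
void fun2(void);

#endif
[/source]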
Given a GCC compiler (I'm using the tdm64 distribution of MinGW on Windows), you can compile all three files into an executable with the following command line:
gcc main.c fun1.c fun2.c -odemo
Doing so results in an executable, but no intermediate files that you can see. The compiler has taken all three source files, compiled each one to object code behind the scenes, and handed the results straight to the linker to create the executable. This approach requires all of the source files every time you build. We can split it up, though, like this.
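(If you're curious about those intermediate files, GCC will keep them around if you add the -save-temps option; you get the preprocessed .i, assembly .s and object .o files for each source file alongside the executable.)
gcc -save-temps main.c fun1.c fun2.c -odemo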
gcc -c fun1.c
gcc -c fun2.c
gcc main.c fun1.o fun2.o -odemo
The -c flag tells the compiler to compile the source file and store the output in an intermediate file (an object file, with a .o extension in this case), so the first two lines produce the files fun1.o and fun2.o. The third line creates the executable using the object files instead of the source files. Now, you can distribute fun1.o and fun2.o to other people (in which case you would want to create a header with the function prototypes), and they never need to see your original source.
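For what it's worth, nothing stops you from compiling main.c to an object file as well and keeping the link as its own step:
gcc -c main.c
gcc main.o fun1.o fun2.o -odemo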
Distributing two object files isn't a big deal, but if you've got several of them, then it can be a bit of an annoyance for the users of your project to have to pass all those object files to the compiler. So you can bundle them up into a library (or archive) to make it simpler. Given that you've already created fun1.o and fun2.o, you can then do this.
ar rcs libfuns.a fun1.o fun2.o
With this, ar (the GNU archiver, or librarian) takes the two object files and packs them together into a library file, much as you would add files to a zip archive (except they aren't compressed in this case). The r, c and s flags tell ar to insert (or replace) the named members, to create the archive if it doesn't already exist, and to write an index of the symbols it contains. When you have dozens of object files, it's much more convenient to pack them into a library like this. Then the user can do the following.
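If you ever want to check what went into the archive, ar's t mode prints a table of its contents:
ar t libfuns.a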
gcc main.c libfuns.a -odemo.exe
Or, more commonly (given that a library is usually not in the same directory as the source):
gcc main.c -L. -lfuns -odemo.exe
-L tells the linker to search an extra directory for libraries, in this case the current directory (the '.'). The -l (lowercase L) names the library to link against: given funs, the linker adds the lib prefix and the .a suffix and looks for libfuns.a in its search path.
So, to directly answer your question: you don't need the YAML source to build your project; you only need the library and the YAML headers. However, to get the YAML library in the first place, you have to compile the YAML source yourself if a prebuilt library isn't available for download somewhere. Then you can link the library with your project.

Alternatively, you could add the YAML source directly into your build system and compile it together with your own code, but then you have to worry about how to configure that source, which compiler options to use, and so on. It's typically a much better choice to use the build system that ships with the library project to compile the library separately from your project, then link with the resulting binary. That way, you let the library maintainers worry about how best to configure the compiler output for the library, and you can focus on configuring the output of your own project.
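Concretely, the final link step for your project then looks a lot like the toy example above. Something along these lines, where the paths and the libyaml.a name are just placeholders for whatever your YAML build actually produces (headers in /path/to/yaml/include, library in /path/to/yaml/lib):
gcc myprog.c -I/path/to/yaml/include -L/path/to/yaml/lib -lyaml -omyprog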