I did music and sfx for quite a lot of Genesis and SNES games back in the day ... I'll see if I can answer your questions--sorry for the very lengthy reply!
-> So from what I understand, on those consoles, they had to "code" the music/sound while programming the game, typing in certain codes to make certain sounds.
A game system would include a custom-written "sound operating system." That is computer code, incorporated into the game, that would read lists of "musical" commands and then send them, at the right time, to the synthesizer chips to make them make sound. These musical commands were generally stored as ASCII text files that would look like this:
    track1
    patch Bass91
    volume 22
    note C2,30
    note c3,30
    rest 60
    glis c3,g4,60
    endtrack
    track2
    patch BrassSynth22
    volume 3
    note g3,15
    slur e3,15
    ...
etc.
So the "coding" (actually writing computer programs) was done once--to write the sound system. After that, anyone who could create these "notelist" files could write the music, given some semi-technical instruction.
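Just to give a rough idea of what that layer does, here's a made-up little sketch in C (not any actual driver we shipped--the opcodes and pitch mapping are invented) of a tool that turns notelist text like the example above into compact byte-code events a driver could step through:

    /* Toy notelist "compiler" -- illustration only */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    enum { EV_NOTE = 0x01, EV_REST = 0x02, EV_END = 0xFF };

    /* Map a name like "c3" to a note number: 12 semitones per octave,
       octave digit last.  Sharps/flats left out to keep the sketch short. */
    static int pitch_number(const char *name)
    {
        static const char table[] = "c d ef g a b";   /* semitone offsets */
        const char *p = strchr(table, tolower((unsigned char)name[0]));
        int semitone = p ? (int)(p - table) : 0;
        int octave = name[strlen(name) - 1] - '0';
        return octave * 12 + semitone;
    }

    int main(void)
    {
        char line[128], pitch[32];
        int dur, n = 0;
        unsigned char song[4096];

        while (fgets(line, sizeof line, stdin) && n < (int)sizeof song - 3) {
            if (sscanf(line, "note %31[^,],%d", pitch, &dur) == 2) {
                song[n++] = EV_NOTE;
                song[n++] = (unsigned char)pitch_number(pitch);
                song[n++] = (unsigned char)dur;
            } else if (sscanf(line, "rest %d", &dur) == 1) {
                song[n++] = EV_REST;
                song[n++] = (unsigned char)dur;
            } else if (strncmp(line, "endtrack", 8) == 0) {
                song[n++] = EV_END;
            }
            /* track, patch, volume, glis, slur would each get an opcode too */
        }
        fprintf(stderr, "compiled %d bytes of song data\n", n);
        fwrite(song, 1, (size_t)n, stdout);
        return 0;
    }

The real thing also had to deal with multiple tracks, timing, patch changes, and so on, but the shape was the same: plain text in, compact event data out for the driver to play back in real time.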
-> But I heard even in those days, actual instruments were involved in the development, mainly synth keyboards.
Sometimes. It depended on the composer. Some games didn't use notelists like I did, but used MIDI data. (I personally found MIDI data to not be very space efficient, so I didn't use it. It also didn't match the way I happen to compose.) Those composers would compose in MIDI, save the MIDI file, and that file would be incorporated into the game and used to drive the synth chip.
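To put a rough number on the space point (this is just my back-of-the-envelope illustration, and running status can trim MIDI a bit): one note in a standard MIDI file needs a Note On and a Note Off, each with a delta time, while a packed notelist-style event can carry the same information in a few bytes.

    /* One note, two encodings -- illustration only */
    unsigned char midi_note[] = {
        0x00, 0x90, 0x3C, 0x64,  /* delta 0,  Note On,  middle C, velocity 100 */
        0x1E, 0x80, 0x3C, 0x40   /* delta 30, Note Off, middle C               */
    };                           /* 8 bytes */

    unsigned char packed_note[] = {
        0x01, 0x3C, 0x1E         /* opcode, pitch, duration -- 3 bytes */
    };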
-> So the question there was did they just have composers make music with those instruments then record them (tapes or on computers?) and hand them to whoever was writing the game's code for them to listen to it so they could translate it step by step, matching the notes played in the recorded music into the best-matching sound-making code lines the console could do?
Not really. As I mentioned above, once you have a programmer write the sound system, the composer/sfx person doesn't necessarily need to be able to write computer code to write the music for the game.
That said, there was one game where I was hired to do just that--I was given someone else's music and asked to implement it in a game system. That was actually for a PlayStation game, but the developers didn't realize until they had all their music composed that they couldn't fit the music AND the 3000 lines of dialog on the CD, so they had to use the PS synthesizer chip (which was very similar to the SNES one, actually).
-> Or did they somehow connect their keyboards to a workstation computer to map the keyboard keys to play different sounds from certain console codes on the computer?
Not quite sure I understand that question. But, for example, for the SNES games I did, we designed a special MIDI interface to the SNES (it went into the cartridge slot). That would let me play a note on the keyboard and have it play from the synth chip in the SNES itself.
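In software terms it was basically a MIDI-thru loop. Here's a hypothetical sketch of that idea in C--the real interface was custom hardware, so the two I/O routines below are just stand-ins using stdin/stdout:

    #include <stdio.h>
    #include <stdint.h>

    enum { CMD_KEY_ON = 0x01, CMD_KEY_OFF = 0x02 };

    static int midi_read_byte(void)       /* stand-in for the MIDI-in port */
    {
        return getchar();
    }

    static void cart_write(uint8_t cmd, uint8_t note, uint8_t vel)
    {
        /* stand-in for poking the dev cartridge; just log what would be sent */
        printf("-> cart: cmd %02X note %02X vel %02X\n",
               (unsigned)cmd, (unsigned)note, (unsigned)vel);
    }

    int main(void)
    {
        int status;
        while ((status = midi_read_byte()) != EOF) {
            if ((status & 0xF0) == 0x90) {          /* Note On, any channel */
                uint8_t note = (uint8_t)midi_read_byte();
                uint8_t vel  = (uint8_t)midi_read_byte();
                if (vel == 0)                       /* velocity 0 acts as Note Off */
                    cart_write(CMD_KEY_OFF, note, 0);
                else
                    cart_write(CMD_KEY_ON, note, vel);
            } else if ((status & 0xF0) == 0x80) {   /* Note Off */
                uint8_t note = (uint8_t)midi_read_byte();
                midi_read_byte();                   /* release velocity, ignored */
                cart_write(CMD_KEY_OFF, note, 0);
            }
            /* running status and other MIDI messages omitted to keep this short */
        }
        return 0;
    }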
-> It must have been hard if they had to translate the music's notes step by step; were they able to write their own software to help convert the music?
It actually wasn't that bad. I used to compose my music long-hand, on music manuscript paper (with a pencil and eraser). Then, when I was happy with it, I'd start up a text editor (my favorite was called Brief) and transcribe the music from my music paper into the notelist format I gave an example of above. I got to be pretty fast at it.
-> I'm also wondering if the case was any different on the Genesis, or other consoles with FM chips. Was it possible to use any of those FM synths, like Yamaha keyboards from the 80's, to compose straight to the Genesis's sound chip since they were both FM based, or would that composed FM music still need to be translated anyway?
For a number of practical reasons, it didn't work that way. The synthesizer in the Genesis was different from the one in the most popular FM keyboard of the time (the Yamaha DX7).
For the Genesis, we would have special "Genesis sound chip" editing programs that would let us play with some of the parameters of the sound chip. When we found a sound we liked, we'd save all the parameters that defined that sound and give it a name (for example, BrassSynth22, like in my example above).
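What actually got saved was just a small bundle of numbers. As a rough illustration (the struct layout here is mine, but the fields and ranges follow the Genesis FM chip's registers), a named patch amounts to something like this:

    #include <stdint.h>

    typedef struct {
        uint8_t detune;        /* DT:  0-7                                  */
        uint8_t multiple;      /* MUL: 0-15, frequency multiplier           */
        uint8_t total_level;   /* TL:  0-127, operator output level         */
        uint8_t rate_scaling;  /* RS:  0-3                                  */
        uint8_t attack_rate;   /* AR:  0-31                                 */
        uint8_t decay_rate;    /* D1R: 0-31                                 */
        uint8_t sustain_rate;  /* D2R: 0-31                                 */
        uint8_t sustain_level; /* D1L: 0-15                                 */
        uint8_t release_rate;  /* RR:  0-15                                 */
    } fm_operator;

    typedef struct {
        char        name[16];  /* e.g. "BrassSynth22"                       */
        uint8_t     algorithm; /* 0-7: how the four operators are wired up  */
        uint8_t     feedback;  /* 0-7: operator 1 modulating itself         */
        fm_operator op[4];     /* the Genesis FM chip has 4 ops per voice   */
    } fm_patch;

The editor's whole job was to let you twiddle those numbers until the sound was right, then save the lot under a name the notelists could refer to.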
For the SNES, since it used a "sample playback" engine, we could create our own sounds by recording actual instrument sounds and then 'looping' them (making them play for an arbitrarily long time, even though the sound itself was only a few tens of milliseconds long). Then we'd translate those sounds into the particular SNES format, again using a custom-written software tool.
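The looping trick itself is simple; here's a sketch (plain 16-bit PCM for clarity--the SNES actually stores samples in its own compressed format) of how a sample lasting only a few tens of milliseconds can sustain indefinitely:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        const int16_t *pcm;        /* decoded sample data                   */
        size_t         length;     /* total number of samples               */
        size_t         loop_start; /* where the sustain loop begins         */
        size_t         pos;        /* current playback position             */
    } looped_voice;

    int16_t next_sample(looped_voice *v)
    {
        int16_t s = v->pcm[v->pos++];
        if (v->pos >= v->length)   /* hit the end: jump back into the loop  */
            v->pos = v->loop_start;
        return s;
    }

The craft was in picking loop points where the waveform lines up at the seam, so the ear can't tell where the recording ends and the loop begins.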
-> As you can tell, I'm probably overthinking this, so hopefully this wouldn't be too hard for anyone to explain. I'm all ears here. If you happen to have stories of composing music and sound for video games in the time frame I described, I'd be really interested in hearing them, and it would probably help me get the idea.
You're not overthinking at all.
The composition and sound design workflow was really quite an issue back in those days. A lot of software development work went into trying to make it easy for a non-programmer to write music for those games. We would spend a lot of time refining the sound system software to add features and improve workflow. Back then, a lot of us who were writing the music and doing the SFX were also the people who would program these sound systems, so we had a pretty good idea of where the pipeline bottlenecks were.