Hey folks, glad to finally join this forum, great posts in here.
I am an audio producer/sound designer, and would like to start implementing my music and synthesis into game design.
Although I am slowly becoming familiar with Visual C# Express, XBL Creators Club and DirectX, I am not a programmer. I am building a basic understanding of the production pipelines and software needed to program audio and effects for games, but I'm not sure of the best way to learn how all these tie together.
More specifically, can anyone point me in the right direction on how to get started implementing audio into a game dev production pipeline? I.e., what's a solid go-to program that I need to learn if I want to do this on a professional level?
Do you have to have a hardcore coding background to implement sound design, or should I just dive into a more graphical or drag-and-drop environment? What programs should I be learning, or can I get into this by building my chops in Visual C# 2008 Express?
Great to be here, thanks for your time!
Greetings and Pleadings
I may be misunderstanding your question, but creating music and sound effects for games doesn't typically involve coding or working directly with game authoring tools; creating the assets and integrating them into the game are typically orthogonal concerns, and as a sound designer you'd be more concerned with the former than the latter.
In other words, the 'go-to' tools would be the same as for any composer or sound designer: Pro Tools, Logic, Reason, Audacity, etc. Basically, whatever you prefer to use.
You would then deliver the assets as compressed or uncompressed audio files (.ogg and .wav are commonly used formats), and the game designers would integrate them into the game.
Under the hood, the audio will be played back using some audio API or other; which API will depend on the platform and developer preference. Again though, that's not something you as an audio designer would typically need to worry about. (Aside perhaps from needing to consider what features are available, e.g. surround sound and so forth.)
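To make the handoff jyk describes concrete: before delivering assets, it can help to sanity-check that each file matches the format the team agreed on. A minimal sketch in Python, using only the standard-library `wave` module (the expected rate/channel values here are just example assumptions, not anything the engine requires):

```python
import wave

def check_wav(path, expected_rate=44100, expected_channels=2):
    """Open a delivered .wav and report the properties an engine
    integration usually cares about: sample rate, channels, bit depth."""
    with wave.open(path, "rb") as w:
        info = {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "bit_depth": w.getsampwidth() * 8,
            "frames": w.getnframes(),
        }
    info["ok"] = (info["sample_rate"] == expected_rate
                  and info["channels"] == expected_channels)
    return info
```

Running something like this over a delivery folder catches "wrong sample rate" surprises before the files ever reach the programmers.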
If I'm misunderstanding your question and the above is entirely irrelevant, my apologies :)
jyk nailed some of the points. You're still going to be doing a lot of design in your standard audio tools. Most of the sound connections are going to be set up by the programmers in the game's code. But there are some higher-level systems you may want to be aware of, like Wwise and FMOD, both of which have sound designer programs for games.
Those designer tools give you some higher-level connections to the game. You'll be designing "events" like "gun_fire" for the programmers to trigger. You'll also be designing and attaching variables to your sounds, like "speed", "throttle", or "gun_charge", which a programmer can then attach to a gameplay variable. The Wwise authoring tool gives you some nice graphs you can set up to adjust the blending between all your source sounds as those variables change. It also gives you the ability to tie different sound sets to different events, so you could turn down the music while "special_attack" is playing, or tie volumes to events like "VO_Start" and "VO_End". You can also specify synthesis functions, like making a sound that is generated by randomly choosing between six different source noises (for gunfire, footsteps, etc.).
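Two of the ideas above can be sketched in a few lines of toy Python (this is NOT the real Wwise API, just an illustration of the concepts; the names and curve points are made up):

```python
import random

class RandomContainer:
    """Toy 'random container': an event that plays one of several
    recorded variations, like a gunshot built from six source takes."""
    def __init__(self, sources):
        self.sources = list(sources)

    def trigger(self, rng=random):
        # Pick one variation at random each time the event fires.
        return rng.choice(self.sources)

def rtpc_volume(value, points):
    """Piecewise-linear curve mapping a gameplay variable (e.g. 'speed')
    to a volume in [0, 1], like the blend graphs in the authoring tool.
    `points` is a sorted list of (variable_value, volume) pairs."""
    if value <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]
```

The designer authors the sources and the curve; the programmer only feeds in the gameplay value, which is exactly the division of labor described above.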
Implementing the sounds is still up to the programmers and the specifics of the APIs, but the designer, given the proper tools from the sound API, should be able to have a lot of control over how the sounds are exposed to the programmers.
Folks, thank you for the responses!
KulSeran, you are spot on with yours; this is the type of stuff I am looking for. I will check out FMOD and Wwise post-haste!
Trying to get a sense of the overall production pipeline with regard to some of the software you mention. What does a general workflow look like after delivery of some audio files?
For example, would an audio designer put them into FMOD or Wwise to generate usable code, and then give that to the folks programming the overall game? I'm attempting to figure out what the production chain looks like from a software perspective, what plays nice with what, etc.
Again, thanks for the great responses and your continued patience with my noobness!
Cheers, enjoy your weekend...
Quote:
For example, would an audio designer put them in FMOD or WWISE to generate usable code, and then give that to the folks programming the overall game? Attempting to figure out what the production chain looks like from a software perspective, and what plays nice with what, etc.
I've had to work with good and bad audio tools from the programming side. Bad ones involve a lot of coding to get sounds into the game, and are highly inflexible to sound designer needs. On the other hand, working with Wwise is a breeze. The sound designer makes sounds, saves them in the Wwise project folders, and imports them into the Wwise project for the game. He then has full control over which audio 'banks' hold which sounds, and which events trigger sounds to start or stop. There is NO code involved on the sound designer's side, and the tools don't generate code either. The coders put in the event triggers, and only have to know the sound bank names, the sound event names, and when to trigger those events. This was really nice for our sound designer; in his own words, he didn't even have to play the game to know how it was going to sound. The tools could render the audio as it would sound in game, assuming the coders hooked up the correct events.
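The coder-side contract described here, knowing only bank names and event names, might look something like this toy Python sketch (illustrative only; the real Wwise SDK is C++, where the programmer posts events with calls like AK::SoundEngine::PostEvent, and the bank/event names below are invented):

```python
class SoundEngine:
    """Toy stand-in for the programmer's view of a sound middleware:
    load designer-authored banks, then trigger events by name."""
    def __init__(self):
        self.loaded_banks = {}   # bank name -> set of event names
        self.log = []            # events posted, in order

    def load_bank(self, name, events):
        # In real middleware this would load a compiled soundbank file
        # exported from the designer's project.
        self.loaded_banks[name] = set(events)

    def post_event(self, event_name):
        # The programmer never touches the audio data, only the names
        # agreed on with the sound designer.
        if not any(event_name in ev for ev in self.loaded_banks.values()):
            raise KeyError(f"unknown sound event: {event_name}")
        self.log.append(event_name)
```

The key point is the narrow interface: if the designer re-authors how "gun_fire" sounds, none of this code changes.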
When it comes to "what plays nice with what", one thing to consider is sitting down and having a good talk with the programmers about the sound design. It's often very useful to have conventions for the event names, like starting with "begin_" and "end_", as well as including specifically correlated tags like "_zombieA_". It not only helps keep the banks easy to manage, but gives the programmers options to generalize event generation for names like "footstep_zombieA_wood", where the program just has tables of "<creatureName>" and "<surfaceType>" and can combine them to make any event name on the fly.
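That on-the-fly composition is just string assembly once the convention is agreed on. A minimal sketch (the convention and names here are examples, not a standard):

```python
def compose_event(action, creature, surface=None):
    """Build an event name from gameplay tables, following a naming
    convention agreed on with the sound designer, e.g.
    action="footstep", creature="zombieA", surface="wood"."""
    parts = [action, creature]
    if surface:
        parts.append(surface)
    return "_".join(parts)
```

With this, the programmer writes the footstep code once, and every new creature/surface combination the designer authors just works, as long as both sides stick to the convention.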
There's also the possibility of having sound "widgets" in the game. We usually had tools the sound designers could just wander in with and place into levels or animations. A "widget" would take a bank name and an event name, and would play the event under specific conditions. This let the sound designers place ambient noises or music changes into the world based on location triggers. It also let them add animation-driven events like "footstep"; the animation widgets themselves could then compose at runtime the proper event names for that creature's animations.
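A location-trigger widget of the kind described can be sketched in a few lines of toy Python (again illustrative, not any real engine's API; the bank/event names and the frame-update shape are assumptions):

```python
class SoundWidget:
    """Toy level 'widget': placed in the world with a bank name, an
    event name, and a trigger condition -- here, a radius around a
    point, for a location-based ambience or music change."""
    def __init__(self, bank, event, center, radius):
        self.bank, self.event = bank, event
        self.center, self.radius = center, radius
        self.fired = False

    def update(self, player_pos, post_event):
        """Called once per frame; posts the event when the player
        first enters the trigger radius."""
        dx = player_pos[0] - self.center[0]
        dy = player_pos[1] - self.center[1]
        inside = dx * dx + dy * dy <= self.radius * self.radius
        if inside and not self.fired:
            post_event(self.event)
            self.fired = True
```

The designer places these in the level editor; no programmer is needed to add a new ambience zone.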
Wow, thanks so much, this is drilling down to where I want to be...
So I'd like to get started creating my own very basic code to trigger audio. Do you know if I can use Wwise folders or project banks with Visual C# Express or the XBL game studio software?
Or would you recommend different software for getting started with basic coding to create events that will trigger audio from Wwise?
Thanks again, you have all been extremely helpful!
EDIT: folks, thank you for the great insight, most especially KulSeran.
Now that I have perused the Audiokinetic website and learned a bit more about how Wwise works, I will start a new thread regarding Wwise implementation.
Thanks again!
[Edited by - Delorian Tokes on September 14, 2010 3:05:33 PM]