Setting up a cross-platform project
Hi all!
My experience so far with games development (or any development, for that matter) has been Windows-only. For my next project I'm writing a networked game, and it's one that I'd actually like to get hosted, which means that at least the server needs to run on Unix/Linux.
I have searched high and low on Google and I can find lots and lots of articles about cross-platform development, but they all seem to focus on the code, not the setup. I'll be using SDL for graphics and sound, the new boost::asio library (only just accepted into Boost) for networking, boost::filesystem, and various other bits and bobs. These libraries are not the problem - I don't know how to set things up and get started!
The last project I did had a server, a client and an editor. The server and client were both C++ and the editor was C#. This next project will be similar, but I'll also have projects for tests. In the last project I just used Visual Studio: I had a project for each application and they were all loaded into a single solution. As I built the software I just added files to the respective projects, and building was a matter of pressing F7!
So, I have dug out an old PC, installed Fedora Core 5 on it, and have the Gnome and KDE desktops available. I've seen that KDE even comes with a shiny copy of KDevelop, which looks quite yummy. Currently I have Subversion installed on my Windows box, which I access locally. I want to develop from a single code base but be able to build and test under both environments.
So, I assume that I need to set up some sort of Subversion server (svnserve?) so I can access the repository from both machines. But what I am really confused about is how do I set up and arrange the actual code in my projects so that I can build them under two completely different environments? Do I put all the source code in one place, and then have separate directories for the OS-specific project files? So, have a Visual Studio directory which contains the project and solution files, and a Linux directory which has the make (?) files?
And on the subject of make, tell me I don't need to write makefiles, please! Under Visual Studio I just add new files to the projects and press a button to build. Can KDevelop do something similar? I had to use makefiles at uni and I'm sure I aged 10 years! :-)
Any help much appreciated,
Caroline M.
Well, if you've grown up using IDEs like Visual Studio, you're likely to be most comfortable with KDevelop. There are also cross-platform IDEs like Code::Blocks and Eclipse that present essentially the same environment on each platform.
KDevelop does maintain the makefiles for you, so you don't need to look at them if you don't want to.
The disadvantage of relying on IDEs is that they tend to make build automation difficult and tend to require a lot of manual labour to support multiple platforms - even more than doubling the amount of work if you've got to use multiple IDEs (try getting the same piece of software to build under Visual Studio, KDevelop, and Xcode). You spend a lot of time clicking around, opening windows, and tweaking stuff instead of focusing on programming.
What the professionals usually end up using (and I speak from experience as a professional multiplatform developer) is some kind of command-line build tool. The most powerful and best supported is make (and its friends, the autotools: autoconf, automake, libtool, etc). Like many powerful weapons, they take a long time to master but give you undreamt-of power (and wealth - experts can demand a decent salary). There are more limited alternatives, such as A-A-P, Ant, Jam, CMake, and so on. They are to make and the autotools as Java and C# are to C++: they try to make things easier by removing functionality. It's a trade-off.
What we use at work for cross-platform development is the autotools. They run natively on Linux and under Cygwin on Windows (we don't do Macintosh here, but at my last job we did, and they worked flawlessly). We support multiple Linux platforms (not all distros put stuff in the same place) and multiple architectures (i386, amd64). Builds can be automated to run every night while people are sleeping, and can be configured to be started remotely (edit/test/compile in Visual Studio on Windows, check in to svn, click a button on a web page, and a build on Linux is kicked off).
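To give a sense of the scale at the small end, a hello-world autotools project is only a few lines. This is just an illustrative sketch - the project name and file names are made up:

    # configure.ac - input for autoconf
    AC_INIT([mygame], [0.1])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CXX
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am - input for automake
    bin_PROGRAMS = gameserver
    gameserver_SOURCES = main.cpp

Run "autoreconf --install" once, then "./configure && make", and the same source tree builds natively on Linux and under Cygwin on Windows.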
I preach the strong advantages of the autotools, but I do admit the price of entry is very high. There is definitely a steep learning curve (but the view is magnificent at the top). The choice is yours: a steep learning curve for more power, or an easy start with more rote mechanical repetition.
Many people choose to target only Windows because they don't want to pay either price. I say jump into the deep water and learn to swim.
Stephen M. Webb
Professional Free Software Developer
Something that can really help a project move to another platform is making the code base cross-compiler as well. That means having a header that holds compiler-specific macros and typedefs. I've typedefed all the primitive types, and that really made moving from the Visual Studio 2005 compiler to GCC a lot easier. Granted, there are still some things that GCC complains about that Visual Studio lets slide, but they usually aren't that major.
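A minimal sketch of the kind of header I mean (the file, type, and macro names here are just illustrative):

    // Platform.h - one home for compiler-specific macros and typedefs
    #ifndef PLATFORM_H
    #define PLATFORM_H

    #if defined(_MSC_VER)
        // Visual C++ (2005 has no stdint.h, so use the MS built-in types)
        typedef __int8           int8;
        typedef __int16          int16;
        typedef __int32          int32;
        typedef unsigned __int32 uint32;
        #define FORCE_INLINE __forceinline
    #elif defined(__GNUC__)
        #include <stdint.h>
        typedef int8_t   int8;
        typedef int16_t  int16;
        typedef int32_t  int32;
        typedef uint32_t uint32;
        #define FORCE_INLINE inline __attribute__((always_inline))
    #else
        #error "Unsupported compiler - add typedefs for it here"
    #endif

    #endif // PLATFORM_H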
Another tip is to wrap all of your platform-specific code in a set of virtual classes. An example is file system access: my code uses a NativeFile class as the interface to the file system. I have a Win32NativeFile that handles Windows file access for me, and since Win32NativeFile is derived from NativeFile, all you need is a factory to get the file pointer from :).
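A rough sketch of that arrangement (all the names and the exact interface are my illustration, not the real code):

    // NativeFile.h - abstract interface the rest of the game codes against
    #include <cstddef>
    #include <string>

    class NativeFile
    {
    public:
        virtual ~NativeFile() {}
        virtual bool open(const std::string& path) = 0;
        virtual std::size_t read(void* buffer, std::size_t bytes) = 0;
        virtual void close() = 0;
    };

    #ifdef _WIN32
    #include <windows.h>

    // Win32 implementation, compiled only on Windows
    class Win32NativeFile : public NativeFile
    {
    public:
        Win32NativeFile() : handle(INVALID_HANDLE_VALUE) {}
        ~Win32NativeFile() { close(); }

        bool open(const std::string& path)
        {
            handle = CreateFileA(path.c_str(), GENERIC_READ, FILE_SHARE_READ,
                                 NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            return handle != INVALID_HANDLE_VALUE;
        }

        std::size_t read(void* buffer, std::size_t bytes)
        {
            DWORD got = 0;
            ReadFile(handle, buffer, static_cast<DWORD>(bytes), &got, NULL);
            return got;
        }

        void close()
        {
            if (handle != INVALID_HANDLE_VALUE)
            {
                CloseHandle(handle);
                handle = INVALID_HANDLE_VALUE;
            }
        }

    private:
        HANDLE handle;
    };
    #endif // _WIN32

    // The factory is the only place that knows which concrete class exists.
    NativeFile* createNativeFile()
    {
    #ifdef _WIN32
        return new Win32NativeFile();
    #else
        return 0; // a PosixNativeFile built on open()/read() would go here
    #endif
    }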
-----------------------------------
Indium Studios, Inc.
Thanks for all the replies, really helpful!
Another question on the build tools - if I went with the easy option to start with (!), would I then be able to learn the more powerful tools in the background and move over at some later point or would that be too difficult to do?
I'd like to be able to get a simple system (hello world?) up and running as quickly as possible, just to test the setup, and then grow from there. I don't mind learning something new as long as I can integrate that knowledge as I go along. What I don't want is to have to spend 3 months learning make before I can build a simple program!
Any thoughts?
Thanks again,
Caroline M
Subversion shouldn't be hard to set up. I split my development between a desktop and a laptop, each of which runs both Linux and Windows (32-bit on one machine, 64-bit on the other). To move stuff around between those 4 locations I have a separate machine in my office which hosts Subversion. In my case it's important that it be on a separate machine from either of my development boxes, so I can use it while switching between Windows and Linux on the same machine. If you have yours set up so the Windows and Linux environments are on different machines, then you could just host the SVN server on the Linux box.
You need to decide how to serve the repository - you can either run a proper server (through Apache), just keep a local repository and access it through SSH, or use the halfway server (svnserve), which doesn't seem to do anything as well as the other two alternatives. Apache is considerably more hassle to set up but also much more flexible. I think TortoiseSVN supports both the HTTP and SSH access modes, but I'm not totally sure about that. For myself, I have an Apache server hosted on my colo machine downtown, which gets used to coordinate work with other people (graphics, mostly), and an SSH-based repository on my internal network for my own coding.
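If you go the SSH route, getting started is only a couple of commands (the hostname and paths here are made up):

    # on the Linux box: create the repository
    svnadmin create /home/caroline/repos/mygame

    # from either machine: check out a working copy over SSH
    svn checkout svn+ssh://linuxbox/home/caroline/repos/mygame mygame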
With SVN set up something like that, there's no need to worry about separating object files for the Windows and Linux versions, because on each machine the code will be checked out of SVN into its own working copy. For source files there are a bunch of ways you can do it. I do everything with make, personally, so I have a separate makefile for each platform and a master one which automatically chooses the right one, so I can just type "make" anywhere and it builds.
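The master makefile can be tiny. This sketch assumes GNU make and that uname is available (it is under Cygwin); the included file names are made up:

    # Makefile - dispatch to a per-platform makefile
    UNAME := $(shell uname -s)
    ifeq ($(UNAME),Linux)
    include Makefile.linux
    else
    include Makefile.win32
    endif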
For code files which really just aren't portable between the platforms (socket-related stuff, for example) I create alternative versions of the particular files and store them under separate directories. Then the makefiles just pull in the right version for each platform automatically. For smaller platform-dependent sections I just wrap them in #ifdef and #endif blocks.
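For the small stuff, something like this (just a sketch):

    // Tiny platform differences handled inline with #ifdef
    #ifdef _WIN32
        #include <winsock2.h>
        typedef SOCKET socket_t;
    #else
        #include <sys/socket.h>
        #include <unistd.h>
        typedef int socket_t;
    #endif

    void closeSocket(socket_t s)
    {
    #ifdef _WIN32
        closesocket(s);   // Winsock sockets are not file descriptors
    #else
        close(s);         // on Unix a socket is just an fd
    #endif
    }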
Steve Bougerolle
http://www.imperialrealms.com | http://www.sebgitech.com | http://www.bougerolle.net