I ran into the same trouble when reworking the base module of our SDK. Looking around on the internet, I found the HAL in Unreal's source code on GitHub, including some interesting files called the "minimal windows api". The problem I have often faced is that if you include windows.h, hundreds of other headers come along with it and build complexity increases. Sure, you can control via macros which parts of the header you want, but honestly this feels odd, because you have to dig through the header files and look at which macro definitions are tested for. So I include windows.h in my platform layer only once, for using the interlocked memory barrier (as this is itself a macro, it isn't easy to reproduce).
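For reference, the macro-based trimming mentioned above usually looks like this (these are the well-known windows.h configuration macros, not something from my codebase, and even with them the header still pulls in a lot):

```cpp
// WIN32_LEAN_AND_MEAN drops rarely used subsystems (winsock, some GDI/OLE parts),
// NOMINMAX prevents the min/max macros from clashing with std::min/std::max.
#define WIN32_LEAN_AND_MEAN
#define NOMINMAX
#include <Windows.h>
```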
My solution for both goals, a small flat API and leaving out all the windows.h stuff I don't want to include, was inspired by Unreal's minified Windows API: declare everything I need from the WINAPI as an extern import myself. So I added a header file wrapped in its own namespace (to avoid clashing with any global definitions):
#pragma once
#if WINDOWS
#ifndef WindowsDataTypes_h
#define WindowsDataTypes_h
namespace Runtime
{
    typedef unsigned char BYTE;
    typedef int32 BOOL;
    typedef unsigned short WORD;
    typedef unsigned long DWORD;
    …
    #pragma warning( push )
    #pragma warning( disable : 4201 )
    union LARGE_INTEGER
    {
        struct
        {
            DWORD LowPart;
            LONG HighPart;
        };
        LONGLONG QuadPart;
    };
    #pragma warning( pop )
}
#endif
#endif
Because it is very unlikely that any of the legacy WINAPI changes in the future (and if it does, I can adapt my code anyway), I copied the definitions of the Windows types I need into my code, like Unreal did.
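If you copy the type definitions like this, it is cheap to guard them with static_asserts so any mismatch with the real WINAPI layout fails at compile time. A minimal sketch (the types are re-declared here with fixed-width aliases so the snippet is self-contained, and the anonymous struct is named to avoid the MSVC extension; note that WINAPI spells DWORD "unsigned long" but it is still 32-bit because of the LLP64 model):

```cpp
#include <cstdint>

namespace Runtime
{
    // Hand-copied WINAPI aliases, pinned to fixed-width types so the
    // layout matches on every compiler we build with.
    typedef std::uint8_t  BYTE;
    typedef std::int32_t  BOOL;
    typedef std::uint16_t WORD;
    typedef std::uint32_t DWORD;     // "unsigned long" on Windows, still 32-bit (LLP64)
    typedef std::int32_t  LONG;
    typedef std::int64_t  LONGLONG;

    union LARGE_INTEGER
    {
        struct { DWORD LowPart; LONG HighPart; } u;  // named here for portability
        LONGLONG QuadPart;
    };
}

// Fail the build instead of corrupting WINAPI calls if a size ever drifts.
static_assert(sizeof(Runtime::BOOL) == 4,          "BOOL must be 32-bit");
static_assert(sizeof(Runtime::DWORD) == 4,         "DWORD must be 32-bit");
static_assert(sizeof(Runtime::LARGE_INTEGER) == 8, "LARGE_INTEGER must be 64-bit");
```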
Then I have, for example, my Chrono.h file:
#pragma once
#if WINDOWS
#ifndef WindowsChrono_h
#define WindowsChrono_h
#define SE_WINAPI __stdcall
#define static_link extern "C"
#include <Windows/DataTypes.h>
namespace Runtime
{
    static_link dll_import BOOL SE_WINAPI QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
    static_link dll_import BOOL SE_WINAPI QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
}
force_inline int64 SE::System::GetHighResolutionTime()
{
    Runtime::LARGE_INTEGER result;
    Runtime::QueryPerformanceCounter(&result);
    return static_cast<int64>(result.QuadPart);
}
force_inline int64 SE::System::GetClockFrequency()
{
    Runtime::LARGE_INTEGER result;
    Runtime::QueryPerformanceFrequency(&result);
    return static_cast<int64>(result.QuadPart);
}
#undef SE_WINAPI
#undef static_link
#endif
#endif
And I defined the general API in my System layer like so:
#pragma once
#ifndef SystemChrono_h
#define SystemChrono_h
#include <Numerics.h>
namespace SE
{
    namespace System
    {
        /**
        Gets the current CPU tick counter value
        */
        int64 GetHighResolutionTime();
        /**
        Gets the ticks-per-second constant
        */
        int64 GetClockFrequency();
    }
}
#include <Android/Chrono.h>
#include <Linux/Chrono.h>
#include <Windows/Chrono.h>
#endif
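To illustrate how callers use this API, here is a portable stand-in (a sketch, not my actual platform layer: the two functions are stubbed with std::chrono instead of QueryPerformanceCounter so it runs anywhere, and int64 is aliased locally in place of my Numerics.h type):

```cpp
#include <chrono>
#include <cstdint>

typedef std::int64_t int64;  // stand-in for the SDK's Numerics.h alias

namespace SE { namespace System
{
    // Portable stubs mimicking the platform-layer contract:
    // a raw tick counter plus a ticks-per-second constant.
    inline int64 GetHighResolutionTime()
    {
        return std::chrono::steady_clock::now().time_since_epoch().count();
    }

    inline int64 GetClockFrequency()
    {
        using period = std::chrono::steady_clock::period;
        return period::den / period::num;  // ticks per second
    }
} }

// Typical caller pattern: convert a tick delta into seconds.
inline double TicksToSeconds(int64 begin, int64 end)
{
    return double(end - begin) / double(SE::System::GetClockFrequency());
}
```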
What happens now is the following:
- The preprocessor includes the Windows-specific code on Windows platforms
- The compiler forwards the public declarations in System to the implementations in the platform-specific files
- The linker sees the extern "C" dll_import declarations and tries to bind each function to a known symbol, either in my own code (where it does not exist) or in the VC Runtime / WINAPI
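The same declare-it-yourself trick works for any C-linkage function, which is easy to verify even without Windows. In this sketch, puts from the C runtime is declared by hand instead of including <cstdio>; the linker binds it exactly the way the dll_import-ed WINAPI declarations above get bound to kernel32:

```cpp
// No <cstdio> include: we declare the C runtime function ourselves.
// The compiler trusts this signature; the linker binds the call to the
// CRT, just like the QueryPerformanceCounter declaration binds to the OS.
extern "C" int puts(const char* text);

int RunDemo()
{
    // puts returns a non-negative value on success
    return puts("bound by the linker, not by a header");
}
```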
And finally I build my classes on top of the SE::System functions without worrying about how they are implemented. If a function needs different platform-specific data types or parameters, I try to either unify those parameters in the system API, add some kind of switch (via an enum) that the internal implementations can handle, or define a data type in the System API that gets translated into the OS-dependent one.
The latter has not been necessary yet, and my API is fairly complete for whatever I need.
The switch processing happens very often; for example, my file API has a utility function that translates such a switch into Windows-specific file access modifiers:
inline void TranslateFileFlags(DWORD& openMode, DWORD& shareMode, SE::FileAccessFlags::EnumType flags)
{
    if (flags == SE::FileAccessFlags::Default || HasFlag(flags, SE::FileAccessFlags::Read))
    {
        openMode |= SE_GENERIC_READ;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_READ;
    }
    if (HasFlag(flags, SE::FileAccessFlags::Write))
    {
        openMode |= SE_GENERIC_WRITE;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_WRITE;
    }
}
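For completeness, here is a self-contained version of that pattern you can compile anywhere (the SE_* constants, the enum, and HasFlag are stand-ins defined locally just for this snippet; the real values come from my copied WINAPI definitions and mirror GENERIC_READ, GENERIC_WRITE, and so on):

```cpp
typedef unsigned int DWORD;  // local stand-in for the copied WINAPI type

namespace SE { namespace FileAccessFlags {
    enum EnumType { Default = 0, Read = 1, Write = 2, Shared = 4 };
} }

// Bit-test helper; the SDK's HasFlag works the same way.
inline bool HasFlag(SE::FileAccessFlags::EnumType flags, SE::FileAccessFlags::EnumType bit)
{
    return (flags & bit) != 0;
}

// Stand-in values mirroring GENERIC_READ/GENERIC_WRITE/FILE_SHARE_*.
static const DWORD SE_GENERIC_READ     = 0x80000000;
static const DWORD SE_GENERIC_WRITE    = 0x40000000;
static const DWORD SE_FILE_SHARE_READ  = 0x1;
static const DWORD SE_FILE_SHARE_WRITE = 0x2;

inline void TranslateFileFlags(DWORD& openMode, DWORD& shareMode, SE::FileAccessFlags::EnumType flags)
{
    if (flags == SE::FileAccessFlags::Default || HasFlag(flags, SE::FileAccessFlags::Read))
    {
        openMode |= SE_GENERIC_READ;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_READ;
    }
    if (HasFlag(flags, SE::FileAccessFlags::Write))
    {
        openMode |= SE_GENERIC_WRITE;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_WRITE;
    }
}
```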
I haven't encountered any big performance issues yet, except when accessing things by name (which shouldn't happen that often in a game), for example when opening a file. This is because Windows offers ANSI and Unicode versions of its WINAPI functions, and I use the Unicode ones to stay UTF-8 compliant. That requires calling MultiByteToWideChar in order to pass UTF-8 strings properly.
[EDIT]
Oh, and I forgot to mention: whenever something platform-specific needs to be stored in my higher-level code, I pass it around as a void pointer. Most C++ devs around here will be triggered now, but you never work with those pointers yourself; only the forwarded API calls do, and they know what they expect and how to handle it. You are only responsible for storing the pointer and passing it into the right parameter.
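To make the void-pointer handle idea concrete, here is a hedged sketch (all names are hypothetical, and the OS handle is faked with a heap-allocated integer so it runs anywhere): the higher-level code only stores and forwards the pointer, and only the platform layer knows how to interpret it.

```cpp
#include <cstdlib>

// --- platform layer: the only code that knows the real handle type ---
namespace Platform
{
    void* OpenFakeHandle()             // stands in for e.g. CreateFileW
    {
        int* native = static_cast<int*>(std::malloc(sizeof(int)));
        *native = 42;                  // pretend this is an OS handle value
        return native;                 // erased to void* for the upper layers
    }

    int ReadFakeHandle(void* handle)   // stands in for e.g. ReadFile
    {
        return *static_cast<int*>(handle);
    }

    void CloseFakeHandle(void* handle) // stands in for e.g. CloseHandle
    {
        std::free(handle);
    }
}

// --- higher level: stores the pointer, never dereferences it itself ---
struct File
{
    void* nativeHandle;
};

inline int UseFile()
{
    File file;
    file.nativeHandle = Platform::OpenFakeHandle();
    int value = Platform::ReadFakeHandle(file.nativeHandle);
    Platform::CloseFakeHandle(file.nativeHandle);
    return value;
}
```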