
Questions about the Event Manager

Started by August 19, 2021 05:57 PM
20 comments, last by h8CplusplusGuru 3 years, 3 months ago

Hi everyone,

I'm starting to plan my Event Manager. I've been reading around, and my idea is to create a system that uses FastDelegate (the C++ library from 2005) to handle all the events. That way I won't need a huge enum, and other systems don't need to know anything about it: it's abstract, anything can register itself for an event, and it will wait until the event is triggered.

The problem is that FastDelegate from 2005 is quite old and relies on some hacky tricks. I assume that now, in 2021 with C++17, there should be something that fits the job. I was thinking about Boost.Signals2, but I'm not sure whether it's faster than FastDelegate, or the best idea at all.

So if anyone knows a good library that fits the job (events, delegates, etc.), a tip, or anything else related to building an Event Manager, I'd be glad to read it.

PS: I was also thinking of reading a file in a thread other than the main one, and sending an event once the file is read, but I assume that needs to be thread-safe. Any document or material about that?

Thanks!!

I don't know about “fast” delegates, but we use delegates in our engine as well. Everything needs to be C99 standard compliant, so our delegates work with every C++ standard and have “only” one indirection, which makes them slower than plain function calls. From my perspective, this is the cleanest and fastest way of doing it.

We differentiate between two delegate implementations: a static one for everything that isn't a class member function pointer, and a dynamic one which can be used for both member functions and the static calling convention.

The static one doesn't even have an indirection, since it doesn't need to cover dynamic calls requiring a this-pointer, so I'll show the dynamic one here:

/**
 A dynamic typed struct providing anonymous calling context
*/
template<typename ret do_if(ORDER, _separator) variadic_decl(typename Args, ORDER)> struct DynamicDelegate<ret(variadic_decl(Args, ORDER))>
{
    public:
        typedef ret ReturnValue;
        typedef ret (*FunctionPointer) (variadic_decl(Args, ORDER));
        typedef ret (*ContextPointer) (void* do_if(ORDER, _separator) variadic_decl(Args, ORDER));

        /**
         A pointer to the internal call context
        */
        force_inline ContextPointer Context() { return context; }
        /**
         An instance pointer when initialized to a member function, null_ptr otherwise
        */
        force_inline void* Target() { return target; }

        /**
         Copy constructor
        */
        force_inline DynamicDelegate(DynamicDelegate const& delegate) : context(delegate.context), target(delegate.target)
        { }
        /**
         Default constructor
        */
        force_inline DynamicDelegate() : context(se_null), target(se_null)
        { }
        /**
         Class constructor initializes this context with given values
        */
        force_inline DynamicDelegate(ContextPointer context, void* target) : context(context), target(target)
        { }
        /**
         Class destructor
        */
        force_inline ~DynamicDelegate()
        { }

        force_inline DynamicDelegate<ret(variadic_decl(Args, ORDER))>& operator=(DynamicDelegate<ret(variadic_decl(Args, ORDER))> const& delegate)
        {
            Bind(delegate.context, delegate.target);
            return *this;
        }

        force_inline operator bool() const { return context != se_null; }
        force_inline bool operator!() const { return !(operator bool()); }

        force_inline ret operator()(variadic_args(Args, a, ORDER)) const 
        { 
            return Invoke(variadic_decl(a, ORDER));
        }

        /**
         Binds the delegate to a new target
        */
        force_inline void Bind(ContextPointer ctx, void* instance)
        {
            context = ctx;
            target = instance;
        }
        /**
         Binds the delegate to a new target
        */
        force_inline void Bind(ContextPointer ctx)
        {
            Bind(ctx, se_null);
        }

        /**
         Calls the function this context is bound to
        */
        force_inline ret Invoke(variadic_args(Args, a, ORDER)) const
        { 
            return context(target do_if(ORDER, _separator) variadic_decl(a, ORDER));
        }

        /**
         Returns the parameter count of this delegate signature
        */
        force_inline static int ParameterCount() { return ORDER; }

    private:
        ContextPointer context;
        void* target;
};

We expect two function pointers: one points to “the real” function we want to invoke, and the other is the proxy call. The proxy call is invoked together with a third pointer, the object instance required to call the function if it is a class member function. For a static function, this value is zero.

The variadic stuff is a set of macros which generate template and function arguments depending on the ORDER value specified. We generate overloads of this template for 0 up to 10 arguments in the target function pointer.
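For comparison, the same idea can be written with C++11 variadic templates, which removes the need to macro-generate one overload per argument count. A minimal sketch (the Console type and all names here are purely illustrative, not the engine's real API):

```cpp
#include <cstdio>

template<typename Signature> struct DynamicDelegate;

// Variadic-template version of the macro-generated DynamicDelegate above.
template<typename Ret, typename... Args>
struct DynamicDelegate<Ret(Args...)>
{
    using ContextPointer = Ret (*)(void*, Args...);

    DynamicDelegate() : context(nullptr), target(nullptr) {}
    DynamicDelegate(ContextPointer c, void* t) : context(c), target(t) {}

    explicit operator bool() const { return context != nullptr; }

    Ret operator()(Args... args) const { return context(target, args...); }

    // Proxy for a member function: the instance travels in the void* slot.
    template<class T, Ret (T::*Fn)(Args...)>
    static Ret MemberProxy(void* target, Args... args)
    {
        return (static_cast<T*>(target)->*Fn)(args...);
    }

    // Proxy for a free/static function: the void* slot is ignored.
    template<Ret (*Fn)(Args...)>
    static Ret StaticProxy(void*, Args... args) { return Fn(args...); }

    ContextPointer context;
    void* target;
};

// Illustrative target class.
struct Console
{
    int written = 0;
    void WriteLine(const char* msg) { std::printf("%s\n", msg); ++written; }
};
```

The two proxies play exactly the role of the InstanceCallContext/StaticCallContext functors shown below; the compiler instantiates them per bound function, so the delegate itself stays two plain pointers.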

The real “magic” happens in the call proxies, we have different proxy implementations which can be used:

/**
 Class Member Context Utility
*/
template<typename ret do_if(ORDER, _separator) variadic_decl(typename Args, ORDER)> struct InstanceCallContext<ret (variadic_decl(Args, ORDER))>
{
    public:
        /**
         Provides a class member function call context to use in dynamic delegate
        */
        template<class T, ret (T::*type) (variadic_decl(Args, ORDER))> static force_inline ret Functor(void* target do_if(ORDER, _separator) variadic_args(Args, a, ORDER))
        { 
            T* ptr = static_cast<T*>(target);
            return (ptr->*type)(variadic_decl(a, ORDER));
        }
        /**
         Provides an anonymous class member function call context to use for dynamic calling
        */
        template<class T, ret (T::*type) (variadic_decl(Args, ORDER))> static force_inline void AnonymousFunctor(void* target, void** args)
        { 
            T* ptr = static_cast<T*>(target);
            *reinterpret_cast<typename SE::TypeTraits::Const::Remove<typename SE::TypeTraits::Reference::Remove<ret>::Result>::Result*>(args[ORDER]) = (ptr->*type)(variadic_deduce(Args, args, ORDER));

            (void)args;
        }

        /**
         Provides a const class member function call context to use in dynamic delegate
        */
        template<class T, ret (T::*type) (variadic_decl(Args, ORDER)) const> static force_inline ret ConstFunctor(void* target do_if(ORDER, _separator) variadic_args(Args, a, ORDER))
        { 
            T* ptr = static_cast<T*>(target);
            return (ptr->*type)(variadic_decl(a, ORDER));
        }
        /**
         Provides an anonymous const class member function call context to use for dynamic calling
        */
        template<class T, ret (T::*type) (variadic_decl(Args, ORDER)) const> static force_inline void AnonymousConstFunctor(void* target, void** args)
        { 
            T* ptr = static_cast<T*>(target);
            *reinterpret_cast<typename SE::TypeTraits::Const::Remove<typename SE::TypeTraits::Reference::Remove<ret>::Result>::Result*>(args[ORDER]) = (ptr->*type)(variadic_deduce(Args, args, ORDER));

            (void)args;
        }
};

As you can see, all it does is cast the instance argument to the class type, call the member function on that instance, and return the result. The static one looks similar:

/**
 Static Context Utility
*/
template<typename ret do_if(ORDER, _separator) variadic_decl(typename Args, ORDER)> struct StaticCallContext<ret (variadic_decl(Args, ORDER))>
{
    public:
        typedef ret (*FunctionPointer) (variadic_decl(Args, ORDER));

        /**
         Provides a static function call context to use in dynamic delegate
        */
        template<FunctionPointer type> static force_inline ret Functor(void* target do_if(ORDER, _separator) variadic_args(Args, a, ORDER))
        {
            (void)target;
            return type(variadic_decl(a, ORDER));
        }
        /**
         Provides an anonymous static function call context to use for dynamic calling
        */
        template<FunctionPointer type> static force_inline void AnonymousFunctor(void* target, void** args)
        {
            *reinterpret_cast<typename SE::TypeTraits::Const::Remove<typename SE::TypeTraits::Reference::Remove<ret>::Result>::Result*>(args[ORDER]) = type(variadic_deduce(Args, args, ORDER));

            (void)target;
            (void)args;
        }
};

We also have one of those for member access (e.g. variable/property getters and setters), but that's not relevant here.

Usage is, by the way, as simple as:

DynamicDelegate<void (const char*)> log;
Console instance;

log.Bind(&InstanceCallContext<void (const char*)>::Functor<Console, &Console::WriteLine>, &instance);

Our current event system is a template class as well. We don't use a single manager for it; instead we have typed events depending on the event's data. For example, we have an input event which is invoked from the input system whenever data arrives:

Events<InputData>::Add(myInputCallback);
Events<InputData>::Invoke(...);
Events<InputData>::Remove(myInputCallback);

Operator overloads also exist to get something like the luxury of the C# event syntax: += callback to add and -= callback to remove.
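A minimal sketch of what such a typed Events class could look like (this is an illustration, not the engine's actual code; std::function stands in for its delegate type, and the id-based Remove is a workaround for std::function not being comparable):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// One static subscriber list per event data type, so Events<InputData> and
// Events<WindowData> never collide.
template<typename Data>
class Events
{
public:
    using Callback = std::function<void(const Data&)>;

    // Returns an id so the (non-comparable) std::function can be removed later.
    static std::size_t Add(Callback cb)
    {
        Subscribers().push_back(std::move(cb));
        return Subscribers().size() - 1;
    }

    static void Remove(std::size_t id) { Subscribers()[id] = nullptr; }

    static void Invoke(const Data& data)
    {
        for (auto& cb : Subscribers())
            if (cb) cb(data);
    }

private:
    static std::vector<Callback>& Subscribers()
    {
        static std::vector<Callback> list; // one list per Data instantiation
        return list;
    }
};
```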

The event system itself just collects event data and does nothing except bookkeeping. Events are fired from our thread pool when we register the Events<Args>::Process function, which runs asynchronously on whatever thread is currently available.

The inner function performs some important operations before the callbacks are iterated. First, a spin lock is acquired which protects the collection of subscribers; then the subscribers are cloned into another list, also kept in the event manager's internal data. We do this because subscribers can unsubscribe from the event while it is being processed and cause a data mismatch in the subscriber list. Then the list of collected event data is iterated and every callback is called for each item stored. This is spin locked as well, but only when accessing the next item; this way other threads can interleave with the dispatch and their events are processed in the same run. This isn't the case for subscribers, though.
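The clone-then-dispatch idea described above can be sketched roughly like this (illustrative only; std::mutex stands in for the engine's spin lock, and int stands in for the event data):

```cpp
#include <functional>
#include <mutex>
#include <vector>

// Subscribers are copied under the lock, so a callback may safely
// unsubscribe (mutate the real list) while the event is being dispatched.
struct Dispatcher
{
    using Callback = std::function<void(int)>;

    void Subscribe(Callback cb)
    {
        std::lock_guard<std::mutex> lock(mutex);
        subscribers.push_back(std::move(cb));
    }

    void Process(int eventData)
    {
        std::vector<Callback> snapshot;
        {
            std::lock_guard<std::mutex> lock(mutex); // engine: spin lock
            snapshot = subscribers;                  // clone the list
        }
        for (auto& cb : snapshot)                    // iterate the stable copy
            cb(eventData);
    }

    std::mutex mutex;
    std::vector<Callback> subscribers;
};
```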

This is a quite naive but functional approach. Since we use the reactive programming pattern really extensively in our C# code, I'm thinking about adding it to the engine and replacing the old event system as well. Reactive programming has some advantages, since you're working with streams and can chain event streams to each other. However, I have to think about this some more, since our current event system only fires whenever a new frame is created, and reactive programming is an on-demand approach.


I've created an updated version of the original fast delegates, using C++11 features to drastically reduce the amount of code and remove some of the “hacks” previously used. I posted it as an article on this site, but cannot find it for the life of me. Unfortunately, my current version is not really something I can reasonably share: for one, it depends on lots of system headers from my engine, and as far as I'm aware it only works on MSVC (which is the result of me rewriting the inner logic to support lambdas with small captures).

But truthfully, you could still use the 2005 version if you don't want to make modifications to the source yourself. It should run on all platforms even with all the “hacks”; I've never encountered any bugs with it. The only real downside compared to my more modern solution is that it doesn't use variadic templates (which didn't exist then) and thus has to duplicate the declarations and limit the number of parameters. If this doesn't bother you, feel free to use it.

Also, I should mention I personally use a few different delegate-style types, depending on the situation:

Delegate: This one is my modified version of the 2005 fast delegate. It allows binding free/static functions, member functions, and lambdas (with a capture of at most sizeof(void*), only trivially destructible data).
Signal: Also modified from the 2005 original; simply a list of Delegates (similar to how C# delegates work, allowing multiple bindings).
Function: For storing more complicated lambda captures, I made a move-only variant of std::function. A lot smaller and way faster than std::function (at least for creation/destruction), but with almost the same flexibility (if you actually need to copy the capture, use std::function).
FunctionRef: Allows capturing anything, but as the name suggests, it doesn't copy the function, it only references it. For passing free/member functions there is no difference, but when you pass a lambda you need to make sure the lambda outlives the scope of the FunctionRef (similar to how std::string_view works in conjunction with std::string). It's pretty much both the fastest and the most flexible type, with that one obvious restriction, so I use it exclusively to pass into a function, and never to hold on to a function for long.
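The core trick behind such a non-owning FunctionRef type can be sketched in a few lines (a simplified illustration, not the actual implementation being described above):

```cpp
#include <type_traits>
#include <utility>

template<typename Signature> class FunctionRef;

// Stores a type-erased pointer to the callable plus a typed trampoline.
// The referenced callable must outlive the FunctionRef, like string_view.
template<typename Ret, typename... Args>
class FunctionRef<Ret(Args...)>
{
public:
    template<typename F>
    FunctionRef(F&& f)
        : object(const_cast<void*>(static_cast<const void*>(&f)))
        , invoke([](void* obj, Args... args) -> Ret {
              // Cast back to the concrete callable type and call it.
              return (*static_cast<typename std::remove_reference<F>::type*>(obj))(
                  std::forward<Args>(args)...);
          })
    { }

    Ret operator()(Args... args) const
    {
        return invoke(object, std::forward<Args>(args)...);
    }

private:
    void* object;                 // points at the caller's callable
    Ret (*invoke)(void*, Args...); // capture-less lambda decayed to fn pointer
};
```

Since it never copies the callable, construction is just two pointer stores, which is where the speed comes from.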

In terms of an event manager, supporting all of those would obviously be overkill, so you'd have to decide what you want. Being able to use lambda captures is a real benefit, even with the limitations of my modified delegate version. But now that I've thought about it: if you want the most flexible interface, you are probably best off just using std::function or something similar, until you actually notice that it becomes a performance issue. The call performance of std::function is actually pretty much on par with everything else; it's just creation, and especially destruction, that is costly.

Thank you both @shaarigan & @juliean for the info.

Forgot to mention: the client engine is single threaded (for various reasons we can't use multi-threading for physics & rendering).

I'm checking and doing some benchmarks. Boost.Signals2 isn't bad at all, and it's an all-purpose system, accepting static functions, local (lambda) functions, and member functions of instances. If I see that it's too slow, I will try something else, probably Fast Delegates.

I have two more concerns. First: how should event data be allocated? Should I create an allocator, like a memory pool, for the event data?

Second: we can have multi-threading for reading files, and I need to think of a way to send the event safely to the main thread.

All I can think of is some atomic bool, or using the signal's thread-safe mutex option.
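One simple pattern for getting events from the I/O thread to the main thread is a mutex-guarded queue that the main thread drains once per frame. A rough sketch (all names here are illustrative):

```cpp
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Hypothetical payload: a finished file load.
struct FileLoadedEvent
{
    std::string path;
    std::vector<char> data;
};

class EventQueue
{
public:
    // Called from the I/O thread once a file is fully read.
    void Push(FileLoadedEvent ev)
    {
        std::lock_guard<std::mutex> lock(mutex);
        queue.push(std::move(ev));
    }

    // Called from the main thread, typically once per frame, until empty.
    bool TryPop(FileLoadedEvent& out)
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (queue.empty()) return false;
        out = std::move(queue.front());
        queue.pop();
        return true;
    }

private:
    std::mutex mutex;
    std::queue<FileLoadedEvent> queue;
};
```

The lock is only held for the push/pop itself, never while processing the event, so contention stays minimal even on a single-threaded game loop.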

Rewaz said:
I have two more concerns. First: how should event data be allocated? Should I create an allocator, like a memory pool, for the event data?

It depends. If you create events like we did, everything is allocated in the event data buffer, as it's templated and the size is known at compile time. Otherwise you have to provide a well-defined way to allocate memory and to free it after the event has been processed.

Management can happen in the event itself, via something like C#'s Dispose, or you inform the sender about completion and let it do that on its own. The latter has the advantage that the sender doesn't need to allocate anything but can provide a pointer to some stack data, while it has the disadvantage that you're responsible and interconnected in a more detailed fashion.

Also read about memory allocation in this blog post http://bitsquid.blogspot.com/2010/09/custom-memory-allocation-in-c.html

Rewaz said:
Second: we can have multi-threading for reading files, and I need to think of a way to send the event safely to the main thread.

You can't, as file I/O is single process, single thread by definition. You can, however, try to read different portions of the data in your own stream class, but you need to synchronize those streams over the FILE handle. Streams usually read parts of the file's data into a small buffer (e.g. 128 bytes) to speed up jumping back and forth a few bytes.

The only way to truly read a file multithreaded is memory-mapped I/O. The OS copies the file contents (single threaded) into a virtual memory page, and you just obtain a pointer to the portion you requested. Since this pointer is plain byte data, you can access it from different threads without blocking each other. You do, however, have to make sure that the mapping isn't released while there are still pointer instances in use.


1) No no, I have something like that for my allocators, but I don't like doing “Allocator::allocate()” or “myAllocator.allocate( MyClass )”. I prefer to override the new & delete operators; it's cleaner. As for how event data is done, I have a main class and all the other events are children of it, like “class EventData” → “class EventObjectCreate : public EventData”.

2) No no, I meant that I have one thread exclusively for I/O. My idea is to read the whole file into a stream, and once I have the complete stream, send it (with an event or another way, I honestly don't care how, just the fastest way possible) to the main thread, check something like “IsRead”, and start using the stream.


Rewaz said:
I have something like that for my allocators, but I don't like doing “Allocator::allocate()” or “myAllocator.allocate( MyClass )”. I prefer to override the new & delete operators; it's cleaner.

In my opinion it is cleaner to have allocator calls instead of overriding the new operator. Look at these two calls (which may literally do the same thing):

myClass* myInstance = Allocator::Default::Allocate<myClass>(myArgs);

myClass* myInstance = new (Allocator::Default::Allocate(sizeof(myClass), 16)) myClass(myArgs);

And I know, the placement new version also uses the allocator (that's because we have different allocators offering different allocation strategies like Stack, Heap and MemoryPool), but it looks way cleaner to call it directly without the placement new. I don't know what your allocator looks like, so this may differ from your implementation, but at the moment I don't see a way to have different strategies without fetching the memory pointer from some kind of manager class; otherwise you don't need an allocator at all. The use case of an allocator is that everything is properly aligned, managed and tracked, where managing can, for example, happen in different buckets for different-sized memory requests, or you can have a garbage collection strategy behind it.
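To illustrate, here is a rough sketch of such a typed allocate call that hides the placement new at the call site (the Allocator::Default names are made up for this example, and malloc stands in for the real strategy):

```cpp
#include <cstdlib>
#include <new>
#include <utility>

namespace Allocator { namespace Default {

    // Raw allocation entry point; a real engine would dispatch to a
    // strategy (stack, heap, pool) and honor the alignment request.
    inline void* Allocate(std::size_t size, std::size_t /*alignment*/)
    {
        return std::malloc(size);
    }

    inline void Free(void* ptr) { std::free(ptr); }

    // Typed facade: the placement new is hidden here, so call sites
    // read like a plain factory call.
    template<typename T, typename... Args>
    T* Allocate(Args&&... args)
    {
        void* mem = Allocate(sizeof(T), alignof(T));
        return new (mem) T(std::forward<Args>(args)...);
    }

    // Symmetric typed release: destructor call plus raw free.
    template<typename T>
    void Destroy(T* ptr)
    {
        ptr->~T();
        Free(ptr);
    }

} } // namespace Allocator::Default
```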

By the way, I prefer to have classes manage their memory needs on their own, so constructor/destructor calls should do that. If you do need to allocate something on the heap, then having the security of being informed immediately when you leave an object's scope with a memory leak improves development a lot.

Rewaz said:
I have the main class and all of the other events are child of it

I'm torn about this approach; OOP in C++ should be chosen carefully in my opinion. Even if it looks well designed, having plain static functions in a namespace can be better than declaring a pure static class, and POD structs for events can be better when you don't need inheritance features.

Our data containers, for example, are classes that rely on OOP and inheritance, because as everything inherits from Array, it can be treated like one. This is useful if you want to iterate the container, or use such a container to manage memory (like a JSON DOM).

On the other hand, if you use POD structs, you can allocate a big block of memory and just place those structs into that block without worrying about the size of the object you allocate. Staying with the JSON DOM example: our JSON nodes are managed this way; we have a Document which inherits from Array and keeps all the nodes in a coherent block of memory for fast access.

Rewaz said:
my idea is to read the whole file into a stream

It pains me a bit to read this.

The use case for streams is that you don't keep the entire contents in them, but load data from the source whenever it's needed and available. What you mean is a buffer, not a stream.

An argument against this approach is the size of the files you want to read: it can easily exceed your RAM. You should avoid this unless you are really, really sure that those files will never exceed a certain limit, and even then, you want to avoid it!

If you don't have multithreaded file I/O, why read the file in a second thread? If you use a real streaming approach, you don't need that. And even if we assume the file I/O is so slow that it will block your entire thread for several seconds, why don't you have the thread fill the stream up to a buffer limit of, let's say, 128 bytes instead of reading the entire file?

I don't know what you want to achieve, but I guess there are some faults in your design which could be solved in a much cleaner and safer way ¯\_(ツ)_/¯

@Shaarigan

Shaarigan said:
By the way, I prefer to have classes manage their memory needs on their own, so constructor/destructor calls should do that. If you do need to allocate something on the heap, then having the security of being informed immediately when you leave an object's scope with a memory leak improves development a lot.

Hmmm, that's true. I was also thinking: what if, for whatever reason, I want to use another allocator, and I already have the new operator overridden? I won't be able to do that. So maybe I will try to find a clean way to allocate memory, perhaps using a macro.

Shaarigan said:
I'm torn about this approach; OOP in C++ should be chosen carefully in my opinion. Even if it looks well designed, having plain static functions in a namespace can be better than declaring a pure static class, and POD structs for events can be better when you don't need inheritance features.

The OOP approach is because each “EventData” subclass will contain the variables needed for its event, so the memory is only for the data. Example:

class EventEntityCreate : public EventData
{
	protected:
		uint8  m_type;
		uint32 m_entityId;
};

Shaarigan said:
If you don't have multithreaded file I/O, why read the file in a second thread? If you use a real streaming approach, you don't need that. And even if we assume the file I/O is so slow that it will block your entire thread for several seconds, why don't you have the thread fill the stream up to a buffer limit of, let's say, 128 bytes instead of reading the entire file?

I don't know what you really mean by multithreaded file I/O, but I can read/write from that second thread. I don't have many threads reading from an HDD, since it can only handle one operation at a time.

Sorry about calling it a stream, my mistake; I meant a buffer. I use a static buffer of 20MB and make sure no file reaches that capacity, then I move those 20MB (of course not all files are 20MB) to the main thread, where they are used for loading maps or resources. Otherwise some files will block the main thread (and since the game is single threaded, the game will freeze). For some HDD users we noticed there were some serious freezes loading map files.

Rewaz said:
The OOP approach is because each “EventData” subclass will contain the variables needed for its event, so the memory is only for the data. Example:

If you inherit from EventData, then memory is allocated for EventData itself, plus the vtable pointer for virtual function calls (which is added to inherited types even if you don't use any inheritance features per se); the compiler also creates an implicit call to the base class constructor, and finally there is the space needed for the derived class type as well.

You can easily check this with a sizeof call. Just saying; in the end it is your implementation.

Rewaz said:
we notice there was some serious freezes loading map files

Did you read the file all at once when you were loading from the main thread as well?

Rewaz said:
I use a static buffer of 20MB and make sure no file reaches that capacity, then I move those 20MB (of course not all files are 20MB) to the main thread, where they are used for loading maps or resources.

What games usually do is put everything into one huge archive file and then use memory-mapped I/O on that file in order to have fast access to different portions of it. It is quite simple: you just need some kind of chunking which pads the file to 64k boundaries, and an indexing header. Then you load the mapping, create views (on Windows; Unix doesn't need them) to the chunks you need, and read everything as a plain byte pointer.

I wrote a simple tool for our engine to test that, and it took me half a day to get a working solution. It is somewhat more complex now, since I added compression and asset encryption as well as digital signing over elliptic curves.

TL;DR: memory-mapped I/O saves you from doing things with a fixed 20MB buffer, which can lead to trouble later in development, like every randomly chosen magic number can.

Shaarigan said:
If you inherit from EventData, then memory is allocated for EventData itself, plus the vtable pointer for virtual function calls (which is added to inherited types even if you don't use any inheritance features per se); the compiler also creates an implicit call to the base class constructor, and finally there is the space needed for the derived class type as well.

That's only true if EventData contains virtual functions. No vtable will be created if neither the base nor the class itself has any virtuals. Now, if you intend to store EventData in a container for deferred processing (as I think the OP intends), then you'll probably need it to have a virtual destructor, which will indeed mandate a vtable for all derived types. If you were only to create events on the stack for immediate notifications, then no virtual destructor would be needed.
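This is easy to verify with sizeof: two event structs with the same payload, one deriving from an empty base and one from a base with a virtual destructor, differ by the vtable pointer (plus any padding). An illustrative sketch:

```cpp
#include <cstdint>

// No virtuals anywhere: the empty base adds no storage at all.
struct PlainBase { };
struct PlainEvent : PlainBase
{
    std::uint8_t  type;
    std::uint32_t entityId;
};

// Virtual destructor in the base: every derived type carries a vtable pointer.
struct VirtualBase { virtual ~VirtualBase() {} };
struct VirtualEvent : VirtualBase
{
    std::uint8_t  type;
    std::uint32_t entityId;
};
```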

This topic is closed to new replies.
