Heap Fragmentation
I've heard that some people use custom memory management so they don't have to constantly call new and delete, which is slow. I've also been told that it prevents fragmentation of the heap, which somehow affects performance. I was wondering if someone could explain this in detail: whether it is right or wrong, and why.
Dave.
"I am a pitbull on the pantleg of opportunity."George W. Bush
Ok, right now I'm only going to post a small reply, and I'll start by saying that this is all theory to me since I've never actually implemented it ... but I think I understand the issues.
The primary issue is usually that the Windows and C++ memory management systems are completely general purpose, and therefore cannot be optimized for each of the different usage scenarios that exist. They are also slow, since they have many cases to check and handle. So the deal is this ... if you have a dynamic memory situation that you understand very well, and can think of an allocation scheme that would be more efficient than the normal one ... such as constantly allocating and deallocating identically sized chunks (which means recycled chunks are guaranteed to hold future allocations correctly) ... then you can allocate one large area of memory from the C++ or Windows heap, and manage its contents yourself, via your own logic code.
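To make that fixed-size-chunk idea concrete, here is a minimal sketch of a pool allocator along those lines. This is just my own illustration, not code from any real library (the class name FixedPool and everything in it are made up): one big block is grabbed from the general-purpose heap up front, and freed chunks are recycled through an intrusive free list.

    #include <cstddef>
    #include <new>

    // Minimal fixed-size pool: one big block from the general-purpose
    // heap, carved into equal chunks, with freed chunks recycled via
    // an intrusive singly linked free list.
    class FixedPool {
        struct Node { Node* next; };

        std::size_t chunkSize_;
        char*       memory_;
        Node*       freeList_;

    public:
        FixedPool(std::size_t chunkSize, std::size_t chunkCount)
            : chunkSize_(chunkSize < sizeof(Node) ? sizeof(Node) : chunkSize),
              memory_(static_cast<char*>(::operator new(chunkSize_ * chunkCount))),
              freeList_(nullptr)
        {
            // Thread every chunk onto the free list up front.
            for (std::size_t i = 0; i < chunkCount; ++i) {
                Node* n = reinterpret_cast<Node*>(memory_ + i * chunkSize_);
                n->next = freeList_;
                freeList_ = n;
            }
        }

        ~FixedPool() { ::operator delete(memory_); }

        void* allocate() {
            if (!freeList_) return nullptr;  // pool exhausted
            Node* n = freeList_;             // pop the head: O(1), no searching
            freeList_ = n->next;
            return n;
        }

        void deallocate(void* p) {
            Node* n = static_cast<Node*>(p); // push the chunk back on: O(1)
            n->next = freeList_;
            freeList_ = n;
        }
    };

Usage would look something like: FixedPool pool(sizeof(Particle), 1024); for whatever Particle struct you happen to have, then pool.allocate() and pool.deallocate(p) instead of new and delete. Because every chunk is the same size, any freed chunk fits any future request exactly, so both operations are just a couple of pointer writes and the pool itself never fragments.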
The issue with heap fragmentation is that, since memory allocations and deallocations vary in size, when an allocation is freed it may or may not be sized correctly to be reused ... and even if it can be reused, usually only partially. So what you get is a situation in which, when you look at memory sequentially, not all of it is actually being used. This is EXTREMELY bad when you begin to think of how cache works on modern computer systems. Say your cache lines are 32 bytes, so your CPU fetches data into cache in sequential 32-byte blocks. Well ... if only an average of 24 out of every 32 bytes is still being actively used, then you are throwing away 25% of your CPU's cache ... which is like saying that all of the memory you allocate actually occupies 33% more space than you think (32/24 = 1.33). So it's kind of a dual lose-lose situation: you lose main memory due to allocated sections that are no longer used ... even very large sections ... and you lose cache due to small holes inside your allocated memory. Oh well ... gotta go ... so I hope this helps as a start.
October 12, 2000 07:01 PM
Here is an example of heap fragmentation.
Consider we have 10 bytes of memory: ..........
Now allocate 5 bytes: 11111.....
Now allocate 1 byte: 111112....
Now free the first 5 byte allocation: .....2....
Now allocate 1 more byte: 3....2....
At this point we have 8 bytes free in total. However, the largest contiguous chunk we've got is only 4 bytes. If somebody comes along and wants 5, they're out of luck: the heap is fragmented and we don't have a chunk that large. The toy sketch after this post walks through the same steps in code.
-Mike
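If you want to watch that play out, here is a little toy first-fit allocator I sketched that reproduces the exact steps above. None of it comes from a real allocator; a character array just stands in for the heap (no alignment, no block headers).

    #include <cstdio>
    #include <cstring>

    // Toy 10-"byte" heap: '.' marks a free cell, a digit marks
    // which allocation owns a cell.
    const int HEAP = 10;
    char heap[HEAP + 1];

    // First-fit: scan left to right for the first hole big enough.
    int alloc(int size, char tag) {
        for (int i = 0; i + size <= HEAP; ++i) {
            bool fits = true;
            for (int j = i; j < i + size; ++j)
                if (heap[j] != '.') { fits = false; break; }
            if (fits) {
                memset(heap + i, tag, size);
                return i;                  // start of the hole we claimed
            }
        }
        return -1;                         // fragmented: no hole big enough
    }

    void free_at(int start, int size) { memset(heap + start, '.', size); }

    int main() {
        memset(heap, '.', HEAP); heap[HEAP] = '\0';
        int a = alloc(5, '1'); printf("%s\n", heap);  // 11111.....
        alloc(1, '2');         printf("%s\n", heap);  // 111112....
        free_at(a, 5);         printf("%s\n", heap);  // .....2....
        alloc(1, '3');         printf("%s\n", heap);  // 3....2....
        // 8 cells are free, but the biggest hole is only 4 wide:
        printf("alloc(5) -> %d\n", alloc(5, '4'));    // prints -1
        return 0;
    }

The final call fails even though 8 of the 10 bytes are free, which is exactly Mike's point: total free space and largest usable hole are two different numbers.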