Memory Fragmentation Problem?
Hello,
I am getting segfaults simply for including this line in my code:
double A[(IMAX-2)*(JMAX-2)][(IMAX-2)*(JMAX-2)];
which currently breaks down to
double A[29*39][29*39];
which then becomes
double A[1131][1131];
I don't even try to access the array or anything.
Now, according to my calculations this is just about 10 megabytes of storage space assuming a 64-bit double. If I change the 1131 in the brackets to a smaller number like 50, everything works just grand. But I really need the 1131x1131 array, since I am basically solving a very large system of simultaneous equations in matrix form Ax=b using the Gauss-Seidel point iteration method.
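(A quick sanity check on that figure; a minimal sketch, with 1131 simply being the value worked out above:)
#include <cstddef>
#include <iostream>

int main()
{
    const std::size_t n = 1131;                          // (IMAX-2)*(JMAX-2) = 29*39
    std::cout << n * n * sizeof(double) << " bytes\n";   // 1131*1131*8 = 10233288 bytes, roughly 9.8 MB
    return 0;
}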
I assume there is some sort of memory fragmentation problem such as the program not being able to allocate 10 megabytes of contiguous storage space.
My system has 256 megabytes of physical memory and 512 megabytes of virtual memory running Linux 2.4.20 kernel.
If you could help me out, I would greatly appreciate it.
EDIT: The GD.net forum software didn't like me putting two brackets around the capital letter A representing the matrix A, so I just removed them.
[edited by - Floppy on June 2, 2003 1:18:22 AM]
Perhaps a more direct way to phrase this question would be
What is the maximum number of elements an array can hold in C++ (on Linux 2.4.20 compiling with g++-2.95.4)?
Ok,
I figured out (by guess and check methods) that the maximum number of elements an array can hold is less than 1024^2 or 2^20. So, for example, this works
double A[1023][1023];
but this does not work
double A[1024][1024];
So, now my question is, is there any possible way to extend this size (maybe a compile time option? or some STL vector method?).
Ideally, I would like the program to be able to make an array whose size is that of the memory on the system. I did some searching around and found that the Intel C++ compiler apparently supports up to 2 GB size arrays, but I would prefer not to have to purchase that software.
Any other ideas would be appreciated
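(One option along the STL lines mentioned above is a single heap-allocated block indexed by hand, e.g. a std::vector; a minimal sketch, with the indexing purely illustrative:)
#include <cstddef>
#include <vector>

int main()
{
    const std::size_t n = 1131;
    std::vector<double> A(n * n, 0.0);   // one contiguous block on the heap rather than the stack
    A[5 * n + 7] = 1.0;                  // element (5,7) via row-major indexing: A[i*n + j]
    return 0;
}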
Is this array on the stack? The stack is obviously limited more than the total process memory.
Either way, dynamic allocation seems to be the way to go, as this can make it more flexible as well.
cu,
Prefect
Widelands - laid back, free software strategy
Hello,
Yup, dynamic memory allocation does work, since the original array was stored on the stack while the new array is stored on the heap, which isn't subject to the same size limit. I tried to avoid this option since I thought it would be cumbersome to dynamically allocate an array that was going to be used throughout the entire program.
But, is this the general "correct" way to dynamically allocate a multidimensional array?
#include <iostream>
using namespace std;

int main()
{
    double **pA;
    pA = new double*[10000];
    for(int i=0; i < 10000; i++)
        pA[i] = new double[10000];

    cout << "\nHELLO WORLD\n";

    for(int i=10000; i > 0; i--)
        delete [] pA[i-1];
    delete [] pA;

    return 0;
}
Thanks for your help
EDIT: Found that the above code is a correct way to do this. Also, I added the dynamic deallocation statements for a more complete example. I figure the same process can be applied to three-dimensional arrays or higher.
[edited by - Floppy on June 2, 2003 7:21:06 PM]
That's a correct way, but not the most efficient. Here is a more efficient way (it uses less 'overhead' memory, and is faster to allocate and delete):
#include <cstddef>
#include <new>

template <typename Type>
Type **my_new_2d(std::size_t a, std::size_t b) throw(std::bad_alloc) {
    Type **ref = new(std::nothrow) Type *[a];
    Type *mem  = new(std::nothrow) Type[a*b];

    // We don't want the latter to throw and lose the other, and this is the simplest
    // way of handling that without using some destructor or catching an exception
    if(!ref || !mem) {
        delete [] ref;
        delete [] mem;
        throw std::bad_alloc();
    }

    for(std::size_t ai = 0; ai < a; ++ai)
        ref[ai] = &mem[ai * b];

    return ref;
}

template <typename Type>
void my_delete_2d(Type **ref) {
    delete [] ref[0];
    delete [] ref;
}
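(A hypothetical usage example, assuming the two templates above are in scope:)
int main()
{
    double **A = my_new_2d<double>(1131, 1131);   // one 1131*1131 element block plus a table of row pointers
    A[3][4] = 2.5;                                // rows are contiguous, so A[i][j] indexes row i, column j
    my_delete_2d(A);
    return 0;
}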
Of course, you could do it like this for only halfway dynamically sized arrays:
std::size_t count = 1131;
double (*A)[100] = new double[count][100];
delete [] A;
[edited by - Null and Void on June 2, 2003 12:22:15 AM]
Thanks Null and Void!
I do have one problem with the last code statement you put there. I'm not entirely sure if this is a part of the ANSI/ISO C++ standard or not, but the code
double (*A)[100] = new double[100][count];
gives an error in g++-2.95.4. The error is
test.cpp:8: initialization to `double (*)[100]' from `double (*)[((count - 1) + 1)]'
In fact, g++ gives me an error whenever two pairs of brackets are used in a new statement.
If I remember correctly, that code will compile correctly in Visual C++ 6.0, but I don't know if this is a standard C++ feature or not. Visual C++ does have a tendency to make quite a few "illegal" assumptions/changes/extensions to the standard; then again, quite a few other compilers do this too.
I would prefer to code in as portable a way as possible, so which way is correct according to the standard: the new double[][], or the new double*[] followed by the for loop?
EDIT: I spelled "too" as "to."
[edited by - Floppy on June 2, 2003 11:11:35 PM]
[edited by - Floppy on June 2, 2003 11:12:01 PM]
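(For what it's worth, the form the standard does allow is the one where only the first, leftmost dimension is a runtime value and the remaining dimensions are compile-time constants; a minimal sketch, though whether g++-2.95.4 accepts it is another question:)
#include <cstddef>

int main()
{
    std::size_t count = 1131;
    double (*A)[100] = new double[count][100];   // only the first dimension may vary at run time
    A[0][0] = 1.0;
    delete [] A;
    return 0;
}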