While scribbling up a design for a general-purpose file class, I noticed that some of the operations I'm implementing often involve really large files (upwards of a few dozen megabytes). This got me wondering--in the ancient days of DOS, there were limits on file sizes using certain file I/O methods, and I'd freeze my PC frequently if I tried to work with really big files.
(If you've ever worked with 90% of DOS hex editors, for example, you'll see what I mean.)
So, anyway, I looked through my options and came up with three possibilities, though I'm probably missing others:
1) FILE -- Ex:
FILE* infile = fopen("foo.dat","rb");
2) fstream -- Ex:
fstream infile("foo.dat",ios::in | ios::binary);
3) Platform SDK stuff -- Ex:
OFSTRUCT ofs; HFILE filehandle = OpenFile("foo.dat", &ofs, OF_READ);
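For reference, the kind of access pattern I have in mind is something like this sketch using option #2 — reading in fixed-size chunks so memory use stays flat no matter how big the file is (the function name and the 64 KB buffer size are just placeholders I picked):

```cpp
#include <fstream>
#include <vector>

// Read a file in fixed-size chunks so memory use stays constant
// regardless of file size. Returns total bytes processed, or -1
// if the file could not be opened.
long long process_in_chunks(const char* path)
{
    std::ifstream in(path, std::ios::in | std::ios::binary);
    if (!in)
        return -1;

    std::vector<char> buffer(64 * 1024);   // 64 KB working buffer
    long long total = 0;

    // read() sets the fail bit on the final short read, so also
    // check gcount() to pick up the last partial chunk.
    while (in.read(&buffer[0], (std::streamsize)buffer.size()) || in.gcount() > 0) {
        std::streamsize got = in.gcount();
        // ...process buffer[0..got) here...
        total += got;
    }
    return total;
}
```

The same loop structure works with fread()/FILE* for option #1; the point is that nothing ever tries to pull the whole file into memory at once.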
What are the pros/cons of each method (or any method I've missed)? Do any of them have limits on file size? To my knowledge, #3 isn't portable to other operating systems, which could be a disadvantage depending on how resigned the programmer is to Windows' dominance in the OS arena. My main concern is that my program doesn't hang just because I'm reading/writing a really big movie file or something.
Comments appreciated!
SkyDruid
Edited by - skydruid on 7/16/00 6:49:49 PM