I didn't mean to store them all. That's just the worst-case scenario, where all the clients are connected and requesting these files at the same time. After you download them from S3, wouldn't these files, at some point in time, reside in memory, even for a short while?
Only if you keep the entire file there. But, as I said, you could probably stream them to/from disk so you don't have to keep the entire file in memory, or, as others said, forward them directly to the user while you download them yourself.
What Bob and everyone else are saying is:
char buff[1024];
std::fstream f;
f.open("file.bin", std::ios::in | std::ios::binary);
f.seekg(0, f.end);
size_t m = f.tellg();
f.seekg(0, f.beg);
size_t i = 0;
while (i < m)
{
    size_t l = sizeof buff;
    if (m - i < l)       // the last chunk may be shorter than 1024
        l = m - i;
    f.read(buff, l);
    send(skt, buff, l, 0);
    i += l;              // advance, otherwise the loop never ends
}
f.close();
i.e., stream it from the file to the network.
Notice how only a tiny part of the file ever resides in memory (only 1024 bytes in that code).