2GB isn't much though. There is maybe an argument to be had about keeping a consumer drive running 24/7 without idle, but even then I believe the things are actually pretty reliable (and actual NAS/server drives more so).
That said, what is the data? If it's basically a file download I'd be inclined to leave that to existing solutions: use, say, HTTP with Apache/Nginx/IIS on the server and your pick of the various HTTP client libraries, and take advantage of range requests (a Range request header, answered with a 206 Partial Content response carrying a Content-Range header). No one wants to start over on a slow internet connection (or server) if the socket disconnects at 1.9GB.
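Roughly what that exchange looks like on the wire (the file name and sizes here are made up for illustration):

```
GET /big.iso HTTP/1.1
Host: example.com
Range: bytes=2000000000-

HTTP/1.1 206 Partial Content
Content-Range: bytes 2000000000-2147483647/2147483648
Content-Length: 147483648
```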
For download resuming you can take the size of the file downloaded/written so far and then use an open-ended range, such as Range: bytes=1000- (everything from byte 1000 onwards).
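A minimal resume sketch, assuming a Python client using the requests library (the URL and filename are made up):

```python
import os
import requests

URL = "https://example.com/big.iso"   # hypothetical download URL
DEST = "big.iso"

# Size of whatever we already have on disk; 0 if starting fresh.
have = os.path.getsize(DEST) if os.path.exists(DEST) else 0
headers = {"Range": f"bytes={have}-"} if have else {}

with requests.get(URL, headers=headers, stream=True, timeout=30) as r:
    r.raise_for_status()
    # 206 = the server honoured the range, so append; 200 = the server
    # ignored it and is resending the whole file, so start over.
    mode = "ab" if r.status_code == 206 else "wb"
    with open(DEST, mode) as f:
        for chunk in r.iter_content(chunk_size=64 * 1024):
            f.write(chunk)
```

Checking for a plain 200 matters because a server that doesn't support ranges will just send the whole file again, and appending that to a partial copy would corrupt it.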
EDIT:
fleabay said:
I hope there are safety measures built into most operating systems (ie the ones I use) to eliminate file damage caused by bad programmers. (not calling you a bad programmer)
Like what though? On most systems, if a program wants to overwrite or break one of that user's files, nothing much will stop it, unless the user has some sort of backup enabled. How is the OS meant to tell the difference between Word saving a docx and your broken download program trashing one?
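For what it's worth, the usual defence lives in the application, not the OS: write the new contents to a temporary file and atomically rename it over the original, which is roughly the "safe save" pattern editors like Word use. A minimal sketch in Python (the function name is made up):

```python
import os
import tempfile

def safe_save(path, data: bytes):
    # Write to a temp file in the same directory, then atomically
    # rename it over the original. If we crash mid-write, the
    # original file is left untouched.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes hit the disk
        os.replace(tmp, path)      # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```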
What the OS generally does do is protect the overall filesystem, though: you shouldn't be able to create/remove/rename/delete/etc. your way into a broken filesystem, and getting direct block-level access should require root/admin (not least because it bypasses any file/folder-level permissions).