High-bandwidth IPC

Started by August 05, 2003 05:33 PM
4 comments, last by gph-gw 21 years, 3 months ago
If one process opens a socket to 127.0.0.1 and another process picks it up, what kind of bandwidth limits might there be? We've got to send giant amounts of data back and forth at high speed. I'd assume it's just a queue in memory, so it's as fast as the processes can send/receive, but I want to make sure so we won't waste time trying to make this work. I want to use sockets because at some point we could change the program to communicate between two systems. Oh, BTW, it's running on a dual-processor P4 Xeon system.
It's a massive waste of resources, but if it's too slow on the same system, it's never going to work across the wire.

The preferred IPC is a dual memory-mapped region - on NT you can use memory-mapped files to accomplish this. On Linux I'm not sure how (yet; I need to do something similar), but it really wouldn't surprise me if there were no mechanism to create a region that is memory-mapped into two processes (from user-mode code).
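A minimal sketch of the NT approach in C: a pagefile-backed named mapping that a second process can open by name. The region name "IpcRegion" and the 64 MB size are illustrative placeholders, not anything from the thread.

```c
/* Sketch: named shared-memory region on NT/Win32.
   "IpcRegion" and the 64 MB size are placeholders. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const DWORD size = 64 * 1024 * 1024;  /* 64 MB region */

    /* INVALID_HANDLE_VALUE => backed by the pagefile, not a disk file. */
    HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                    PAGE_READWRITE, 0, size,
                                    "IpcRegion");
    if (hMap == NULL) {
        fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    void *view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (view == NULL) {
        fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
        CloseHandle(hMap);
        return 1;
    }

    /* A second process calls OpenFileMapping with the same name and
       maps its own view; both then see the same physical pages.
       Synchronize access with a named mutex or event. */

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}
```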
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara
Using sockets with the loopback address (localhost, i.e. 127.0.0.1) typically incurs some protocol overhead (i.e. TCP). It's unclear what optimizations a given protocol stack makes when the loopback address is used.

If you eventually want a distributed system, then use loopback.

Otherwise, consider Unix domain sockets, which incur no network-protocol overhead. They effectively place socket read/write semantics around an in-kernel buffer shared between the two processes.
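As a rough sketch of what that looks like in C, here is the server side of a Unix domain socket; the path /tmp/ipc.sock is a placeholder. The client simply connect()s to the same path and then uses ordinary read()/write().

```c
/* Sketch: Unix domain socket server. "/tmp/ipc.sock" is a placeholder. */
#include <sys/socket.h>
#include <sys/un.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_UNIX, SOCK_STREAM, 0);
    if (listener < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/ipc.sock", sizeof addr.sun_path - 1);

    unlink("/tmp/ipc.sock");  /* remove a stale socket file, if any */
    if (bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return 1;
    }
    listen(listener, 1);

    int conn = accept(listener, NULL, NULL);  /* blocks for the peer */
    char buf[65536];
    ssize_t n = read(conn, buf, sizeof buf);  /* plain read/write from here */
    printf("got %zd bytes\n", n);

    close(conn);
    close(listener);
    return 0;
}
```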

As another poster suggested, memory-mapped files are an option. On Linux, a memory-mapped file is accessed via mmap(). Pretty simple. Another shared-memory option is System V shared memory (shmget); be warned, though, that a shared memory segment like that is a system-wide resource and often doesn't get cleaned up if your program crashes or forgets to release it (mmapped files, however, will get cleaned up).
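A minimal sketch of the mmap route on Linux: two processes map the same file with MAP_SHARED and see one region. The path /tmp/ipc-buffer and the 16 MB size are placeholders.

```c
/* Sketch: sharing a memory-mapped file between two processes on Linux.
   "/tmp/ipc-buffer" and the 16 MB size are placeholders. */
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const size_t size = 16 * 1024 * 1024;

    int fd = open("/tmp/ipc-buffer", O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* MAP_SHARED makes writes visible to every process that maps
       the same file. */
    void *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* The second process opens the same file and mmaps it the same way;
       guard concurrent access with a semaphore or similar. */

    munmap(region, size);
    close(fd);
    return 0;
}
```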
quote: Original post by Magmai Kai Holmlor
It's a massive waste of resources, but if it's too slow on the same system, it's never going to work across the wire.

The preferred IPC is a dual memory-mapped region - on NT you can use memory-mapped files to accomplish this. On Linux I'm not sure how (yet; I need to do something similar), but it really wouldn't surprise me if there were no mechanism to create a region that is memory-mapped into two processes (from user-mode code).


Hmmmm... maybe I didn't put it quite the right way. The system needs to be fairly flexible: if, for example, I end up needing only an average of 20 MB/s of data flow, I could distribute that data over a high-speed network (e.g. gigabit) and free up the sending system's processing load. However, if we get unlucky and the data flow turns out to be more like 150 MB/s, we would need to rearrange the system so that both processes run on the same (dual-processor) machine.

If we knew the total amount of data now, there wouldn't be a problem, but as the project goes forward the design changes (as it always seems to do), so I want to build a system that works now, while we have time, and not later, when we'll be worrying about debugging and optimization and all that. Once the system is in place, the data flow likely won't change, so we can fix the choice at compile time.

I'll check out both Unix domain sockets and memory-mapped regions. Unix sockets might be the way we go, because then it'll be easier to switch to TCP and/or UDP later. If it's just a FIFO in memory, it should be pretty fast. As for shared memory, I'd have to work with one of the other programmers on that; I'm not too experienced with race conditions and things like that, but it might be required.

Or I can bite the bullet and write a small library that does both, so we're safe either way, but that seems like a lot of time investment for not much gain.

I guess my question, then, is: can two processes communicate with each other at very high rates?
I recommend Unix domain sockets. X uses the same thing, and despite what some people claim, it doesn't slow down the protocol.

Just out of curiosity, what are you developing that could pump out 150 MB/s?
What sort of Unix is this? Virtually all Linux IPC mechanisms are fast, but shared memory segments are the most efficient for large amounts of data, of course - unless the memory needs to be copied from one process to another anyway.

Information on the various options is available here.
--- New info keeps brain running; must gas up!

