I was thinking about posting this in the networking forum, but it doesn't really have to do with games.
Anyway, I just started a project for the discussion and development of a new, open-source networking infrastructure and interface. One of its main goals is to eliminate the need for service providers by sharing the relay of data over devices, wirelessly, instead of over a physical infrastructure like Comcast's.
Please look at the model to see what I'm talking about: http://www.chroud.com/chroudnet/

I feel eventually, wireless technologies like WiMAX and the like will be able to provide fast wireless transfer over miles-wide radii - won't we get to a point where we can be our own infrastructure?

So that's my question - will we always need service providers? Other questions this project brings up:

1.) Can we design a networking infrastructure that utilizes parallel relay - similar to a multi-core processor?
2.) Can we design a better network for future technologies like full-blown cloud computing?

Also, if you're interested in joining the project - it just started today! Head over to http://www.chroud.com
Will we always need service providers?
the need for service providers by sharing the relay of data over devices, wirelessly
Ah, but air is not infinite.
Today anyone can build a radio. Yet very few can operate radio stations.
You can extrapolate from there.
Even today, depending on the country, operating a communication node might be subject to laws and regulations. There might also be legal implications for traffic that goes through such a node. This is one of the aspects ISPs currently cover.
So that's my question - will we always need service providers?
No, only for as long as there is a need for them to provide a service.

Can we design a networking infrastructure that utilizes parallel relay - similar to a multi-core processor?
What precisely would that be?

2.) Can we design a better network for future technologies like full-blown cloud computing?
Better in what regard?
Cloud computing (currently) means primarily ubiquitous access. Decentralization was never a strong point, nor does it need to be royalty free.
Unless you develop ground-breaking physics that allow you to build a wireless communication device that doesn't use the electromagnetic spectrum, there will always be a trade-off between wired and wireless communication: Wired will be faster and generally more reliable, assuming a peaceful society, while wireless will allow more freedom of movement.
Having a kind of autonomous self-organizing wireless network would be awesome for after the collapse of civilization, though. The obvious question then becomes where you get your replacement parts from. Some super-advanced RepRap?
Widelands - laid back, free software strategy
It might be an alternative, another internet - an Internet 3 of sorts. But I don't think it will ever be the main source of connections.
As soon as some major countries pass some internet legislation, I'm expecting to see an underground p2p internet pop up outside of the ISPs. Kind of like old-school BBSes, but with WiMAX instead. This is assuming said internet does not already exist and I just don't know about it.
Unless you develop ground-breaking physics that allow you to build a wireless communication device that doesn't use the electromagnetic spectrum, there will always be a trade-off between wired and wireless communication: Wired will be faster and generally more reliable, assuming a peaceful society, while wireless will allow more freedom of movement.
I don't know. I could see a world where a wireless service could be faster than wired. I think wireless will always be less secure and probably less efficient, but I think it could reach a point where the speed difference is negligible even with an entirely ISP-less world.
I'd think a much larger problem would be security without ISPs, as an individual with little legal obligation is a lot harder to place blame on than an ISP with a laundry list of legal obligations to fulfill.
Most of these issues are addressed with the model on the website: http://www.chroud.com/chroudnet/
Speed: The model is based on distributed transfer, similar to how BitTorrent works. Small amounts of information are passed over many devices, multiplying the effective speed by the number of devices in the network. In a peak theoretical model, 50 devices relaying a distributed request, each with a conservative local up and down of 20 Mb/s, would be passing information at 1 Gb/s per hop. Even if half of those packets fail and have to be re-sent, you're still looking at 500 Mb/s. The future of WiMAX is looking at over 100 Mb/s over a range of many miles.
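For what it's worth, here's a rough back-of-the-envelope sketch of that arithmetic (the function and numbers are just illustrative assumptions of mine - real wireless contention and protocol overhead would eat into it):

[code]
# Rough sketch of the aggregate-throughput claim above. Assumes every relay
# contributes its full uplink in parallel and the only loss is re-sent packets.
def aggregate_throughput_mbps(devices, per_device_mbps, failure_rate):
    """Peak aggregate rate in Mb/s, discounted by the fraction of packets re-sent."""
    peak = devices * per_device_mbps
    return peak * (1.0 - failure_rate)

# The numbers from the post: 50 relays at 20 Mb/s each.
print(aggregate_throughput_mbps(50, 20, 0.0))   # 1000.0 Mb/s, i.e. ~1 Gb/s peak
print(aggregate_throughput_mbps(50, 20, 0.5))   # 500.0 Mb/s with half the packets re-sent
[/code]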
Security: Because each device is only responsible for relaying a fraction of the original data, no single device is ever handling anything meaningful on its own. And with encryption that relies on the collection of the data, the actual request couldn't be understood unless all of the fractional pieces were gathered on a single device. To me this seems more secure than the current model, where everything runs through a single pipeline.
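To make the "no single device holds anything meaningful" idea concrete, here's a toy sketch of my own (plain XOR splitting, not the actual ChroudNet scheme): any relay holding fewer than all of the shares sees only random-looking noise.

[code]
# Toy illustration only: split a message into N shares that all XOR back to
# the original. Any subset smaller than N is indistinguishable from noise.
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(message, relays):
    """Return `relays` shares of equal length; the XOR of all of them is the message."""
    shares = [os.urandom(len(message)) for _ in range(relays - 1)]
    shares.append(reduce(xor_bytes, shares, message))
    return shares

def combine(shares):
    return reduce(xor_bytes, shares)

shares = split(b"GET /index.html", 5)
assert combine(shares) == b"GET /index.html"      # all 5 shares recover the request
assert combine(shares[:4]) != b"GET /index.html"  # any 4 shares are just noise
[/code]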
All in all, the project isn't intended to replace anything. Just an alternative, fun, open-source networking model that might some day in the future prove to be better.
<a href='"http://en.wikipedia.org/wiki/OLPC_XO-1#Wireless_mesh_networking"'>this?</a><br> Well, not really. Like that but a distributed model, using modern technology (not WiFi), parallel transfer, and optimized for remote processing (cloud computing). Also, not connected to the "internet". ChroudNet services will be much different in software, so they won't be in touch with the internet. Data has to be packed differently for distributed relay, and use an exclusively streaming model.<br><br> It would in all senses of the term be a new internet. But obviously referred to currently as an alternative to the internet - a replacement won't come to fruition for decades.
Since we don't need service providers today, the answer is obviously no.
However, we do want service providers, since they make our lives easier. Building a mesh network isn't all that difficult in populated areas, but it greatly restricts our ability to send data across unpopulated areas. (Getting data from the US to Europe with a mesh network won't be easy, due to the low number of people living in the Atlantic Ocean.)
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Speed: The model is based on distributed transfer, similar to how BitTorrent works. Small amounts of information are passed over many devices, multiplying the effective speed by the number of devices in the network. In a peak theoretical model, 50 devices relaying a distributed request, each with a conservative local up and down of 20 Mb/s, would be passing information at 1 Gb/s per hop. Even if half of those packets fail and have to be re-sent, you're still looking at 500 Mb/s. The future of WiMAX is looking at over 100 Mb/s over a range of many miles.
That is assuming the cross-section capacity can actually sustain it. Weakest link and all that. The internet today is already limited in this manner: the trans-oceanic and similar connections determine the maximum capacities. Without buffering, a wireless network will suffer from similar issues.
Then there is the proverbial problem of routing. How does device A know which of its known peers X1, ..., Xn to send data to in order for the entire message to arrive at device B? The internet solves this via routing, which is painfully centralized. Even today's "distributed" systems silently piggyback on this mechanism without mentioning it.
How would router X receive IP routing information for an ad-hoc, unreliable network? It's possible, but it introduces huge overheads, potentially exponential ones.
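To put a number on that, here's a toy flood-based route discovery (my own sketch, not any real mesh protocol). With no central routing tables, finding B from A means most of the network rebroadcasts the request, and you pay that per route and per topology change.

[code]
# Toy simulation: breadth-first flooding of a route request through an ad-hoc
# mesh where every node only knows its immediate radio neighbours.
from collections import deque

def flood_route_request(neighbors, src, dst):
    """Return (hops from src to dst, number of rebroadcasts used to find it)."""
    seen, queue, transmissions = {src}, deque([(src, 0)]), 0
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops, transmissions
        transmissions += 1                    # this node rebroadcasts the request
        for peer in neighbors[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, hops + 1))
    return None, transmissions

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"]}
print(flood_route_request(mesh, "A", "E"))    # (3, 4): 3 hops, 4 rebroadcasts in one tiny mesh
[/code]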
Security: Because each device is only responsible for relaying a fraction of the original data, no single device is ever handling anything meaningful on its own. And with encryption that relies on the collection of the data, the actual request couldn't be understood unless all of the fractional pieces were gathered on a single device. To me this seems more secure than the current model, where everything runs through a single pipeline.
*Seems*.
What is stopping someone from simply snooping all traffic? It's just a matter of hardware resources. Apply some network analysis and determine the narrowest points. Again, attacking the weakest link.
Also, Tor proves why this fails. Don't monitor the cross section - monitor the source. Actual services will remain centralized. There was a recent study finding that illegal movies on torrents are uploaded by ~100 people in total. Stop those people and the system loses its sources.