
Will we always need service providers?

Started by January 30, 2011 10:18 PM
19 comments, last by chroud 14 years ago

Security: Because each device relays only a fraction of the original data, no single device ever holds anything meaningful. And with an encryption scheme that operates over the whole collection of fragments, the actual request couldn't be understood unless all the fractional pieces were reassembled on a single device. To me this seems more secure than the current model, where everything runs through a single pipeline.
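The "each relay holds a meaningless fraction" idea is essentially secret sharing. As an illustrative sketch (not the poster's actual scheme), here is a minimal XOR-based n-way split in Python: any subset of fewer than all n shares is statistically random, and only XOR-ing all shares together recovers the message.

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shares(message, n):
    """Split a message into n shares; fewer than n shares reveal nothing."""
    shares = [os.urandom(len(message)) for _ in range(n - 1)]
    last = message
    for s in shares:          # fold the random shares into the final share
        last = xor_bytes(last, s)
    shares.append(last)
    return shares

def combine_shares(shares):
    """Recover the message by XOR-ing every share together."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out
```

Note the trade-off this makes concrete: the scheme is all-or-nothing, so losing any one relay's share loses the message, which is why real systems tend to use threshold schemes instead.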

That works the further you get from the original sender/receiver, but someone within range of the original transmission could still pick up the entire transmission. It would probably be a pain to decrypt, though.

The problem comes when you look at the legal system in single pipeline vs. air. If you mess with the pipeline, you are in legal trouble. The current law on a network like the one described is murky at best. Not that it couldn't be remedied, but that would be what I'd be most worried about. I'd also imagine it would be a lot harder to track someone sniffing packets anywhere in a two-mile radius than someone plugged into a specific network.
So on top of Antheus's very practical arguments, there are a couple of even lower-level things that will affect the maximum data transmission rate.

The wavelength of light is shorter than the wavelength of WiFi or any other radio band. Since your maximum data transmission rate is inversely proportional to the time between subsequent wave peaks (i.e., proportional to the carrier frequency), fiber will always have better data transmission speeds than WiFi.

Further, the transmission of light through fiber avoids EM interference which is not avoided over WiFi. 50 local nodes trying to transmit on the same frequency will likely cause enough interference problems to significantly slow bandwidth (there's a reason your WiFi router can choose between ~12 "channels": those are slightly different segments of spectrum so the routers can lessen interference problems). In short: the bandwidth of the air is not infinite as you seem to be assuming.

It's bounded by physics and that bound is just slower than fiber.
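The physics bound the posts above gesture at is the Shannon-Hartley capacity, C = B * log2(1 + SNR): capacity grows with usable channel bandwidth, and optical carriers offer vastly more of it than a WiFi channel. A quick back-of-the-envelope comparison (the bandwidth and SNR figures are illustrative assumptions, not measurements):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed figures for illustration only:
wifi_bps  = shannon_capacity_bps(20e6, 1000)  # 20 MHz channel, ~30 dB SNR
fiber_bps = shannon_capacity_bps(4e12, 100)   # ~4 THz optical band, ~20 dB SNR

print(f"WiFi-class channel: ~{wifi_bps/1e6:.0f} Mb/s ceiling")
print(f"Fiber-class band:  ~{fiber_bps/1e12:.1f} Tb/s ceiling")
```

Even granting the radio link a better SNR, the sheer difference in usable bandwidth keeps the wired ceiling orders of magnitude higher.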

For WiFi you also need far more hops between nodes to deliver a packet, since WiFi range is tiny compared to fiber range. Consumer routers are nowhere near as fast as the giant network switches that backbone providers use. More hops plus slower devices will cap your maximum bandwidth well below the theoretical physical bound. Network bottlenecks would be a real problem for any WiFi net that extends beyond a fairly local range.
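The hop penalty can be made concrete with a deliberately simplified model: on a shared half-duplex channel, each relay must finish receiving before it retransmits, so end-to-end throughput degrades roughly as link rate divided by hop count (real meshes with spatial reuse do somewhat better, but the trend holds):

```python
def end_to_end_throughput_mbps(link_rate_mbps, hops):
    """Crude half-duplex relay model: effective throughput ~ rate / hops.

    Assumes every hop shares one channel and no spatial reuse; a
    pessimistic but illustrative bound for a dense wireless mesh.
    """
    return link_rate_mbps / max(hops, 1)

# A 54 Mb/s link stretched across a city at ~20 hops:
print(end_to_end_throughput_mbps(54.0, 20))  # a few Mb/s, not 54
```

That is the core of the scalability argument: adding range by adding hops eats the very bandwidth the mesh is supposed to provide.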

The system is definitely possible to set up. It's just definitely not going to be faster than wired internet because of physics and the practicalities of the physical network architecture that would be required.

-me
Thanks for the critiques! That is the point of the project: to think about whether we could make something better than decades-old networking models.

In response to routing: since we would be starting from scratch, don't we have some alternative technologies we could consider today for routing packets? GPS? We don't exactly have to conform to IP-style routing if we aren't abiding by an internet protocol to begin with. I do agree that routing would be difficult - it would require research and extension of existing mesh network algorithms - but that's the fun of an open-source project, right?
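GPS-assisted routing of the kind suggested here usually means greedy geographic forwarding: each node hands the packet to whichever neighbor is closest to the destination's coordinates. A minimal sketch (hypothetical coordinates, and note the known failure mode where greedy forwarding gets stuck at a local minimum):

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) coordinate pairs."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(current, neighbors, dest):
    """Pick the neighbor geographically closest to the destination.

    Returns None when no neighbor improves on the current position -
    the 'local minimum' case that real geographic-routing protocols
    must handle with a recovery mode (e.g., face/perimeter routing).
    """
    best = min(neighbors, key=lambda n: dist(n, dest))
    if dist(best, dest) < dist(current, dest):
        return best
    return None
```

The local-minimum case is exactly the kind of open problem the poster would need the "research and extension of existing mesh network algorithms" for.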


[quote]What is stopping someone from simply snooping all traffic? It's just a matter of hardware resources. Apply some network analysis and determine the narrowest points. Again, attacking the weakest link.[/quote]

Well, then we're talking about someone who is willing to build an entire hardware infrastructure capable of reliably intercepting, channeling, processing, and decoding collections of packets within a miles-wide radius. I would think it would be much easier to patch into the current single-channel network than to set up that extremely involved process.

[quote]The problem comes when you look at the legal system in single pipeline vs. air. If you mess with the pipeline, you are in legal trouble.[/quote]

Legally, I would think the only requirement would be owning the operating frequency. The network is completely detached from current pipelines - all it uses is frequency space.

[quote]Since your data transmission rate is proportional to the time between subsequent wave peaks, fiber will always have better data transmission speeds than WiFi.[/quote]

First of all, we're not talking about WiFi - we're talking about newer technologies that use higher frequencies, such as WiMAX.

Also, certainly a single-channel wireless transmission will be slower than single-channel fibre, but what about data transmitting over 100 devices in parallel? Each channel is transmitting 1/100th of the original data, at local transmission speeds. Let's say we resend half of the packets, and lose half of the theoretical peak to interference - we're still talking about a 500 Mb/s relay per jump for the collection of data.

Won't we eventually reach a point in wireless technology where the distance and speed we can achieve wirelessly outperform the overhead of the jumps?
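For what it's worth, the 500 Mb/s figure works out arithmetically if each of the 100 parallel channels carries about 20 Mb/s locally (an assumed per-channel rate, not one given in the thread):

```python
channels = 100
per_channel_mbps = 20.0  # assumed WiMAX-class local link rate

raw_mbps = channels * per_channel_mbps  # 2000 Mb/s aggregate
after_retransmit = raw_mbps / 2         # "resend half of the packets"
after_interference = after_retransmit / 2  # "lose half ... to interference"

print(after_interference)  # 500.0 Mb/s per jump, per the post's claim
```

The catch, as the replies point out, is that this is per jump: the two halving factors are optimistic, and the per-hop division from the relay model still applies on top of it.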
Wait, you think it would be easier to 'patch into' a hard-wired network? What? You think you can just wander up to your local fiber optic main, cut it open, splice in some record-repeater hardware, and no one would notice?


If you did that to a secured network, the two nodes you just interrupted would detect the interruption, send up flags, shut down their part of the network, and sound damage/intruder alarms. The most advanced methods of intrusion detection will even do the math and calculate where along the line the signal was stopped. (It actually isn't that hard a problem - just a matter of precision timing and secondary communication channels.)
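The "calculate where along the line" trick is the same timing principle an optical reflectometer (OTDR) uses: time the echo from the break and convert to distance using the propagation speed in glass. A minimal sketch, with the fiber propagation speed as the only physical input:

```python
V_FIBER_M_PER_S = 2.0e8  # ~2/3 the speed of light; typical for glass fiber

def fault_distance_m(echo_round_trip_s):
    """Distance to a fiber break from the round-trip echo time.

    The pulse travels to the break and back, so the one-way distance
    is half the round trip at the fiber's propagation speed.
    """
    return echo_round_trip_s * V_FIBER_M_PER_S / 2

# A 100 microsecond echo places the break about 10 km down the line:
print(fault_distance_m(100e-6))
```

This is why "precision timing" is all it takes: microsecond resolution already localizes a cut to within a few hundred meters.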
You don't happily restart a secured connection after a fault without securing it first. (If you didn't, it wouldn't exactly be secure, would it?)

For large amounts of data, contained transmission methods (cable, or directed open-air links like microwave transmission) will always be superior to undirected/loosely directed transmission. Consider how many people can stand in a room shouting to each other, and how much information they can pass. Now consider how many more people could reliably transmit information if each whispered into a microphone, so they couldn't interfere with other people's transmissions.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
What I'm talking about are security features inherent to the infrastructure, not implemented.

Yes, it is easier to patch into a single channel than to set up the hardware to intercept many wireless channels over a radius of multiple miles.

Obviously you can't just cut open a wired network and get away with it, but that has everything to do with the security features, not the infrastructure, and security features can be implemented in any network model.

[quote]For large amounts of data, contained transmission methods (cable, or directed open-air links like microwave transmission) will always be superior to undirected/loosely directed transmission.[/quote]

That is true. The overhead of a mesh topology usually resides in multi-directional transmission and the "hops", "skips", or "jumps" needed to traverse the network. The question, though, is whether the advancement of wireless technology, paired with a distributed model, would overcome this overhead and potentially surpass it.

Why do we need such a system?

- Anti-government (Egypt and all that)
- Anti-capitalist (free internet)
- Rural environments with no land lines
- ...

Answer this, identify the problems, devise a solution.

But as of right now, we simply don't need an alternative.
I don't think we need an alternative, but I think it's a good exercise to look at alternatives. I list all the advantages on the website. It is in no way meant to be anti-government (unless anti-government means not wanting government to regulate our networks) or anti-capitalist (is wanting things better and cheaper anti-capitalist?).

My reasons for this model?

- Non-regulated. Neutral networking should be a given in a free society.
- My high priced bills for both internet and phone make me angry. I would love to see a system that doesn't need providers.
- No wires and full mobility. The network would be a giant "hotspot" where access is widely available.
- Green. Without a physical infrastructure, there are no repairs, data centers, or service trucks - no cities being torn up to accommodate the network.
- Progressive with technology: an infrastructure upgrade is as simple as releasing new devices.
[quote]- No wires and full mobility. The network would be a giant "hotspot" where access is widely available.[/quote]

While it would be cool if the whole planet were one huge free hotspot, this is sadly not possible with current technology, as you would need one of the following:

- Ground stations -> require care & power = service providers -> won't be free.
- A huge ad-hoc network. This would mean, however, that you'd need to rely on other computers as relays to transmit information across large distances. Furthermore, transmitting data over the Atlantic/Pacific would be a pain. -> unsafe and unreliable
- Extremely high/energetic frequencies to transmit long distances -> dangerous to your health & too power-consuming to be viable
- Satellites -> require care (low-Earth-orbit satellites don't have enough fuel to maintain their orbit forever) -> expensive

In theory, quantum computing could solve long-distance communication and get you your global hotspot. But then we would be facing whole new problems in a huge free hotspot:

- Addressing computers -> no service provider, no centralised addressing authority.
- DNS, and verifying that the website you're looking at is actually that website (a problem that derives from the first)
=> Anarchy
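Decentralized addressing isn't quite hopeless, though: distributed hash tables assign responsibility for names without any central authority by hashing both node identifiers and keys onto one ring. A toy consistent-hashing sketch (hypothetical node names; real systems like Chord or Kademlia add replication and churn handling):

```python
import hashlib
from bisect import bisect_right

def ring_id(name):
    """Map a node name or lookup key onto the hash ring via SHA-1."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted(ring_id(n) for n in nodes)
        self.name_by_id = {ring_id(n): n for n in nodes}

    def lookup(self, key):
        """Return the node responsible for key: the first node id
        clockwise from the key's hash (wrapping around the ring)."""
        i = bisect_right(self.ring, ring_id(key)) % len(self.ring)
        return self.name_by_id[self.ring[i]]
```

Every participant running the same hash function agrees on who owns which name, with no registry. The unsolved part is the second bullet: nothing here proves the responsible node is honest, which is the trust problem DNS + certificate authorities currently paper over.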


To be honest, I believe we will always have service providers. As long as we wish to use the Internet the way we use it today, I can't think of any way around this.
I agree. The idea behind this project is to be a huge ad-hoc network, and this would only be realistic over land - not water.

I don't think that it loses its purpose however. Free voice, video, and texting nationwide would be worth it. Free streaming services, gaming, and websites nationwide would be worth it.

All in all I would agree that a direct replacement for the internet, without service providers, is unlikely. However, an alternative network for unregulated file transfer, free communications, and nation-wide services is certainly worth the effort. Wouldn't you agree?

[quote]
I don't think we need an alternative, but I think it's a good exercise to look at alternatives. I list all the advantages on the website. It is in no way meant to be anti-government (unless anti-government means not wanting government to regulate our networks) or anti-capitalist (is wanting things better and cheaper anti-capitalist?).

My reasons for this model?

- Non-regulated. Neutral networking should be a given in a free society.
- My high priced bills for both internet and phone make me angry. I would love to see a system that doesn't need providers.
- No wires and full mobility. The network would be a giant "hotspot" where access is widely available.
- Green. Without a physical infrastructure, there are no repairs, data centers, or service trucks - no cities being torn up to accommodate the network.
- Progressive with technology: an infrastructure upgrade is as simple as releasing new devices.
[/quote]

All of this is about the details.

You and your site both list broad terms and vague generalizations.

You aren't the first to suggest this kind of living mesh network. There have been projects, both government and private, that have studied dynamic mesh networks. The first big projects were in the late 1960s as a precursor to ARPANET. Many of those basic recommendations regarding a dynamic mesh were abandoned due to simple scalability issues. You will need to solve a huge list of open problems regarding dynamic routing. When it comes to bandwidth and "fast", there are similar established bodies of knowledge that you simply gloss over.

Glossing over the details is much like saying "Someday people will live on Mars, and that is my idea because I just wrote about it."


The only thing I've really seen on your site and your posts is that you don't want to pay for it. Sure, if the rest of it happens, let's enjoy free bandwidth.

What EXACTLY do you have in mind for these?

You say a consolidated network for data and voice. How is this different from the real world today? You comment about all devices receiving all data streams; this is exactly how the network behaves today through the first four layers of the OSI model - there is already no data-specific hardware beyond knowing the format of the streams. What exactly do you intend to change?

You talk about no regulation. How will you find anything? How will you know you are communicating with the source you expect? Regulatory bodies like IANA provide services, such as assigning names, that solve extremely difficult problems.

You mention "No Wires". That is just replacing one physical medium (wires) with another (radio waves). The current OSI model has been implemented over many different physical media, including bongo drums and pigeons - both wireless. What exactly do you hope to gain by specifically excluding specific media?

You mention "Green" as a selling point, in that it somehow magically requires no physical maintenance. How do you propose to overcome basic physical issues of oxidation, erosion, and wear, as well as damage from animals, accidents, vandalism, or intentional targeted destruction?

You say "Fast", but WiMAX isn't that great in the grand scheme of things. Fast is completely subjective. Our current "fast" with fiber is around 20 GB/s, limited by the computers attached, and even that is slow enough that we run parallel lines of them.

For secure, you mention an n-way subdivision of the data. While that can help against a small number of attacks, by itself it does not translate to "secure" in any serious sense of the word. Your use of "security" never says secure against what, secure from whom, or secure for whom.

You mention "Adaptable", saying you mean it not to require road replacements. Unfortunately that precludes extensibility without replacing a large number of existing devices, or maintaining backwards compatibility with all prior editions of the hardware back to the beginning. How would you implement infrastructure updates that reach all hardware? Or how would you evolve the protocols without such updates while not shackling them to the past?

You say optimized for the cloud, but how? Who is paying for the cloud storage, cloud computing devices, and cloud communications devices? Obviously it isn't the service provider, because there are none in this utopia - so who pays for that equipment, software, and maintenance?

You mention self-healing in terms of rerouting around damage. Exactly what are you proposing that isn't available with the current infrastructure?

That's just my very short browsing of your site. It looks like a re-hash of just a few of the items of the earliest research spikes that are now core requirements of our existing infrastructure.

Sure, it is possible you've thought of something entirely new. I'm just struggling to see it.

This topic is closed to new replies.
