August 10, 2013

Reinventing Usenet

Glenn linked to this article about "meshnets".

Which, for all the hype, really sounds like an updated, wireless version of Usenet, back when UUCP ruled the world.

Their dream is to create an entirely independent worldwide edifice, but I can't see it happening. For one thing, how do they do long-distance hops? Anyone old enough to have used Usenet knows that long-distance transmission through a lot of short-haul hops is anything but fast.

Especially when they're using a store-and-forward protocol. That means each intermediate node must completely receive the packet before it transmits it to the next node.

I worked on the backbone of the old Arpanet, which was a BBN product called the C-30 Packet Switch, and that's how it worked. It meant that end-to-end latency could be many seconds. For UUCP, on the other hand, it was usually hours and sometimes days.
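To put rough numbers on that, here's a minimal sketch of the arithmetic (all figures illustrative): in a store-and-forward network, the serialization delay is paid in full at every hop.

```python
# Toy store-and-forward latency model: each intermediate node must
# receive the entire packet before it can start sending it onward,
# so the time to clock the packet onto a link is paid once per hop.

def store_and_forward_latency(packet_bits, link_bps, hops, per_hop_delay_s=0.0):
    serialization_s = packet_bits / link_bps   # time to transmit the whole packet on one link
    return hops * (serialization_s + per_hop_delay_s)

packet = 1024 * 8   # a 1 KB packet, in bits
print(f"{store_and_forward_latency(packet, 9600, hops=1):.2f} s")    # ~0.85 s, one hop
print(f"{store_and_forward_latency(packet, 9600, hops=20):.2f} s")   # ~17 s, twenty short hops
```

On a modern fast link the per-hop cost is negligible; at 9600 baud it dominates.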

Another problem with this kind of volunteer ersatz network is routing. Usenet handled that manually; people would post their route from one or more major backbone sites.

Automatic routing is a surprisingly tough problem. One issue is that the overhead traffic grows non-linearly, and at a certain point it begins to chew up a substantial percentage of the entire network's bandwidth. We had all kinds of problems with that on the C-30.
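A toy model of why (numbers invented, not taken from any real protocol): if every node periodically floods a status announcement that every other node must relay, overhead grows roughly as the square of the node count while total capacity only grows linearly.

```python
# Toy flooding-overhead model: N nodes each broadcast an announcement
# every `interval_s` seconds, and flooding means each announcement is
# retransmitted roughly once per node. Overhead ~ N^2, capacity ~ N.

def overhead_fraction(nodes, announce_bits=2048, interval_s=30,
                      link_bps=9600, links_per_node=3):
    capacity_bps = nodes * links_per_node * link_bps
    overhead_bps = nodes * nodes * announce_bits / interval_s
    return overhead_bps / capacity_bps

for n in (10, 100, 1000):
    print(f"{n:5d} nodes: routing overhead = {overhead_fraction(n):.0%} of all bandwidth")
# 10 nodes: ~2%; 100 nodes: ~24%; 1000 nodes: overhead alone exceeds capacity.
```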

The network can also ring. Remember when USS Stark got hit by an Exocet? The Milnet rang like a gong for a couple of days because of routing oscillation. Most of the connections in the Milnet were 56Kbaud (that was fast, in those days) but the European section of the Milnet and the CONUS section were connected by just three links, all 9600 baud.

One of those links would get overloaded, and its switch would send out a routing packet telling everyone to lay off. So all the traffic would route to another one, which would then do the same. Those three nodes were constantly getting hammered, and it eventually took manual intervention by the NOC to make it stop. (What they actually did was to kill two of those links, leaving only one. Since it was the only way across the Atlantic, everyone ignored all the "leave me alone" messages from that node, and the network stopped ringing.)

These kinds of dynamically-formed networks, with nodes coming and going unexpectedly, can end up with extremely complicated behaviors. I wonder if the geniuses behind Meshnet have bothered to consult the early history of the internet, or have instead decided to repeat it and relearn all the lessons?

UPDATE: The reason the ARPAnet and the Milnet sometimes suffered from ringing, and had a problem with overhead costs, was that they used dynamic routing. Nodes periodically announced their connectivity and how busy each of their links was. When a node had a packet to forward, it knew the eventual destination and decided where to send it based on the current traffic loading of the network.

Which is why the Milnet was ringing that one day: one of the 9600 baud links would announce that it was fine, and everyone would try to pile onto it. Then it would send out an emergency traffic announcement that it was overloaded, so everyone would pile onto the next one, and so on.
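A toy rendition of that feedback loop (capacities and demand invented): with more trans-Atlantic demand than any one link can carry, the "preferred" link just rotates forever.

```python
# Toy routing oscillation: all trans-Atlantic traffic piles onto whichever
# link most recently said it was fine; that overloads it, it broadcasts
# "lay off", and the whole herd stampedes to the next link. Repeat forever.

LINK_CAPACITY_BPS = 9600
TOTAL_DEMAND_BPS = 20000      # more than any single link can carry

links = ["atlantic-1", "atlantic-2", "atlantic-3"]
preferred = 0
for step in range(6):
    name = links[preferred]
    overloaded = TOTAL_DEMAND_BPS > LINK_CAPACITY_BPS
    print(f"step {step}: everyone routes via {name} -> overloaded: {overloaded}")
    if overloaded:
        preferred = (preferred + 1) % len(links)   # stampede to the next link
```

The oscillation only stops when the demand fits on one link, or, as the NOC did, when the alternatives are removed.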

Usenet didn't suffer from this, but that's because it used source routing. The initiating computer (and usually the human using it) would include the exact route the packet was supposed to take. Given that a lot of the connections were intermittent, if an intermediate computer received a packet which said it was supposed to go to (for instance) BSDVAX as its next hop, and there wasn't currently a connection to BSDVAX, it would hold the packet until the connection was reestablished. A lot of the intermittent connections were only available late at night, to take advantage of cheap long distance rates, and it wasn't uncommon for a Usenet message to take two or three days to reach its final destination.
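A minimal sketch of that hold-until-connected behavior (hostnames invented; real Usenet routes were "bang paths" like ihnp4!bsdvax!joe):

```python
# Toy UUCP-style source routing: the remaining route travels with the
# message, and each machine either forwards it (if the next hop's link
# is up right now) or holds it until that neighbor connects again.

from collections import deque

class UucpNode:
    def __init__(self, name):
        self.name = name
        self.connected = set()     # neighbors reachable at this moment
        self.held = deque()        # messages waiting on a down link

    def handle(self, route, body):
        next_hop = route[0]        # e.g. route = ["bsdvax", "ucbvax", "joe"]
        if next_hop in self.connected:
            print(f"{self.name}: forwarding to {next_hop}")
            # ...hand (route[1:], body) to the next_hop machine here...
        else:
            print(f"{self.name}: no link to {next_hop}, holding")
            self.held.append((route, body))

    def on_connect(self, neighbor):
        # A late-night dial-up link comes alive: flush its queue.
        self.connected.add(neighbor)
        for item in list(self.held):
            if item[0][0] == neighbor:
                self.held.remove(item)
                self.handle(*item)

node = UucpNode("ihnp4")
node.handle(["bsdvax", "ucbvax", "joe"], "hello")   # held: bsdvax is offline
node.on_connect("bsdvax")                           # now it forwards
```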

It also wasn't uncommon for an intermediate computer to decide that it didn't want to hold the message while waiting for a connection that hadn't reappeared for two or three days, and to send it back to the origin. The problem with source routing is that conditions can change while the message is in flight, and what looked like a good route when the message launched could turn out to be terrible -- or nonexistent -- before it reached its destination.

The problem with dynamic routing is that there's a lot of overhead for network status announcements, and it's possible for the routing system to start oscillating.

Posted by: Steven Den Beste in Weird World at 04:00 PM

1 And depending on how they've built this network, what's to stop a new node appearing in a parked NSA van nearby? And just how good is their encryption? These days there are brute-force decryption programs that use the GPU in your graphics card for massively parallel processing.

Posted by: Mauser at August 10, 2013 04:30 PM (TJ7ih)

2

Well, they think they're going to rely on a chain-of-confidence to prevent bad guys from getting in. But even if that fails, the NSA van doesn't learn much, because the traffic being carried is encrypted and most of it doesn't go through the NSA van anyway. But yeah, the chain-of-confidence doesn't actually protect you much.

Despite what you've heard, it is still possible to create encryption which is effectively unbreakable (in that it would require a mass of computers greater than all the matter which exists, and take them longer than the universe will exist). The original RSA cipher is still secure, so far as we know, as long as you use a big enough key.

But such encryption is CPU intensive at both the source and destination end. And there are issues involved in propagating keys, even with a public-key system. Someone told you that node XYZ is using thus-and-so public key, but how do you know you haven't been lied to? There needs to be a central authentication authority that everyone knows and everyone trusts. Its security has to be top-notch, because if it ever gets penetrated the entire system falls down and goes boom.
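A minimal sketch of that central-authority idea, using Python's `cryptography` package (node names invented; a real PKI adds expiry, revocation, and standard certificate formats):

```python
# Toy certificate authority: the CA signs (node_name, node_public_key)
# bindings, and anyone holding the CA's public key can check that the
# key claimed for node XYZ really was vouched for by the authority.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def cert_for(name, public_key):
    blob = name.encode() + public_key.public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
    sig = ca_key.sign(blob, padding.PKCS1v15(), hashes.SHA256())
    return blob, sig

def verify(blob, sig):
    try:
        ca_key.public_key().verify(sig, blob, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

blob, sig = cert_for("XYZ", node_key.public_key())
print(verify(blob, sig))                   # True: the CA vouched for this key
print(verify(b"forged" + blob[6:], sig))   # False: a tampered binding fails
```

And the failure mode is visible right in the code: anyone who steals ca_key can mint a valid-looking binding for any name.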

Posted by: Steven Den Beste at August 10, 2013 05:20 PM (+rSRq)

3

I'm kinda curious as to what kind of transceiver density they're expecting to set up with what looks like a volunteer network. Linked ham repeater networks can run higher power, use spectrum that's more tolerant of obstructions (around 440 MHz), and use directional antennas, and they can still get to be very expensive to keep up.

I'm assuming they plan on using 2.4 GHz, since I don't know how they'd get licensed spectrum.

Posted by: CatCube at August 10, 2013 06:15 PM (/ZhTU)

4

If they're keeping their power below 100 milliwatts to evade FCC rules, then the only way they're going to get any kind of reasonable range is to infringe certain Qualcomm patents relating to Direct Sequence Spread Spectrum.

Actually, those patents were from the early 1990s and they may have expired by now.

And even using DSSS with a huge chip rate, they're not going to get more than a few miles of range with 100 milliwatts.
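A back-of-the-envelope link budget supports that (all dB figures illustrative, and free-space numbers are optimistic for real urban links):

```python
# Free-space range estimate for a 100 mW (20 dBm) transmitter at 2.4 GHz.
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44, so solve for d.

import math

def max_range_km(tx_dbm, rx_sensitivity_dbm, freq_mhz, antenna_gain_db=0.0):
    budget_db = tx_dbm + antenna_gain_db - rx_sensitivity_dbm
    return 10 ** ((budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# 20 dBm out, a typical -90 dBm receiver sensitivity, omni antennas:
print(f"{max_range_km(20, -90, 2400):.1f} km")   # ~3 km (about 2 miles), free space
```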

Posted by: Steven Den Beste at August 10, 2013 06:35 PM (+rSRq)

5 Hush!  Don't let facts and science get in the way!  This is an Internet Revolution!

(Cue the scene with the priest digging an underground city from Jeff Wayne's Musical War of the Worlds.)

Posted by: Mauser at August 10, 2013 06:51 PM (TJ7ih)

6 I had entirely too much of that during the Dot-Com boom.

Posted by: Steven Den Beste at August 10, 2013 07:03 PM (+rSRq)

7 With pretty rare exceptions, brute-force attacks have almost never been the trick used to break codes. It's always a matter of attacking the weakest link. It's the reason why most of the NSA's work is at the fiber-capture level (so much of it is unencrypted anyway), or they just send the lawyers after the heads of the companies. Heavy encryption will generally save you; it's just infeasible most of the time.

I can appreciate people wanting to have a "mostly government free" system, but that's pretty much impossible in the first place.  You just have to know how to hide better.

Posted by: sqa at August 10, 2013 08:43 PM (a/IgQ)

8 Somebody should teach them about FIDOnet. Those UNIX geeks sucking university teats never knew how real guerrilla networks worked.

Posted by: Pete Zaitcev at August 11, 2013 08:35 AM (RqRa5)

9 "Someone told you that node XYZ is using thus-and-so public key, but how do you know you haven't been lied to?"

In practice, trust is subjective.

Posted by: Mark A. Flacy at August 11, 2013 05:46 PM (66bg3)

10

Pixy?

Mark's link is weird, somehow. Any idea what he did?

Posted by: Steven Den Beste at August 11, 2013 06:09 PM (+rSRq)

11 It seems to work fine for me - it just has an anchor in it, so it jumps to the middle of the page.

Posted by: Pixy Misa at August 12, 2013 12:55 AM (PiXy!)

12 I was considering this idea (fully distributed networking), and thought of another issue: thermodynamics. Information is energy, and this idea looks an awful lot like trying to distribute power generation locally. If so, even if you could make it secure and functional, you couldn't make it efficient, because you're trying to pump too much information across too many small "pipes". Granted, I'm not sure if this would cause wifis to actually light on fire, or if it would just mean the electrical bill for the operation would be ten times that of a traditional ISP.

Posted by: metaphysician at August 12, 2013 05:13 AM (3GCAl)

13 This is a subject that actually interests me. While I realize there are many challenges, decentralizing the internet would go a long way to ensuring it remains free from censorship and denial.

The nonlinear growth of routing information is one problem that I've thought about a bit: Would it be possible to solve it via some sort of "approximate location"? If each wireless router knows where it is, where all its neighboring routers are, and where (geographically) the packet wants to go, then it doesn't need a routing table for any but the last step. Maybe a router would maintain a list of the 10,000 closest computers and how to route there, but route anything else toward a latitude/longitude.

When in geographic mode, you could do something like stochastic multiplication of a packet, and semi-randomize the route for redundancy.

Just throwing some ideas at the wall. For the network to be independent of a routing hierarchy, some coordinate system is necessary, and a router's physical location matches well with the topology. What do you think?
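A minimal sketch of the greedy version of that idea (coordinates and names invented, lat/lon treated as planar for simplicity): forward to whichever neighbor is closest to the destination's coordinates, falling back to a real routing table only near the end.

```python
# Toy greedy geographic forwarding: no global routing table, only each
# router's own position and its immediate neighbors' positions.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(my_pos, neighbors, dest_pos):
    """neighbors: {name: (lat, lon)}. Pick whoever is closest to the goal."""
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest_pos))
    # Known failure mode: if no neighbor is closer than we are, greedy
    # forwarding is stuck in a local minimum and needs a recovery scheme.
    if dist(neighbors[best], dest_pos) >= dist(my_pos, dest_pos):
        return None
    return best

neighbors = {"north": (45.6, -122.6), "east": (45.5, -122.5)}
print(next_hop((45.5, -122.6), neighbors, (45.5, -120.0)))   # -> "east"
```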

Posted by: EccentricOrbit at August 15, 2013 03:25 PM (+SGQR)

14 PS (on rereading that): Not to imply that your other writing doesn't interest me. It's just been something I've been thinking about lately.

Also, I only know the broadest outlines of how the internet currently routes. Too young to have experienced the Usenet days.

"These kinds of dynamically-formed networks, with nodes coming and going unexpectedly, can end up with extremely complicated behaviors. I wonder if the geniuses behind Meshnet have bothered to consult the early history of the internet, or have instead decided to repeat it and relearn all the lessons?"

When a new generation attempts to build something where the complexity of the current iteration has already grown far beyond what a learning amateur can handle, some degree of reinventing the wheel and stumbling on older, already-solved problems seems inevitable to me. How else would they learn, when the solution to their problem is buried at the bottom of RFC 80,000,000? (Shades of the Library of Babel problem.) (Not intended to be critical; I'm just curious as to your thoughts.)

Posted by: EccentricOrbit at August 15, 2013 03:49 PM (+SGQR)
