<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_10_1830217</id>
	<title>New Router Manages Flows, Not Packets</title>
	<author>ScuttleMonkey</author>
	<datestamp>1247254200000</datestamp>
	<htmltext>An anonymous reader writes <i>"A new router, designed by one of the creators of ARPANET, <a href="http://www.spectrum.ieee.org/computing/networks/a-radical-new-router/0">manages flows of packets instead of only managing individual packets</a>. The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals. When overloaded, the router can make better choices of which packets to drop. 'Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems. More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems. First, routers discard packets somewhat randomly, causing some transmissions to stall. Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay). These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.'"</i></htmltext>
<tokentext>An anonymous reader writes " A new router , designed by one of the creators of ARPANET , manages flows of packets instead of only managing individual packets .
The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals .
When overloaded , the router can make better choices of which packets to drop .
'Indeed , during most of my career as a network engineer , I never guessed that the queuing and discarding of packets in routers would create serious problems .
More recently , though , as my Anagran colleagues and I scrutinized routers during peak workloads , we spotted two serious problems .
First , routers discard packets somewhat randomly , causing some transmissions to stall .
Second , the packets that are queued because of momentary overloads experience substantial and nonuniform delays , significantly reducing throughput ( TCP throughput is inversely proportional to delay ) .
These two effects hinder traffic for all applications , and some transmissions can take 10 times as long as others to complete .
' "</tokentext>
<sentencetext>An anonymous reader writes "A new router, designed by one of the creators of ARPANET, manages flows of packets instead of only managing individual packets.
The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals.
When overloaded, the router can make better choices of which packets to drop.
'Indeed, during most of my career as a network engineer, I never guessed that the queuing and discarding of packets in routers would create serious problems.
More recently, though, as my Anagran colleagues and I scrutinized routers during peak workloads, we spotted two serious problems.
First, routers discard packets somewhat randomly, causing some transmissions to stall.
Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays, significantly reducing throughput (TCP throughput is inversely proportional to delay).
These two effects hinder traffic for all applications, and some transmissions can take 10 times as long as others to complete.
'"</sentencetext>
</article>
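The flow-caching idea the summary describes — a full routing decision for the first packet of a flow, then a fast path for the packets "following the first" — can be sketched as below. The `FlowRouter` class, the toy first-octet routing table, and all names are illustrative assumptions, not Anagran's actual design.

```python
# Minimal sketch of flow-based forwarding: the first packet of a flow pays for
# a full routing lookup; later packets of the same flow hit a cache keyed on
# the 5-tuple. The routing table here is a toy (first octet -> interface).

from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str

class FlowRouter:
    def __init__(self, routing_table):
        self.routing_table = routing_table   # first octet -> next-hop interface
        self.flow_cache = {}                 # 5-tuple -> next-hop interface
        self.cache_hits = 0
        self.full_lookups = 0

    def _route_lookup(self, dst_ip):
        # Stands in for an expensive longest-prefix match in a real router.
        self.full_lookups += 1
        return self.routing_table[dst_ip.split(".")[0]]

    def forward(self, pkt: FiveTuple):
        hop = self.flow_cache.get(pkt)
        if hop is None:
            hop = self._route_lookup(pkt.dst_ip)   # first packet of the flow
            self.flow_cache[pkt] = hop
        else:
            self.cache_hits += 1                   # "following" packets: fast path
        return hop

router = FlowRouter({"10": "eth0", "192": "eth1"})
flow = FiveTuple("10.0.0.1", "192.168.1.5", 51000, 80, "tcp")
hops = [router.forward(flow) for _ in range(5)]
print(hops, router.full_lookups, router.cache_hits)  # one full lookup, four cache hits
```

The per-flow state is also what lets such a router drop packets flow-by-flow instead of "somewhat randomly" across all traffic, as the submitter complains conventional queues do.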
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654455</id>
	<title>p2p</title>
	<author>visible.frylock</author>
	<datestamp>1247217480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p> <tt>This capability is especially convenient for managing network overload due to P2P traffic. Conventionally, P2P is filtered out using a technique called deep packet inspection, or DPI, which looks at the data portion of all packets. With flow management, you can detect P2P because it relies on many long-duration flows per user. Then, without peeking into the packets' data, you can limit their transmission to rates you deem fair.</tt></p></div> </blockquote><p>If routers started doing this, wouldn't torrent clients just start randomizing their port numbers? According to him, different port numbers will get counted as different "flows". I'd think, if they wanted to do this, they'd at least have to look at IPs; port numbers are easy to change.</p>
	</htmltext>
<tokentext>This capability is especially convenient for managing network overload due to P2P traffic .
Conventionally , P2P is filtered out using a technique called deep packet inspection , or DPI , which looks at the data portion of all packets .
With flow management , you can detect P2P because it relies on many long-duration flows per user .
Then , without peeking into the packets ' data , you can limit their transmission to rates you deem fair .
If routers started doing this , would n't torrent clients just start randomizing their port numbers ?
According to him , different port numbers will get counted as different " flows " .
I 'd think , if they wanted to do this , they 'd at least have to look at IPs ; port numbers are easy to change .</tokentext>
<sentencetext> This capability is especially convenient for managing network overload due to P2P traffic.
Conventionally, P2P is filtered out using a technique called deep packet inspection, or DPI, which looks at the data portion of all packets.
With flow management, you can detect P2P because it relies on many long-duration flows per user.
Then, without peeking into the packets' data, you can limit their transmission to rates you deem fair.
If routers started doing this, wouldn't torrent clients just start randomizing their port numbers?
According to him, different port numbers will get counted as different "flows".
I'd think, if they wanted to do this, they'd at least have to look at IPs; port numbers are easy to change.
	</sentencetext>
</comment>
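The "many long-duration flows per user" heuristic from the quoted passage, aggregated per source IP as visible.frylock suggests, can be sketched roughly like this. The threshold and all names are made-up illustrations, not any vendor's detection logic.

```python
# Count active flows per source IP: randomizing source ports spawns new
# 5-tuple "flows", but they still pile up under the same IP, so the per-IP
# count survives port randomization.

from collections import defaultdict

P2P_FLOW_THRESHOLD = 50  # illustrative cutoff for "many" concurrent flows

def flag_p2p_hosts(active_flows):
    """active_flows: iterable of (src_ip, src_port, dst_ip, dst_port) tuples."""
    per_ip = defaultdict(int)
    for src_ip, _src_port, _dst_ip, _dst_port in active_flows:
        per_ip[src_ip] += 1          # key ignores the (randomized) port
    return {ip for ip, n in per_ip.items() if n >= P2P_FLOW_THRESHOLD}

# A torrenting host with 60 randomized source ports vs. a browser with 6 flows:
flows = [("10.0.0.9", 20000 + i, f"203.0.113.{i % 250}", 6881) for i in range(60)]
flows += [("10.0.0.2", 40000 + i, "198.51.100.1", 443) for i in range(6)]
print(flag_p2p_hosts(flows))  # {'10.0.0.9'}
```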
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654235</id>
	<title>Re:Net neutrality anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247216580000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext>What you describe (inspecting packets and prioritizing traffic based on internal rules) is QoS. No one in their right mind is against that. The net neutrality debate is about ISPs throttling some traffic in order to extort money from both their customers and content providers that otherwise have no other relationship with the ISP. The point of the debate is that ISPs should be just the tubes the content is delivered over, not gatekeepers of content.<br> <br>That an ISP may prioritize services like VoIP over HTTP or BitTorrent is not what net neutrality is about and, quite frankly, is something that a good network engineer would look into and would probably implement.</htmltext>
<tokentext>What you describe ( inspecting packets and prioritizing traffic based on internal rules ) is QoS .
No one in their right mind is against that .
The net neutrality debate is about ISPs throttling some traffic in order to extort money from both their customers and content providers that otherwise have no other relationship with the ISP .
The point of the debate is that ISPs should be just the tubes the content is delivered over , not gatekeepers of content .
That an ISP may prioritize services like VoIP over HTTP or BitTorrent is not what net neutrality is about and , quite frankly , is something that a good network engineer would look into and would probably implement .</tokentext>
<sentencetext>What you describe (packet inspection and prioritizes traffic based on internal rules) is QoS.
No one in their right mind is against that.
The net neutrality debate is about ISP's throttling some traffic in order to extort money from both their customers and content providers that otherwise have no other relationship with the ISP.
The debate is that all ISP's should be are the tubes the content is delivered over, not gate keepers of content.
That an ISP may prioritize services like VOIP over http or bittorrent is not what net neutrality is about and quite frankly is something that a good network engineer would look into and would probably implement.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654357</id>
	<title>Re:Net neutrality anyone?</title>
	<author>jd</author>
	<datestamp>1247217120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>No, it doesn't break net neutrality in and of itself, any more than a traffic light or a roundabout breaks road neutrality. The idea of routing flows, rather than packets, permits more packets to get through for the same bandwidth.</p><p>So long as all flows are treated fairly, this will actually BOOST network neutrality as network companies will have less justification to throttle back protocols which take disproportionate bandwidth - as they will no longer do so. Users will also have less cause to complain, as the effective bandwidth will move closer to the theoretical bandwidth.</p><p>The only concern is if corporations and ISPs use this sort of router to discriminate against flows (ie: ensure unfair usage) rather than to improve the quality of the service (ie: ensure fair usage).</p><p>The belief by ISPs that you cannot have high throughput unless you block legitimate users is nothing more than FUD. It has no basis in reality. It is possible, by moving away from best-effort and towards fair-effort, to get higher throughput for everyone.</p><p>Congested networks can be modeled as turbulent flow in a river. Blocking streams is like damming up some of the tributary streams. It causes a lot of grief and isn't really that effective.</p><p>On the other hand, smoothing out the turbulence will improve the throughput without having to dam up anything. QoS services are intended as smoothing mechanisms, not dams. For the most part, at least.</p><p>Most "net neutrality" advocates would be advised to focus only on the efforts to build gigantic dams, rather than to be unkind or unfair on those merely smoothing the way, with no bias or discrimination intended.</p></htmltext>
<tokentext>No , it does n't break net neutrality in and of itself , any more than a traffic light or a roundabout breaks road neutrality .
The idea of routing flows , rather than packets , permits more packets to get through for the same bandwidth .
So long as all flows are treated fairly , this will actually BOOST network neutrality as network companies will have less justification to throttle back protocols which take disproportionate bandwidth - as they will no longer do so .
Users will also have less cause to complain , as the effective bandwidth will move closer to the theoretical bandwidth .
The only concern is if corporations and ISPs use this sort of router to discriminate against flows ( ie : ensure unfair usage ) rather than to improve the quality of the service ( ie : ensure fair usage ) .
The belief by ISPs that you can not have high throughput unless you block legitimate users is nothing more than FUD .
It has no basis in reality .
It is possible , by moving away from best-effort and towards fair-effort , to get higher throughput for everyone .
Congested networks can be modeled as turbulent flow in a river .
Blocking streams is like damming up some of the tributary streams .
It causes a lot of grief and is n't really that effective .
On the other hand , smoothing out the turbulence will improve the throughput without having to dam up anything .
QoS services are intended as smoothing mechanisms , not dams .
For the most part , at least .
Most " net neutrality " advocates would be advised to focus only on the efforts to build gigantic dams , rather than to be unkind or unfair on those merely smoothing the way , with no bias or discrimination intended .</tokentext>
<sentencetext>No, it doesn't break net neutrality in and of itself, any more than a traffic light or a roundabout breaks road neutrality.
The idea of routing flows, rather than packets, permits more packets to get through for the same bandwidth.
So long as all flows are treated fairly, this will actually BOOST network neutrality as network companies will have less justification to throttle back protocols which take disproportionate bandwidth - as they will no longer do so.
Users will also have less cause to complain, as the effective bandwidth will move closer to the theoretical bandwidth.
The only concern is if corporations and ISPs use this sort of router to discriminate against flows (ie: ensure unfair usage) rather than to improve the quality of the service (ie: ensure fair usage).
The belief by ISPs that you cannot have high throughput unless you block legitimate users is nothing more than FUD.
It has no basis in reality.
It is possible, by moving away from best-effort and towards fair-effort, to get higher throughput for everyone.
Congested networks can be modeled as turbulent flow in a river.
Blocking streams is like damming up some of the tributary streams.
It causes a lot of grief and isn't really that effective.
On the other hand, smoothing out the turbulence will improve the throughput without having to dam up anything.
QoS services are intended as smoothing mechanisms, not dams.
For the most part, at least.
Most "net neutrality" advocates would be advised to focus only on the efforts to build gigantic dams, rather than to be unkind or unfair on those merely smoothing the way, with no bias or discrimination intended.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
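The "fair-effort" scheduling jd contrasts with best-effort can be sketched as a per-flow round-robin queue: instead of one shared FIFO, each flow gets its own queue and service rotates among them, so a heavy flow cannot starve a light one. This is a purely illustrative toy, not any vendor's scheduler.

```python
# Round-robin fair queueing sketch: one deque per flow, served in rotation.

from collections import OrderedDict, deque

class RoundRobinFairQueue:
    def __init__(self):
        self.queues = OrderedDict()   # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        # Serve the flow at the front of the rotation, then send it to the back.
        for flow_id in list(self.queues):
            q = self.queues[flow_id]
            pkt = q.popleft()
            self.queues.move_to_end(flow_id)
            if not q:
                del self.queues[flow_id]
            return pkt
        return None   # nothing queued

fq = RoundRobinFairQueue()
for i in range(4):
    fq.enqueue("torrent", f"t{i}")     # heavy flow: four packets queued
fq.enqueue("ssh", "s0")                # light flow: one packet
order = [fq.dequeue() for _ in range(5)]
print(order)  # ['t0', 's0', 't1', 't2', 't3'] — ssh isn't stuck behind the torrent
```

With a single FIFO, `s0` would have waited behind all four torrent packets; with per-flow rotation it goes out second, which is the sense in which fair queueing "smooths the turbulence" without blocking anyone.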
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28678247</id>
	<title>Re:Been tried, and they saw it was *not* good</title>
	<author>copec</author>
	<datestamp>1247506440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router. That's not a lot. If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.</p></div><p>Where did you get these numbers from?  In the article they <i>claim</i> their device can do a whole lot more than that.</p>
	</htmltext>
<tokentext>As soon as a flow-based router services more than 1000 machines ( in either direction , i.e.
100 clients communicating with 900 internet hosts = 1000 machines serviced ) , its performance will fail to keep up with a packet-based router .
That 's not a lot .
If a single client torrents or p2p 's you will hit this limit easily , resulting in slower performance .
At 2000 machines , packet-based switching is twice as efficient .
Where did you get these numbers from ?
In the article they claim their device can do a whole lot more than that .</tokentext>
<sentencetext>As soon as a flow-based router services more than 1000 machines (in either direction, ie.
100 clients communicating with 900 internet hosts = 1000 machines serviced), it's performance will fail to keep up with a packet-based router.
That's not a lot.
If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance.
2000 machines and packet-based switching is double as efficient.Where did you get these numbers from?
In the article they claim their device can do a whole lot more then that.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659103</id>
	<title>Re:Been tried, and they saw it was *not* good</title>
	<author>hitmark</author>
	<datestamp>1247318160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"-&gt; very hard to have a good qos policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per-flow."</p><p>"-&gt; easy, very extensive QOS is trivial to implement"</p><p>is it me or is that contradictory?</p></htmltext>
<tokenext>" - &gt; very hard to have a good qos policy this way .
A pipe has a fixed bandwidth , and you almost always oversubscribe .
Therefore useful policies are very hard to formulate per-flow .
" " - &gt; easy , very extensive QOS is trivial to implement " is it me or is that contradictory ?</tokentext>
<sentencetext>"-&gt; very hard to have a good qos policy this way.
A pipe has a fixed bandwidth, and you almost always oversubscribe.
Therefore useful policies are very hard to formulate per-flow.
""-&gt; easy, very extensive QOS is trivial to implement"is it me or is that contradictory?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654429</id>
	<title>Don't Cross The Streams</title>
	<author>BigBlueOx</author>
	<datestamp>1247217360000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Why?<br>
It would be bad.<br>
I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?<br>
Try to imagine all the packets on your network stopping instantaneously and every router on the Internet exploding at the speed of light.<br>
Total TCP reversal!!<br>
Right, that's bad. Important safety tip. Thanks, Egon.</htmltext>
<tokentext>Why ?
It would be bad .
I 'm fuzzy on the whole good/bad thing .
What do you mean , " bad " ?
Try to imagine all the packets on your network stopping instantaneously and every router on the Internet exploding at the speed of light .
Total TCP reversal ! !
Right , that 's bad .
Important safety tip .
Thanks , Egon .</tokentext>
<sentencetext>Why?
It would be bad.
I'm fuzzy on the whole good/bad thing.
What do you mean, "bad"?
Try to imagine all the packets on your network stopping instantaneously and every router on the Internet exploding at the speed of light.
Total TCP reversal!!
Right, that's bad.
Important safety tip.
Thanks, Egon.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654307</id>
	<title>No big thang</title>
	<author>Anonymous</author>
	<datestamp>1247216940000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>This sounds fancy, but the only real improvement is hash-table lookup; everything else is already implemented in current-generation routers.</p><p>And it starts at $30000 a model, ROFLMAO. Thanks, umm, but NO thanks!</p></htmltext>
<tokentext>This sounds fancy , but the only real improvement is hash-table lookup ; everything else is already implemented in current-generation routers .
And it starts at $ 30000 a model , ROFLMAO .
Thanks , umm , but NO thanks !</tokentext>
<sentencetext>this sounds fancy, but the only real improvement is hash-table lookup, everything is already implemented with current generation routers.and it starts at $30000 a model, ROFLMAO.
Thanks, umm , but NO thanks!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655273</id>
	<title>Re:Net neutrality anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247222580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds. </p></div><p>Small correction regarding CEF and MPLS:</p><p>The CEF table is built in advance of any traffic flows, based on the contents of the IP routing table.</p><p>"Fast Switching" is the switching method that builds an on-demand forwarding table or cache entry for each flow, performing a lookup on the first packet.</p><p>The MPLS forwarding table (LFIB) is also pre-constructed rather than built on demand. It uses 1) local routing table entries and 2) label information advertised by downstream neighbors.</p>
	</htmltext>
<tokentext>CEF ( Cisco Express Forwarding ) and MPLS [ wikipedia.org ] ( Multiprotocol Label Switching ) use flow control .
They perform a lookup on the first packet , cache the information in a forwarding table , and all further packets which are part of the same flow are switched , not routed , at effectively wire speeds .
Small correction regarding CEF and MPLS : The CEF table is built in advance of any traffic flows , based on the contents of the IP routing table .
" Fast Switching " is the switching method that builds an on-demand forwarding table or cache entry for each flow , performing a lookup on the first packet .
The MPLS forwarding table ( LFIB ) is also pre-constructed rather than built on demand .
It uses 1 ) local routing table entries and 2 ) label information advertised by downstream neighbors .</tokentext>
<sentencetext>CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control.
The perform a lookup on the first packet, cache the information in a forwarding table and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.
Small correction regarding CEF and MPLS :The CEF table is built in advance of any traffic flows, based on the contents of the IP routing table.
"Fast Switching" is the switching method that builds an on-demand forwarding table or cache entry for each flow, performing a lookup on the first packet.The MPLS forwarding table (lfib) is also pre-consructed rather than built on demand.
It uses 1) local routing table entries and 2) label information advertised by downstream neighbors.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291</parent>
</comment>
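The correction above distinguishes tables precomputed from the routing table (CEF, the MPLS LFIB) from caches built on demand by the first packet (Fast Switching). A conceptual toy contrast, not Cisco's implementation:

```python
# CEF-style: the forwarding table is fully precomputed before any traffic.
class PrecomputedStyle:
    def __init__(self, routing_table):
        self.fib = dict(routing_table)    # one entry per route, built up front

    def forward(self, dst):
        return self.fib[dst]              # every packet, even the first, is a hit

# Fast-Switching-style: the cache entry is built by the first packet.
class OnDemandStyle:
    def __init__(self, routing_table):
        self.routing_table = routing_table
        self.cache = {}
        self.slow_path = 0

    def forward(self, dst):
        if dst not in self.cache:
            self.slow_path += 1           # first packet to dst takes the slow path
            self.cache[dst] = self.routing_table[dst]
        return self.cache[dst]

routes = {"192.0.2.0/24": "eth0", "198.51.100.0/24": "eth1"}
cef, fs = PrecomputedStyle(routes), OnDemandStyle(routes)
for _ in range(3):
    cef.forward("192.0.2.0/24")
    fs.forward("192.0.2.0/24")
print(fs.slow_path)  # 1: only the first packet missed the on-demand cache
```

The practical difference shows up on the first packet of each destination (or flow): the precomputed table never pays a slow-path penalty, the on-demand cache pays it once per entry.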
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654263</id>
	<title>Some thoughts</title>
	<author>Anonymous</author>
	<datestamp>1247216760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>First, routers discard packets somewhat randomly, causing some transmissions to stall.</p></div><p>While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process, it is unfair to call it "random".  Based on a model of the queue within the router and estimation of the input parameters, the probability of a packet being discarded can be calculated.  In fact, that's <i>how</i> they design routers.  You pick a bunch of different situations and decide how often you can afford to drop packets, then design a queueing system to meet those requirements.  Queueing theory is a well-established field (the de-facto standard textbook was written in 1970!) and networking is one of its biggest applications.</p><div class="quote"><p>Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays</p></div><p>
You wouldn't expect uniform delays.  A queueing system with a uniform distribution on expected number of customers in the queue is a very strange system indeed.  Those sorts of systems are usually related to renewal processes and don't often show up in networking applications.  That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.
<br> <br>
"Substantial" is the key word here.  Effectively the concept of managing "flows" just means that the router is caching destinations based on fields like source port, source IP address, etc.  By using the cache rather than recomputing the destination, the latencies can be reduced, thus reducing the number of times you need to use the queue.  In queueing-theory terms you are decreasing mean service time to increase total service rate.  Note, however, that this can backfire: if you increase the variance in the service-time distribution too much (some delays will be much higher when you eventually do need to use the queue) you will actually decrease performance.  Of course, presumably they've done all of this work.  In essence, "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.
<br> <br>
Personally, I'm not sure how much of a benefit this can provide.  Does it work with NAT?  How often do you drop packets based on incorrect routing as compared to those you <i>would</i> have dropped if you had put them in the queue?  If this were a truly novel queueing-theory application I would have expected to see it in an IEEE journal, not Spectrum.
<br> <br>
And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical.  Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gunna cut it in my book.</p>
	</htmltext>
<tokentext>First , routers discard packets somewhat randomly , causing some transmissions to stall .
While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process , it is unfair to call it " random " .
Based on a model of the queue within the router and estimation of the input parameters the probability of a packet being discarded can be calculated .
In fact , that 's how they design routers .
You pick a bunch of different situations and decide how often you can afford to drop packets , then design a queueing system to meet those requirements .
Queueing theory is a well-established field ( the de-facto standard textbook was written in 1970 ! ) and networking is one of its biggest applications .
Second , the packets that are queued because of momentary overloads experience substantial and nonuniform delays
You would n't expect uniform delays .
A queueing system with a uniform distribution on expected number of customers in the queue is a very strange system indeed .
Those sorts of systems are usually related to renewal processes and do n't often show up in networking applications .
That 's actually a good thing , because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems .
" Substantial " is the key word here .
Effectively the concept of managing " flows " just means that the router is caching destinations based on fields like source port , source IP address , etc .
By using the cache rather than recomputing the destination the latencies can be reduced , thus reducing the number of times you need to use the queue .
In queueing theory terms you are decreasing mean service time to increase total service rate .
Note however that this can backfire : if you increase the variance in the service time distribution too much ( some delays will be much higher when you eventually do need to use the queue ) you will actually decrease performance .
Of course , presumably they 've done all of this work .
In essence " flow management " seems to be the replacement of a FIFO queue with a priority queue in a queueing system , with priority based on caching .
Personally , I 'm not sure how much of a benefit this can provide .
Does it work with NAT ?
How often do you drop packets based on incorrect routing as compared to those you would have dropped if you had put them in the queue ?
If this were a truly novel queueing-theory application I would have expected to see it in an IEEE journal , not Spectrum .
And of course , any time someone opens with " The Internet is broken " you have to be a little skeptical .
Routing is a well-studied and complex subject ; saying that you 've replaced " packets " with " flows " ai n't gunna cut it in my book .</tokentext>
<sentencetext>First, routers discard packets somewhat randomly, causing some transmissions to stall.While it is true that whether or not a particular packet will be discarded is the result of a probabilistic process, it is unfair to call it "random".
Based on a model of the queue within the router and estimation of the input parameters the probability of a packet being discarded can be calculated.
In fact, that's how they design routers.
You pick a bunch of different situations and decide how often you can afford to drop packets, then design a queueing system to meet those requirements.
Queueing theory is a well-established field (the de-facto standard textbook was written in 1970!) and networking is one of its biggest applications.
Second, the packets that are queued because of momentary overloads experience substantial and nonuniform delays
You wouldn't expect uniform delays.
A queueing system with a uniform distribution on expected number of customers in the queue is a very strange system indeed.
Those sorts of systems are usually related to renewal processes and don't often show up in networking applications.
That's actually a good thing, because systems with uniform distributions on just about anything are much more difficult to solve or approximate than most other systems.
"Substantial" is the key word here.
Effectively the concept of managing "flows" just means that the router is caching destinations based on fields like source port, source IP address, etc.
By using the cache rather than recomputing the destination the latencies can be reduced, thus reducing the number of times you need to use the queue.
In queueing theory terms you are decreasing mean service time to increase total service rate.
Note however that this can backfire: if you increase the variance in the service time distribution too much (some delays will be much higher when you eventually do need to use the queue) you will actually decrease performance.
Of course, presumably they've done all of this work.
In essence "flow management" seems to be the replacement of a FIFO queue with a priority queue in a queueing system, with priority based on caching.
Personally, I'm not sure how much of a benefit this can provide.
Does it work with NAT?
How often do you drop packets based on incorrect routing as compared to those you would have dropped if you had put them in the queue?
If this were a truly novel queueing-theory application I would have expected to see it in an IEEE journal, not Spectrum.
And of course, any time someone opens with "The Internet is broken" you have to be a little skeptical.
Routing is a well-studied and complex subject; saying that you've replaced "packets" with "flows" ain't gunna cut it in my book.
	</sentencetext>
</comment>
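The submitter's parenthetical that the queueing-theory comment above dissects — "TCP throughput is inversely proportional to delay" — is the RTT term of the well-known Mathis approximation, throughput ≈ (MSS / RTT) · (C / √p). The specific MSS, RTT, and loss values below are illustrative.

```python
# Mathis et al. steady-state TCP throughput approximation:
#   throughput ≈ (MSS / RTT) * (C / sqrt(p))
# where p is the packet-loss probability and C ≈ sqrt(3/2) for standard TCP.

from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_prob, c=sqrt(3 / 2)):
    """Approximate steady-state TCP throughput in bytes/sec."""
    return (mss_bytes / rtt_s) * (c / sqrt(loss_prob))

base = mathis_throughput(1460, 0.050, 0.01)        # 50 ms RTT, 1% loss
doubled_rtt = mathis_throughput(1460, 0.100, 0.01)  # queueing adds another 50 ms
print(doubled_rtt / base)  # ≈ 0.5: doubling the delay halves the throughput
```

This is why the nonuniform queueing delays the submitter measured translate directly into nonuniform completion times: a flow whose packets sit in deep queues sees its RTT inflate and its TCP throughput drop in proportion.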
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654149</id>
	<title>Didn't Ipsilon try this a long time back?</title>
	<author>nokiator</author>
	<datestamp>1247259300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But seriously, flow management/queuing may be useful at the very edge of the network, like a BRAS. But most provider edge products (Juniper, Ericsson/Redback, ...) already have similar capabilities. Flow management past the edge of a network is pointless, especially for TCP/IP traffic.</htmltext>
<tokenext>But seriously , flow management/queuing may be useful at the very edge of the network , like a BRAS .
But most provider edge products ( Juniper , Ericsson/Redback , ... ) already have similar capabilities .
Flow management past the edge of a network is pointless , especially for TCP/IP traffic .</tokentext>
<sentencetext>But seriously, flow management/queuing may be useful at the very edge of the network, like a BRAS.
But most provider edge products (Juniper, Ericsson/Redback, ...) already have similar capabilities.
Flow management past the edge of a network is pointless, especially for TCP/IP traffic.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654975</id>
	<title>yaaaawwwwnnn</title>
	<author>Anonymous</author>
	<datestamp>1247220540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah, I don't see what's radical either... this tech has been out for some time now.</p></htmltext>
<tokenext>Yeah , I do n't see what 's radical either... this tech has been out for some time now .</tokentext>
<sentencetext>Yeah, I don't see what's radical either... this tech has been out for some time now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654299</id>
	<title>Re:This does not solve the problem</title>
	<author>RichiH</author>
	<datestamp>1247216880000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>&gt; TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here</p><p>In a dumb network with intelligence on the edges, you can:</p><p>1) cause congestion and then back off (TCP)<br>2) hammer away at whatever rate you think you need (UDP)<br>3) use a pre-set limit (which might be too high as well, so no one does that on public networks)</p><p>Stateful packet switching is literally impossible, fixed-path routing is not desirable for the reason you stated above, and I would not want anyone to inspect my traffic \_by design\_, anyway.</p><p>TCP may not be perfect, but I fail to see an alternative.</p></htmltext>
<tokenext>&gt; TCP 's congestion control algorithm , which causes congestion and then backs off is the real culprit hereIn a dumb network with intelligence on the edges , you can : 1 ) cause congestion and then back off ( TCP ) 2 ) hammer away at whatever rate you think you need ( UDP ) 3 ) use a pre-set limit ( which might be too high as well so no one does that on public networks ) State-ful packet switching is literally impossible , fixed-path routing not desirable for the reason you stated above and I would not want anyone to inspect my traffic \ _by design \ _ , anyway.TCP may not be perfect , but I fail to see an alternative .</tokentext>
<sentencetext>&gt; TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit hereIn a dumb network with intelligence on the edges, you can:1) cause congestion and then back off (TCP)2) hammer away at whatever rate you think you need (UDP)3) use a pre-set limit (which might be too high as well so no one does that on public networks)State-ful packet switching is literally impossible, fixed-path routing not desirable for the reason you stated above and I would not want anyone to inspect my traffic \_by design\_, anyway.TCP may not be perfect, but I fail to see an alternative.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654315</id>
	<title>So, they've reimplemented CEF</title>
	<author>Anonymous</author>
	<datestamp>1247216940000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Yippee.</p><p>Cisco (and probably several others) have done this by default for many many moons now.  By way of practical demonstration, notice that equal-weight routes load-balance per flow, not per packet.  What it allows is for subsequent routing decisions to be offloaded from a route processor down to the ASICs at the card level.  And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.</p></htmltext>
<tokenext>Yippee.Cisco ( and probably several others ) have done this by default for many many moons now .
By way of practical demonstration , notice that equal weight routes load balance per flow , not per packet .
What it allows is subsequent routing decisions to be offloaded from a route processor down to the asics on the card level .
And do n't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k .</tokentext>
<sentencetext>Yippee.Cisco (and probably several others) have done this by default for many many moons now.
By way of practical demonstration, notice that equal weight routes load balance per flow, not per packet.
What it allows is subsequent routing decisions to be offloaded from a route processor down to the asics on the card level.
And don't try to turn CEF off on a layer 3 switch - even a lightly loaded one - unless you want your throughput to resemble 56k.</sentencetext>
</comment>
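The per-flow (rather than per-packet) load balancing this comment mentions can be illustrated by hashing a flow identifier so that every packet of one flow deterministically takes the same equal-cost path. A toy sketch, not Cisco's actual CEF implementation:

```python
# Per-flow load balancing across equal-cost paths: hash the 5-tuple so that
# all packets of one flow take the same path and never reorder across paths.
# Path names and the hash choice are illustrative.
import hashlib

EQUAL_COST_PATHS = ["path-a", "path-b"]   # two equal-weight routes

def pick_path(src_ip, dst_ip, src_port, dst_port, proto):
    """Deterministically map a flow to one of the equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    bucket = hashlib.sha256(key).digest()[0] % len(EQUAL_COST_PATHS)
    return EQUAL_COST_PATHS[bucket]
```

Because the mapping is a pure function of the flow key, no per-flow state is needed at all for this part, which is one reason per-flow hashing scales better than a stateful flow table.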
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653933</id>
	<title>This looks like an Anagran ad</title>
	<author>e9th</author>
	<datestamp>1247258220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Is it just me, or does the article read like an Anagran ad for the FR-1000?</htmltext>
<tokenext>Is it just me , or does the article read like an Anagran ad for the FR-1000 ?</tokentext>
<sentencetext>Is it just me, or does the article read like an Anagran ad for the FR-1000?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28660559</id>
	<title>I RTFA</title>
	<author>saleenS281</author>
	<datestamp>1247331000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And it appears most of the responders didn't.  Understandable given its length.  One choice quote I'd like to point out though is:</p><div class="quote"><p>We designed the equipment to operate at the edge of networks, the point where an Internet service provider aggregates traffic from its broadband subscribers or where a corporate network connects to the outside world. Virtually all network overload occurs at the edge.</p></div><p>This isn't to replace routers; this is supposed to sit between end-users and the rest of the infrastructure so things get throttled before they get into the main router/backbone/wherever it's going.</p>
	</htmltext>
<tokenext>And it appears most of the responders did n't .
Understandable given its length .
One choice quote I 'd like to point out though is : We designed the equipment to operate at the edge of networks , the point where an Internet service provider aggregates traffic from its broadband subscribers or where a corporate network connects to the outside world .
Virtually all network overload occurs at the edge.This is n't to replace routers , this is supposed to sit between end-users and the rest of the infrastructure so things get throttled before they get into the main router/backbone/wherever it 's going .</tokentext>
<sentencetext>And it appears most of the responders didn't.
Understandable given its length.
One choice quote I'd like to point out though is:We designed the equipment to operate at the edge of networks, the point where an Internet service provider aggregates traffic from its broadband subscribers or where a corporate network connects to the outside world.
Virtually all network overload occurs at the edge.This isn't to replace routers, this is supposed to sit between end-users and the rest of the infrastructure so things get throttled before they get into the main router/backbone/wherever it's going.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654427</id>
	<title>x25</title>
	<author>Anonymous</author>
	<datestamp>1247217360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is called the X.25 protocol and was the European equivalent of TCP/IP. Since the EU never had an IT industry, the US had no problem enforcing it. Interestingly, when the ATM standards war took place, the US and Japan made a compromise between two round numbers, midway: 53.</p></htmltext>
<tokenext>This is called x25 protocol and was the European equivalent of TCP/IP .
Since the EU never had an IT industry , US had no problem enforcing it .
Interestingly , when the standard war of ATM took place , US-JP made a compromise between two round numbers , midway .
53 .</tokentext>
<sentencetext>This is called x25 protocol and was the European equivalent of TCP/IP.
Since the EU never had an IT industry, US had no problem enforcing it.
Interestingly, when the standard war of ATM took place, US-JP made a compromise between two round numbers, midway.
53.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28671933</id>
	<title>No, IPv6 Doesn't Change Much Here</title>
	<author>billstewart</author>
	<datestamp>1247412660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>IPv6 is a new version of IP, not a new version of TCP/UDP (though it forces those protocols to change, because the IP addresses are longer).  Yes, there are priority bits, but there are also priority bits in IPv4, and some ISPs support them for traffic within that ISP, but very few support them between ISPs.  The important change in IPv6 is of course longer addresses, plus a lot of boundless optimism about "if we're changing IP anyway, we can fix all the problems it has", some of which was warranted but most of which wasn't.  The job of a network layer protocol is to figure out where the other end of the connection is and get packets there - some of the things we hoped IPv6 would fix were to make it easier to do aggregation so you don't need exponentially-large routing tables to get there.</p></htmltext>
<tokenext>IPv6 is a new version of IP , not a new version of TCP/UDP ( though it forces those protocols to change , because the IP addresses are longer .
) Yes , there are priority bits , but there are also priority bits in IPv4 , and some ISPs support them for traffic within that ISP , but very few support them between ISPs .
The important change in IPv6 is of course longer addresses , plus a lot of boundless optimism about " if we 're changing IP anyway , we can fix all the problems it has " , some of which is warranted but most of it was n't .
The job of a network layer protocol is to figure out where the other end of the connection is and get packets there - some of the things we hoped IPv6 would fix were to make it easier to do aggregation so you do n't need exponentially-large routing tables to get there .</tokentext>
<sentencetext>IPv6 is a new version of IP, not a new version of TCP/UDP (though it forces those protocols to change, because the IP addresses are longer.
)  Yes, there are priority bits, but there are also priority bits in IPv4, and some ISPs support them for traffic within that ISP, but very few support them between ISPs.
The important change in IPv6 is of course longer addresses, plus a lot of boundless optimism about "if we're changing IP anyway, we can fix all the problems it has", some of which is warranted but most of it wasn't.
The job of a network layer protocol is to figure out where the other end of the connection is and get packets there - some of the things we hoped IPv6 would fix were to make it easier to do aggregation so you don't need exponentially-large routing tables to get there.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654437</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654111</id>
	<title>This sounds like a cracker's dream</title>
	<author>Bandman</author>
	<datestamp>1247259180000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>It manages flow of traffic, recognizing when one packet belongs with the others. This sounds wonderful, at least for people trying to inject packets.</p><p>I hope these things recognize the <a href="http://www.faqs.org/rfcs/rfc3514.html" title="faqs.org" rel="nofollow">evil bit</a> [faqs.org].</p></htmltext>
<tokenext>It manages flow of traffic , recognizing when one packet belongs with the others .
This sounds wonderful , at least for people trying to inject packets.I hope these things recognize the evil bit [ faqs.org ] .</tokentext>
<sentencetext>It manages flow of traffic, recognizing when one packet belongs with the others.
This sounds wonderful, at least for people trying to inject packets.I hope these things recognize the evil bit [faqs.org].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28657043</id>
	<title>Re:Wrong</title>
	<author>mevets</author>
	<datestamp>1247237940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Wouldn't it be cool to have a protocol where throughput was not inversely proportional to delay?  The more {comcast,bell,...} throttled your pirate^Wtorrent, the faster it got.   If you could combine this with a protocol where latency was inversely proportional to bandwidth, just by unplugging you could have an infinite throughput, zero latency network.   How cool is that!</p></htmltext>
<tokenext>Would n't it be cool to have a protocol where throughput was not indirectly proportional to delay ?
The more { comcast,bell,... } throttled your pirate ^ Wtorrent , the faster it got .
If you could combine this with a protocol where latency was inversely proportional to bandwidth , just by unplugging you could have an infinite throughput , zero latency network .
How cool is that !</tokentext>
<sentencetext>Wouldn't it be cool to have a protocol where throughput was not indirectly proportional to delay?
The more {comcast,bell,...} throttled your pirate^Wtorrent, the faster it got.
If you could combine this with a protocol where latency was inversely proportional to bandwidth, just by unplugging you could have an infinite throughput, zero latency network.
How cool is that!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654339</id>
	<title>It looks like horrible technology</title>
	<author>Anonymous</author>
	<datestamp>1247217060000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Among the innovations:</p><p>no RAM for buffering flows to cope with any temporary overcommitments.  Instead it does this:</p><p>"Even more significant, the FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down."</p><p>Um, discarding a random packet in the middle of my session will indeed slow the flow down, much in the same way as if you shoot me in the knee it will slow me down.</p></htmltext>
<tokenext>Among the innovations : no ram for buffering flows to cope with any temporary overcommitments .
Instead it does this : " Even more significant , the FR-1000 does away entirely with the queuing chips .
During congestion , it adjusts each flow rate at its input instead .
If an incoming flow has a rate deemed too high , the equipment discards a single packet to signal the transmission to slow down .
" Um , discarding a random packet in the middle of my session will indeed slow the flow down , much in the same way as if you shoot me in the knee it will slow me down .</tokentext>
<sentencetext>Among the innovations:no ram for buffering flows to cope with any temporary overcommitments.
Instead it does this:"Even more significant, the FR-1000 does away entirely with the queuing chips.
During congestion, it adjusts each flow rate at its input instead.
If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down.
"Um, discarding a random packet in the middle of my session will indeed slow the flow down, much in the same way as if you shoot me in the knee it will slow me down.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461</id>
	<title>Wrong</title>
	<author>slashnik</author>
	<datestamp>1247217540000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>"TCP throughput is inversely proportional to delay"</p><p>Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay<br>As long as the window is large enough</p></htmltext>
<tokenext>" TCP throughput is inversely proportional to delay " Absolutely wrong , 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delayAs long as the window is large enough</tokentext>
<sentencetext>"TCP throughput is inversely proportional to delay"Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delayAs long as the window is large enough</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653965</id>
	<title>so...</title>
	<author>Anonymous</author>
	<datestamp>1247258460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Cut out the bullshit, and you get a router that prioritizes packets for already-established connections, amirite? Are stateful routers actually a new thing, or can I start mocking the word "flow" now?</p></htmltext>
<tokenext>Cut out the bullshit , and you get a router that prioritizes packets for already-established connections , amirite ?
Are stateful routers actually a new thing , or can I start mocking the word " flow " now ?</tokentext>
<sentencetext>Cut out the bullshit, and you get a router that prioritizes packets for already-established connections, amirite?
Are stateful routers actually a new thing, or can I start mocking the word "flow" now?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655723</id>
	<title>Re:Wrong</title>
	<author>sharpenyourteeth</author>
	<datestamp>1247226000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>"TCP throughput is inversely proportional to delay"</p><p>Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay
As long as the window is large enough</p></div><p>Actually it is correct. The throughput is equal to TCP's congestion window divided by the round trip time (end to end delay), or TP = CWND/RTT. What he means is that assuming the window size is the same, the throughput is inversely proportional to delay.</p>
	</htmltext>
<tokenext>" TCP throughput is inversely proportional to delay " Absolutely wrong , 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay As long as the window is large enoughActually it is correct .
The throughput is equal to TCP 's congestion window divided by the round trip time ( end to end delay ) , or TP = CWND/RTT .
What he means is that assuming the window size is the same , the throughput is inversely proportional to delay .</tokentext>
<sentencetext>"TCP throughput is inversely proportional to delay"Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay
As long as the window is large enoughActually it is correct.
The throughput is equal to TCP's congestion window divided by the round trip time (end to end delay), or TP = CWND/RTT.
What he means is that assuming the window size is the same, the throughput is inversely proportional to delay.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461</parent>
</comment>
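The relation the two comments are debating, TP = CWND/RTT, is easy to check numerically. A quick worked example (the 64 KiB window is an assumption for illustration):

```python
# TP = CWND / RTT: throughput for a *fixed* congestion window.

def tcp_throughput_bps(cwnd_bytes, rtt_seconds):
    """Steady-state TCP throughput for a fixed congestion window, in bits/s."""
    return cwnd_bytes * 8 / rtt_seconds

cwnd = 64 * 1024                       # 64 KiB window (illustrative)
tcp_throughput_bps(cwnd, 0.001)        # 1 ms RTT  -> ~524 Mbit/s
tcp_throughput_bps(cwnd, 0.010)        # 10 ms RTT -> ~52 Mbit/s
# 10x the delay gives 1/10th the throughput -- unless the window grows with
# the delay, which is slashnik's point above.
```

Both commenters are right under their own assumptions: at fixed CWND, throughput is inversely proportional to RTT; if the window scales with RTT, the rate can stay constant.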
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655309</id>
	<title>trOllkore</title>
	<author>Anonymous</author>
	<datestamp>1247222760000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><A HREF="http://goat.cx/" title="goat.cx" rel="nofollow">'*BSD Sux0rs'. T4is</a> [goat.cx]</htmltext>
<tokenext>' * BSD Sux0rs' .
T4is [ goat.cx ]</tokentext>
<sentencetext>'*BSD Sux0rs'.
T4is [goat.cx]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654377</id>
	<title>Re:This does not solve the problem</title>
	<author>Anonymous</author>
	<datestamp>1247217240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>See, now this is the sort of post that keeps me reading<nobr> <wbr></nobr>/.  Thanks!</p></htmltext>
<tokenext>See , now this is the sort of post that keeps me reading / .
Thanks !</tokentext>
<sentencetext>See, now this is the sort of post that keeps me reading /.
Thanks!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654889</id>
	<title>on flipside it als can do the nasty</title>
	<author>Anonymous</author>
	<datestamp>1247219940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>it can do the exact reverse and stop flows it recognizes<br>and sounds like a type of DRM they will slip by you in the name of "think of the children"</p><p>P.S. interesting news is that the pirate party of Canada now states in chat and on the site that they are not supporting non-commercial p2p / fair use / file sharing. Wonder how that's gonna work fer em?</p></htmltext>
<tokenext>it can do the exact reverse and stop flows it recognizesand sounds like a type a DRM they will slip by you in the name of " think of the children " P.S .
interesting news is that the pirate party of Canada now states in chat and on the ite that they are not supporting non commercial p2p / fair use / file sharing .
Wonder how that 's gon na work fer em ?</tokentext>
<sentencetext>it can do the exact reverse and stop flows it recognizesand sounds like a type a DRM they will slip by you in the name of "think of the children"P.S.
interesting news is that the pirate party of Canada now states in chat and on the ite that they are not supporting non commercial p2p / fair use / file sharing.
Wonder how that's gonna work fer em?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654207</id>
	<title>Re:This does not solve the problem</title>
	<author>Cyberax</author>
	<datestamp>1247216400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Flow control can be greatly improved by adding NACKs to the protocol. I.e. a router will (try to) send a NACK packet after it drops your packet.</p><p>This NACK might get lost, sure, so a timeout mechanism is still required. But in general NACKs give much better flow control. Another variant is heartbeat ACKs (used in SCTP); they allow a range of other optimizations.</p><p>It's possible to do better than TCP. Though of course, circuit-switched networks are still superior in flow control.</p></htmltext>
<tokenext>Flow control can be greatly improved by adding NACKs to protocol .
I.e. a router will ( try to ) send a NACK packet after it drops your packet.This NACK might get lost , sure , so a timeout mechanism is still required .
But in general NACKs give much better flow control .
Another variant is heartbeat ACKs ( used in SCTP ) , they allow a range of other optimizations.It 's possible to do better than TCP .
Though of course , circuit-switched networks are still superior in flow control .</tokentext>
<sentencetext>Flow control can be greatly improved by adding NACKs to protocol.
I.e. a router will (try to) send a NACK packet after it drops your packet.This NACK might get lost, sure, so a timeout mechanism is still required.
But in general NACKs give much better flow control.
Another variant is heartbeat ACKs (used in SCTP), they allow a range of other optimizations.It's possible to do better than TCP.
Though of course, circuit-switched networks are still superior in flow control.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
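The NACK idea in the comment above can be modeled in a few lines: a router that drops a packet tries to send an explicit NACK back, and the sender retransmits on NACK rather than waiting for a timeout. Purely illustrative; no real protocol is implemented here.

```python
# Toy model of router-sent NACKs: an explicit drop signal lets the sender
# retransmit immediately instead of waiting out a timeout. All names here
# are illustrative.

def sender(packets, network):
    """Send packets in order; retransmit whenever the network reports a drop."""
    delivered = []
    for seq in packets:
        while True:
            outcome = network(seq)          # "ack" or "nack"
            if outcome == "ack":
                delivered.append(seq)
                break
            # "nack": the router signaled the drop explicitly -> retransmit
            # right away. (If the NACK itself were lost, a timeout would
            # still be needed, as the comment notes.)
    return delivered

drops = {2}                                  # pretend the router drops seq 2 once
def network(seq):
    if seq in drops:
        drops.discard(seq)
        return "nack"                        # explicit drop signal
    return "ack"

sender([1, 2, 3], network)                   # all three eventually delivered
```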
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654273</id>
	<title>Re:Net neutrality anyone?</title>
	<author>vertinox</author>
	<datestamp>1247216820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this?</i></p><p>I dunno. If the router is designed to look at packet flow rather than the contents of said packets or its source and destination, then you can still have net neutrality.</p></htmltext>
<tokenext>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules .
Are n't we supposed to be against this ? I dunno .
If the router is designed to look at packet flow rather than the contents of said packets or its source and destination , then you can still have net neutrality .</tokentext>
<sentencetext>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules.
Aren't we supposed to be against this?I dunno.
If the router is designed to look at packet flow rather than the contents of said packets or its source and destination, then you can still have net neutrality.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987</id>
	<title>This isn't new</title>
	<author>khafre</author>
	<datestamp>1247258580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ah yes, Larry Roberts.  He seems to poke his head up every once in a while.  From Caspian Networks, and now Anagran.  He certainly likes to push flow routing, although it's been shown not to scale in practice.</p></htmltext>
<tokenext>Ah yes , Larry Roberts .
He seems to poke his head up every once in a while .
From Caspian Networks , and now Anagran .
He certainly likes to push flow routing , although it 's been shown not to scale in practice .</tokentext>
<sentencetext>Ah yes, Larry Roberts.
He seems to poke his head up every once in a while.
From Caspian Networks, and now Anagran.
He certainly likes to push flow routing, although it's been shown not to scale in practice.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654061</id>
	<title>Re:This isn't new</title>
	<author>Anonymous</author>
	<datestamp>1247258940000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Definitely not new.</p><p>"The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals."</p><p>Where have I heard this before...oh hay...</p><p><a href="http://en.wikipedia.org/wiki/Cisco\_Express\_Forwarding" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Cisco\_Express\_Forwarding</a> [wikipedia.org]</p></htmltext>
<tokenext>Definitely not new .
" The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals .
" Where have I heard this before...oh hay...http : //en.wikipedia.org/wiki/Cisco \ _Express \ _Forwarding [ wikipedia.org ]</tokentext>
<sentencetext>Definitely not new.
"The router recognizes packets that are following the first and sends them along faster than if it had to route them as individuals.
"Where have I heard this before...oh hay...http://en.wikipedia.org/wiki/Cisco\_Express\_Forwarding [wikipedia.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654399</id>
	<title>This Design is Flawed</title>
	<author>neelsheyal</author>
	<datestamp>1247217300000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>Routing/switching based on flows is highly flawed. The article claims that the benefit comes from reduced table lookups on individual packet content: if the 5-tuple is hashed to a flowid, then the presence of that flowid indicates the flow is already active and will be treated preferentially during congestion. First of all, if the number of flowids is large, there is no way to store all the different flowids in a scalable and cost-effective manner, which means you need an eviction policy, and that can hurt you more than all these complexities help.
Secondly, hardware caching works better than hashing flowids. Finally, the classes of flows that are really important can be protected with class-based queuing.</htmltext>
<tokenext>Routing/Switching based on flows is highly flawed .
The article claims that the benefit is due to reduced table lookup based on individual packet content .
Instead if the 5 tuple is hashed to a flowid .
then the presence of flowid indicates that the flow is already active and will be treated preferentially during a congestion .
First of all , if the number of flowids are large then there is no way to store all the different flowids in a scalable and cost effective manner .
Which means you associate an eviction clause which can hurt you more with all these complexities .
Secondly , there is concept of hardware caching which works better than hashing flowids .
Finally , all the classes of flow which are really important , can be protected with class based queuing .</tokentext>
<sentencetext>Routing/Switching based on flows is highly flawed.
The article claims that the benefit is due to reduced table lookup based on individual packet content.
Instead if the 5 tuple is hashed to a flowid.
then the presence of flowid indicates that the flow is already active and will be treated preferentially during a congestion.
First of all, if the number of flowids are large then there is no way to store all the different flowids in a scalable and cost effective manner.
Which means you associate an eviction clause which can hurt you more with all these complexities.
Secondly, there is concept of hardware caching which works better than hashing flowids.
Finally, all the classes of flow which are really important, can be protected with class based queuing.</sentencetext>
</comment>
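The eviction concern in the comment above can be made concrete: a bounded flow table must drop state for some flow when it fills, and whichever flow loses its entry is penalized back to "new flow" treatment. A minimal LRU sketch (illustrative, not any vendor's scheme):

```python
# Bounded flow table with least-recently-used eviction: when the table is
# full, the quietest flow loses its entry and is treated as new again.
from collections import OrderedDict

class FlowTable:
    """Bounded table of active flowids with LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = OrderedDict()          # flowid -> per-flow state

    def touch(self, flowid):
        """Return True if the flow was already active; evict the LRU flow if full."""
        if flowid in self.flows:
            self.flows.move_to_end(flowid)  # mark as most recently seen
            return True
        if len(self.flows) >= self.capacity:
            self.flows.popitem(last=False)  # the evicted flow loses its state
        self.flows[flowid] = {}
        return False

table = FlowTable(capacity=2)
table.touch("flow-a")   # new
table.touch("flow-b")   # new
table.touch("flow-c")   # new; evicts flow-a
table.touch("flow-a")   # False again: flow-a was penalized by eviction
```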
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654385</id>
	<title>Whats the date on this, 1998?</title>
	<author>Anonymous</author>
	<datestamp>1247217300000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>Someone put the spades back. flow routing is SUCH old news... How the heck did this make slashdot?</p></htmltext>
<tokenext>Someone put the spades back .
flow routing is SUCH old news... How the heck did this make slashdot ?</tokentext>
<sentencetext>Someone put the spades back.
flow routing is SUCH old news... How the heck did this make slashdot?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654075</id>
	<title>Pretty girls make things go faster</title>
	<author>Anonymous</author>
	<datestamp>1247258940000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why can't we just put a pretty girl on top of it and make the packets go faster.</p><p>Seems to work with car advertising and on animals.</p></htmltext>
<tokenext>Why ca n't we just put a pretty girl on top of it and make the packets go faster .
Seems to work with car advertising and on animals .</tokentext>
<sentencetext>Why can't we just put a pretty girl on top of it and make the packets go faster.
Seems to work with car advertising and on animals.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654107</id>
	<title>I don't actually know what I'm talking about</title>
	<author>Anonymous</author>
	<datestamp>1247259060000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Finally!  It's about time!  I mean, jeez, could it have been more obvious?  I've been saying this for years!</p><p>But in all seriousness, it would be pretty sweet if it helps streamed media; be it audio, video, games, or some super google chrome plot to launch skynet.</p></htmltext>
<tokenext>Finally !
It 's about time !
I mean , jeez , could it have been more obvious ?
I 've been saying this for years !
But in all seriousness , it would be pretty sweet if it helps streamed media ; be it audio , video , games , or some super google chrome plot to launch skynet .</tokentext>
<sentencetext>Finally!
It's about time!
I mean, jeez, could it have been more obvious?
I've been saying this for years!
But in all seriousness, it would be pretty sweet if it helps streamed media; be it audio, video, games, or some super google chrome plot to launch skynet.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654929</id>
	<title>Look what it *doesn't* have.</title>
	<author>JakiChan</author>
	<datestamp>1247220120000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>He's doing it *without* custom ASICs and without TCAM.  TCAM is very expensive.  I'm not sure this is faster than CEF or the like, but it may very well be cheaper.</p></htmltext>
<tokenext>He 's doing it * without * custom ASICs and without TCAM .
TCAM is very expensive .
I 'm not sure this is faster than CEF or the like , but it may very well be cheaper .</tokentext>
<sentencetext>He's doing it *without* custom ASICs and without TCAM.
TCAM is very expensive.
I'm not sure this is faster than CEF or the like, but it may very well be cheaper.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655419</id>
	<title>Re:Net neutrality anyone?</title>
	<author>bogd</author>
	<datestamp>1247223540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control. They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.</i>

<br> <br>

It's more than that. The older technologies ("fast switching" in the Cisco world) used to do this - route the first packet, then switch the other packets in the flow. However, CEF goes one step further, and allows for <i>all</i> the packets to be switched by the hardware (not even the first packet in the flow hits the router processor). Which means that what the author seems to be suggesting would actually mean moving backwards.
<br>
Either there is more to the router than the article says, or the author hasn't been keeping track of developments in this field...</htmltext>
<tokenext>CEF ( Cisco Express Forwarding ) and MPLS [ wikipedia.org ] ( Multiprotocol Label Switching ) use flow control .
They perform a lookup on the first packet , cache the information in a forwarding table and all further packets which are part of the same flow are switched , not routed , at effectively wire speeds .
It 's more than that .
The older technologies ( " fast switching " in the Cisco world ) used to do this - route the first packet , then switch the other packets in the flow .
However , CEF goes one step further , and allows for all the packets to be switched by the hardware ( not even the first packet in the flow hits the router processor ) .
Which means that what the author seems to be suggesting would actually mean moving backwards .
Either there is more to the router than the article says , or the author has n't been keeping track of developments in this field.. .</tokentext>
<sentencetext>CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control.
They perform a lookup on the first packet, cache the information in a forwarding table and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.
It's more than that.
The older technologies ("fast switching" in the Cisco world) used to do this - route the first packet, then switch the other packets in the flow.
However, CEF goes one step further, and allows for all the packets to be switched by the hardware (not even the first packet in the flow hits the router processor).
Which means that what the author seems to be suggesting would actually mean moving backwards.
Either there is more to the router than the article says, or the author hasn't been keeping track of developments in this field...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655929</id>
	<title>Aunt Flow</title>
	<author>Anonymous</author>
	<datestamp>1247227260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Will it manage how often she comes to visit? That would be a sure-fire hit with the married men, at least.</htmltext>
<tokenext>Will it manage how often she comes to visit ?
That would be a sure-fire hit with the married men , at least .</tokentext>
<sentencetext>Will it manage how often she comes to visit?
That would be a sure-fire hit with the married men, at least.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659603</id>
	<title>Re:So, they've reimplemented CEF</title>
	<author>Anonymous</author>
	<datestamp>1247324220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Load balancing in this case is achieved by a hash algorithm.  Values from the packet (source/dest ip, source/dest port, etc.) are used to determine which link to route the packet over.  The individual flow is not tracked.</p></htmltext>
<tokenext>Load balancing in this case is achieved by a hash algorithm .
Values from the packet ( source/dest ip , source/dest port , etc. ) are used to determine which link to route the packet over .
The individual flow is not tracked .</tokentext>
<sentencetext>Load balancing in this case is achieved by a hash algorithm.
Values from the packet (source/dest ip, source/dest port, etc.) are used to determine which link to route the packet over.
The individual flow is not tracked.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654315</parent>
</comment>
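The stateless load balancing described above can be sketched in a few lines (illustrative only, not any vendor's actual hash): because the link index is a pure function of the header fields, every packet of a flow takes the same link even though no flow is ever tracked.

```python
import zlib

def pick_link(src_ip, dst_ip, src_port, dst_port, proto, n_links):
    # Deterministic hash over header fields: no per-flow state is kept,
    # yet all packets with the same 5-tuple map to the same outgoing link.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % n_links

# Two packets of the same flow always land on the same link index.
a = pick_link("10.0.0.1", "192.0.2.7", 40000, 80, "tcp", 4)
b = pick_link("10.0.0.1", "192.0.2.7", 40000, 80, "tcp", 4)
```

The trade-off versus true flow routing is exactly what the comment says: there is nothing to remember, but also nothing to evict, rate-limit, or prioritize per flow.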
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656921</id>
	<title>Re:Well duh</title>
	<author>Anonymous</author>
	<datestamp>1247236440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Who cares...  Come get me when they make one that doesn't drop sync with my cable modem.</p></htmltext>
<tokenext>Who cares... Come get me when they make one that does n't drop sync with my cable modem .</tokentext>
<sentencetext>Who cares...  Come get me when they make one that doesn't drop sync with my cable modem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654437</id>
	<title>Re:This does not solve the problem</title>
	<author>religious freak</author>
	<datestamp>1247217420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I defer to your knowledge relative to mine, but I do wonder why we work on switching pieces of transport protocols around and changing the tiny things when we could just move to something entirely different.  I recall reading IPv6 has a host of new mechanisms built directly into the protocol which address these types of concerns.  IPv6 is far more than just NAT avoidance and long IP addresses - with its built-in packet priority values and other bells and whistles, I think IPv6 could help solve this type of problem.</htmltext>
<tokenext>I defer to your knowledge relative to mine , but I do wonder why we work on switching pieces of transport protocols around and changing the tiny things when we could just move to something entirely different .
I recall reading IPv6 has a host of new mechanisms built directly into the protocol which address these types of concerns .
IPv6 is far more than just NAT avoidance and long IP addresses - with its built-in packet priority values and other bells and whistles I think IPv6 could help solve this type of problem .</tokentext>
<sentencetext>I defer to your knowledge relative to mine, but I do wonder why we work on switching pieces of transport protocols around and changing the tiny things when we could just move to something entirely different.
I recall reading IPv6 has a host of new mechanisms built directly into the protocol which address these types of concerns.
IPv6 is far more than just NAT avoidance and long IP addresses - with its built-in packet priority values and other bells and whistles, I think IPv6 could help solve this type of problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655531</id>
	<title>Caspian Networks Reloaded</title>
	<author>Eristone</author>
	<datestamp>1247224440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Fun watching people say this "doesn't work" -- back when I was at Caspian, the real-world runs were working quite well at gigabit speed and, if memory serves, they had a 10-gigabit line card (this was 2006).  The cost was that they had to design ASICs to do this, and these guys are trying to get the same performance out of commodity hardware.  It looks like this is the case - which means it's dropped the cost of the equipment significantly.</p><p>Where it improves over current routing and QoS is that it does it on the fly and at wire speed, with an administrator putting in parameters that state the type of performance he wants for "detected flows".  In addition, a lot of profiling was done on various traffic to figure out what type of traffic produces what.  For instance, the stuff Caspian was selling could identify a VoIP connection on the fly without doing deep packet inspection, even if the traffic was encrypted.  It did the same with torrent traffic, video traffic, web surfing traffic, IM traffic, IRC traffic, etc.  So, instead of having deep packet inspection, a router, and a switch, you'd get the flow traffic, identify it based on the traffic profile, establish QoS on the flow and then maintain it.   It would help against DDoS situations - maintain the current connections that are coming through while establishing new ones as needed.   And this is all in the same box.</p><p>I don't know what the Anagran folks have managed to do, but if they're working off the same model (and probably have a bunch of the same people working on things) the stuff I mentioned should definitely be part of the same equipment.</p></htmltext>
<tokenext>Fun watching people say this " does n't work " -- back when I was at Caspian , the real world runs were working quite well at gigabit speed and if memory serves , they had a 10 gigabit line card ( this was 2006 ) .
The cost was they had to design asics to do this and they were trying to get the same performance out of commodity hardware .
It looks like this is the case - which means it 's dropped the cost of the equipment significantly .
Where it improves over current routing and QoS is that it does it on the fly and at wire speed , with an administrator putting in parameters that state the type of performance he wants for " detected flows " .
In addition , a lot of profiling was done on various traffic to figure out what type of traffic produces what .
For instance , the stuff Caspian was selling could identify a VOIP connection on the fly without doing deep packet inspection even if the traffic was encrypted .
It did the same with torrent traffic , video traffic , web surfing traffic , im traffic , irc traffic , etc .
So , instead of having a deep packet inspection , a router and a switch , you 'd get the flow traffic , identify it based on the traffic profile , establish a qos on the flow and then maintain it .
It would help against DDOS situations - maintain the current connections that are coming through while establishing new ones as needed .
And this is all in the same box .
I do n't know what the Anagran folks have managed to do but if they 're working off the same model ( and probably have a bunch of the same people working on things ) the stuff I mentioned should definitely be part of the same equipment .</tokentext>
<sentencetext>Fun watching people say this "doesn't work" -- back when I was at Caspian, the real world runs were working quite well at gigabit speed and if memory serves, they had a 10 gigabit line card (this was 2006).
The cost was they had to design asics to do this and they were trying to get the same performance out of commodity hardware.
It looks like this is the case - which means it's dropped the cost of the equipment significantly.
Where it improves over current routing and QoS is that it does it on the fly and at wire speed, with an administrator putting in parameters that state the type of performance he wants for "detected flows".
In addition, a lot of profiling was done on various traffic to figure out what type of traffic produces what.
For instance, the stuff Caspian was selling could identify a VOIP connection on the fly without doing deep packet inspection even if the traffic was encrypted.
It did the same with torrent traffic, video traffic, web surfing traffic, im traffic, irc traffic, etc.
So, instead of having a deep packet inspection, a router and a switch, you'd get the flow traffic, identify it based on the traffic profile, establish a qos on the flow and then maintain it.
It would help against DDOS situations - maintain the current connections that are coming through while establishing new ones as needed.
And this is all in the same box.
I don't know what the Anagran folks have managed to do, but if they're working off the same model (and probably have a bunch of the same people working on things) the stuff I mentioned should definitely be part of the same equipment.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654359</id>
	<title>i don't get it, isn't that what is done now?</title>
	<author>tukia</author>
	<datestamp>1247217180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Revolutionary? The concept isn't new. On a software-based router, we cache route information after the first lookup from the routing table for a certain period of time, based on parameters like destination IP address, next hop and interface.  So instead of looking up the route table again, we just look up the cached route. It's called IP Flows and it's way old.</htmltext>
<tokenext>Revolutionary ?
The concept is n't new .
On a software-based router , we cache route information after the first lookup from the routing table for a certain period of time , based on parameters like destination IP address , next hop and interface .
So instead of looking up the route table again , we just look up the cached route .
It 's called IP Flows and it 's way old .</tokentext>
<sentencetext>Revolutionary?
The concept isn't new.
On a software-based router, we cache route information after the first lookup from the routing table for a certain period of time, based on parameters like destination IP address, next hop and interface.
So instead of looking up the route table again, we just look up the cached route.
It's called IP Flows and it's way old.</sentencetext>
</comment>
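The route cache the comment describes can be sketched as a dictionary with per-entry expiry (a hypothetical structure, not actual fast-switching code): the first packet pays for the full routing-table lookup, and subsequent packets to the same destination hit the cache until the entry times out.

```python
import time

class RouteCache:
    """Cache the result of the first routing-table lookup for a while
    (illustrative sketch of the 'IP flow' caching the comment describes)."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.cache = {}  # destination -> (nexthop, interface, expiry)

    def lookup(self, dest, slow_lookup):
        entry = self.cache.get(dest)
        now = time.monotonic()
        if entry and entry[2] > now:
            return entry[0], entry[1], True       # cache hit: fast path
        nexthop, iface = slow_lookup(dest)        # full routing-table lookup
        self.cache[dest] = (nexthop, iface, now + self.ttl)
        return nexthop, iface, False              # cache miss: slow path

def routing_table_lookup(dest):
    # Stand-in for a longest-prefix-match walk of the routing table.
    return ("192.0.2.1", "eth0")

rc = RouteCache(ttl_seconds=30)
first = rc.lookup("198.51.100.9", routing_table_lookup)   # miss
second = rc.lookup("198.51.100.9", routing_table_lookup)  # hit
```

The time-based expiry mirrors the "certain period of time" in the comment; a real router would also invalidate entries when the routing table changes.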
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845</id>
	<title>Well duh</title>
	<author>Anonymous</author>
	<datestamp>1247257860000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Damn right, they manage flows.  It keeps the tubes from clogging.</p><p>Duuuurrrrrr.</p></htmltext>
<tokenext>Damn right , they manage flows .
It keeps the tubes from clogging .
Duuuurrrrrr .</tokentext>
<sentencetext>Damn right, they manage flows.
It keeps the tubes from clogging.
Duuuurrrrrr.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653985</id>
	<title>Re:This looks like an Anagran ad</title>
	<author>Anonymous</author>
	<datestamp>1247258520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yes it does.</p></htmltext>
<tokenext>Yes it does .</tokentext>
<sentencetext>Yes it does.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653933</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656383</id>
	<title>Re:Net neutrality anyone?</title>
	<author>mysidia</author>
	<datestamp>1247231100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
Flow-based QoS, in the form of <a href="http://www.cisco.com/en/US/docs/ios/12\_0t/12\_0t3/feature/guide/flowwred.html" title="cisco.com" rel="nofollow">Flow-based WRED</a> [cisco.com] is not a new concept.
</p><p>
Furthermore, Flow-based routing is not a new concept, it's a very old one.
</p><p>
Perhaps what has happened is that general-purpose computing hardware - CPUs and memory - has gotten a lot cheaper, at much higher speeds and capacities, than in recent years.
</p><p>
It may now be possible to build routers that have the capacity to do it.   Flow-based routing is extremely expensive, especially in terms of CPU and memory for bookkeeping all those flows.
</p><p>
Think about it:  every single open TCP connection is going to be using memory slots in a flow-based router.    If  too many distinct flows come in, are started, or continue within recent history, for the available memory to record them all, the device will be in trouble and have to reboot,  or utilize some other routing strategy that it wasn't optimized for.
</p><p>
I would fully expect the core router of a sufficiently large ISP to have to keep billions if not trillions of flows in memory under normal loads.
</p><p>
Keep in mind a 'DNS Request' is a flow,  even if it's UDP,  oh yeah, and there are some UDP-based protocols  that involve data exchange at wider intervals.
</p><p>
A client may transmit a UDP message and expect a response sequence 5 minutes later.
</p></htmltext>
<tokenext>Flow-based QoS , in the form of Flow-based WRED [ cisco.com ] is not a new concept .
Furthermore , Flow-based routing is not a new concept , it 's a very old one .
Perhaps what has happened is that general-purpose computing hardware - CPUs and memory - has gotten a lot cheaper , at much higher speeds and capacities , than in recent years .
It may now be possible to build routers that have the capacity to do it .
Flow-based routing is extremely expensive , especially in terms of CPU and memory for bookkeeping all those flows .
Think about it : every single open TCP connection is going to be using memory slots in a flow-based router .
If too many distinct flows come in , are started , or continue within recent history , for the available memory to record them all , the device will be in trouble and have to reboot , or utilize some other routing strategy that it was n't optimized for .
I would fully expect the core router of a sufficiently large ISP to have to keep billions if not trillions of flows in memory under normal loads .
Keep in mind a 'DNS Request ' is a flow , even if it 's UDP , oh yeah , and there are some UDP-based protocols that involve data exchange at wider intervals .
A client may transmit a UDP message and expect a response sequence 5 minutes later .</tokentext>
<sentencetext>
Flow-based QoS, in the form of Flow-based WRED [cisco.com] is not a new concept.
Furthermore, Flow-based routing is not a new concept, it's a very old one.
Perhaps what has happened is that general-purpose computing hardware - CPUs and memory - has gotten a lot cheaper, at much higher speeds and capacities, than in recent years.
It may now be possible to build routers that have the capacity to do it.
Flow-based routing is extremely expensive, especially in terms of CPU and memory for bookkeeping all those flows.
Think about it:  every single open TCP connection is going to be using memory slots in a flow-based router.
If  too many distinct flows come in, are started, or continue within recent history, for the available memory to record them all, the device will be in trouble and have to reboot,  or utilize some other routing strategy that it wasn't optimized for.
I would fully expect the core router of a sufficiently large ISP to have to keep billions if not trillions of flows in memory under normal loads.
Keep in mind a 'DNS Request' is a flow,  even if it's UDP,  oh yeah, and there are some UDP-based protocols  that involve data exchange at wider intervals.
A client may transmit a UDP message and expect a response sequence 5 minutes later.
</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
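The memory objection above is easy to put in rough numbers. A back-of-envelope estimate (the 64-byte entry size is an assumption, purely illustrative):

```python
# Back-of-envelope memory estimate for a core-router flow table
# (64 bytes per entry is an assumed figure: 5-tuple key + counters + timers).
BYTES_PER_ENTRY = 64
flows = 1_000_000_000                  # one billion concurrent flows
gib = flows * BYTES_PER_ENTRY / 2**30
print(f"{gib:.1f} GiB")                # roughly 60 GiB for the table alone
```

Even at the low end of the comment's "billions if not trillions", the table alone dwarfs the memory of the era's routers, before counting packet buffers, which is why eviction or a fallback strategy is unavoidable.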
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</id>
	<title>This does not solve the problem</title>
	<author>Anonymous</author>
	<datestamp>1247258160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>It just makes the packet switching faster.  But really, we're talking about the same idea here: datagram networks.  Congestion avoidance has been known to be a difficult problem in datagram networks for a <a href="http://en.wikipedia.org/wiki/CYCLADES#Technical\_details" title="wikipedia.org">long time</a> [wikipedia.org].
<br> <br>
TCP's congestion control algorithm, which <em>causes congestion and then backs off</em> is the real culprit here, and this router does nothing to fix that.  The way to fix that is to dump TCP's congestion control and replace it with <em>real</em> flow control in the network layer.  That requires lots of memory on intermediaries, because you need all the hosts along the data path to cooperate with each other to communicate about flow control, and that means keeping state.  At which point, we're not talking about datagram networks anymore.  And that means dumping the other desirable thing about datagram networks: fault tolerance.  Packets are path-independent.
<br> <br>
Anyway: getting back to TCP's congestion control: his article even says that "During congestion, it adjusts each flow rate at its input instead."  Wait, what?  "If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down."  That's how it works right now!  The only difference that I can see is that he's being a little smarter about <em>which</em> packets to discard, unlike RED, which is what he's comparing this to.  If so, that's an improvement, but it doesn't solve the problem.  It will still take a while for TCP to notice the problem, because the host has to wait for a missed ACK.  TCP can only "see" the other host-- it does not know (or care) about flow control along the path.  Solving the problem requires flow control along that path, i.e., in the network layer, but IP lacks such a mechanism.
<tokenext>It just makes the packet switching faster .
But really , we 're talking about the same idea here : datagram networks .
Congestion avoidance has been known to be a difficult problem in datagram networks for a long time [ wikipedia.org ] .
TCP 's congestion control algorithm , which causes congestion and then backs off is the real culprit here , and this router does nothing to fix that .
The way to fix that is to dump TCP 's congestion control and replace it with real flow control in the network layer .
That requires lots of memory on intermediaries , because you need all the hosts along the data path to cooperate with each other to communicate about flow control , and that means keeping state .
At which point , we 're not talking about datagram networks anymore .
And that means dumping the other desirable thing about datagram networks : fault tolerance .
Packets are path-independent .
Anyway : getting back to TCP 's congestion control : his article even says that " During congestion , it adjusts each flow rate at its input instead .
" Wait , what ?
" If an incoming flow has a rate deemed too high , the equipment discards a single packet to signal the transmission to slow down .
" That 's how it works right now !
The only difference that I can see is that he 's being a little smarter about which packets to discard , unlike RED , which is what he 's comparing this to .
If so , that 's an improvement , but it does n't solve the problem .
It will still take a while for TCP to notice the problem , because the host has to wait for a missed ACK .
TCP can only " see " the other host-- it does not know ( or care ) about flow control along the path .
Solving the problem requires flow control along that path , i.e. , in the network layer , but IP lacks such a mechanism .</tokentext>
<sentencetext>It just makes the packet switching faster.
But really, we're talking about the same idea here: datagram networks.
Congestion avoidance has been known to be a difficult problem in datagram networks for a long time [wikipedia.org].
TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here, and this router does nothing to fix that.
The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer.
That requires lots of memory on intermediaries, because you need all the hosts along the data path to cooperate with each other to communicate about flow control, and that means keeping state.
At which point, we're not talking about datagram networks anymore.
And that means dumping the other desirable thing about datagram networks: fault tolerance.
Packets are path-independent.
Anyway: getting back to TCP's congestion control: his article even says that "During congestion, it adjusts each flow rate at its input instead.
"  Wait, what?
"If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down.
"  That's how it works right now!
The only difference that I can see is that he's being a little smarter about which packets to discard, unlike RED, which is what he's comparing this to.
If so, that's an improvement, but it doesn't solve the problem.
It will still take a while for TCP to notice the problem, because the host has to wait for a missed ACK.
TCP can only "see" the other host-- it does not know (or care) about flow control along the path.
Solving the problem requires flow control along that path, i.e., in the network layer, but IP lacks such a mechanism.</sentencetext>
</comment>
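The "causes congestion and then backs off" behavior this comment blames is TCP's AIMD rule. A toy model (not a real TCP stack) shows the sawtooth: the congestion window grows additively each round trip until a drop is detected, then is cut in half.

```python
def aimd_step(cwnd, packet_lost):
    """One RTT of TCP-style AIMD (toy model, not a real stack):
    additive increase until a loss is detected, multiplicative decrease after."""
    if packet_lost:
        return max(1.0, cwnd / 2)  # back off once congestion is signalled
    return cwnd + 1.0              # probe for bandwidth, causing congestion

cwnd = 1.0
history = []
for rtt in range(10):
    lost = cwnd >= 8  # pretend the bottleneck queue overflows at 8 segments
    cwnd = aimd_step(cwnd, lost)
    history.append(cwnd)
```

The loss is only noticed one round trip after the queue overflows, which is the delay the comment points to: the sender keeps pushing until the missing ACK finally arrives.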
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654129</id>
	<title>Puffery by a startup</title>
	<author>Ungrounded Lightning</author>
	<datestamp>1247259240000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>The main players in the routing industry have been working on flow-aware routing for years.</p><p>(I'm in the hardware side of our company so I'm not sure how many and which of the features built on the flow-based architecture are already in the field.  But I'm willing to bet a significant chunk of change that the full bore will be deployed on more than one name-brand company's product line and be the dominant paradigm in routing long before these guys can convince the telecoms and ISPs to adopt their product.  No matter how many big names they have on staff - or how good their box is.  Breaking into networking is HARD.)</p></htmltext>
<tokenext>The main players in the routing industry have been working on flow-aware routing for years .
( I 'm in the hardware side of our company so I 'm not sure how many and which of the features built on the flow-based architecture are already in the field .
But I 'm willing to bet a significant chunk of change that the full bore will be deployed on more than one name-brand company 's product line and be the dominant paradigm in routing long before these guys can convince the telecoms and ISPs to adopt their product .
No matter how many big names they have on staff - or how good their box is .
Breaking into networking is HARD .
)</tokentext>
<sentencetext>The main players in the routing industry have been working on flow-aware routing for years.
(I'm in the hardware side of our company so I'm not sure how many and which of the features built on the flow-based architecture are already in the field.
But I'm willing to bet a significant chunk of change that the full bore will be deployed on more than one name-brand company's product line and be the dominant paradigm in routing long before these guys can convince the telecoms and ISPs to adopt their product.
No matter how many big names they have on staff - or how good their box is.
Breaking into networking is HARD.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757</id>
	<title>OpenBSD anyone?</title>
	<author>Narcocide</author>
	<datestamp>1247226180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Isn't this something that you can accomplish with OpenBSD packet filter?</p></htmltext>
<tokenext>Is n't this something that you can accomplish with OpenBSD packet filter ?</tokentext>
<sentencetext>Isn't this something that you can accomplish with OpenBSD packet filter?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</id>
	<title>Net neutrality anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247258100000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext><p>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules. Aren't we supposed to be against this? Because it sounds a lot to me like encrypted packets, UDP, and peer-to-peer, three things that certain well-funded groups have been trying to kill or restrict for a while, would seem to be the worst-affected here.</p></htmltext>
<tokenext>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules .
Are n't we supposed to be against this ?
Because it sounds a lot to me like encrypted packets , UDP , and peer-to-peer , three things that certain well-funded groups have been trying to kill or restrict for awhile , would seem to be the worst-affected here .</tokentext>
<sentencetext>So we have a router that does stateful packet inspection and prioritizes traffic based on internal rules.
Aren't we supposed to be against this?
Because it sounds a lot to me like encrypted packets, UDP, and peer-to-peer, three things that certain well-funded groups have been trying to kill or restrict for a while, would seem to be the worst-affected here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653975</id>
	<title>so...</title>
	<author>Anonymous</author>
	<datestamp>1247258460000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>a router tampon?</p></htmltext>
<tokenext>a router tampon ?</tokentext>
<sentencetext>a router tampon?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654195</id>
	<title>Little help?</title>
	<author>xZgf6xHx2uhoAj9D</author>
	<datestamp>1247259540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I read the article and can't exactly distinguish this from <a href="http://en.wikipedia.org/wiki/Integrated_services" title="wikipedia.org">IntServ</a> [wikipedia.org]. What's the difference?</htmltext>
<tokenext>I read the article and ca n't exactly distinguish this from IntServ [ wikipedia.org ] .
What 's the difference ?</tokentext>
<sentencetext>I read the article and can't exactly distinguish this from IntServ [wikipedia.org].
What's the difference?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655623</id>
	<title>Traffic throttling long-lived connections?</title>
	<author>Anonymous</author>
	<datestamp>1247225220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>From the article, it sounds like it's trying to sell the idea of managing traffic (specifically P2P) fairly, which is admirable.  I like the idea of a long-lived high-throughput P2P connection being treated the same as a long-lived high-throughput HTTP connection.  That seems incredibly fair to me.</p><p>However, if they start treating long-lived low-throughput connections differently (more likely in a P2P setting, I'd imagine), then that seems a little unfair.</p><p>The very quick workaround would be for P2P clients to set a limit on the duration of a connection, and once that connection expires, to drop the connection and re-establish it.  From what the article says, that would get around their "fairness" system.  Of course, this assumes the port is part of the hash table key (which is how the article makes it sound).  If it is not (and the "fairness" is based on one IP to another), then that's a different story.</p></htmltext>
<tokenext>From the article , it sounds like it 's trying to sell the idea of managing traffic ( specifically P2P ) fairly , which is admirable .
I like the idea of a long-lived high-throughput P2P connection being treated the same as a long-lived high-throughput HTTP connection .
That seems incredibly fair to me .
However , if they start treating long-lived low-throughput connections differently ( more likely in a P2P setting , I 'd imagine ) , then that seems a little unfair .
The very quick workaround would be for P2P clients to set a limit on the duration of a connection , and once that connection expires , to drop the connection and re-establish it .
From what the article says , that would get around their " fairness " system .
Of course , this assumes the port is part of the hash table key ( which is how the article makes it sound ) .
If it is not ( and the " fairness " is based on one IP to another ) , then that 's a different story .</tokentext>
<sentencetext>From the article, it sounds like it's trying to sell the idea of managing traffic (specifically P2P) fairly, which is admirable.
I like the idea of a long-lived high-throughput P2P connection being treated the same as a long-lived high-throughput HTTP connection.
That seems incredibly fair to me.
However, if they start treating long-lived low-throughput connections differently (more likely in a P2P setting, I'd imagine), then that seems a little unfair.
The very quick workaround would be for P2P clients to set a limit on the duration of a connection, and once that connection expires, to drop the connection and re-establish it.
From what the article says, that would get around their "fairness" system.
Of course, this assumes the port is part of the hash table key (which is how the article makes it sound).
If it is not (and the "fairness" is based on one IP to another), then that's a different story.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655869</id>
	<title>didn't rtfa, but i assume it's about tampax.</title>
	<author>gadabyte</author>
	<datestamp>1247226840000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>"managing flows" unburied a memory of a tampon commercial.</p><p><div class="quote"><p>During your period, your flow level can change from one day to the next. That's why Tampax developed the Compak Multipax. You get 3 tampon absorbencies to meet your changing needs, in one convenient package.</p></div></div>
	</htmltext>
<tokenext>" managing flows " unburied a memory of a tampon commercial.During your period , your flow level can change from one day to the next .
That 's why Tampax developed the Compak Multipax .
You get 3 tampon absorbencies to meet your changing needs , in one convenient package .</tokentext>
<sentencetext>"managing flows" unburied a memory of a tampon commercial.During your period, your flow level can change from one day to the next.
That's why Tampax developed the Compak Multipax.
You get 3 tampon absorbencies to meet your changing needs, in one convenient package.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654531</id>
	<title>I see what you did there...</title>
	<author>Anonymous</author>
	<datestamp>1247217960000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>He has re-invented the layer 3 switch... now with less jitter and latency because:</p><p>The FR-1000 does away entirely with the queuing chips. During congestion, it adjusts each flow rate at its input instead. If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down. And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input. If there's bandwidth available, the equipment increases the flow rates or accepts more flows at the input; if bandwidth is scarce, the router reduces flow rates or discards packets.</p><p>So we are going to implement WRED on a per-flow basis, get rid of the queuing, and force the TCP stream to scale back its window size when we run out of bandwidth by dropping a packet out of that conversation...</p><p>I misspoke, this is a layer 2 and a half switch!</p></htmltext>
<tokenext>He has re-invented the layer 3 switch... now with less jitter and latency because : The FR-1000 does away entirely with the queuing chips .
During congestion , it adjusts each flow rate at its input instead .
If an incoming flow has a rate deemed too high , the equipment discards a single packet to signal the transmission to slow down .
And rather than just delaying or dropping packets as in regular routers , in the FR-1000 the output provides feedback to the input .
If there 's bandwidth available , the equipment increases the flow rates or accepts more flows at the input ; if bandwidth is scarce , the router reduces flow rates or discards packets .
So we are going to implement WRED on a per flow basis , get rid of the queuing , and force the tcp stream to scale back its window size when we run out of bandwidth by dropping a packet out of that conversation ...
I misspoke , this is a layer 2 and a half switch !</tokentext>
<sentencetext>He has re-invented the layer 3 switch... now with less jitter and latency because:
The FR-1000 does away entirely with the queuing chips.
During congestion, it adjusts each flow rate at its input instead.
If an incoming flow has a rate deemed too high, the equipment discards a single packet to signal the transmission to slow down.
And rather than just delaying or dropping packets as in regular routers, in the FR-1000 the output provides feedback to the input.
If there's bandwidth available, the equipment increases the flow rates or accepts more flows at the input; if bandwidth is scarce, the router reduces flow rates or discards packets.
So we are going to implement WRED on a per-flow basis, get rid of the queuing, and force the TCP stream to scale back its window size when we run out of bandwidth by dropping a packet out of that conversation...
I misspoke, this is a layer 2 and a half switch!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654387</id>
	<title>Re:This does not solve the problem</title>
	<author>B'Trey</author>
	<datestamp>1247217300000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>This has already been addressed in the IP specs:  <a href="http://en.wikipedia.org/wiki/Explicit_Congestion_Notification" title="wikipedia.org">ECN</a> [wikipedia.org]</p><p>One of the big problems with getting ECN adopted has been that Windows hasn't supported it.  Vista does, and I haven't seen anything specific, but I'm reasonably certain that Windows 7 does as well.  Mac OS X 10.5 supports it as well.  Linux has supported it for quite a while.  It's usually disabled by default, so that may be an issue in getting it widely supported.  But the issue isn't that we don't know how to do it better.  It's just overcoming the inertia.</p></htmltext>
<tokenext>This has already been addressed in the IP specs : ECN [ wikipedia.org ] .
One of the big problems with getting ECN adopted has been that Windows has n't supported it .
Vista does and I have n't seen anything specific but I 'm reasonably certain that Windows 7 does as well .
Mac OS X 10.5 supports it as well .
Linux has supported it for quite a while .
It 's usually disabled by default , so that may be an issue in getting it widely supported .
But the issue is n't that we do n't know how to do it better .
It 's just overcoming the inertia .</tokentext>
<sentencetext>This has already been addressed in the IP specs: ECN [wikipedia.org]
One of the big problems with getting ECN adopted has been that Windows hasn't supported it.
Vista does and I haven't seen anything specific but I'm reasonably certain that Windows 7 does as well.
Mac OS X 10.5 supports it as well.
Linux has supported it for quite a while.
It's usually disabled by default, so that may be an issue in getting it widely supported.
But the issue isn't that we don't know how to do it better.
It's just overcoming the inertia.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654217</id>
	<title>Re:This does not solve the problem</title>
	<author>Anonymous</author>
	<datestamp>1247216460000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>1</modscore>
	<htmltext><p>Doesn't WRED (Weighted RED) already do this?</p></htmltext>
<tokenext>Does n't WRED ( Weighted RED ) already do this ?</tokentext>
<sentencetext>Doesn't WRED (Weighted RED) already do this?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28660361</id>
	<title>Re:Wrong</title>
	<author>n6mod</author>
	<datestamp>1247329800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Grandparent is absolutely right in any real network. I've done this for a living, twice.</p><p>In any real network, packet loss is not zero. Packet loss happens for any number of reasons, including, as has been pointed out, congestion. In fact, the guys doing TFRC determined that the throughput of tcp is approximately:</p><p>Throughput = 1.3 * MTU / (RTT * sqrt(Loss))</p><p>Note well that window size is not a term in that equation. Bandwidth of the link isn't either, though obviously that's an upper limit.</p><p>It's true that bandwidth and CWND/RTT are upper bounds, but with any WAN latency you're likely to run into the above first.</p></htmltext>
<tokenext>Grandparent is absolutely right in any real network .
I 've done this for a living , twice .
In any real network , packet loss is not zero .
Packet loss happens for any number of reasons , including , as has been pointed out , congestion .
In fact , the guys doing TFRC determined that the throughput of tcp is approximately : Throughput = 1.3 * MTU / ( RTT * sqrt ( Loss ) )
Note well that window size is not a term in that equation .
Bandwidth of the link is n't either , though obviously that 's an upper limit .
It 's true that bandwidth and CWND/RTT are upper bounds , but with any WAN latency you 're likely to run into the above first .</tokentext>
<sentencetext>Grandparent is absolutely right in any real network.
I've done this for a living, twice.
In any real network, packet loss is not zero.
Packet loss happens for any number of reasons, including, as has been pointed out, congestion.
In fact, the guys doing TFRC determined that the throughput of TCP is approximately: Throughput = 1.3 * MTU / (RTT * sqrt(Loss))
Note well that window size is not a term in that equation.
Bandwidth of the link isn't either, though obviously that's an upper limit.
It's true that bandwidth and CWND/RTT are upper bounds, but with any WAN latency you're likely to run into the above first.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654311</id>
	<title>Re:Net neutrality anyone?</title>
	<author>AdamBv1</author>
	<datestamp>1247216940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I believe he's talking about managing flows to keep streams from getting interrupted, so if it sees data transfers that have been going on between two points consistently, it's going to be less likely to drop packets from that stream than it is some other random ping or small packet. Basically, the idea is to keep things working that are streaming instead of dropping a packet and stalling them or sending packets out of order due to queuing.

The benefit is that downloads and streams of data are more likely to stay working, while one-off communications and handshakes would be more likely to get dropped.</htmltext>
<tokenext>I believe he 's talking about managing flows to keep streams from getting interrupted , so if it sees data transfers that have been going on between two points consistently , it 's going to be less likely to drop packets from that stream than it is some other random ping or small packet .
Basically , the idea is to keep things working that are streaming instead of dropping a packet and stalling them or sending packets out of order due to queuing .
The benefit is that downloads and streams of data are more likely to stay working , while one-off communications and handshakes would be more likely to get dropped .</tokentext>
<sentencetext>I believe he's talking about managing flows to keep streams from getting interrupted, so if it sees data transfers that have been going on between two points consistently, it's going to be less likely to drop packets from that stream than it is some other random ping or small packet.
Basically, the idea is to keep things working that are streaming instead of dropping a packet and stalling them or sending packets out of order due to queuing.
The benefit is that downloads and streams of data are more likely to stay working, while one-off communications and handshakes would be more likely to get dropped.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655771</id>
	<title>Re:This isn't new</title>
	<author>cgori</author>
	<datestamp>1247226240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Thank you, he is the very same one whose ideas evaporated 200M+ in VC money on Caspian, right?  They were across the highway from me for years when I was in the valley. <i>plus ça change... </i></p></htmltext>
<tokenext>Thank you , he is the very same one whose ideas evaporated 200M + in VC money on Caspian , right ?
They were across the highway from me for years when I was in the valley .
plus ca change.. .</tokentext>
<sentencetext>Thank you, he is the very same one whose ideas evaporated 200M+ in VC money on Caspian, right?
They were across the highway from me for years when I was in the valley.
plus ca change... </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28657509</id>
	<title>What A Great New Concept!</title>
	<author>DynaSoar</author>
	<datestamp>1247243820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Cool, a transfer protocol that adapts what's sent when according to traffic flow. It needs a catchy name.</p><p>I suggest Zmodem.</p></htmltext>
<tokenext>Cool , a transfer protocol that adapts what 's sent when according to traffic flow .
It needs a catchy name .
I suggest Zmodem .</tokentext>
<sentencetext>Cool, a transfer protocol that adapts what's sent when according to traffic flow.
It needs a catchy name.
I suggest Zmodem.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656213</id>
	<title>Re:Net neutrality anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247229780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You don't need DPI to do that. You can use IP destination, source, and destination TCP port. It's called stateful inspection.</p></htmltext>
<tokenext>You do n't need DPI to do that .
You can use IP destination , source , and destination TCP port .
It 's called stateful inspection .</tokentext>
<sentencetext>You don't need DPI to do that.
You can use IP destination, source, and destination TCP port.
It's called stateful inspection.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291</id>
	<title>Re:Net neutrality anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247216820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Exactly how is this different from what we currently have?</p><p><i>Consider a conventional router receiving two packets that are part of the same video. The router looks at the first packet's destination address and consults a routing table. It then holds the packet in a queue until it can be dispatched. When the router receives the second packet, it repeats those same steps, not "remembering" that it has just processed an earlier piece of the same video.</i> </p><p>Uh, no.  This is called process switching.  It hasn't been used in anything but the most low-end routers for quite some time.  CEF (Cisco Express Forwarding) and <a href="http://en.wikipedia.org/wiki/Multiprotocol_Label_Switching" title="wikipedia.org">MPLS</a> [wikipedia.org] (Multiprotocol Label Switching) use flow control.  They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.  MPLS adds a label to the packet which identifies the flow, so it isn't even necessary to check the packet for the five components which define the flow.  Just look at the label and send it on its way. </p><p>QOS (Quality of Service) has multiple modes of operation and multiple queue types which address the issue of which packets to drop.  It may or may not include deep packet inspection to attempt to determine the type of packet.</p><p>Perhaps they've come up with some new innovations that aren't obvious in the write-up because it's written at a relatively high level, but there's nothing here that isn't already implemented and that I don't already work with on a daily basis in production networks.</p></htmltext>
<tokenext>Exactly how is this different from what we currently have ? Consider a conventional router receiving two packets that are part of the same video .
The router looks at the first packet 's destination address and consults a routing table .
It then holds the packet in a queue until it can be dispatched .
When the router receives the second packet , it repeats those same steps , not " remembering " that it has just processed an earlier piece of the same video .
Uh , no .
This is called process switching .
It has n't been used in anything but the most low-end routers for quite some time .
CEF ( Cisco Express Forwarding ) and MPLS [ wikipedia.org ] ( Multiprotocol Label Switching ) use flow control .
They perform a lookup on the first packet , cache the information in a forwarding table , and all further packets which are part of the same flow are switched , not routed , at effectively wire speeds .
MPLS adds a label to the packet which identifies the flow , so it is n't even necessary to check the packet for the five components which define the flow .
Just look at the label and send it on its way .
QOS ( Quality Of Service ) has multiple modes of operation and multiple queue types which address the issues of which packets to drop .
It may or may not include deep packet inspection to attempt to determine the type of packet .
Perhaps they 've come up with some new innovations that are n't obvious in the write-up because it 's written at a relatively high level , but there 's nothing here that is n't already implemented and that I do n't already work with on a daily basis in production networks .</tokentext>
<sentencetext>Exactly how is this different from what we currently have?
Consider a conventional router receiving two packets that are part of the same video.
The router looks at the first packet's destination address and consults a routing table.
It then holds the packet in a queue until it can be dispatched.
When the router receives the second packet, it repeats those same steps, not "remembering" that it has just processed an earlier piece of the same video.
Uh, no.
This is called process switching.
It hasn't been used in anything but the most low-end routers for quite some time.
CEF (Cisco Express Forwarding) and MPLS [wikipedia.org] (Multiprotocol Label Switching) use flow control.
They perform a lookup on the first packet, cache the information in a forwarding table, and all further packets which are part of the same flow are switched, not routed, at effectively wire speeds.
MPLS adds a label to the packet which identifies the flow, so it isn't even necessary to check the packet for the five components which define the flow.
Just look at the label and send it on its way.
QOS (Quality Of Service) has multiple modes of operation and multiple queue types which address the issues of which packets to drop.
It may or may not include deep packet inspection to attempt to determine the type of packet.
Perhaps they've come up with some new innovations that aren't obvious in the write-up because it's written at a relatively high level, but there's nothing here that isn't already implemented and that I don't already work with on a daily basis in production networks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654213</id>
	<title>Yep</title>
	<author>Anonymous</author>
	<datestamp>1247216460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you're an ISP, you route IP packets. TCP is a payload like any other.</p></htmltext>
<tokenext>If you 're an ISP , you route IP packets .
TCP is a payload like any other .</tokentext>
<sentencetext>If you're an ISP, you route IP packets.
TCP is a payload like any other.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655841</id>
	<title>Re:Wrong</title>
	<author>Anonymous</author>
	<datestamp>1247226660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>"TCP throughput is inversely proportional to delay"</p><p>Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delay<br>As long as the window is large enough</p></div><p>But in reality a HUGE window is impractical.  For real-world TCP communication he is correct.</p>
	</htmltext>
<tokenext>" TCP throughput is inversely proportional to delay " Absolutely wrong , 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delayAs long as the window is large enoughBut in reality a HUGE window is impractical .
For real world tcp communication he is correct .</tokentext>
<sentencetext>"TCP throughput is inversely proportional to delay"Absolutely wrong, 2Mb/s at 1ms delay gives the same throughput as 2Mb/s at 10ms delayAs long as the window is large enoughBut in reality a HUGE window is impractical.
For real world tcp communication he is correct.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656161</id>
	<title>Re:OpenBSD anyone?</title>
	<author>Anonymous</author>
	<datestamp>1247229360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>how is a packet filter related to routing?</p></htmltext>
<tokenext>how is a packet filter related to routing ?</tokentext>
<sentencetext>how is a packet filter related to routing?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654335</id>
	<title>Re:This does not solve the problem</title>
	<author>John.P.Jones</author>
	<datestamp>1247217060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>TCP's congestion control backs off exponentially because it has to.  There is a stability property: if the network is undergoing increased congestion (this is how TCP learns the available throughput and utilizes it) and the senders do not back off exponentially, then their backing off will not be fast enough to relieve congestion and therefore stabilize the system.  If this router is selectively stalling individual flows, I do not believe that will be fast enough to deal with growing congestion from many greedy clients.</p><p>Basically, eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively, hence initiating TCP backoffs from randomly selected flows, resulting in current behavior.  So, of course, in that gray area between the first dropped flow and when we need to revert to normal behavior, we may see improved network performance for some flows, but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes.</p><p>The end result will be delaying backing off many TCP flows (which will speed them up, creating more congestion) at the expense of completely trashing a few flows (which will stall anyway for packet reordering), and so the resulting system will be less stable.</p></htmltext>
<tokenext>TCP 's congestion control backs off exponentially because it has to .
There is a stability property that if the network is undergoing increased congestion ( this is how TCP learns the available throughput and utilizes it ) and the senders do not back off exponentially then their backing off will not be fast enough to relieve congestion and therefore stabilize the system .
If this router is selectively stalling individual flows I do not believe that will be fast enough to deal with growing congestion from many greedy clients .
Basically , eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively , hence initiating TCP backoffs from randomly selected flows , resulting in current behavior .
So , of course in that gray area between the first dropped flow and when we need to revert to normal behavior we may see improved network performance for some flows but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes .
The end result will be delaying backing off many TCP flows ( which will speed them up creating more congestion ) at the expense of completely trashing a few flows ( which will stall anyway for packet reordering ) , and so the resulting system will be less stable .</tokentext>
<sentencetext>TCP's congestion control backs off exponentially because it has to.
There is a stability property that if the network is undergoing increased congestion (this is how TCP learns the available throughput and utilizes it) and the senders do not back off exponentially then their backing off will not be fast enough to relieve congestion and therefore stabilize the system.
If this router is selectively stalling individual flows I do not believe that will be fast enough to deal with growing congestion from many greedy clients.
Basically, eventually the buffer space of the router will become exhausted and it will be forced to drop packets non-selectively, hence initiating TCP backoffs from randomly selected flows, resulting in current behavior.
So, of course in that gray area between the first dropped flow and when we need to revert to normal behavior we may see improved network performance for some flows but they will just take advantage of this by opening up their TCP windows more until the inevitable collapse comes.
The end result will be delaying backing off many TCP flows (which will speed them up creating more congestion) at the expense of completely trashing a few flows (which will stall anyway for packet reordering), and so the resulting system will be less stable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
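The stability argument in the comment above (senders must back off multiplicatively, or their combined reduction cannot outpace growing congestion from many greedy clients) can be sketched with a toy simulation. This is a minimal illustrative model, not real TCP: `simulate`, the rate constants, and the single shared bottleneck are all assumptions made up for the sketch.

```python
def simulate(n_flows=10, capacity=100.0, steps=400, multiplicative=True):
    """Toy bottleneck model: greedy senders ramp up, a shared queue absorbs
    overload, and every sender reacts to the congestion signal (queue > 0).

    multiplicative=True  -> halve rates on congestion (TCP-style back-off)
    multiplicative=False -> shave a small constant off each rate instead
    Returns the worst backlog seen at the bottleneck.
    """
    rates = [1.0] * n_flows
    queue = 0.0
    worst = 0.0
    for _ in range(steps):
        offered = sum(rates)
        queue = max(0.0, queue + offered - capacity)  # backlog this step
        worst = max(worst, queue)
        if queue > 0:                                  # congestion signal
            if multiplicative:
                rates = [r / 2 for r in rates]         # fast, stable back-off
            else:
                rates = [max(0.1, r - 0.1) for r in rates]  # too-slow back-off
        else:
            rates = [r + 1.0 for r in rates]           # additive increase
    return worst
```

Under these made-up constants, halving drains the backlog within a step or two, while the additive back-off lets the queue grow several times larger before it recovers, which is the commenter's point about exponential back-off being necessary for stability.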
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893</id>
	<title>Been tried, and they saw it was *not* good</title>
	<author>OeLeWaPpErKe</author>
	<datestamp>1247220000000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>All older cisco equipment worked this way. This was nice, and worked very well for the first router(s) closest to the end customer. However, for routers meant to route for large numbers of users this turned out to be a disaster.</p><p><a href="http://www.cisco.com/en/US/products/hw/switches/ps679/products_data_sheet09186a00800925f3.html" title="cisco.com" rel="nofollow">Just to give you an idea, this was EOS (end of support) before I turned 10</a> [cisco.com] (look for "netflow routing")</p><p>There are a number of very problematic properties:<br>-&gt; trivial to ddos (just generate too many flows to fit in memory, or generally increase the per-packet lookup time)<br>-&gt; not p2p compatible (p2p will cause flow-based routers to perform at a snail's pace, because they open so many connections)<br>-&gt; possible triple penalty for every new flow (first a failed flow lookup, followed by a failed route lookup, going to default route)<br>-&gt; very hard to have a good QoS policy this way. A pipe has a fixed bandwidth, and you almost always oversubscribe. Therefore useful policies are very hard to formulate per-flow.<br>-&gt; if you divide bandwidth per-flow over TCP then a large overload will "synchronize" everything. So let's explain what happens if 3 users are happily surfing about and another user starts bittorrent. Bandwidth gets divided over all the flows, and *every* connection closes, due to timeouts.</p><p>There are a number of advantages:<br>-&gt; easy, very extensive QoS is trivial to implement<br>-&gt; stateful firewalling is almost laughably easy to implement, and very advanced firewalling can be done (e.g. easy to block ssh but not https: just filter on the string "openssh" anywhere in the connection. Added bonus: hilarity ensues if you email someone the text "openssh", and his pop3 connection keeps getting closed)</p><p>Here's the deal: a router has to look up in a table of about 300,000 entries in per-packet switching (excepting MPLS P routers). My PC is, at this moment, opening 331 flows to various destinations, each sending an average of 5 packets (probably a lot of DNS requests are dragging this number down), but you have to keep in mind that a flow-based router has to look up first in the "flow table" AND in the route table (which still has 300,000 entries).</p><p>As soon as a flow-based router services more than 1000 machines (in either direction, i.e. 100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router. That's not a lot. If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance. At 2000 machines, packet-based switching is twice as efficient.</p><p>So: flow-based routing... for your wireless access point... perhaps. For anything more serious than that? No way in hell.</p></htmltext>
<tokentext>All older cisco equipment worked this way .
This was nice , and worked very well for the first router ( s ) closest to the end customer .
However for routers meant to route for large numbers of users this turned out to be a disaster . Just to give you an idea , this was EOS ( end of support ) before I turned 10 [ cisco.com ] ( look for " netflow routing " ) . There are a number of very problematic properties : - &gt; trivial to ddos ( just generate too many flows to fit in memory , or generally increase the per-packet lookup time ) - &gt; not p2p compatible ( p2p will cause flow based routers to perform at a snail 's pace , because they open so many connections ) - &gt; possible triple penalty for every new flow ( first a failed flow lookup , followed by a failed route lookup , going to default route ) - &gt; very hard to have a good qos policy this way .
A pipe has a fixed bandwidth , and you almost always oversubscribe .
Therefore useful policies are very hard to formulate per-flow . - &gt; if you divide bandwidth per-flow over tcp then a large overload will " synchronize " everything .
So let 's explain what happens if 3 users are happily surfing about and another user starts bittorrent .
Bandwidth gets divided over all the flows , and * every * connection closes , due to timeouts . There are a number of advantages : - &gt; easy , very extensive QOS is trivial to implement - &gt; stateful firewalling is almost laughably easy to implement , and very advanced firewalling can be done ( e.g .
easy to block ssh but not https , just filter on the string " openssh " anywhere in the connection .
Added bonus : hilarity ensues if you email someone the text " openssh " , and his pop3 connection keeps getting closed ) . Here 's the deal : a router has to look up in a table of about 300.000 entries in per-packet switching ( excepting MPLS P routers ) .
My PC is , at this moment , opening 331 flows to various destinations , each sending an average of 5 packets ( probably a lot of DNS requests are dragging this number down ) , but you have to keep in mind that a flow-based router has to look up first in the " flow table " AND in the route table ( which still has 300.000 entries ) . As soon as a flow-based router services more than 1000 machines ( in either direction , i.e.
100 clients communicating with 900 internet hosts = 1000 machines serviced ) , it 's performance will fail to keep up with a packet-based router .
That 's not a lot .
If a single client torrents or p2p 's you will hit this limit easily , resulting in slower performance .
At 2000 machines , packet-based switching is twice as efficient . So : flow-based routing ... for your wireless access point ... perhaps . For anything more serious than that ?
No way in hell .</tokentext>
<sentencetext>All older cisco equipment worked this way.
This was nice, and worked very well for the first router(s) closest to the end customer.
However, for routers meant to route for large numbers of users this turned out to be a disaster. Just to give you an idea, this was EOS (end of support) before I turned 10 [cisco.com] (look for "netflow routing"). There are a number of very problematic properties: -&gt; trivial to ddos (just generate too many flows to fit in memory, or generally increase the per-packet lookup time) -&gt; not p2p compatible (p2p will cause flow-based routers to perform at a snail's pace, because they open so many connections) -&gt; possible triple penalty for every new flow (first a failed flow lookup, followed by a failed route lookup, going to default route) -&gt; very hard to have a good QoS policy this way.
A pipe has a fixed bandwidth, and you almost always oversubscribe.
Therefore useful policies are very hard to formulate per-flow. -&gt; if you divide bandwidth per-flow over TCP then a large overload will "synchronize" everything.
So let's explain what happens if 3 users are happily surfing about and another user starts bittorrent.
Bandwidth gets divided over all the flows, and *every* connection closes, due to timeouts. There are a number of advantages: -&gt; easy, very extensive QoS is trivial to implement -&gt; stateful firewalling is almost laughably easy to implement, and very advanced firewalling can be done (e.g.
easy to block ssh but not https, just filter on the string "openssh" anywhere in the connection.
Added bonus: hilarity ensues if you email someone the text "openssh", and his pop3 connection keeps getting closed). Here's the deal: a router has to look up in a table of about 300,000 entries in per-packet switching (excepting MPLS P routers).
My PC is, at this moment, opening 331 flows to various destinations, each sending an average of 5 packets (probably a lot of DNS requests are dragging this number down), but you have to keep in mind that a flow-based router has to look up first in the "flow table" AND in the route table (which still has 300,000 entries). As soon as a flow-based router services more than 1000 machines (in either direction, i.e.
100 clients communicating with 900 internet hosts = 1000 machines serviced), its performance will fail to keep up with a packet-based router.
That's not a lot.
If a single client torrents or p2p's you will hit this limit easily, resulting in slower performance.
At 2000 machines, packet-based switching is twice as efficient. So: flow-based routing... for your wireless access point... perhaps. For anything more serious than that?
No way in hell.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845</parent>
</comment>
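The double-lookup and table-exhaustion costs described in the comment above (a new flow misses the flow table, pays the route lookup anyway, and consumes a cache slot, so a flood of short flows thrashes the table) can be sketched in a few lines. `FlowRouter`, its tiny route list, and the 4-entry cache are hypothetical names and sizes invented for the sketch; a real flow router uses hardware tables, but the miss/eviction behavior has the same shape.

```python
import ipaddress

class FlowRouter:
    """Toy flow cache sitting in front of a route table.

    Illustrates the commenter's point: a new flow misses the cache, pays a
    route lookup on top of the failed flow lookup, and takes a cache slot,
    which is why many short-lived flows (p2p, a flood) exhaust the table.
    """

    def __init__(self, routes, max_flows=4):
        # routes: list of (prefix_string, next_hop), most specific first
        self.routes = [(ipaddress.ip_network(p), hop) for p, hop in routes]
        self.flows = {}            # 5-tuple -> next hop (the "flow table")
        self.max_flows = max_flows
        self.misses = 0

    def route_lookup(self, dst):
        addr = ipaddress.ip_address(dst)
        for net, hop in self.routes:   # stand-in for longest-prefix match
            if addr in net:
                return hop
        return "default"               # triple penalty: fall to default route

    def forward(self, flow):
        # flow = (src, dst, proto, sport, dport)
        if flow in self.flows:         # fast path: one hash lookup
            return self.flows[flow]
        self.misses += 1               # slow path: flow miss + route lookup
        hop = self.route_lookup(flow[1])
        if len(self.flows) >= self.max_flows:
            self.flows.pop(next(iter(self.flows)))  # evict oldest entry
        self.flows[flow] = hop
        return hop
```

A burst of distinct 5-tuples (different source ports, as p2p produces) evicts the established entries, so even previously "fast path" flows fall back to the slow path afterwards.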
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28672037</id>
	<title>No, it's entirely unlike that</title>
	<author>billstewart</author>
	<datestamp>1247413620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Netcraft confirms - OpenBSD is... oh, sorry, wrong thread...</i> </p><p>Both of them are identifying flows, but the similarity pretty much stops there.<br>OpenBSD packet filters are a firewall function, typically used to protect your endpoint from evil traffic.  This is a router, deciding which interface to send a packet out when it arrives on a different interface at an ISP backbone location.  Packet filters are running at the speed of your computer's interfaces and application processing, typically well under a gigabit/sec - this is trying to run at network-backbone speeds of many gigabits/second.  </p><p>One thing this router is trying to do is have a $30k device using mostly-standard parts that performs as well as a $300k Cisco box using expensive fancy parts.</p></htmltext>
<tokentext>Netcraft confirms - OpenBSD is ... oh , sorry , wrong thread ... Both of them are identifying flows , but the similarity pretty much stops there . OpenBSD packet filters are a firewall function , typically used to protect your endpoint from evil traffic .
This is a router , deciding which interface to send a packet out when it arrives on a different interface at an ISP backbone location .
Packet filters are running at the speed of your computer 's interfaces and application processing , typically well under a gigabit/sec - this is trying to run at network-backbone speeds of many gigabits/second .
One thing this router is trying to do is have a $ 30k device using mostly-standard parts that performs as well as a $ 300k Cisco box using expensive fancy parts .</tokentext>
<sentencetext>Netcraft confirms - OpenBSD is... oh, sorry, wrong thread... Both of them are identifying flows, but the similarity pretty much stops there. OpenBSD packet filters are a firewall function, typically used to protect your endpoint from evil traffic.
This is a router, deciding which interface to send a packet out when it arrives on a different interface at an ISP backbone location.
Packet filters are running at the speed of your computer's interfaces and application processing, typically well under a gigabit/sec - this is trying to run at network-backbone speeds of many gigabits/second.
One thing this router is trying to do is have a $30k device using mostly-standard parts that performs as well as a $300k Cisco box using expensive fancy parts.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654983</id>
	<title>Re:This does not solve the problem</title>
	<author>snaz555</author>
	<datestamp>1247220600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>TCP's congestion control algorithm, which <em>causes congestion and then backs off</em>, is the real culprit here, and this router does nothing to fix that.  The way to fix that is to dump TCP's congestion control and replace it with <em>real</em> flow control in the network layer.</p></div><p>
Just remove the excess forwarding buffers; there's no point buffering more than what's required for the internal forwarding jitter, which should really be no more than a few datagrams at most.  TCP is based on a model where congestion = loss, not congestion = pileup.  Other UDP-based protocols - DNS, etc. - all have their own retransmission mechanisms, also based on the same model of congestion = loss.  What happens when routers have ridiculous quantities of buffer - several seconds' worth - is that entire TCP windows' worth get piled up, and _then_ TCP fast retransmit piles it up _again_.  When the congestion eases and the router is draining its massive buffer, including all the piled-up retransmits, the source TCPs are still polynomially backing off.  The culprit really isn't the congestion control, but the excess buffering.  Some experimental congestion control mechanisms attempt to get around this by continuously measuring one-way latency and in this way detect intermediate buffer pileups - and stop piling up more until the buffer is drained - but it's really silly to add complexity to work around something that shouldn't be in the datagram path in the first place.  These methods of congestion control, however, tend not to work as well where congestion is actually caused by loss, but this tends to be pretty rare these days, where loss is indicative of buffer overflow or traffic shaping.  (E.g. the common ethernet collision domain is gone due to switched full-duplex infrastructure.)</p>
	</htmltext>
<tokentext>TCP 's congestion control algorithm , which causes congestion and then backs off is the real culprit here , and this router does nothing to fix that .
The way to fix that is to dump TCP 's congestion control and replace it with real flow control in the network layer .
Just remove the excess forwarding buffers ; there 's no point buffering more than what 's required for the internal forwarding jitter , which should really be no more than a few datagrams at most .
TCP is based on a model where congestion = loss , not congestion = pileup .
Other UDP based protocols - DNS , etc , all have their own retransmission mechanisms , also based on the same model of congestion = loss .
What happens when routers have ridiculous quantities of buffer - several seconds ' worth - is that entire TCP windows ' worth get piled up , and _then_ TCP fast retransmit piles it up _again_ .
When the congestion eases and the router is draining its massive buffer including all the piled up retransmits , the source TCPs are still polynomially backing off .
The culprit really is n't the congestion control , but the excess buffering .
Some experimental congestion control mechanisms attempt to get around this by continuously measuring one-way latency and in this way detect intermediate buffer pileups - and stop piling up more until the buffer is drained - but it 's really silly to add complexity to work around something that should n't be in the datagram path in the first place .
These methods of congestion control however tend not to work as well where congestion is actually caused by loss , but this tends to be pretty rare these days where loss is indicative of buffer overflow or traffic shaping .
( E.g. the common ethernet collision domain is gone due to switched full-duplex infrastructure . )</tokentext>
<sentencetext>TCP's congestion control algorithm, which causes congestion and then backs off is the real culprit here, and this router does nothing to fix that.
The way to fix that is to dump TCP's congestion control and replace it with real flow control in the network layer.
Just remove the excess forwarding buffers; there's no point buffering more than what's required for the internal forwarding jitter, which should really be no more than a few datagrams at most.
TCP is based on a model where congestion = loss, not congestion = pileup.
Other UDP based protocols - DNS, etc, all have their own retransmission mechanisms, also based on the same model of congestion = loss.
What happens when routers have ridiculous quantities of buffer - several seconds' worth - is that entire TCP windows' worth get piled up, and _then_ TCP fast retransmit piles it up _again_.
When the congestion eases and the router is draining its massive buffer including all the piled up retransmits, the source TCPs are still polynomially backing off.
The culprit really isn't the congestion control, but the excess buffering.
Some experimental congestion control mechanisms attempt to get around this by continuously measuring one-way latency and in this way detect intermediate buffer pileups - and stop piling up more until the buffer is drained - but it's really silly to add complexity to work around something that shouldn't be in the datagram path in the first place.
These methods of congestion control however tend not to work as well where congestion is actually caused by loss, but this tends to be pretty rare these days where loss is indicative of buffer overflow or traffic shaping.
(E.g. the common ethernet collision domain is gone due to switched full-duplex infrastructure.)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907</parent>
</comment>
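The article summary's claim that TCP throughput is inversely proportional to delay, which this comment's buffering argument leans on, can be made concrete with the well-known Mathis approximation (rate ≈ MSS/RTT · √(3/2p)). The specific numbers below (40 ms base RTT, 200 ms of standing queue, 0.1% loss) are illustrative assumptions, not measurements from the article.

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. steady-state TCP throughput approximation (bytes/s):
    rate ~= (MSS / RTT) * sqrt(3 / (2 * p)).

    Throughput scales as 1/RTT, so queueing delay added by over-buffered
    routers directly cuts every flow's achievable rate."""
    return (mss_bytes / rtt_s) * sqrt(1.5 / loss_rate)

base = mathis_throughput(1460, 0.040, 1e-3)             # 40 ms path RTT
bloated = mathis_throughput(1460, 0.040 + 0.200, 1e-3)  # +200 ms of queue
```

With these assumed numbers, adding 200 ms of standing queue to a 40 ms path cuts the achievable rate by a factor of six without a single extra loss, which is exactly the nonuniform-delay effect the summary complains about.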
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655065</id>
	<title>Re:Puffery by a startup</title>
	<author>BitZtream</author>
	<datestamp>1247221080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Especially trying to break into a market by telling everyone about your awesome super cool new way of doing things... that everyone else has been doing for 10 years already.</p></htmltext>
<tokentext>Especially trying to break into a market by telling everyone about your awesome super cool new way of doing things ... that everyone else has been doing for 10 years already .</tokentext>
<sentencetext>Especially trying to break into a market by telling everyone about your awesome super cool new way of doing things ... that everyone else has been doing for 10 years already.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654129</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28660361
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655723
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654299
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654311
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659603
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654315
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654217
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28672037
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654273
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28671933
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654437
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656213
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656921
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659103
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654377
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654207
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28657043
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654357
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655065
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654129
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653985
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653933
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656161
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28678247
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654061
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654387
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656383
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655841
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654335
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654889
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654235
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655273
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654213
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655419
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654983
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_10_1830217_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655771
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654307
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654129
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655065
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654263
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653987
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654061
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655771
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653845
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654893
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28678247
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659103
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654889
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656921
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655757
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28672037
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656161
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654461
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655841
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655723
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28657043
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28660361
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653933
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653985
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653897
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656383
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654273
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654311
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654213
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654235
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654291
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655273
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28655419
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654357
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28656213
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654315
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28659603
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654195
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654455
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28653907
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654299
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654335
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654387
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654437
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28671933
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654983
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654207
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654377
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654217
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654075
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_10_1830217.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_10_1830217.28654385
</commentlist>
</conversation>
