<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_29_1821234</id>
	<title>FCC Relying On Faulty ISP Performance Data</title>
	<author>Soulskill</author>
	<datestamp>1269890880000</datestamp>
	<htmltext>alphadogg writes <i>"The FCC recently used speed test results from comScore as an absolute indicator of specific ISPs' performance. Consulting firm NetForecast analyzed comScore's testing methodology and data to assess whether it accurately reflects broadband ISP performance, and to assess the appropriateness of using the data to reach general conclusions about the actual performance ISPs deliver to their subscribers. NetForecast <a href="http://www.networkworld.com/community/node/59354?hpg1=bn">uncovered problems on both counts</a>. They found that the effective service speeds comScore reports are <a href="http://www.netforecast.com/Reports/NFR5103_comScore_ISP_Speed_Test_Accuracy.pdf">low by a large margin</a> (PDF) because its data calculations under-report performance and place many subscribers in a higher performance tier than they purchased."</i></htmltext>
<tokentext>alphadogg writes " The FCC recently used speed test results from comScore as an absolute indicator of specific ISPs ' performance .
Consulting firm NetForecast analyzed comScore 's testing methodology and data to assess whether it accurately reflects broadband ISP performance , and to assess the appropriateness of using the data to reach general conclusions about the actual performance ISPs deliver to their subscribers .
NetForecast uncovered problems on both counts .
They found that the effective service speeds comScore reports are low by a large margin ( PDF ) because its data calculations under-report performance and place many subscribers in a higher performance tier than they purchased .
"</tokentext>
<sentencetext>alphadogg writes "The FCC recently used speed test results from comScore as an absolute indicator of specific ISPs' performance.
Consulting firm NetForecast analyzed comScore's testing methodology and data to assess whether it accurately reflects broadband ISP performance, and to assess the appropriateness of using the data to reach general conclusions about the actual performance ISPs deliver to their subscribers.
NetForecast uncovered problems on both counts.
They found that the effective service speeds comScore reports are low by a large margin (PDF) because its data calculations under-report performance and place many subscribers in a higher performance tier than they purchased.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</id>
	<title>FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269894900000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>I just am so surprised.  It's run by a bunch of government employees, and they are rarely faulty.</htmltext>
<tokentext>I just am so surprised .
It 's run by a bunch of government employees , and they are rarely faulty .</tokentext>
<sentencetext>I just am so surprised.
It's run by a bunch of government employees, and they are rarely faulty.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666</id>
	<title>Re:FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269854460000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Bad data == good data for a politician.</p><p>Especially when it's in favor of whatever they desire to happen.  Politicians wanted healthcare so they generated a faulty "42.5 million Americans uninsured" statistic.  How?  Using a couple mail-in postcards from voluntary recipients.  Hardly scientific.  (Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens.  +9 million if you include illegal non-citizens/intruders.)</p><p>And of course if the FCC stats show that ~40 million Americans don't have greater-than-dialup speeds, that too works in politicians' favor, and they'll justify it as a way to pass their favorite bill.  (And also make their election funders happy.)  Even on my DSL line, which *never* falls below the advertised 750k, the FCC test showed only ~256k.  Bogus.</p><p>Okay.  Maybe I'm a little cynical.</p><p>Nah.  I work for the government.  More like - simple observation.</p></htmltext>
<tokenext>Bad data = = good data for a politician.Especially when it 's in favor of whatever they desire to happen .
Politicians wanted healthcare so they generated a faulty " 42.5 million Americans uninsured " statistic .
How ? Using a couple mail-in postcards from voluntary recipients .
Hardly scientific .
( Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. + 9 million if you include illegal non-citizens/intruders .
) And of course if the FCC stats show that ~ 40 million Americans do n't have greater than dialup speeds , that too works in politicians ' favor , and they 'll justify it as a way to pass their favorite bill .
( And also make their election funders happy .
) Even on my DSL line , which * never * falls below the advertised 750k , the FCC test showed only ~ 256k .
Bogus.Okay. Maybe I 'm a little cynical.Nah .
I work for the government .
More like - simple observation .</tokentext>
<sentencetext>Bad data == good data for a  politician.Especially when it's in favor of whatever they desire to happen.
Politicians wanted healthcare so they generated a faulty "42.5 million Americans uninsured" statistic.
How?  Using a couple mail-in postcards from voluntary recipients.
Hardly scientific.
(Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens.  +9 million if you include illegal non-citizens/intruders.
)And of course if the FCC stats show that ~40 million Americans don't have greater-than-dialup speeds, that too works in politicians' favor, and they'll justify it as a way to pass their favorite bill.
(And also make their election funders happy.
)  Even on my DSL line, which *never* falls below the advertised 750k, the FCC test showed only ~256k.
Bogus.Okay.  Maybe I'm a little cynical.Nah.
I work for the government.
More like - simple observation.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662124</id>
	<title>Re:How Much</title>
	<author>skids</author>
	<datestamp>1269856500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Better question: who owns NetForecast?</p><p>Actually I'm inclined to chalk this one up as plain old sniping between competing performance testing companies, rather than either the FCC/pols or the ISPs trying to fudge numbers.</p></htmltext>
<tokentext>Better question : who owns NetForecast ? Actually I 'm inclined to chalk this one up as plain old sniping between competing performance testing companies , rather than either the FCC/pols or the ISPs trying to fudge numbers .</tokentext>
<sentencetext>Better question: who owns NetForecast?Actually I'm inclined to chalk this one up as plain old sniping between competing performance testing companies, rather than either the FCC/pols or the ISPs trying to fudge numbers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664596</id>
	<title>Re:FCC is faulty?</title>
	<author>Red Flayer</author>
	<datestamp>1269869040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>(Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. +9 million if you include illegal non-citizens/intruders.)</p></div></blockquote><p>[citation needed]<br> <br>My five minutes of googling has not come up with sources that agree with your figures.  Typically, lowball figures like yours are due to the following errors:<br> <br>1. They include only people without any insurance for the entire year, though at any given time during the year, a significantly higher number of people are uninsured.<br>2. They double-dip on the exclusions... for example, they deduct everyone making over $SOME_ARBITRARY_THRESHOLD as being able to afford it but choosing not to, and then they deduct $NUMBER_OF_ILLEGALS, falsely assuming that no individual belongs to both sets.<br>3. On the subject of exclusions, while we're at it: they state (as you did) that the number of uninsured is X, when they've deducted those who they deem able to afford it but chose not to.  Those people are uninsured, and should be counted when you make a claim about the number of uninsured.  If you instead make a claim about the number of uninsured not by choice, then you can exclude those people.  <br>4.  Exclusions, particularly the rich-enough-to-afford-it exclusion, are arbitrary. I've yet to see a valid analysis of how a family of 4 in central/northern NJ can afford health insurance on a family income of $35k -- yet $35k is the figure used in a lot of these studies.<br> <br>In short, please provide a link to your source so we can determine if you're blowing smoke or not.</p>
	</htmltext>
<tokentext>( Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. + 9 million if you include illegal non-citizens/intruders .
) [ citation needed ] My five minutes of googling has not come up with sources that agree with your figures .
Typically , lowball figures like yours are due to the following errors : 1 .
They include only people without any insurance for the entire year , though at any given time during the year , a significantly higher number of people are uninsured.2 .
They double-dip on the exclusions... for example , they deduct everyone making over $SOME_ARBITRARY_THRESHOLD as being able to afford it but choosing not to , and then they deduct $NUMBER_OF_ILLEGALS , falsely assuming that no individual belongs to both sets.3 .
On the subject of exclusions , while we 're at it : they state ( as you did ) that the number of uninsured is X , when they 've deducted those who they deem able to afford it but chose not to .
Those people are uninsured , and should be counted when you make a claim about the number of uninsured .
If you instead make a claim about the number of uninsured not by choice , then you can exclude those people .
4. Exclusions , particularly the rich-enough-to-afford-it exclusion , are arbitrary .
I 've yet to see a valid analysis of how a family of 4 in central/northern NJ can afford health insurance on a family income of $ 35k -- yet $ 35k is the figure used in a lot of these studies .
In short , please provide a link to your source so we can determine if you 're blowing smoke or not .</tokentext>
<sentencetext>(Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. +9 million if you include illegal non-citizens/intruders.
)[citation needed] My five minutes of googling has not come up with sources that agree with your figures.
Typically, lowball figures like yours are due to the following errors: 1.
They include only people without any insurance for the entire year, though at any given time during the year, a significantly higher number of people are uninsured.2.
They double-dip on the exclusions... for example, they deduct everyone making over $SOME_ARBITRARY_THRESHOLD as being able to afford it but choosing not to, and then they deduct $NUMBER_OF_ILLEGALS, falsely assuming that no individual belongs to both sets.3.
On the subject of exclusions, while we're at it: they state (as you did) that the number of uninsured is X, when they've deducted those who they deem able to afford it but chose not to.
Those people are uninsured, and should be counted when you make a claim about the number of uninsured.
If you instead make a claim about the number of uninsured not by choice, then you can exclude those people.
4.  Exclusions, particularly the rich-enough-to-afford-it exclusion, are arbitrary.
I've yet to see a valid analysis of how a family of 4 in central/northern NJ can afford health insurance on a family income of $35k -- yet $35k is the figure used in a lot of these studies.
In short, please provide a link to your source so we can determine if you're blowing smoke or not.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31666016</id>
	<title>Re:FCC is faulty?</title>
	<author>Hurricane78</author>
	<datestamp>1269879000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Exactly. While the commercial employees function perfectly... while working hard to rape the money out of you as hard as they can. ^^</p></htmltext>
<tokentext>Exactly .
While the commercial employees function perfectly... while working hard to rape the money out of you as hard as they can .
^ ^</tokentext>
<sentencetext>Exactly.
While the commercial employees function perfectly... while working hard to rape the money out of you as hard as they can.
^^</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661462</id>
	<title>Re:comScore got it more or less right</title>
	<author>Anonymous</author>
	<datestamp>1269853500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Ya, I pay for the "Extreme" Roadrunner in my area, which gives me a better upstream for my telecommuting wife.  Supposedly 10M/1M, but it is more like 3M/768k; most of this is due to really high latency and dropped packets.  When it works it works, so I guess by this guy's definition I get my 10/1, just as long as you don't count the packet loss...</htmltext>
<tokentext>Ya I pay for the " Extreme " Roadrunner in my area .
which gives me a better upstream for my telecommuting wife .
supposedly 10M/1M but it is more like 3M/768k , most of this is due to really high latency and dropped packets .
When it works it works , so I guess by this guy 's definition I get my 10/1 , just as long as you do n't count the packet loss.. .</tokentext>
<sentencetext>Ya I pay for the "Extreme" Roadrunner in my area.
which gives me a better upstream for my telecommuting wife.
supposedly 10M/1M but it is more like 3M/768k, most of this is due to really high latency and dropped packets.
When it works it works, so I guess by this guy's definition I get my 10/1, just as long as you don't count the packet loss...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661254</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216</id>
	<title>How Much</title>
	<author>Anonymous</author>
	<datestamp>1269895320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext>I wonder which ISP owns comScore.  Who got the worst rating?</htmltext>
<tokentext>I wonder which ISP owns comScore .
Who got the worst rating ?</tokentext>
<sentencetext>
I wonder which ISP owns comScore.
Who got the worst rating?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662546</id>
	<title>Re:FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269858240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I've found that a good way of dealing with the numbers politicians spew out is to take both sides and average them out...</p><p><a href="http://www.politifact.com/truth-o-meter/statements/2009/aug/21/orrin-hatch/who-are-uninsured-hatchs-take/" title="politifact.com" rel="nofollow">http://www.politifact.com/truth-o-meter/statements/2009/aug/21/orrin-hatch/who-are-uninsured-hatchs-take/</a> [politifact.com]</p><p><a href="http://www.politifact.com/truth-o-meter/statements/2009/aug/18/barack-obama/number-those-without-health-insurance-about-46-mil/" title="politifact.com" rel="nofollow">http://www.politifact.com/truth-o-meter/statements/2009/aug/18/barack-obama/number-those-without-health-insurance-about-46-mil/</a> [politifact.com]</p><p>Hey, look, reality is actually in the middle.  Go fig.</p></htmltext>
<tokentext>I 've found that a good way of dealing with the numbers politicians spew out is to take both sides and average them out...http : //www.politifact.com/truth-o-meter/statements/2009/aug/21/orrin-hatch/who-are-uninsured-hatchs-take/ [ politifact.com ] http : //www.politifact.com/truth-o-meter/statements/2009/aug/18/barack-obama/number-those-without-health-insurance-about-46-mil/ [ politifact.com ] Hey , look , reality is actually in the middle .
Go fig .</tokentext>
<sentencetext>I've found that a good way of dealing with the numbers politicians spew out is to take both sides and average them out...http://www.politifact.com/truth-o-meter/statements/2009/aug/21/orrin-hatch/who-are-uninsured-hatchs-take/ [politifact.com]http://www.politifact.com/truth-o-meter/statements/2009/aug/18/barack-obama/number-those-without-health-insurance-about-46-mil/ [politifact.com]Hey, look, reality is actually in the middle.
Go fig.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31676566</id>
	<title>Re:FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269940440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A little off topic, but re private parties levying taxes... U.S. telcos have "levied" the "FCC Approved Customer Line Charge" for years. Most consumers think it's a tax, and BigTel does nothing to dissuade them of that. It's a surcharge - pure and simple guaranteed margin. Would that be an appropriate example?</p></htmltext>
<tokentext>A little off topic , but re private parties levying taxes... U.S. telcos have " levied " the " FCC Approved Customer Line Charge " for years .
Most consumers think it 's a tax , and BigTel does nothing to dissuade them of that .
It 's a surcharge - pure and simple guaranteed margin .
Would that be an appropriate example ?</tokentext>
<sentencetext>A little off topic, but re private parties levying taxes... U.S. telcos have "levied" the "FCC Approved Customer Line Charge" for years.
Most consumers think it's a tax, and BigTel does nothing to dissuade them of that.
It's a surcharge - pure and simple guaranteed margin.
Would that be an appropriate example?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662702</id>
	<title>Re:FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269858900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>As long as we're throwing anecdotes around here, I might as well tell you what my connection scored.</p><p>What I pay for:<br>AT&amp;T DSL, $30/month<br>D/L: 1.5Mbps up to 3.0Mbps (of course, they advertise the 3.0Mbps speed)<br>U/L: 768Kbps</p><p>What I get, according to the FCC speed tests (both of them!):<br>D/L: ~750Kbps<br>U/L: ~500Kbps<br>-&gt; Interesting note: this is what I seem to actually get everywhere. Their speed tests have confirmed problems I've been dealing with for some time now.</p><p>I used to have a pre-Yahoo 1.5Mbps/768Kbps connection, and when I upgraded, it never went any faster (but the price dropped $5, due to me getting rid of the old grandfathered-in pricing plan). It was supposed to be 3.0M/768k after that. The best D/L speed I ever saw was around 1.5Mbps (its theoretical level, not the real-life 80% level). Then about 3 months ago, the speed dropped to half of what it was, and hasn't recovered. Their customer service is nearly useless (they could always be used as a bad example, but other than that...). I hope this recent FCC pressure at least gets them scared enough to fix their shit and give me what I'm paying for. Barring that, the state AG might like to know about widespread fraud...</p></htmltext>
<tokentext>As long as we 're throwing anecdotes around here , I might as well tell you what my connection scored.What I pay for : AT&amp;T DSL , $ 30/monthD/L : 1.5Mbps up to 3.0Mbps ( of course , they advertise the 3.0Mbps speed ) U/L : 768KbpsWhat I get , according to the FCC speed tests ( both of them !
) : D/L : ~ 750KbpsU/L : ~ 500Kbps- &gt; Interesting note : this is what I seem to actually get everywhere .
Their speed tests have confirmed problems I 've been dealing with for some time now.I used to have a pre-Yahoo 1.5Mbps/768Kbps connection , and when I upgraded , it never went any faster ( but the price dropped $ 5 , due to me getting rid of the old grandfathered-in pricing plan ) .
It was supposed to be 3.0M/768k after that .
The best D/L speed I ever saw was around 1.5Mbps ( its theoretical level , not the real-life 80 % level ) .
Then about 3 months ago , the speed dropped to half of what it was , and has n't recovered .
Their customer service is nearly useless ( they could always be used as a bad example , but other than that... ) .
I hope this recent FCC pressure at least gets them scared enough to fix their shit and give me what I 'm paying for .
Barring that , the state AG might like to know about widespread fraud.. .</tokentext>
<sentencetext>As long as we're throwing anecdotes around here, I might as well tell you what my connection scored.What I pay for:AT&amp;T DSL, $30/monthD/L: 1.5Mbps up to 3.0Mbps (of course, they advertise the 3.0Mbps speed)U/L: 768KbpsWhat I get, according to the FCC speed tests (both of them!
):D/L: ~750KbpsU/L: ~500Kbps-&gt; Interesting note: this is what I seem to actually get everywhere.
Their speed tests have confirmed problems I've been dealing with for some time now.I used to have a pre-Yahoo 1.5Mbps/768Kbps connection, and when I upgraded, it never went any faster (but the price dropped $5, due to me getting rid of the old grandfathered-in pricing plan).
It was supposed to be 3.0M/768k after that.
The best D/L speed I ever saw was around 1.5Mbps (its theoretical level, not the real-life 80% level).
Then about 3 months ago, the speed dropped to half of what it was, and hasn't recovered.
Their customer service is nearly useless (they could always be used as a bad example, but other than that...).
I hope this recent FCC pressure at least gets them scared enough to fix their shit and give me what I'm paying for.
Barring that, the state AG might like to know about widespread fraud...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661270</id>
	<title>Re:How Much</title>
	<author>Anonymous</author>
	<datestamp>1269895560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Think about it; What other ISP's name starts with com?</p></htmltext>
<tokentext>Think about it ; What other ISP 's name starts with com ?</tokentext>
<sentencetext>Think about it; What other ISP's name starts with com?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661650</id>
	<title>Re:Wait for ACK?</title>
	<author>Anonymous</author>
	<datestamp>1269854340000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>by the way, I'm not in the U.S., I actually get what I pay for.</p></div><p>You might have worded that a little bit better.  Canada and Australia have <b>worse</b> broadband networks than the US does.  Most US users on DSL get what they pay for.  Cable networks may or may not deliver the promised performance at all hours, but that's simply the nature of the beast.  In my area Time Warner provides 10MBit/s service on a DOCSIS 1.1 network.  That means that just four customers are enough to max out a node that serves dozens to hundreds of customers.</p>
	</htmltext>
<tokentext>by the way , I 'm not in the U.S. , I actually get what I pay for.You might have worded that a little bit better .
Canada and Australia have worse broadband networks than the US does .
Most US users on DSL get what they pay for .
Cable networks may or may not deliver the promised performance at all hours , but that 's simply the nature of the beast .
In my area Time Warner provides 10MBit/s service on a DOCSIS 1.1 network .
That means that just four customers are enough to max out a node that serves dozens to hundreds of customers .</tokentext>
<sentencetext>by the way, I'm not in the U.S., I actually get what I pay for.You might have worded that a little bit better.
Canada and Australia have worse broadband networks than the US does.
Most US users on DSL get what they pay for.
Cable networks may or may not deliver the promised performance at all hours, but that's simply the nature of the beast.
In my area Time Warner provides 10MBit/s service on a DOCSIS 1.1 network.
That means that just four customers are enough to max out a node that serves dozens to hundreds of customers.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661518</id>
	<title>Need new ISP</title>
	<author>irn</author>
	<datestamp>1269853860000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>I guess I need a new ISP.  According to the article, roughly 17% of the population at any given time is getting MORE bandwidth from their ISP than what they're paying for?  Is that right?  Did I misread the article? I'm sure comScore would have at least put me in a much lower tier than what I pay for.  Something here doesn't seem right.</htmltext>
<tokentext>I guess I need a new ISP .
According to the article there is roughly 17 % of the population who at any given time is getting MORE bandwidth from their ISP than what they 're paying for ?
Is that right ?
Did I misread the article ?
I 'm sure comScore would have at least put me in a much lower tier than what I pay for .
Something here does n't seem right .</tokentext>
<sentencetext>I guess I need a new ISP.
According to the article there is roughly 17% of the population who at any given time is getting MORE bandwidth from their ISP than what they're paying for?
Is that right?
Did I misread the article?
I'm sure comScore would have at least put me in a much lower tier than what I pay for.
Something here doesn't seem right.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662206</id>
	<title>a few somewhat valid points, but mostly garbage.</title>
	<author>azmodean+1</author>
	<datestamp>1269856800000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>So here is the outline of their claims, with responses.<br>Data gathering errors<br>
&nbsp; &nbsp; Only one TCP connection is used<br>
&nbsp; &nbsp; &nbsp; &nbsp; Basically valid, it's a pretty rare net activity nowadays that actually maxes out the connection by itself, no idea if the promises the ISPs make contractually include any wording about per-connection performance.<br>
&nbsp; &nbsp; Client-server delay is variable<br>
&nbsp; &nbsp; &nbsp; &nbsp; Tough, this is a reality of how the network operates, if an ISP promises speed X, they need to invest in the infrastructure necessary to deliver speed X.<br>
&nbsp; &nbsp; Participants&rsquo; computers may be resource constrained<br>
&nbsp; &nbsp; &nbsp; &nbsp; Outside of listing minimum requirements for client computers, this is also a reality of how the customer will perceive network performance, and this is the important measure.<br>
&nbsp; &nbsp; Test traffic may conflict with home traffic<br>
&nbsp; &nbsp; &nbsp; &nbsp; semi valid-ish point, but I'm skeptical that it has a noticeable impact.<br>
&nbsp; &nbsp; Decimal math is incorrect<br>
&nbsp; &nbsp; &nbsp; &nbsp; This one seems like utter crap, they seem to be assuming that the testing company is saying MB and meaning MiB in one case, but that they say MB and really mean MB in another case.  It's far more likely that they are saying MB and they mean MiB in both cases, in which case this point is moot.<br>
&nbsp; &nbsp; Protocol overhead is unaccounted for<br>
&nbsp; &nbsp; &nbsp; &nbsp; Another semi-valid point, but they claim the testers have the responsibility to make the ISPs' numbers look better; why isn't it instead the ISPs' responsibility to make their numbers more meaningful?  IIRC, speeds are often advertised on the basis of file downloads, which means the protocol overhead should NOT be accounted for.<br>Data interpretation errors<br>
&nbsp; &nbsp; &nbsp; Purchased speed tiers are incorrectly identified<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; This is probably the most significant claim, if true.  However it's also the most wishy-washy of all the claims, going so far as to specifically state that it's the opinion of the company that it is even happening, rather than a factual claim:<br>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; "NetForecast estimates that it is highly likely that comScore incorrectly places many panelists' PCs into higher tiers than the subscribers purchased."</p><p>Overall, the report looks like a tiny bit of valid criticism of the testing methodology wrapped in a whole lot of weaseling about what the ISP should be expected to provide, and always siding with the ISP.  The end result for me is that the validity of the entire report is fatally undermined by the obvious grasping at straws being done, and the impression that I get that if there were any errors in the opposite direction, they will not be reported.</p></htmltext>
<tokentext>So here is the outline of their claims , with responses.Data gathering errors     Only one TCP connection is used         Basically valid , it 's a pretty rare net activity nowadays that actually maxes out the connection by itself , no idea if the promises the ISPs make contractually include any wording about per-connection performance .
    Client-server delay is variable         Tough , this is a reality of how the network operates , if an ISP promises speed X , they need to invest in the infrastructure necessary to deliver speed X .
    Participants ' computers may be resource constrained         Outside of listing minimum requirements for client computers , this is also a reality of how the customer will perceive network performance , and this is the important measure .
    Test traffic may conflict with home traffic         semi valid-ish point , but I 'm skeptical that it has a noticeable impact .
    Decimal math is incorrect         This one seems like utter crap , they seem to be assuming that the testing company is saying MB and meaning MiB in one case , but that they say MB and really mean MB in another case .
It 's far more likely that they are saying MB and they mean MiB in both cases , in which case this point is moot .
    Protocol overhead is unaccounted for         Another semi-valid point , but they claim the testers have the responsibility to make the ISPs ' numbers look better , why is n't it instead the ISPs ' responsibility to make their numbers more meaningful ?
IIRC , speeds are often advertised on the basis of file downloads , which means the protocol overhead should NOT be accounted for.Data interpretation errors       Purchased speed tiers are incorrectly identified           This is probably the most significant claim , if true .
However it 's also the most wishy-washy of all the claims , going so far as to specifically state that it 's the opinion of the company that it is even happening , rather than a factual claim :           " NetForecast estimates that it is highly likely that comScore incorrectly places many panelists ' PCs into higher tiers than the subscribers purchased .
" Overall , the report looks like a tiny bit of valid criticism of the testing methodology wrapped in a whole lot of weaseling about what the ISP should be expected to provide , and always siding with the ISP .
The end result for me is that the validity of the entire report is fatally undermined by the obvious grasping at straws being done , and the impression that I get that if there were any errors in the opposite direction , they would not be reported .</tokentext>
<sentencetext>So here is the outline of their claims, with responses.Data gathering errors
    Only one TCP connection is used
        Basically valid, it's a pretty rare net activity nowadays that actually maxes out the connection by itself, no idea if the promises the ISPs make contractually include any wording about per-connection performance.
    Client-server delay is variable
        Tough, this is a reality of how the network operates, if an ISP promises speed X, they need to invest in the infrastructure necessary to deliver speed X.
    Participants’ computers may be resource constrained
        Outside of listing minimum requirements for client computers, this is also a reality of how the customer will perceive network performance, and this is the important measure.
    Test traffic may conflict with home traffic
        semi valid-ish point, but I'm skeptical that it has a noticeable impact.
    Decimal math is incorrect
        This one seems like utter crap, they seem to be assuming that the testing company is saying MB and meaning MiB in one case, but that they say MB and really mean MB in another case.
It's far more likely that they are saying MB and they mean MiB in both cases, in which case this point is moot.
    Protocol overhead is unaccounted for
        Another semi-valid point, but they claim the testers have the responsibility to make the ISPs' numbers look better, why isn't it instead the ISPs' responsibility to make their numbers more meaningful?
IIRC, speeds are often advertised on the basis of file downloads, which means the protocol overhead should NOT be accounted for.Data interpretation errors
      Purchased speed tiers are incorrectly identified
          This is probably the most significant claim, if true.
However it's also the most wishy-washy of all the claims, going so far as to specifically state that it's the opinion of the company that it is even happening, rather than a factual claim:
          "NetForecast estimates that it is highly likely that comScore incorrectly places many panelists' PCs into higher tiers than the subscribers purchased.
"Overall, the report looks like a tiny bit of valid criticism of the testing methodology wrapped in a whole lot of weaseling about what the ISP should be expected to provide, and always siding with the ISP.
The end result for me is that the validity of the entire report is fatally undermined by the obvious grasping at straws being done, and the impression that I get that if there were any errors in the opposite direction, they would not be reported.</sentencetext>
</comment>
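The "decimal math is incorrect" point debated above comes down to MB (10^6 bytes) versus MiB (2^20 bytes). A minimal Python sketch, with illustrative numbers rather than comScore's actual arithmetic, of how mixing the two conventions shifts a reported speed by roughly 4.6%:

```python
# Sketch of the MB-vs-MiB dispute: counting bytes in binary megabytes
# (MiB, 2**20 bytes) but reporting with decimal math (10**6) under-reports
# throughput by a constant factor. All numbers are illustrative.

MIB = 2**20  # binary megabyte: 1,048,576 bytes
MB = 10**6   # decimal megabyte: 1,000,000 bytes

bytes_transferred = 10 * MIB  # suppose the client actually moved 10 MiB
seconds = 8.0

# Correct decimal megabits per second:
mbps_correct = bytes_transferred * 8 / 1e6 / seconds

# If a tool divides the byte count by 2**20 first, then treats the result
# as decimal MB, it under-reports:
mbps_binary = (bytes_transferred / MIB) * MB * 8 / 1e6 / seconds

error = 1 - mbps_binary / mbps_correct  # constant ~4.6% shortfall
```

Whichever convention comScore actually used, the point is that the discrepancy is a fixed scale factor, not a variable measurement error.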
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661752</id>
	<title>Re:FCC is faulty?</title>
	<author>commodore64\_love</author>
	<datestamp>1269854760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Private companies don't have access to my paycheck.</p><p>For example when Comcast yanked TCM off my cable without notice (and in violation of FCC rules), then sent me some paperwork that I could get TCM back by getting a "free" digital converter box at $5 per month rental (times 3 sets), I mailed them a photo of my middle digit attached to a formal complaint to the FCC, and asked them to cancel my cable effective the day they yanked TCM w/o telling me.</p><p>Now when Comcast mails me a letter asking me to "come back" I just laugh.<br>Good luck trying to laugh at the IRS, FCC, or any other government desiring your money.</p></htmltext>
<tokentext>Private companies do n't have access to my paycheck.For example when Comcast yanked TCM off my cable without notice ( and in violation of FCC rules ) , then sent me some paperwork that I could get TCM back by getting a " free " digital converter box at $ 5 per month rental ( times 3 sets ) , I mailed them a photo of my middle digit attached to a formal complaint to the FCC , and asked them to cancel my cable effective the day they yanked TCM w/o telling me.Now when Comcast mails me a letter asking me to " come back " I just laugh.Good luck trying to laugh at the IRS , FCC , or any other government desiring your money .</tokentext>
<sentencetext>Private companies don't have access to my paycheck.For example when Comcast yanked TCM off my cable without notice (and in violation of FCC rules), then sent me some paperwork that I could get TCM back by getting a "free" digital converter box at $5 per month rental (times 3 sets), I mailed them a photo of my middle digit attached to a formal complaint to the FCC, and asked them to cancel my cable effective the day they yanked TCM w/o telling me.Now when Comcast mails me a letter asking me to "come back" I just laugh.Good luck trying to laugh at the IRS, FCC, or any other government desiring your money.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664618</id>
	<title>Re:FCC is faulty?</title>
	<author>Red Flayer</author>
	<datestamp>1269869160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Never mind the fact that Hatch cleverly ignores the fact that he's double-dipping on the exclusions.</htmltext>
<tokentext>Never mind the fact that Hatch cleverly ignores the fact that he 's double-dipping on the exclusions .</tokentext>
<sentencetext>Never mind the fact that Hatch cleverly ignores the fact that he's double-dipping on the exclusions.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662978</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661378</id>
	<title>Learn about statistics - both of you</title>
	<author>guruevi</author>
	<datestamp>1269896280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Both sides need to learn more about statistics.</p><p>The report fails to mention that across a large enough population, the results will be more-or-less correct within a certain percentage point because as he mentions, some people will test with a lot of bandwidth available at a certain point but others will test with their available bandwidth constricted. Overall, out of a large enough population the outliers are washed away.</p><p>comScore needs to realize that correlation != causation. It's not because your bandwidth correlates with other users' high-bandwidth plans, that it is caused by you actually buying the plan. But even then, even in the report the statistics show that it evens out pretty well with only a small percentage error.</p><p>Of course this brief report reeks more like paid research. Of course comScore measures the users' experience connecting to large-bandwidth centers like Akamai which has a lot of large sites on it and it doesn't accurately measure what the provider offers in the last mile. I don't care that I actually get my 10Mbps connecting to my neighborhood (unless a bunch of my neighbors actually host the Linux-ISO torrent I want) I care about on average getting maybe 50\% of what I pay for which I usually don't get (I get closer to 1-10\% depending on what I'm doing). comScore accurately reflects the poor status of broadband in this metropolitan area - dual-ISDN speeds (early 90's) on the best high-tier packages money can buy in this area. The only alternative is DSL which is horribly outdated.</p></htmltext>
<tokentext>Both sides need to learn more about statistics.The report fails to mention that across a large enough population , the results will be more-or-less correct within a certain percentage point because as he mentions , some people will test with a lot of bandwidth available at a certain point but others will test with their available bandwidth constricted .
Overall , out of a large enough population the outliers are washed away.comScore needs to realize that correlation ! = causation .
It 's not because your bandwidth correlates with other users ' high-bandwidth plans , that it is caused by you actually buying the plan .
But even then , even in the report the statistics show that it evens out pretty well with only a small percentage error.Of course this brief report reeks more like paid research .
Of course comScore measures the users ' experience connecting to large-bandwidth centers like Akamai which has a lot of large sites on it and it does n't accurately measure what the provider offers in the last mile .
I do n't care that I actually get my 10Mbps connecting to my neighborhood ( unless a bunch of my neighbors actually host the Linux-ISO torrent I want ) I care about on average getting maybe 50 \ % of what I pay for which I usually do n't get ( I get closer to 1-10 \ % depending on what I 'm doing ) .
comScore accurately reflects the poor status of broadband in this metropolitan area - dual-ISDN speeds ( early 90 's ) on the best high-tier packages money can buy in this area .
The only alternative is DSL which is horribly outdated .</tokentext>
<sentencetext>Both sides need to learn more about statistics.The report fails to mention that across a large enough population, the results will be more-or-less correct within a certain percentage point because as he mentions, some people will test with a lot of bandwidth available at a certain point but others will test with their available bandwidth constricted.
Overall, out of a large enough population the outliers are washed away.comScore needs to realize that correlation != causation.
It's not because your bandwidth correlates with other users' high-bandwidth plans, that it is caused by you actually buying the plan.
But even then, even in the report the statistics show that it evens out pretty well with only a small percentage error.Of course this brief report reeks more like paid research.
Of course comScore measures the users' experience connecting to large-bandwidth centers like Akamai which has a lot of large sites on it and it doesn't accurately measure what the provider offers in the last mile.
I don't care that I actually get my 10Mbps connecting to my neighborhood (unless a bunch of my neighbors actually host the Linux-ISO torrent I want) I care about on average getting maybe 50\% of what I pay for which I usually don't get (I get closer to 1-10\% depending on what I'm doing).
comScore accurately reflects the poor status of broadband in this metropolitan area - dual-ISDN speeds (early 90's) on the best high-tier packages money can buy in this area.
The only alternative is DSL which is horribly outdated.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662324</id>
	<title>Re:FCC is faulty?</title>
	<author>cmacb</author>
	<datestamp>1269857340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Obviously you don't understand the methodology here.</p><p>The failure of government is meant to convince us that we need to spend more money on government so they can do better testing to convince us we need to spend more money on government.</p></htmltext>
<tokentext>Obviously you do n't understand the methodology here.The failure of government is meant to convince us that we need to spend more money on government so they can do better testing to convince us we need to spend more money on government .</tokentext>
<sentencetext>Obviously you don't understand the methodology here.The failure of government is meant to convince us that we need to spend more money on government so they can do better testing to convince us we need to spend more money on government.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664278</id>
	<title>I wish...</title>
	<author>Anonymous</author>
	<datestamp>1269867360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I wish I could get the horrible low speeds these poor people are getting. I'm in Australia, feel free to commence your laughter now.</p></htmltext>
<tokentext>I wish I could get the horrible low speeds these poor people are getting .
I 'm in Australia , feel free to commence your laughter now .</tokentext>
<sentencetext>I wish I could get the horrible low speeds these poor people are getting.
I'm in Australia, feel free to commence your laughter now.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661134</id>
	<title>So?</title>
	<author>Anonymous</author>
	<datestamp>1269894900000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Government policy decisions made based on inaccurate or misleading data?  Surely you have heard of the Iraq war, right?</p><p>Be happy that in this case it is not complete fabrication.</p></htmltext>
<tokentext>Government policy decisions made based on inaccurate or misleading data ?
Surely you have heard of the Iraq war , right ? Be happy that in this case it is not complete fabrication .</tokentext>
<sentencetext>Government policy decisions made based on inaccurate or misleading data?
Surely you have heard of the Iraq war, right?Be happy that in this case it is not complete fabrication.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662314</id>
	<title>Re:FCC is faulty?</title>
	<author>blair1q</author>
	<datestamp>1269857280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They have jobs.</p><p>Now compare with your lot.  All of you.</p></htmltext>
<tokentext>They have jobs.Now compare with your lot .
All of you .</tokentext>
<sentencetext>They have jobs.Now compare with your lot.
All of you.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428</id>
	<title>Re:Wait for ACK?</title>
	<author>topham</author>
	<datestamp>1269853320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>TCP/IP doesn't wait for the ACK. It keeps sending until the Window is full, or the ACK is received. If the Window fills it will wait until the ACK is received (or timeout and retry, etc).</p><p>If the test is trying to automatically place the users in specific Tiers then there could be a problem, however the rest of the issues are mostly a red herring. I use Speedtest.net and can readily attest to its general accuracy, and I seriously doubt any other services are all that different.</p><p>By the way, I'm not in the U.S., I actually get what I pay for.</p></htmltext>
<tokentext>TCP/IP does n't wait for the ACK .
It keeps sending until the Window is full , or the ACK is received .
If the Window fills it will wait until the ACK is received ( or timeout and retry , etc ) .If the test is trying to automatically place the users in specific Tiers then there could be a problem , however the rest of the issues are mostly a red herring .
I use Speedtest.net and can readily attest to its general accuracy , and I seriously doubt any other services are all that different.By the way , I 'm not in the U.S. , I actually get what I pay for .</tokentext>
<sentencetext>TCP/IP doesn't wait for the ACK.
It keeps sending until the Window is full, or the ACK is received.
If the Window fills it will wait until the ACK is received (or timeout and retry, etc).If the test is trying to automatically place the users in specific Tiers then there could be a problem, however the rest of the issues are mostly a red herring.
I use Speedtest.net and can readily attest to its general accuracy, and I seriously doubt any other services are all that different.By the way, I'm not in the U.S., I actually get what I pay for.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661122</parent>
</comment>
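The windowing behavior described in the comment above also puts a hard ceiling on a single TCP connection: at most one receive window of unacknowledged data can be in flight per round trip. A rough sketch, using made-up window and RTT values for illustration:

```python
# Upper bound on single-connection TCP throughput: the sender may have at
# most `window_bytes` of unacknowledged data in flight, so each round trip
# moves at most one window, regardless of the line's rated speed.

def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Ceiling on one TCP connection's throughput, in Mbit/s."""
    return window_bytes * 8 / rtt_seconds / 1e6

# A classic 64 KiB window over an 80 ms path caps out around 6.5 Mbit/s,
# below many advertised broadband tiers; window scaling or parallel
# connections are needed to fill a faster line.
ceiling = tcp_throughput_ceiling_mbps(64 * 1024, 0.080)
```

This is one concrete reason a single-connection test can understate a line's rated speed even when nothing is wrong with the line.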
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661690</id>
	<title>HTTP measurement is not enough</title>
	<author>Anonymous</author>
	<datestamp>1269854520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In these times of traffic shaping and deep packet inspection, test results are always awesome. I can get more than 20Mbit/s on tests with my 6Mbit/s rate comcast line. I can see 2MB sustained download speed over http. I can also see crippled 256kbit/s SSH transfer rate, due to traffic shaping, and through-the-roof latency (say 400ms) on anything that is not going to port 80. Tests always look good, but for all practical purposes, this connection is crap.</p></htmltext>
<tokentext>In these times of traffic shaping and deep packet inspection , test results are always awesome .
I can get more than 20Mbit/s on tests with my 6Mbit/s rate comcast line .
I can see 2MB sustained download speed over http .
I can also see crippled 256kbit/s SSH transfer rate , due to traffic shaping , and through-the-roof latency ( say 400ms ) on anything that is not going to port 80 .
Tests always look good , but for all practical purposes , this connection is crap .</tokentext>
<sentencetext>In these times of traffic shaping and deep packet inspection, test results are always awesome.
I can get more than 20Mbit/s on tests with my 6Mbit/s rate comcast line.
I can see 2MB sustained download speed over http.
I can also see crippled 256kbit/s SSH transfer rate, due to traffic shaping, and through-the-roof latency (say 400ms) on anything that is not going to port 80.
Tests always look good, but for all practical purposes, this connection is crap.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661254</id>
	<title>comScore got it more or less right</title>
	<author>Spazmania</author>
	<datestamp>1269895500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>comScore got the data more or less right. The OP's main complaint seems to be that the speed is under-reported because packet loss causes the TCP session they used to slow down. Guess what? Packet loss causes the TCP session to slow down. Customers on ISPs with noticeable loss rates experience slower performance than the line's rated speed. Hello!</p></htmltext>
<tokentext>comScore got the data more or less right .
The OP 's main complaint seems to be that the speed is under-reported because packet loss causes the TCP session they used to slow down .
Guess what ?
Packet loss causes the TCP session to slow down .
Customers on ISPs with noticeable loss rates experience slower performance than the line 's rated speed .
Hello !</tokentext>
<sentencetext>comScore got the data more or less right.
The OP's main complaint seems to be that the speed is under-reported because packet loss causes the TCP session they used to slow down.
Guess what?
Packet loss causes the TCP session to slow down.
Customers on ISPs with noticeable loss rates experience slower performance than the line's rated speed.
Hello!</sentencetext>
</comment>
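The comment above's point that loss slows TCP has a well-known rough quantification, the Mathis et al. approximation: steady-state throughput scales as MSS / (RTT · √loss). A sketch with illustrative numbers (the constant 1.22 and the inputs are textbook assumptions, not figures from the report):

```python
import math

# Mathis et al. approximation for steady-state TCP throughput under random
# loss: rate ~ (MSS / RTT) * C / sqrt(p), with C ~ 1.22.
# All inputs below are illustrative, not measurements from the report.

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate loss-limited TCP throughput, in Mbit/s."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate) / 1e6

# Even 0.1% loss on a 50 ms path limits one connection to roughly 9 Mbit/s,
# no matter how fast the access line is rated:
rate = mathis_throughput_mbps(1460, 0.050, 0.001)
```

In other words, an ISP with a noticeable loss rate really does deliver less than the rated speed to a TCP session, which is what the test observes.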
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420</id>
	<title>Re:FCC is faulty?</title>
	<author>Anonymous</author>
	<datestamp>1269853260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>And we all know employees of private companies are infallible.</p></htmltext>
<tokentext>And we all know employees of private companies are infallible .</tokentext>
<sentencetext>And we all know employees of private companies are infallible.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661366</id>
	<title>Sounds like error in the 'good' direction</title>
	<author>ElSupreme</author>
	<datestamp>1269896220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>At least they don't have data saying that our speeds are faster than they really are. This way the problem of 3rd world net speeds can be addressed.<br>
<br>
And really the argument 'these scores are low because X slowed them down' is really not sound. If X really exists then the connection is slowed in real life. I would bet this is the result of what people experience over the net. And it should be plenty fine to help determine what needs to be done to get back in the same ballpark as the rest of the industrialized world.</htmltext>
<tokentext>At least they do n't have data saying that our speeds are faster than they really are .
This way the problem of 3rd world net speeds can be addressed .
And really the argument 'these scores are low because X slowed them down ' is really not sound .
If X really exists then the connection is slowed in real life .
I would bet this is the result of what people experience over the net .
And it should be plenty fine to help determine what needs to be done to get back in the same ballpark as the rest of the industrialized world .</tokentext>
<sentencetext>At least they don't have data saying that our speeds are faster than they really are.
This way the problem of 3rd world net speeds can be addressed.
And really the argument 'these scores are low because X slowed them down' is really not sound.
If X really exists then the connection is slowed in real life.
I would bet this is the result of what people experience over the net.
And it should be plenty fine to help determine what needs to be done to get back in the same ballpark as the rest of the industrialized world.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661702</id>
	<title>Complicated?</title>
	<author>rickb928</author>
	<datestamp>1269854580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>As if dslreports.com isn't useful?</p><p>Sheesh</p></htmltext>
<tokentext>As if dslreports.com is n't useful ? Sheesh</tokentext>
<sentencetext>As if dslreports.com isn't useful?Sheesh</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661122</id>
	<title>Wait for ACK?</title>
	<author>Anonymous</author>
	<datestamp>1269894840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Waiting for an ACK before transmitting the next packet doesn't seem like a way of measuring bandwidth. Sounds like a measure of bandwidth + latency.</htmltext>
<tokentext>Waiting for an ACK before transmitting the next packet does n't seem like a way of measuring bandwidth .
Sounds like a measure of bandwidth + latency .</tokentext>
<sentencetext>Waiting for an ACK before transmitting the next packet doesn't seem like a way of measuring bandwidth.
Sounds like a measure of bandwidth + latency.</sentencetext>
</comment>
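The intuition in the comment above can be made concrete: a strict send-one-packet, wait-for-ACK scheme measures a blend of bandwidth and round-trip latency, not raw bandwidth. A small sketch with made-up numbers, not any real test tool's methodology:

```python
# Effective rate of a stop-and-wait transfer: each packet takes its
# serialization time on the wire PLUS one round trip waiting for the ACK.
# All numbers are illustrative.

def stop_and_wait_mbps(pkt_bytes: int, line_mbps: float, rtt_s: float) -> float:
    """Measured rate if the sender waits for an ACK after every packet."""
    serialize_s = pkt_bytes * 8 / (line_mbps * 1e6)  # time on the wire
    return pkt_bytes * 8 / (serialize_s + rtt_s) / 1e6

# A 1500-byte packet on a 10 Mbit/s line with 30 ms RTT: the wire time is
# only 1.2 ms, so the ACK wait dominates and the measured rate collapses.
rate = stop_and_wait_mbps(1500, 10.0, 0.030)  # far below 10 Mbit/s
```

The longer the path latency, the further such a measurement falls below the line's actual capacity.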
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664218</id>
	<title>Re:Wait for ACK?</title>
	<author>JWSmythe</author>
	<datestamp>1269867180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
&nbsp; &nbsp; I prefer the speedtest that you can download and put on your own servers.</p><p>
&nbsp; &nbsp; There are far too many unknowns when testing from point A (your PC) to point B (the speedtest server).  Because it's a public speedtest server, people are beating on it all the time.  Is the server capable of handling it?  The uplink?  Is there a congestion problem between you and them?  mtr helps to give a better picture of potential problems during the test.</p><p>
&nbsp; &nbsp; I have access to servers on Verizon FiOS and BrightHouse commercial lines.  That covers most of the customers in my area.  I know the bandwidth is as advertised on both circuits, and I know what their utilization is (measured at the switch attached to the uplink).  From customer locations, sometimes I get the advertised rates, sometimes I don't.  There is absolutely *HUGE* fluctuation between what I see on either of my hosts, and the public servers.  A 10Mb/s down line (for example) will show just about that from either of my own servers.  From the public servers, I'll see anything from 1Mb/s to 15Mb/s.  I know some providers fudge their numbers with QoS, and even unlimited traffic that they know is going to speedtest servers.</p><p>
&nbsp; &nbsp; When I've done these tests for friends and customers, sometimes they're satisfied.  Sometimes it results in an angry call to the provider and a song and dance about intermittent problems, which suddenly get resolved within minutes (i.e., the line was capped wrong, and they fixed it).</p><p>
&nbsp; &nbsp;</p></htmltext>
<tokentext>    I prefer the speedtest that you can download and put on your own servers .
    There are far too many unknowns when testing from point A ( your PC ) to point B ( the speedtest server ) .
Because it 's a public speedtest server , people are beating on it all the time .
Is the server capable of handling it ?
The uplink ?
Is there a congestion problem between you and them ?
mtr helps to give a better picture of potential problems during the test .
    I have access to servers on Verizon FiOS and BrightHouse commercial lines .
That covers most of the customers in my area .
I know the bandwidth is as advertised on both circuits , and I know what their utilization is ( measured at the switch attached to the uplink ) .
From customer locations , sometimes I get the advertised rates , sometimes I do n't .
There is absolutely * HUGE * fluctuation between what I see on either of my hosts , and the public servers .
A 10Mb/s down line ( for example ) will show just about that from either of my own servers .
From the public servers , I 'll see anything from 1Mb/s to 15Mb/s I know some providers fudge their numbers with QoS , and even unlimited traffic that they know is going to speedtest servers .
    When I 've done these tests for friends and customers , sometimes they 're satisfied .
Sometimes it results in an angry call to the provider and a song and dance about intermittent problems , which suddenly get resolved within minutes ( i.e. , the line was capped wrong , and they fixed it )    </tokentext>
<sentencetext>
    I prefer the speedtest that you can download and put on your own servers.
    There are far too many unknowns when testing from point A (your PC) to point B (the speedtest server).
Because it's a public speedtest server, people are beating on it all the time.
Is the server capable of handling it?
The uplink?
Is there a congestion problem between you and them?
mtr helps to give a better picture of potential problems during the test.
    I have access to servers on Verizon FiOS and BrightHouse commercial lines.
That covers most of the customers in my area.
I know the bandwidth is as advertised on both circuits, and I know what their utilization is (measured at the switch attached to the uplink).
From customer locations, sometimes I get the advertised rates, sometimes I don't.
There is absolutely *HUGE* fluctuation between what I see on either of my hosts, and the public servers.
A 10Mb/s down line (for example) will show just about that from either of my own servers.
From the public servers, I'll see anything from 1Mb/s to 15Mb/s.  I know some providers fudge their numbers with QoS, and even unlimited traffic that they know is going to speedtest servers.
    When I've done these tests for friends and customers, sometimes they're satisfied.
Sometimes it results in an angry call to the provider and a song and dance about intermittent problems, which suddenly get resolved within minutes (i.e., the line was capped wrong, and they fixed it).
   </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662978</id>
	<title>Re:FCC is faulty?</title>
	<author>stephanruby</author>
	<datestamp>1269860280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Real numbers from <b>scientists</b> estimate the number as 5-15 million uninsured U.S. citizens. +9 million if you include illegal non-citizens/intruders.</p></div><p>Senator <a href="http://en.wikipedia.org/wiki/Orrin\_Hatch" title="wikipedia.org">Orrin Hatch</a> [wikipedia.org], a scientist??? </p><p>Besides, here are his exact words. He doesn't even dispute the figures (because he does admit that they're uninsured). He just questions the assumption that those uninsured people even want insurance (which is a separate argument of its own, if you want to argue that, argue that, don't change the freaking numbers).
</p><p><div class="quote"><p>"By the way, of that 47 million people, when you deduct the ones who could have insurance through their employers but don't, you deduct the 11 million that basically qualify for CHIP or Medicaid but don't realize it (and) are not enrolled, you deduct those who are over $75,000 a year in income but just won't purchase their own health insurance, and then 6 million people who are illegal aliens, my gosh, when you put that all together, it leaves about 15 million people. So we're going to throw out a system that works for 15 million people."</p> </div><p>And by the way, people without health insurance, whether they want that health insurance or not, do affect the rest of us. And unless Senator Hatch wants to repeal the law that prohibits ambulances and emergency rooms from turning away patients, emergency rooms are going to keep on closing down.</p></div>
	</htmltext>
<tokentext>Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. + 9 million if you include illegal non-citizens/intruders.Senator Orrin Hatch [ wikipedia.org ] , a scientist ? ? ?
Besides , here are his exact words .
He does n't even dispute the figures ( because he does admit that they 're uninsured ) .
He just questions the assumption that those uninsured people even want insurance ( which is a separate argument of its own , if you want to argue that , argue that , do n't change the freaking numbers ) .
" By the way , of that 47 million people , when you deduct the ones who could have insurance through their employers but do n't , you deduct the 11 million that basically qualify for CHIP or Medicaid but do n't realize it ( and ) are not enrolled , you deduct those who are over $ 75,000 a year in income but just wo n't purchase their own health insurance , and then 6 million people who are illegal aliens , my gosh , when you put that all together , it leaves about 15 million people .
So we 're going to throw out a system that works for 15 million people .
" And by the way , people without health insurance , whether they want that health insurance or not , do affect the rest of us .
And unless Senator Hatch wants to repeal the law that mandates ambulances and emergency rooms from turning away patients , emergency rooms are going to keep on closing down .</tokentext>
<sentencetext>Real numbers from scientists estimate the number as 5-15 million uninsured U.S. citizens. +9 million if you include illegal non-citizens/intruders.Senator Orrin Hatch [wikipedia.org], a scientist???
Besides, here are his exact words.
He doesn't even dispute the figures (because he does admit that they're uninsured).
He just questions the assumption that those uninsured people even want insurance (which is a separate argument of its own, if you want to argue that, argue that, don't change the freaking numbers).
"By the way, of that 47 million people, when you deduct the ones who could have insurance through their employers but don't, you deduct the 11 million that basically qualify for CHIP or Medicaid but don't realize it (and) are not enrolled, you deduct those who are over $75,000 a year in income but just won't purchase their own health insurance, and then 6 million people who are illegal aliens, my gosh, when you put that all together, it leaves about 15 million people.
So we're going to throw out a system that works for 15 million people.
" And by the way, people without health insurance, whether they want that health insurance or not, do affect the rest of us.
And unless Senator Hatch wants to repeal the law that mandates ambulances and emergency rooms from turning away patients, emergency rooms are going to keep on closing down.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31670132</id>
	<title>The Finger</title>
	<author>bahamuut</author>
	<datestamp>1269962340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"The Secure features a fingerprint scanner and a thermal sensor 'so that the finger alone, detached from the body, will still not give access to the memory stick's contents.'"<br>I'm sure that if someone went through the trouble of removing the finger to access the Secure Pro, then they'd go through the trouble of warming the dead finger up so that they could have access.  seems kinda gimmicky to me...</p></htmltext>
<tokenext>" The Secure features a fingerprint scanner and a thermal sensor 'so that the finger alone , detached from the body , will still not give access to the memory stick 's contents .
" I 'm sure that if someone went through the trouble of removing the finger to access the Secure Pro , then they 'd go through the trouble of warming the dead finger up so that they could have access .
seems kinda gimmicky to me.. .</tokentext>
<sentencetext>"The Secure features a fingerprint scanner and a thermal sensor 'so that the finger alone, detached from the body, will still not give access to the memory stick's contents.
"I'm sure that if someone went through the trouble of removing the finger to access the Secure Pro, then they'd go through the trouble of warming the dead finger up so that they could have access.
seems kinda gimmicky to me...</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661650
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661122
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664618
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662324
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31676566
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661752
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662314
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662124
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661462
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661254
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31666016
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662546
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662702
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664596
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661270
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_29_1821234_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664218
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661122
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661216
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662124
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661270
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661254
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661462
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664278
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661130
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661420
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661752
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31676566
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662324
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661666
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662702
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662978
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664618
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664596
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662546
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31666016
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31662314
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661366
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661518
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661122
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661428
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661650
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31664218
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_29_1821234.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_29_1821234.31661134
</commentlist>
</conversation>
