<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_10_12_2341240</id>
	<title>Intel Caught Cheating In 3DMark Benchmark</title>
	<author>kdawson</author>
	<datestamp>1255363380000</datestamp>
	<htmltext>EconolineCrush writes <i>"3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark. Intel's current Windows 7 drivers appear to be in direct violation, offloading the graphics workload onto the CPU to artificially inflate scores for the company's integrated graphics chipsets. The Tech Report <a href="http://techreport.com/articles.x/17732">lays out the evidence</a>, along with Intel's response, and illustrates that 3DMark scores don't necessarily track with game performance, anyway."</i></htmltext>
<tokentext>EconolineCrush writes " 3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark .
Intel 's current Windows 7 drivers appear to be in direct violation , offloading the graphics workload onto the CPU to artificially inflate scores for the company 's integrated graphics chipsets .
The Tech Report lays out the evidence , along with Intel 's response , and illustrates that 3DMark scores do n't necessarily track with game performance , anyway .
"</tokentext>
<sentencetext>EconolineCrush writes "3DMark Vantage developer Futuremark has clear guidelines for what sort of driver optimizations are permitted with its graphics benchmark.
Intel's current Windows 7 drivers appear to be in direct violation, offloading the graphics workload onto the CPU to artificially inflate scores for the company's integrated graphics chipsets.
The Tech Report lays out the evidence, along with Intel's response, and illustrates that 3DMark scores don't necessarily track with game performance, anyway.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728599</id>
	<title>Which would make sense...</title>
	<author>Anonymous</author>
	<datestamp>1255369740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>That was my first thought, too.</p><p>Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped. So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.</p><p>It reminds me of the days when Quake3.exe would give you higher benchmarks, but worse video, than Quack3.exe.</p></htmltext>
<tokentext>That was my first thought , too .
Here 's the thing , though : They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe , and much of that offloading was dropped .
So this is n't a general-purpose optimization , which would make sense -- it 's a targeted optimization , aimed at and enabled specifically for a benchmark , in order to get higher scores in said benchmark .
It reminds me of the days when Quake3.exe would give you higher benchmarks , but worse video , than Quack3.exe .</tokentext>
<sentencetext>That was my first thought, too.
Here's the thing, though: They took 3DMarkVantage.exe and renamed it to 3DMarkVintage.exe, and much of that offloading was dropped.
So this isn't a general-purpose optimization, which would make sense -- it's a targeted optimization, aimed at and enabled specifically for a benchmark, in order to get higher scores in said benchmark.
It reminds me of the days when Quake3.exe would give you higher benchmarks, but worse video, than Quack3.exe.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728561</id>
	<title>White Goodman Would be Proud</title>
	<author>Anonymous</author>
	<datestamp>1255369140000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>In true White Goodman fashion, cheating is something losers come up with to make them feel better about losing.</p><p>Is Intel the 500 lb. gorilla in chipsets?  Sure, and they got there by 'cheating.'  Which is winning.</p><p>Aint capitalism grand?</p><p>For all you losers who don't know who the great White Goodman is: <a href="http://www.imdb.com/title/tt0364725/" title="imdb.com">http://www.imdb.com/title/tt0364725/</a> [imdb.com]</p></htmltext>
<tokentext>In true White Goodman fashion , cheating is something losers come up with to make them feel better about losing .
Is Intel the 500 lb. gorilla in chipsets ?
Sure , and they got there by ' cheating . '
Which is winning .
Aint capitalism grand ?
For all you losers who do n't know who the great White Goodman is : http://www.imdb.com/title/tt0364725/ [ imdb.com ]</tokentext>
<sentencetext>In true White Goodman fashion, cheating is something losers come up with to make them feel better about losing.
Is Intel the 500 lb. gorilla in chipsets?
Sure, and they got there by 'cheating.'
Which is winning.
Aint capitalism grand?
For all you losers who don't know who the great White Goodman is: http://www.imdb.com/title/tt0364725/ [imdb.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728383</id>
	<title>Yes but</title>
	<author>Anonymous</author>
	<datestamp>1255367280000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>What does this have to do with Twitter (TM) ?</p><p>Can I do this on my iPod (TM) ?</p><p>These and other questions remain unanswered... like, where is my hat? DUDE! bowel</p></htmltext>
<tokentext>What does this have to do with Twitter ( TM ) ?
Can I do this on my iPod ( TM ) ?
These and other questions remain unanswered ... like , where is my hat ?
DUDE ! bowel</tokentext>
<sentencetext>What does this have to do with Twitter (TM)?
Can I do this on my iPod (TM)?
These and other questions remain unanswered... like, where is my hat?
DUDE! bowel</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459</id>
	<title>Doesn't 3DMark cheat too?</title>
	<author>iYk6</author>
	<datestamp>1255367760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Is 3DMark the benchmark that will give a higher score to a VIA graphics card if the Vendor ID is changed to Nvidia?</p></htmltext>
<tokentext>Is 3DMark the benchmark that will give a higher score to a VIA graphics card if the Vendor ID is changed to Nvidia ?</tokentext>
<sentencetext>Is 3DMark the benchmark that will give a higher score to a VIA graphics card if the Vendor ID is changed to Nvidia?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531</id>
	<title>Why not?</title>
	<author>Anonymous</author>
	<datestamp>1255368720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>The newest GPUs have 2 billion transistors.  Why wouldn't you put them to use?  That's the trend anyways, even nVidia is going to release a 3 billion transistor GPU that's able to run general programs.

I'm a PC gamer, I could care less if Intel or ATI or nVidia cheat on their benchmarks.  In fact they should be encouraged to release hand coded or special drivers to improve performance in specific games.</htmltext>
<tokentext>The newest GPUs have 2 billion transistors .
Why would n't you put them to use ?
That 's the trend anyways , even nVidia is going to release a 3 billion transistor GPU that 's able to run general programs .
I 'm a PC gamer , I could care less if Intel or ATI or nVidia cheat on their benchmarks .
In fact they should be encouraged to release hand coded or special drivers to improve performance in specific games .</tokentext>
<sentencetext>The newest GPUs have 2 billion transistors.
Why wouldn't you put them to use?
That's the trend anyways, even nVidia is going to release a 3 billion transistor GPU that's able to run general programs.
I'm a PC gamer, I could care less if Intel or ATI or nVidia cheat on their benchmarks.
In fact they should be encouraged to release hand coded or special drivers to improve performance in specific games.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729271</id>
	<title>Re:Eh?</title>
	<author>Guspaz</author>
	<datestamp>1255465920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And so, if your GPU isn't powerful enough, you would offload as much as you can, no?</p></htmltext>
<tokentext>And so , if your GPU is n't powerful enough , you would offload as much as you can , no ?</tokentext>
<sentencetext>And so, if your GPU isn't powerful enough, you would offload as much as you can, no?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29737431</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>Anonymous</author>
	<datestamp>1255429440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is that it doesn't test the GPU's individual capabilities. The GPU score should be the same whether the computer uses an 8-core 4.0 GHz overclocked Core i7 Extreme CPU or a single-core 1.8 GHz Celeron. With Intel's cheating, a faster CPU would mean a better GPU score, whereas with a dedicated ATI or Nvidia card, the CPU would have little effect on the GPU score (you obviously can't use a Pentium II or 486, but the main GPU bottleneck for a PC with a slow CPU would be the bus speed, not the CPU itself). Intel could just make sure that the test machine uses the fastest Intel CPU vs. one with the fastest AMD CPU, and use all the excess power to offload graphics calculations, with AMD still having most of their CPU time idle.</p></htmltext>
<tokentext>The problem is that it does n't test the GPU 's individual capabilities .
The GPU score should be the same whether the computer uses an 8-core 4.0 GHz overclocked Core i7 Extreme CPU or a single-core 1.8 GHz Celeron .
With Intel 's cheating , a faster CPU would mean a better GPU score , whereas with a dedicated ATI or Nvidia card , the CPU would have little effect on the GPU score ( you obviously ca n't use a Pentium II or 486 , but the main GPU bottleneck for a PC with a slow CPU would be the bus speed , not the CPU itself ) .
Intel could just make sure that the test machine uses the fastest Intel CPU vs. one with the fastest AMD CPU , and use all the excess power to offload graphics calculations , with AMD still having most of their CPU time idle .</tokentext>
<sentencetext>The problem is that it doesn't test the GPU's individual capabilities.
The GPU score should be the same whether the computer uses an 8-core 4.0 GHz overclocked Core i7 Extreme CPU or a single-core 1.8 GHz Celeron.
With Intel's cheating, a faster CPU would mean a better GPU score, whereas with a dedicated ATI or Nvidia card, the CPU would have little effect on the GPU score (you obviously can't use a Pentium II or 486, but the main GPU bottleneck for a PC with a slow CPU would be the bus speed, not the CPU itself).
Intel could just make sure that the test machine uses the fastest Intel CPU vs. one with the fastest AMD CPU, and use all the excess power to offload graphics calculations, with AMD still having most of their CPU time idle.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728981</id>
	<title>Re:Good reporting there Ric</title>
	<author>Ke3g</author>
	<datestamp>1255375620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Thanks for more reason to dislike Intel.</htmltext>
<tokentext>Thanks for more reason to dislike Intel .</tokentext>
<sentencetext>Thanks for more reason to dislike Intel.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732655</id>
	<title>Game Designers should clue in</title>
	<author>WindBourne</author>
	<datestamp>1255452300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>They should consider putting together some of their code with a timer. The idea being that they run a set sequence that exercises what THEY consider important to the game. Once they start doing that, they will become the owner of benchmarks. More importantly, the chip designers will spend a LOT more time working with them.</htmltext>
<tokentext>They should consider putting together some of their code with a timer .
The idea being that they run a set sequence that exercises what THEY consider important to the game .
Once they start doing that , they will become the owner of benchmarks .
More importantly , the chip designers will spend a LOT more time working with them .</tokentext>
<sentencetext>They should consider putting together some of their code with a timer.
The idea being that they run a set sequence that exercises what THEY consider important to the game.
Once they start doing that, they will become the owner of benchmarks.
More importantly, the chip designers will spend a LOT more time working with them.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728449</id>
	<title>A large corporation lying?</title>
	<author>Anonymous</author>
	<datestamp>1255367760000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm shocked shocked shocked, I tell you.</p></htmltext>
<tokentext>I 'm shocked shocked shocked , I tell you .</tokentext>
<sentencetext>I'm shocked shocked shocked, I tell you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728485</id>
	<title>That's what they do for LOTS of games...</title>
	<author>Anonymous</author>
	<datestamp>1255368120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Intel fully admits that the integrated chipset graphics aren't that great.  They freely admit that they offload rendering to the CPU in some cases.  This isn't a secret.</p></htmltext>
<tokentext>Intel fully admits that the integrated chipset graphics are n't that great .
They freely admit that they offload rendering to the CPU in some cases .
This is n't a secret .</tokentext>
<sentencetext>Intel fully admits that the integrated chipset graphics aren't that great.
They freely admit that they offload rendering to the CPU in some cases.
This isn't a secret.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728739</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>Anonymous</author>
	<datestamp>1255371420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's an optimization. The driver has a list of applications which benefit from offloading GPU work to the CPU. This benchmark happens to be one of them. I agree that this should not be done with a list of executables in the driver, but generally there's nothing wrong with using an underutilized CPU to speed up graphics. The driver should measure performance and utilization and decide based on these measurements though (and if it did, it would still accelerate the benchmark by using the CPU for graphics work, but it would do so regardless of the filename.)</p></htmltext>
<tokentext>It 's an optimization .
The driver has a list of applications which benefit from offloading GPU work to the CPU .
This benchmark happens to be one of them .
I agree that this should not be done with a list of executables in the driver , but generally there 's nothing wrong with using an underutilized CPU to speed up graphics .
The driver should measure performance and utilization and decide based on these measurements though ( and if it did , it would still accelerate the benchmark by using the CPU for graphics work , but it would do so regardless of the filename . )</tokentext>
<sentencetext>It's an optimization.
The driver has a list of applications which benefit from offloading GPU work to the CPU.
This benchmark happens to be one of them.
I agree that this should not be done with a list of executables in the driver, but generally there's nothing wrong with using an underutilized CPU to speed up graphics.
The driver should measure performance and utilization and decide based on these measurements though (and if it did, it would still accelerate the benchmark by using the CPU for graphics work, but it would do so regardless of the filename.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29738899</id>
	<title>Legal trouble?</title>
	<author>Binder</author>
	<datestamp>1255436100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Isn't there some way to punish companies for this sort of thing?  Advertising false data is illegal; shouldn't this be as well?</p></htmltext>
<tokentext>Is n't there some way to punish companies for this sort of thing ?
Advertising false data is illegal ; should n't this be as well ?</tokentext>
<sentencetext>Isn't there some way to punish companies for this sort of thing?
Advertising false data is illegal; shouldn't this be as well?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728661</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>Anonymous</author>
	<datestamp>1255370280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And Intel doesn't deny it.  They freely admit that they switch to CPU rendering when it's faster than the (admittedly pitiful) GPU in their onboard chipsets.</p><p>Heck, if you have one of their non-X chipsets, it *ALWAYS* uses the CPU for this.</p><p>For example, try 3DMark on their "GMA 3500" chipset, and on their "GMA X3500" chipset (G33 and G35 chipsets, respectively) and you'll probably get the exact same score with 3DMark in its 'proper' name, and a worse score on the X3500 when you rename the executable.  Intel knows that some games do worse using the GPU, so they code in the drivers to use the CPU on those games.  If it doesn't know, though, it'll use the GPU.</p></htmltext>
<tokentext>And Intel does n't deny it .
They freely admit that they switch to CPU rendering when it 's faster than the ( admittedly pitiful ) GPU in their onboard chipsets .
Heck , if you have one of their non-X chipsets , it * ALWAYS * uses the CPU for this .
For example , try 3DMark on their " GMA 3500 " chipset , and on their " GMA X3500 " chipset ( G33 and G35 chipsets , respectively ) and you 'll probably get the exact same score with 3DMark in its ' proper ' name , and a worse score on the X3500 when you rename the executable .
Intel knows that some games do worse using the GPU , so they code in the drivers to use the CPU on those games .
If it does n't know , though , it 'll use the GPU .</tokentext>
<sentencetext>And Intel doesn't deny it.
They freely admit that they switch to CPU rendering when it's faster than the (admittedly pitiful) GPU in their onboard chipsets.
Heck, if you have one of their non-X chipsets, it *ALWAYS* uses the CPU for this.
For example, try 3DMark on their "GMA 3500" chipset, and on their "GMA X3500" chipset (G33 and G35 chipsets, respectively) and you'll probably get the exact same score with 3DMark in its 'proper' name, and a worse score on the X3500 when you rename the executable.
Intel knows that some games do worse using the GPU, so they code in the drivers to use the CPU on those games.
If it doesn't know, though, it'll use the GPU.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729467</id>
	<title>I don't get it</title>
	<author>Anonymous</author>
	<datestamp>1255425180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>So Intel will not be as fast to render a BSOD as ATI or Nvidia?</p></htmltext>
<tokentext>So Intel will not be as fast to render a BSOD as ATI or Nvidia ?</tokentext>
<sentencetext>So Intel will not be as fast to render a BSOD as ATI or Nvidia?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732897</id>
	<title>Re:Good reporting there Ric</title>
	<author>adisakp</author>
	<datestamp>1255453440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><div class="quote"><p>Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.</p></div><p>The drivers do actually accelerate a fair number of popular games -- in other words, you get a faster frame rate.  However, the acceleration is based on a database (list of exe names) that have been tested to work with the drivers.  Intel is continuing to add to the list as they test new games and tweak the drivers but they haven't enabled it as a general feature because it does not work correctly on 100% of titles out there.<br> <br>
If a driver makes my game go faster, I don't consider that cheating on the driver as long as they are honest about what they are doing.  Dynamic-Load balancing between the GPU and CPU is only going to become more common in the future with OpenCL (physics/GPGPU/etc) and when GPU's like Larrabee (which can run Intel binary code) become more available.<br> <br>
However, for the purpose of being able to test GPU alone, the driver should have an option to disable the dynamic load balancing feature.</p>
	</htmltext>
<tokentext>Thanks for telling all of us that the best measure of hardware 's performance ingame is ... to benchmark it with a game .
The drivers do actually accelerate a fair number of popular games -- in other words , you get a faster frame rate .
However , the acceleration is based on a database ( list of exe names ) that have been tested to work with the drivers .
Intel is continuing to add to the list as they test new games and tweak the drivers but they have n't enabled it as a general feature because it does not work correctly on 100 % of titles out there .
If a driver makes my game go faster , I do n't consider that cheating on the driver as long as they are honest about what they are doing .
Dynamic-Load balancing between the GPU and CPU is only going to become more common in the future with OpenCL ( physics/GPGPU/etc ) and when GPU 's like Larrabee ( which can run Intel binary code ) become more available .
However , for the purpose of being able to test GPU alone , the driver should have an option to disable the dynamic load balancing feature .</tokentext>
<sentencetext>Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.
The drivers do actually accelerate a fair number of popular games -- in other words, you get a faster frame rate.
However, the acceleration is based on a database (list of exe names) that have been tested to work with the drivers.
Intel is continuing to add to the list as they test new games and tweak the drivers but they haven't enabled it as a general feature because it does not work correctly on 100% of titles out there.
If a driver makes my game go faster, I don't consider that cheating on the driver as long as they are honest about what they are doing.
Dynamic-Load balancing between the GPU and CPU is only going to become more common in the future with OpenCL (physics/GPGPU/etc) and when GPU's like Larrabee (which can run Intel binary code) become more available.
However, for the purpose of being able to test GPU alone, the driver should have an option to disable the dynamic load balancing feature.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601</id>
	<title>Re:Good reporting there Ric</title>
	<author>palegray.net</author>
	<datestamp>1255369740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores. Please RTFA before replying.</htmltext>
<tokentext>The driver apparently detects " crysis.exe " and inflates performance metrics by offloading processing , whereas renaming the executable to " crisis.exe " gives realistic performance scores .
Please RTFA before replying .</tokentext>
<sentencetext>The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores.
Please RTFA before replying.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728375</id>
	<title>32 bit or 64 bit?</title>
	<author>Anonymous</author>
	<datestamp>1255367220000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Or both? And yes, I am too lazy to RTFA</p></htmltext>
<tokentext>Or both ?
And yes , I am too lazy to RTFA</tokentext>
<sentencetext>Or both?
And yes, I am too lazy to RTFA</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728745</id>
	<title>Re:Good reporting there Ric</title>
	<author>Shadow of Eternity</author>
	<datestamp>1255371420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization."</p><p>To me at least that reads as "it cheats in 3dmark but you catch it red handed if you benchmark with a game."</p></htmltext>
<tokenext>" The G41 's new-found dominance in 3DMark does n't translate to superior gaming performance , even in this game targeted by the same optimization .
" To me at least that reads as " it cheats in 3dmark but you catch it red handed if you benchmark with a game .
"</tokentext>
<sentencetext>"The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization."
To me at least that reads as "it cheats in 3dmark but you catch it red handed if you benchmark with a game."</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728623</id>
	<title>Re:Why not?</title>
	<author>BobisOnlyBob</author>
	<datestamp>1255369920000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>It's not special drivers for specific games. It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards). This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.</p></htmltext>
<tokentext>It 's not special drivers for specific games .
It 's regular drivers with exceptions coded in to make them appear faster on " standardised " tests , which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need ( and to compare competing cards ) .
This is cheating to increase sales among the early adopter/benchmarker crowd , impress marketing types and get more units on shelves , and is generally at the cost of the consumer .</tokentext>
<sentencetext>It's not special drivers for specific games.
It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards).
This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728581</id>
	<title>xbitlabs was onto them..</title>
	<author>Anonymous</author>
	<datestamp>1255369440000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://www.xbitlabs.com/articles/mainboards/display/amd785g-intelg45_6.html#sect1" title="xbitlabs.com" rel="nofollow">http://www.xbitlabs.com/articles/mainboards/display/amd785g-intelg45_6.html#sect1</a> [xbitlabs.com]</p><p>quote<br>The obtained numbers are pretty interesting. The thing is that although AMD 785G solution is ahead of Intel in 3DMark06, it falls behind the competitor in 3DMark Vantage suite. It is especially strange keeping in mind that Radeon HD 4200 is considerably more powerful than GMA X4500HD according to formally calculated theoretical performance. However, the fact is undeniable: Intel G45 chipset does produce higher 3DMark Vantage score in Windows 7. By the way, this is only true for the upcoming operating system, because Intel graphics accelerator can't repeat its success in Windows Vista. And it means that we can conclude that this sudden success demonstrated by Intel G45 can only be explained by certain driver optimizations and not the GPU architecture.</p>
	</htmltext>
<tokentext>http://www.xbitlabs.com/articles/mainboards/display/amd785g-intelg45_6.html#sect1 [ xbitlabs.com ] quote
The obtained numbers are pretty interesting .
The thing is that although AMD 785G solution is ahead of Intel in 3DMark06 , it falls behind the competitor in 3DMark Vantage suite .
It is especially strange keeping in mind that Radeon HD 4200 is considerably more powerful than GMA X4500HD according to formally calculated theoretical performance .
However , the fact is undeniable : Intel G45 chipset does produce higher 3DMark Vantage score in Windows 7 .
By the way , this is only true for the upcoming operating system , because Intel graphics accelerator ca n't repeat its success in Windows Vista .
And it means that we can conclude that this sudden success demonstrated by Intel G45 can only be explained by certain driver optimizations and not the GPU architecture .</tokentext>
<sentencetext>http://www.xbitlabs.com/articles/mainboards/display/amd785g-intelg45_6.html#sect1 [xbitlabs.com] quote
The obtained numbers are pretty interesting.
The thing is that although AMD 785G solution is ahead of Intel in 3DMark06, it falls behind the competitor in 3DMark Vantage suite.
It is especially strange keeping in mind that Radeon HD 4200 is considerably more powerful than GMA X4500HD according to formally calculated theoretical performance.
However, the fact is undeniable: Intel G45 chipset does produce higher 3DMark Vantage score in Windows 7.
By the way, this is only true for the upcoming operating system, because Intel graphics accelerator can't repeat its success in Windows Vista.
And it means that we can conclude that this sudden success demonstrated by Intel G45 can only be explained by certain driver optimizations and not the GPU architecture.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29731939</id>
	<title>Re:Hmm...</title>
	<author>wastedlife</author>
	<datestamp>1255449120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Having done tech support for a manufacturer of Nvidia video cards, I have read through more than a few driver update release notes looking for specific fixes, and they are filled to the brim with game-specific tweaks and fixes.</p><p>On the other hand, they really should not have done this for any GPU benchmarking apps, as it is misleading at best. It is certainly possible that the driver looks for a string, that 3DMark was used in testing the hack (I consider CPU offloading of GPU processes a hack), and that they then forgot or neglected to remove it later. So it could have been a mistake, but they should still be held accountable.</p></htmltext>
<tokenext>Having done tech support for a manufacturer of Nvidia video cards , I have read through more than a few driver update release notes looking for specific fixes , and they are filled to the brim with game-specific tweaks and fixes.On the other hand , they really should not have done this for any GPU benchmarking apps , as it is misleading at best .
It is certainly possible that the driver looks for a string and 3dMark was used in testing the hack ( I consider CPU offloading of GPU processes a hack ) , and then they forgot or neglected to remove it in the future .
So it could have been a mistake , but they should still be held accountable .</tokentext>
<sentencetext>Having done tech support for a manufacturer of Nvidia video cards, I have read through more than a few driver update release notes looking for specific fixes, and they are filled to the brim with game-specific tweaks and fixes.On the other hand, they really should not have done this for any GPU benchmarking apps, as it is misleading at best.
It is certainly possible that the driver looks for a string and 3dMark was used in testing the hack ( I consider CPU offloading of GPU processes a hack), and then they forgot or neglected to remove it in the future.
So it could have been a mistake, but they should still be held accountable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730671</id>
	<title>Re:Eh?</title>
	<author>hattig</author>
	<datestamp>1255441800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is the conundrum here.</p><p>Firstly, per-game optimisations are fine if the image quality is not affected. This is often done by all companies: tweaking shaders, etc., to run better on their hardware than the shaders that shipped with the game.</p><p>Secondly, offloading some of the graphics pipeline to an underutilised, overpowered CPU seems logical when the graphics unit uses unified shaders. In this case, there is a significant performance increase (from "dire" to "mostly naff") without quality dropping. Again, logically this seems fine to me, as long as this type of optimisation isn't targeted at specific titles.</p><p>In this case, it was, and not only was it a specific title, it was a benchmark. This is quite bad, and is a type of cheating that the other graphics companies got over a few years ago.</p><p>In addition, it appears that in the real world of games, the benchmark over-reports the Intel capabilities by a factor of two anyway, even with the application-specific optimisation enabled. I.e., the ATI part gets 30fps to the Intel's 10-15fps, despite the ATI part's 3DMark score equalling or falling below the Intel results.</p><p>This shows that when you have low-end graphics, benchmarks should be run at lower resolutions and quality settings to get a representative comparison against other similar hardware. Running a benchmark at non-typical settings (i.e., 1920x1200 for integrated graphics) can lead to some odd skewing.</p><p>It also confirms how rubbish Intel's integrated graphics are.</p></htmltext>
<tokenext>This is the conundrum here.Firstly , per-game optimisations are fine if the image quality is not affected .
This is often done by all companies , tweaking shaders , etc , to run better on their hardware than the shader that was shipped with the game.Secondly , offloading some of the graphics pipeline to an underutilised , over-powerful CPU seems logical when the graphics unit utilises unified shaders .
In this case , there is a significant performance increase ( from " dire " to " mostly naff " ) without quality dropping .
Again , logically this seems fine to me , as long as this type of optimisation is n't targeted at specific titles.In this case , it was , and not only was it a specific title , it was a benchmark .
This is quite bad , and is a type of cheating that the other graphics companies got over a few years ago.In addition it appears that in the real world of games , the benchmark over-reports the Intel capabilities by a factor of two anyway , even with the application specific optimisation enabled .
I.e. , the ATI part gets 30fps to the Intel 's 10fps/15fps , despite the ATI part 's 3dmark score equalling or being below the Intel results.Showing that when you have low-end graphics , the benchmarks should be run at lower resolutions and quality settings to get a representative comparison against other similar hardware .
Running a benchmark in non-typical settings ( i.e. , 1920x1200 for integrated graphics ) can lead to some odd skewings.It also confirmed how rubbish Intel 's integrated graphics are .</tokentext>
<sentencetext>This is the conundrum here.Firstly, per-game optimisations are fine if the image quality is not affected.
This is often done by all companies, tweaking shaders, etc, to run better on their hardware than the shader that was shipped with the game.Secondly, offloading some of the graphics pipeline to an underutilised, over-powerful CPU seems logical when the graphics unit utilises unified shaders.
In this case, there is a significant performance increase (from "dire" to "mostly naff") without quality dropping.
Again, logically this seems fine to me, as long as this type of optimisation isn't targeted at specific titles.In this case, it was, and not only was it a specific title, it was a benchmark.
This is quite bad, and is a type of cheating that the other graphics companies got over a few years ago.In addition it appears that in the real world of games, the benchmark over-reports the Intel capabilities by a factor of two anyway, even with the application specific optimisation enabled.
I.e., the ATI part gets 30fps to the Intel's 10fps/15fps, despite the ATI part's 3dmark score equalling or being below the Intel results.Showing that when you have low-end graphics, the benchmarks should be run at lower resolutions and quality settings to get a representative comparison against other similar hardware.
Running a benchmark in non-typical settings (i.e., 1920x1200 for integrated graphics) can lead to some odd skewings.It also confirmed how rubbish Intel's integrated graphics are.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728795</id>
	<title>Re:Why not?</title>
	<author>causality</author>
	<datestamp>1255371960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>It's not special drivers for specific games. It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards). This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.</p></div><p>No need for a car analogy on this one.  So it's like what happens when the public schools teach a generation or two in such a way that they are optimized for performance on standardized tests, and when those students eventually enter the working world, they don't know how to make change without a cash register or other calculator of some sort?  The way they don't know how to deconstruct an argument?  Let alone understand the importance of things like living within your means?</p>
	</htmltext>
<tokenext>It 's not special drivers for specific games .
It 's regular drivers with exceptions coded in to make them appear faster on " standardised " tests , which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need ( and to compare competing cards ) .
This is cheating to increase sales among the early adopter/benchmarker crowd , impress marketing types and get more units on shelves , and is generally at the cost of the consumer.No need for a car analogy on this one .
So it 's like what happens when the public schools teach a generation or two in such a way that they are optimized for performance on standardized tests , and when those students eventually enter the working world , they do n't know how to make change without a cash register or other calculator of some sort ?
The way they do n't know how to deconstruct an argument ?
Let alone understand the importance of things like living within your means ?</tokentext>
<sentencetext>It's not special drivers for specific games.
It's regular drivers with exceptions coded in to make them appear faster on "standardised" tests, which are meant to be an all-purpose benchmark to help consumers identify the sort of card they need (and to compare competing cards).
This is cheating to increase sales among the early adopter/benchmarker crowd, impress marketing types and get more units on shelves, and is generally at the cost of the consumer.No need for a car analogy on this one.
So it's like what happens when the public schools teach a generation or two in such a way that they are optimized for performance on standardized tests, and when those students eventually enter the working world, they don't know how to make change without a cash register or other calculator of some sort?
The way they don't know how to deconstruct an argument?
Let alone understand the importance of things like living within your means?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728623</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549</id>
	<title>Why a bad hack when you are close to much more?</title>
	<author>parallel_prankster</author>
	<datestamp>1255368960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's funny that Intel simply creates an INF file and uses it to detect apps and enable performance optimizations. I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic? Clearly you know the app behaviour, and you know the performance optimizations work. This seems to me a case where people were asked to ship it out fast and, instead of taking the time to plug the optimization into the tool, they just made it a hack. A really bad one too!</htmltext>
<tokenext>Its funny that Intel simply creates an INF file and uses those to detect apps and optimize for performance .
I mean , if you are detecting a file name and enabling performance optimizations , why not detect the app behaviour itself and make the optimizations generic ?
Clearly you know the app behaviour and you know the performance optimizations work .
This seem to me a case where people were asked to ship it out fast and instead of taking the time to plug the optimization into the tool , they just made it a hack .
A really bad one too ! !
!</tokentext>
<sentencetext>Its funny that Intel simply creates an INF file and uses those to detect apps and optimize for performance.
I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic ?
Clearly you know the app behaviour and you know the performance optimizations work.
This seem to me a case where people were asked to ship it out fast and instead of taking the time to plug the optimization into the tool, they just made it a hack.
A really bad one too!!
!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730425</id>
	<title>Intel's response-other side of story</title>
	<author>doug141</author>
	<datestamp>1255439700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers."</p><p>The article confirms that the driver also plays Crysis faster if you don't rename it. Maybe 3DMark is obsolete now that drivers are optimizing for individual games.</p></htmltext>
<tokenext>" We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing , the CPU can assist with DX10 geometry processing to enhance overall performance .
3DMarkVantage is one of those workloads , as are Call of Juarez , Crysis , Lost Planet : Extreme Conditions , and Company of Heroes .
We have used similar techniques with DX9 in previous products and drivers .
The benefit to users is optimized performance based on best use of the hardware available in the system .
Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers .
" The article confirms that the driver also plays crysis faster if you do n't rename it .
Maybe 3DMark is obsolete now that drivers are optimizing for individual games .</tokentext>
<sentencetext>"We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance.
3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes.
We have used similar techniques with DX9 in previous products and drivers.
The benefit to users is optimized performance based on best use of the hardware available in the system.
Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.
"The article confirms that the driver also plays crysis faster if you don't rename it.
Maybe 3DMark is obsolete now that drivers are optimizing for individual games.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732265</id>
	<title>3DMark is a joke anyways</title>
	<author>Anonymous</author>
	<datestamp>1255450560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>3DMark is the most ridiculous "benchmark" there has ever been. It assigns a meaningless numerical value based on a set of various small tests that rarely corresponds to current games' actual performance. There is no way to translate 3DMark to, say, Crysis FPS. It might be easier to run some 3DMark and give some meaningless number that only serves to confuse people, but the only thing actually useful to anyone looking into the gaming performance of any system or hardware is hard, *real* benchmarks of *actual* games that people are going to buy.</p><p>I really wish 3DMark would disappear. So many places just give some meaningless 3DMark score for graphics benchmarks on mainstream systems. Laptop Magazine is a particular offender, often just giving some offhand 3DMark score for a reviewed system, with nothing to corroborate it, like a real game's performance numbers running at detail levels suited to the system's hardware. You don't benchmark Crysis on a GMA4500; you benchmark HL2 or something, on medium settings. 3DMark only serves to confuse people about graphics hardware. You can assign any number you want to a system and call it elite3dbenchmark2009Premium or something, but that number is never going to give anyone an idea of a given game's performance on that hardware. The only way to determine that with 3DMark numbers is to go and compare the scores against other hardware, and that takes a lot more work and hunting than just reading an actual performance result for a set of games people are going to buy and play and expect to run decently.</p></htmltext>
<tokenext>3DMark is the most rediculous " benchmark " there has ever been .
It gives some meaningless numerical value to a set of various small tests which will rarely correspond to current market games actual performance .
There is no way to translate 3DMark to say , Crysis FPS .
It might be easier to run some 3Dmark and give some meaningless number that only serves to confuse people , but something actually useful to anyone looking into performance of any system or hardware for gaming , is only going to be hard , * real * benchmarks of * actual * games that people are going to buy.I really wish 3DMark would dissapear .
So many places just give some meaningless 3Dmark score for graphics benchmarks on mainstream systems .
Laptop Magazine is particularly offending , often just giving some offhand 3dmark score for a reviewed system , with nothing to corroborate that with , like a real game 's performance numbers running at details suited for the systems hardware .
You do n't benchmark Crysis on a GMA4500 , you benchmark HL2 or something , on medium settings .
3Dmark only serves to confuse people about graphics hardware .
You can assign any number you want to a system and call it elite3dbenchmark2009Premium or something , but that number is never going to give anyone an idea of a given game 's performance on that hardware .
The only way to determine that with 3dmark numbers is to go and compare the scores with other otherware , and that takes alot more work and hunting , than does just reading an actual performance result of a set of games people are going to buy and play and expect them to run decently .</tokentext>
<sentencetext>3DMark is the most rediculous "benchmark" there has ever been.
It gives some meaningless numerical value to a set of various small tests which will rarely correspond to current market games actual performance.
There is no way to translate 3DMark to say, Crysis FPS.
It might be easier to run some 3Dmark and give some meaningless number that only serves to confuse people, but something actually useful to anyone looking into performance of any system or hardware for gaming, is only going to be hard, *real* benchmarks of *actual* games that people are going to buy.I really wish 3DMark would dissapear.
So many places just give some meaningless 3Dmark score for graphics benchmarks on mainstream systems.
Laptop Magazine is particularly offending, often just giving some offhand 3dmark score for a reviewed system, with nothing to corroborate that with, like a real game's performance numbers running at details suited for the systems hardware.
You don't benchmark Crysis on a GMA4500, you benchmark HL2 or something, on medium settings.
3Dmark only serves to confuse people about graphics hardware.
You can assign any number you want to a system and call it elite3dbenchmark2009Premium or something, but that number is never going to give anyone an idea of a given game's performance on that hardware.
The only way to determine that with 3dmark numbers is to go and compare the scores with other otherware, and that takes alot more work and hunting, than does just reading an actual performance result of a set of games people are going to buy and play and expect them to run decently.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729325</id>
	<title>Re:Good reporting there Ric</title>
	<author>Fred_A</author>
	<datestamp>1255466580000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores. Please RTFA before replying.</p></div><p>Thanks for the tip, I've now renamed all my games to "crysis.exe" and am enjoying a major speed boost. You've given my laptop a new youth!</p><p>I can finally get rid of that cumbersome i7 box with that noisy nVidia!</p>
	</htmltext>
<tokenext>The driver apparently detects " crysis.exe " and inflates performance metrics by offloading processing , whereas renaming the executable to " crisis.exe " gives realistic performance scores .
Please RTFA before replying.Thanks for the tip , I 've now renamed all my games to " crysis.exe " and am now enjoying a major speed boost .
You 've given my laptop a new youth ! I can finally get rid of that cumbersome i7 box with that noisy nVidia !</tokentext>
<sentencetext>The driver apparently detects "crysis.exe" and inflates performance metrics by offloading processing, whereas renaming the executable to "crisis.exe" gives realistic performance scores.
Please RTFA before replying.Thanks for the tip, I've now renamed all my games to "crysis.exe" and am now enjoying a major speed boost.
You've given my laptop a new youth !I can finally get rid of that cumbersome i7 box with that noisy nVidia !
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115</id>
	<title>Re:Hmm...</title>
	<author>Sycraft-fu</author>
	<datestamp>1255377240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I'm inclined to give Intel the benefit of the doubt here. A few reasons:</p><p>1) Nobody buys Intel integrated chips because of how they do in 3DMark. Nobody thinks of them as any serious kind of performance hardware. Hell, most people are amazed to find out that these days they are good enough that you can, in fact, play some games on them (though not nearly as well as on dedicated hardware). So I can't imagine Intel is gaining lots of sales out of this. Remember, these are chips on the board itself. You either got a board with one or didn't. You don't pick one up later because you liked the numbers.</p><p>2) Individual program optimization in drivers is extremely common. Some programs do things an odd way, and sometimes the vendors can figure out a way to work around it. An example would be the Unreal 3 engine and anti-aliasing in DirectX 9 mode. I don't know the details, but the upshot is it normally doesn't work. However, nVidia (and probably others) have figured out a way around this. So you can force AA in Mass Effect and other games that don't include the controls, via the driver. However, the driver has a particular hack for that game to make it work. If you use a program like Riva Tuner, you can mess with that sort of thing and flip the hacks on and off for various things.</p><p>3) Since Intel's integrated chips are exceedingly simple, it isn't surprising they have the CPU handle some things. I seem to recall that their older integrated chips did basically everything on the CPU, being little more than frame buffers themselves. The whole point of an integrated GPU is cheap and low power. That means it isn't going to have massive arrays of shaders to handle things. However, with a clever driver, a CPU could do some of that work. That would work particularly well in the integrated-GPU case, since they use system memory.</p><p>So while I'm not sure I see the point in optimizing for 3DMark, I don't see the overall problem in specific optimizations for specific apps. If you discover that an app has a problem, and you can fix it, but that fix is not something to apply across the board, well then why not apply that fix for that app?</p></htmltext>
<tokenext>I 'm inclined to give Intel the benefit of the doubt here .
Few reasons : 1 ) Nobody buys Intel integrated chips because of how they do on 3D mark .
Nobody thinks they are any serious kind of performance .
Hell , most people are amazed to find out that these days they are good enough that you can , in fact , play some games on them ( though not near as well as dedicated hardware ) .
So I ca n't imagine they are gaining lots of sales out of this .
Remember these are chips on the board itself .
You either got a board with one or did n't .
You do n't pick one up later because you liked the numbers.2 ) Individual program optimization in drivers is extremely common .
Some programs do things an odd way , and sometimes the vendors can figure out a way to work around it .
An example would be the Unreal 3 engine and anti-aliasing in DirectX 9 mode .
I do n't know the details , but the upshot is it normally does n't work .
However nVidia ( and probalby others ) have figured out a way around this .
So you can force AA on the Mass Effect and games that do n't include the controls in the driver .
However the driver has a particular hack for that game to make it work .
If you use a program like Riva Tuner , you can mess with that sort of thing and flip the hacks on and off for various things.3 ) Since Intel 's integrated chips are exceedingly simple , it is n't surprising they have the CPU handle some things .
I seem to recall that their older integrated chips did basically everything on the CPU , being little more than frame buffers themselves .
The whole point of an integrated GPU is cheap and low power .
That means it is n't going to have massive arrays of shaders to handle things .
However with a clever driver , a CPU could do some of that work .
Would work particularly well in an integrated GPU case since they use system memory.So while I 'm not sure I see the point in optimizing for 3DMark , I do n't see the overall problem in specific optimizations for specific apps .
If you discover that an app has a problem , and you can fix it , but that fix is not something to apply over all , well then why not apply that fix for that app ?</tokentext>
<sentencetext>I'm inclined to give Intel the benefit of the doubt here.
Few reasons:1) Nobody buys Intel integrated chips because of how they do on 3D mark.
Nobody thinks they are any serious kind of performance.
Hell, most people are amazed to find out that these days they are good enough that you can, in fact, play some games on them (though not near as well as dedicated hardware).
So I can't imagine they are gaining lots of sales out of this.
Remember these are chips on the board itself.
You either got a board with one or didn't.
You don't pick one up later because you liked the numbers.2) Individual program optimization in drivers is extremely common.
Some programs do things an odd way, and sometimes the vendors can figure out a way to work around it.
An example would be the Unreal 3 engine and anti-aliasing in DirectX 9 mode.
I don't know the details, but the upshot is it normally doesn't work.
However nVidia (and probalby others) have figured out a way around this.
So you can force AA on the Mass Effect and games that don't include the controls in the driver.
However the driver has a particular hack for that game to make it work.
If you use a program like Riva Tuner, you can mess with that sort of thing and flip the hacks on and off for various things.3) Since Intel's integrated chips are exceedingly simple, it isn't surprising they have the CPU handle some things.
I seem to recall that their older integrated chips did basically everything on the CPU, being little more than frame buffers themselves.
The whole point of an integrated GPU is cheap and low power.
That means it isn't going to have massive arrays of shaders to handle things.
However with a clever driver, a CPU could do some of that work.
Would work particularly well in an integrated GPU case since they use system memory.So while I'm not sure I see the point in optimizing for 3DMark, I don't see the overall problem in specific optimizations for specific apps.
If you discover that an app has a problem, and you can fix it, but that fix is not something to apply over all, well then why not apply that fix for that app?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728501</id>
	<title>Re:Good reporting there Ric</title>
	<author>cjfs</author>
	<datestamp>1255368480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.</p></div><p>Except the article clearly shows that the name of the game's executable determines frame rates in some cases. It then goes on to state:</p><div class="quote"><p>the very same 785G system managed 30 frames per second in Crysis: Warhead, which is twice the frame rate of the G41 with all its vertex offloading mojo in action. The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization.</p></div><p>This kind of offloading is definitely shady. I can't see how they'd get the driver approved.</p>
	</htmltext>
<tokenext>Thanks for telling all of us that the best measure of hardware 's performance ingame is... to benchmark it with a game.Except the article clearly shows that the name of the games executable determines frame rates in some cases .
It then goes on to state : the very same 785G system managed 30 frames per second in Crysis : Warhead , which is twice the frame rate of the G41 with all its vertex offloading mojo in action .
The G41 's new-found dominance in 3DMark does n't translate to superior gaming performance , even in this game targeted by the same optimization.This kind of offloading is definitely shady .
I ca n't see how they 'd get the driver approved .</tokentext>
<sentencetext>Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.Except the article clearly shows that the name of the games executable determines frame rates in some cases.
It then goes on to state:the very same 785G system managed 30 frames per second in Crysis: Warhead, which is twice the frame rate of the G41 with all its vertex offloading mojo in action.
The G41's new-found dominance in 3DMark doesn't translate to superior gaming performance, even in this game targeted by the same optimization.This kind of offloading is definitely shady.
I can't see how they'd get the driver approved.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728843</id>
	<title>Re:Why a bad hack when you are close to much more?</title>
	<author>causality</author>
	<datestamp>1255372800000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><div class="quote"><p>It's funny that Intel simply creates an INF file and uses it to detect apps and optimize for performance. I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic? Clearly you know the app behaviour and you know the performance optimizations work. This seems to me a case where people were asked to ship it out fast and, instead of taking the time to plug the optimization into the tool, they just made it a hack. A really bad one too!!!</p></div><p>Sure, but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a) that it's a game and b) how it will behave when executed?  I suppose they could model it after the heuristic and behavioristic features of some antivirus/antispyware applications, but nothing about this problem sounds trivial.  There's also the question of how bloated a graphics driver you are willing to accept.</p>
<p>My guess is that the above concerns explain why this was a poorly-executed hack.</p>
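For illustration, the name-keyed scheme TFA describes is only a few lines of logic, which is exactly why it is cheap to ship and cheap to defeat. This is a toy sketch; the table contents are hypothetical stand-ins, not Intel's actual INF entries:

```python
# Hypothetical whitelist mirroring the INF approach from TFA: optimizations
# are keyed purely on the executable's file name, so renaming the binary
# (the test reviewers used) makes the "optimization" vanish.
OPTIMIZATION_WHITELIST = {
    "3dmarkvantage.exe": "cpu_vertex_offload",
    "crysis.exe": "cpu_vertex_offload",
}

def select_optimization(exe_path: str):
    """Return the optimization profile for an executable name, if any."""
    # Split on either separator so Windows-style paths work anywhere.
    name = exe_path.replace("\\", "/").rsplit("/", 1)[-1].lower()
    return OPTIMIZATION_WHITELIST.get(name)

print(select_optimization("C:\\Games\\Crysis\\Crysis.exe"))    # cpu_vertex_offload
print(select_optimization("C:\\Games\\Crysis\\potato46.exe"))  # None
```

Behavioral detection, by contrast, would need to classify what an arbitrary binary is doing at runtime, which is why the lookup table won out.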
	</htmltext>
<tokenext>Its funny that Intel simply creates an INF file and uses those to detect apps and optimize for performance .
I mean , if you are detecting a file name and enabling performance optimizations , why not detect the app behaviour itself and make the optimizations generic ?
Clearly you know the app behaviour and you know the performance optimizations work .
This seem to me a case where people were asked to ship it out fast and instead of taking the time to plug the optimization into the tool , they just made it a hack .
A really bad one too ! !
! Sure , but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a ) that it 's a game and b ) how it will behave when executed ?
I suppose they could model it after the heuristic and behavioristic features of some antivirus/antispyware applications , but nothing about this problem sounds trivial .
There 's also the question about how bloated of a graphics driver you are willing to accept .
My guess is that the above concerns explain why this was a poorly-executed hack .</tokentext>
<sentencetext>Its funny that Intel simply creates an INF file and uses those to detect apps and optimize for performance.
I mean, if you are detecting a file name and enabling performance optimizations, why not detect the app behaviour itself and make the optimizations generic ?
Clearly you know the app behaviour and you know the performance optimizations work.
This seem to me a case where people were asked to ship it out fast and instead of taking the time to plug the optimization into the tool, they just made it a hack.
A really bad one too!!
!Sure, but how hard would it actually be for a graphics driver to scan an arbitrary executable and determine a) that it's a game and b) how it will behave when executed?
I suppose they could model it after the heuristic and behavioristic features of some antivirus/antispyware applications, but nothing about this problem sounds trivial.
There's also the question about how bloated of a graphics driver you are willing to accept.
My guess is that the above concerns explain why this was a poorly-executed hack.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728965</id>
	<title>Re:Why a bad hack when you are close to much more?</title>
	<author>Mashi King</author>
	<datestamp>1255375200000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Detecting application behavior dynamically is non-trivial. Commonly it is done by instrumenting the binary, which <i>degrades</i> the performance of the binary. The act of observation destroys the behavior to be observed, so to speak.
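To see the observer effect concretely, here is a minimal sketch in Python (standing in for real binary instrumentation; the workload and event counting are illustrative only): the same function runs far slower once a tracer is watching it.

```python
import sys
import time

def workload(n=100_000):
    # A tight loop standing in for a real binary's hot path.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

events = 0
def tracer(frame, event, arg):
    # The crudest possible instrumentation: just count interpreter events.
    global events
    events += 1
    return tracer  # keep receiving per-line events inside each frame

plain = timed(workload)         # uninstrumented run
sys.settrace(tracer)
instrumented = timed(workload)  # same code, now being observed
sys.settrace(None)

print(f"plain={plain:.4f}s instrumented={instrumented:.4f}s events={events}")
```

Even this do-nothing tracer typically costs an order of magnitude in runtime, which is exactly why you cannot instrument your way to "free" behavioral detection.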

This is why 3DMark Vantage explicitly prohibits "Use of empirical data of application for optimization". <i>After</i> you have the behavior of the application, optimization is a lot easier.</htmltext>
<tokenext>Detecting application behavior dynamically is non-trivial .
Commonly it is performed by instrumenting the binary , which \ _degrades \ _ the performance of the binary .
The act of observation destroys the behavior to be observed , so to speak .
This is why 3D marks vantage explicitly prohibits " Use of empirical data of application for optimization " .
\ _After \ _ you get the behavior of application , optimization is a lot easier .</tokentext>
<sentencetext>Detecting application behavior dynamically is non-trivial.
Commonly it is performed by instrumenting the binary, which \_degrades\_ the performance of the binary.
The act of observation destroys the behavior to be observed, so to speak.
This is why 3D marks vantage explicitly prohibits "Use of empirical data of application for optimization".
\_After\_ you get the behavior of application, optimization is a lot easier.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730569</id>
	<title>Possible Solutions</title>
	<author>Plekto</author>
	<datestamp>1255440840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Two solutions come to mind immediately for this.</p><p>First off, here is the offending list of apps:<br>***<br>[Enable3DContexts_CTG_AddSwSettings]</p><p>HKR,, ~3DMark03.exe, %REG_DWORD%, 1<br>HKR,, ~3DMark06.exe, %REG_DWORD%, 1<br>HKR,, ~dreamfall.exe, %REG_DWORD%, 1<br>HKR,, ~FEAR.exe, %REG_DWORD%, 1<br>HKR,, ~FEARMP.exe, %REG_DWORD%, 1<br>HKR,, ~HL2.exe, %REG_DWORD%, 1<br>HKR,, ~LEGOIndy.exe, %REG_DWORD%, 1<br>HKR,, ~RelicCOH.exe, %REG_DWORD%, 1<br>HKR,, ~Sam2.exe, %REG_DWORD%, 1<br>HKR,, ~SporeApp.exe, %REG_DWORD%, 1<br>HKR,, ~witcher.exe, %REG_DWORD%, 1<br>HKR,, ~Wow.exe, %REG_DWORD%, 1</p><p>HKR,, ~3DMarkVantage.exe, %REG_DWORD%, 2<br>HKR,, ~3DMarkVantageCmd.exe, %REG_DWORD%, 2<br>HKR,, ~CoJ_DX10.exe, %REG_DWORD%, 2<br>HKR,, ~Crysis.exe, %REG_DWORD%, 2<br>HKR,, ~RelicCoH.exe, %REG_DWORD%, 2<br>HKR,, ~UAWEA.exe, %REG_DWORD%, 2<br>***</p><p>As you can see, it's targeted at exactly the standard list of apps reviewed on most sites.  The easy solutions to fix this problem would be:</p><p>1 - Have the benchmark check the video drivers' config files and flat out refuse to run if it finds its own name in the list.  But that list would probably be quickly hard-coded somewhere in the chipset or drivers to avoid this type of detection, so it's only a temporary fix.  The sneakier solution would be to have reviewers randomly rename the benchmark before running it, or have it rename its own program files.  Crysis becomes "potato46.exe" or something impossible to catch.  A nice random 1000-2000 word list like they use for email spam would be easy to add to the benchmark program (a few dozen KB at most) but would create chaos for drivers trying to cover every combination.</p><p>2 - Have review sites randomly pick games that are non-standard.  This would have the advantage of eventually making that list of optimized games grow to dozens if not hundreds.</p><p>Note - I wonder how much difference it would make to add the games you regularly play to the config files?  Just add the 20-30 games?  Could it be that simple?  (I don't see why all games aren't benefiting from these optimizations.)</p></htmltext>
<tokenext>Two solutions come to mind immediately for this.First off , here is the offending list of apps : * * * [ Enable3DContexts \ _CTG \ _AddSwSettings ] HKR, , ~ 3DMark03.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ 3DMark06.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ dreamfall.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ FEAR.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ FEARMP.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ HL2.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ LEGOIndy.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ RelicCOH.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ Sam2.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ SporeApp.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ witcher.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ Wow.exe , \ % REG \ _DWORD \ % , 1HKR, , ~ 3DMarkVantage.exe , \ % REG \ _DWORD \ % , 2HKR, , ~ 3DMarkVantageCmd.exe , \ % REG \ _DWORD \ % , 2HKR, , ~ CoJ \ _DX10.exe , \ % REG \ _DWORD \ % , 2HKR, , ~ Crysis.exe , \ % REG \ _DWORD \ % , 2HKR, , ~ RelicCoH.exe , \ % REG \ _DWORD \ % , 2HKR, , ~ UAWEA.exe , \ % REG \ _DWORD \ % , 2 * * * As you can see , it 's targeted at exactly the standard list of reviewed apps on most sites .
The easy solutions to fix this problem would be : 1 - Have the benchmark test check the config files for the video cards and flat out refuse to run tests which it finds the apps name in the list .
But this would probably be quickly hard-coded somewhere in the chipset or drivers to avoid this type of detection , so it 's only a temporary fix .
The sneakier solution would be to have the benchmark randomly renamed by reviewers before running or have it randomly rename the program files .
Crysis becomes " potato46.exe " or something impossible to catch .
A nice random 1000-2000 word file like they use for email spam would be easy to add to the benchmark program ( few dozen K at most ) but create chaos in the drivers to try to replicate every combination.2 - Have review sites randomly pick games that are non standard .
This would have the advantage of eventually making that list of optimized games grow to dozens if not hundreds.Note - I wonder how much difference it would make to add the games you regularly play to the config files ?
Just add the 20-30 games ?
Could it be that simple ?
( I do n't see why all games are n't benefiting from these optimizations )</tokentext>
<sentencetext>Two solutions come to mind immediately for this.First off, here is the offending list of apps:***[Enable3DContexts\_CTG\_AddSwSettings]HKR,, ~3DMark03.exe, \%REG\_DWORD\%, 1HKR,, ~3DMark06.exe, \%REG\_DWORD\%, 1HKR,, ~dreamfall.exe, \%REG\_DWORD\%, 1HKR,, ~FEAR.exe, \%REG\_DWORD\%, 1HKR,, ~FEARMP.exe, \%REG\_DWORD\%, 1HKR,, ~HL2.exe, \%REG\_DWORD\%, 1HKR,, ~LEGOIndy.exe, \%REG\_DWORD\%, 1HKR,, ~RelicCOH.exe, \%REG\_DWORD\%, 1HKR,, ~Sam2.exe, \%REG\_DWORD\%, 1HKR,, ~SporeApp.exe, \%REG\_DWORD\%, 1HKR,, ~witcher.exe, \%REG\_DWORD\%, 1HKR,, ~Wow.exe, \%REG\_DWORD\%, 1HKR,, ~3DMarkVantage.exe, \%REG\_DWORD\%, 2HKR,, ~3DMarkVantageCmd.exe, \%REG\_DWORD\%, 2HKR,, ~CoJ\_DX10.exe, \%REG\_DWORD\%, 2HKR,, ~Crysis.exe, \%REG\_DWORD\%, 2HKR,, ~RelicCoH.exe, \%REG\_DWORD\%, 2HKR,, ~UAWEA.exe, \%REG\_DWORD\%, 2***As you can see, it's targeted at exactly the standard list of reviewed apps on most sites.
The easy solutions to fix this problem would be:1 - Have the benchmark test check the config files for the video cards and flat out refuse to run tests which it finds the apps name in the list.
But this would probably be quickly hard-coded somewhere in the chipset or drivers to avoid this type of detection, so it's only a temporary fix.
The sneakier solution would be to have the benchmark randomly renamed by reviewers before running or have it randomly rename the program files.
Crysis becomes "potato46.exe" or something impossible to catch.
A nice random 1000-2000 word file like they use for email spam would be easy to add to the benchmark program(few dozen K at most) but create chaos in the drivers to try to replicate every combination.2 - Have review sites randomly pick games that are non standard.
This would have the advantage of eventually making that list of optimized games grow to dozens if not hundreds.Note - I wonder how much difference it would make to add the games you regularly play to the config files?
Just add the 20-30 games?
Could it be that simple?
(I don't see why all games aren't benefiting from these optimizations)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728391</id>
	<title>INTEL ALWAYS DOES THIS</title>
	<author>Anonymous</author>
	<datestamp>1255367400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why are we surprised?  They are a marketing company!</p></htmltext>
<tokenext>Why are we surprised ?
They are a marketing company !</tokentext>
<sentencetext>Why are we surprised?
They are a marketing company!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729509</id>
	<title>Re:Hmm...</title>
	<author>Anonymous</author>
	<datestamp>1255426020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Oh come on. Every time Intel releases a new integrated chip, there's a lot of speculation that it might almost be a real 3D accelerator. I certainly have looked up 3DMarks on Intel chips, and I'm sure many other people have too.</p></htmltext>
<tokenext>Oh come on .
Every time Intel releases a new integrated chip , there 's a lot of speculation that it might almost be a real 3D accelerator .
I certainly have looked up 3DMarks on Intel chips , and I 'm sure many other people have too .</tokentext>
<sentencetext>Oh come on.
Every time Intel releases a new integrated chip, there's a lot of speculation that it might almost be a real 3D accelerator.
I certainly have looked up 3DMarks on Intel chips, and I'm sure many other people have too.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445</id>
	<title>Hmm...</title>
	<author>fuzzyfuzzyfungus</author>
	<datestamp>1255367700000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext>On the one hand, a mechanism that uses the CPU for some aspects of the graphics process seems perfectly reasonable (whether or not it is a good engineering decision is another matter, and would depend on whether it improves performance under desired workloads, what it does to energy consumption, total system cost, etc.), so I wouldn't blame Intel for that alone.<br> <br>

On the other hand, though, the old "run 3Dmark, then run it again with the executable's name changed" test looks pretty incriminating. Historically, that has been a sign of dodgy benchmark hacks.<br> <br>
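That rename-and-rerun check is easy to script. A minimal sketch (the benchmark file name is just a stand-in; a real run would launch both copies and compare scores):

```python
import os
import random
import shutil
import string
import tempfile

def make_decoy(exe_path):
    """Copy a benchmark executable to a throwaway random name.

    Running the copy next to the original is the classic test for
    name-keyed driver behavior: identical binary, different file name,
    so any score gap must come from the driver matching the name.
    """
    suffix = "".join(random.choices(string.ascii_lowercase, k=8))
    decoy = os.path.join(os.path.dirname(exe_path), suffix + ".exe")
    shutil.copy2(exe_path, decoy)
    return decoy

# Demo with a stand-in file; a real run would point at the actual
# benchmark executable (e.g. 3DMarkVantage.exe).
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "3DMarkVantage.exe")
    with open(original, "wb") as f:
        f.write(b"stand-in bytes, not a real benchmark")
    decoy = make_decoy(original)
    identical = open(decoy, "rb").read() == open(original, "rb").read()
    renamed = os.path.basename(decoy) != os.path.basename(original)

print(identical, renamed)  # True True
```

If the bytes are identical but the scores are not, the driver is keying on the name.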

In this case, however, TFA indicates that the driver has a list of programs for which it enables these optimizations, which includes 3DMark but also a bunch of games and things. Is that just an extension of dodgy benchmark hacking, taking into account the fact that games are often used for benchmarking? Or is this optimization feature risky in some way (either unstable, or it degrades performance) and so only enabled for whitelisted applications?<br> <br>

If the former, Intel is being scummy. If the latter, I'm not so sure. From a theoretical purist standpoint, the idea that graphics drivers would need per-application manual tweaking kind of grosses me out; but if in fact that is the way the world works, and Intel can make the top N most common applications work better through manual tweaking, I can't really say that's a bad thing (assuming all the others aren't suffering for it).</htmltext>
<tokenext>On the one hand , a mechanism that uses the CPU for some aspects of the graphics process seems perfectly reasonable ( whether or not it is a good engineering decision is another matter , and would depend on whether it improves performance under desired workloads , what it does to energy consumption , total system cost , etc .
) , so I would n't blame intel for that alone .
On the other hand , though , the old " run 3Dmark , then run it again with the executable 's name changed " test looks pretty incriminating .
Historically , that has been a sign of dodgy benchmark hacks .
In this case , however , TFA indicates that the driver has a list of programs for which it enables these optimizations , which includes 3Dmark , but also includes a bunch of games and things .
Is that just an extension of dodgy benchmark hacking , taking into account the fact that games are often used for benchmarking ?
Or is this optimization feature risky in some way ( either unstable , or degrades performance ) and so only enabled for whitelisted applications ?
If the former , intel is being scummy .
If the latter , I 'm not so sure .
From a theoretical purist standpoint , the idea that graphics drivers would need per-application manual tweaking kind of grosses me out ; but , if in fact that is the way the world works , and intel can make the top N most common applications work better through manual tweaking , I 'm ca n't really say that that is a bad thing ( assuming all the others are n't suffering for it ) .</tokentext>
<sentencetext>On the one hand, a mechanism that uses the CPU for some aspects of the graphics process seems perfectly reasonable(whether or not it is a good engineering decision is another matter, and would depend on whether it improves performance under desired workloads, what it does to energy consumption, total system cost, etc.
), so I wouldn't blame intel for that alone.
On the other hand, though, the old "run 3Dmark, then run it again with the executable's name changed" test looks pretty incriminating.
Historically, that has been a sign of dodgy benchmark hacks.
In this case, however, TFA indicates that the driver has a list of programs for which it enables these optimizations, which includes 3Dmark, but also includes a bunch of games and things.
Is that just an extension of dodgy benchmark hacking, taking into account the fact that games are often used for benchmarking?
Or is this optimization feature risky in some way(either unstable, or degrades performance) and so only enabled for whitelisted applications?
If the former, intel is being scummy.
If the latter, I'm not so sure.
From a theoretical purist standpoint, the idea that graphics drivers would need per-application manual tweaking kind of grosses me out; but, if in fact that is the way the world works, and intel can make the top N most common applications work better through manual tweaking, I'm can't really say that that is a bad thing(assuming all the others aren't suffering for it).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29735357</id>
	<title>Re:Why not?</title>
	<author>Anonymous</author>
	<datestamp>1255464420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>I'm a PC gamer, I could care less if Intel or ATI or nVidea cheat on their benchmarks.</p></div><p>How much less could you care? It's nice to know you currently care an amount.</p>
	</htmltext>
<tokenext>I 'm a PC gamer , I could care less if Intel or ATI or nVidea cheat on their benchmarks .
How much less could you care ?
It 's nice to know you currently care an amount .</tokentext>
<sentencetext>I'm a PC gamer, I could care less if Intel or ATI or nVidea cheat on their benchmarks.
How much less could you care?
It's nice to know you currently care an amount.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729289</id>
	<title>Re:Hmm...</title>
	<author>Anonymous</author>
	<datestamp>1255466100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><a href="http://www.intel.com/assets/pdf/whitepaper/318888.pdf" title="intel.com" rel="nofollow">Vertex Processing Selection Capability</a> [intel.com]</p><p>That's what Intel call it.</p></htmltext>
<tokenext>Vertex Processing Selection Capability [ intel.com ] That 's what Intel call it .</tokentext>
<sentencetext>Vertex Processing Selection Capability [intel.com]That's what Intel call it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729649</id>
	<title>Duh</title>
	<author>ThePhilips</author>
	<datestamp>1255429140000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p> 3DMark Vantage was never a legit benchmark. Heavily tuned for Intel CPU and nVidia GPU architectures, it never actually meant a damn thing.

</p><p> Just compare the performance of a GeForce 285/295 vs. a Radeon 4870/5870 (any review) in 3DMark and in games. In 3DMark Vantage nVidia cards have close to a 50% advantage, while in real games Radeons sometimes score higher.

</p><p> The statistical <i>anomaly</i> alone is sufficient to dismiss 3DMark Vantage results as an <i>outlier</i>.</p></htmltext>
<tokenext>3DMark Vantage was never a legit benchmark .
Heavily tuned for Intel CPU and nVidia GPU architectures it never actually meant a damm thing .
Just compare performance of gf285/295 v. radeon 4870/5870 ( any review ) in 3DMark and in games .
In 3DMark Vantage nVidia cards have close to 50 \ % advantage while in real games radeons sometimes score higher .
The statistical anomaly alone is sufficient to dismiss 3DMark Vantage results as outlier .</tokentext>
<sentencetext> 3DMark Vantage was never a legit benchmark.
Heavily tuned for Intel CPU and nVidia GPU architectures it never actually meant a damm thing.
Just compare performance of gf285/295 v. radeon 4870/5870 (any review) in 3DMark and in games.
In 3DMark Vantage nVidia cards have close to 50\% advantage while in real games radeons sometimes score higher.
The statistical anomaly alone is sufficient to dismiss 3DMark Vantage results as outlier.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729901</id>
	<title>Re:Eh?</title>
	<author>beelsebob</author>
	<datestamp>1255432620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Not so, an integrated GPU is simply a (often low-power) GPU which uses the system's RAM instead of its own.  Because system memory buses are usually much, much slower than the ones on dedicated graphics cards, and because the IGP shares that bandwidth with the CPU, the IGP is in turn relatively slow.</p><p>It doesn't (normally) have anything to do with using the CPU for graphics computations.</p></htmltext>
<tokenext>Not so , an integrated GPU is simply a ( often low power ) GPU which uses the system 's RAM instead of it 's own RAM .
Because system memory buses are usually much much slower than the ones included on dedicated graphics cards , and because the IGP shares the bandwidth with the CPU , the IGP is in turn relatively slow.There 's not ( normally ) anything to do with using the CPU to do graphics computations .</tokentext>
<sentencetext>Not so, an integrated GPU is simply a (often low power) GPU which uses the system's RAM instead of it's own RAM.
Because system memory buses are usually much much slower than the ones included on dedicated graphics cards, and because the IGP shares the bandwidth with the CPU, the IGP is in turn relatively slow.There's not (normally) anything to do with using the CPU to do graphics computations.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730857</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>DMiax</author>
	<datestamp>1255443240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?</p></div><p>Offloading to the CPU has the disadvantage of using the CPU. If you are running only one program (usually the case with fullscreen games) and it is not clogging the CPU by itself (true for certain games), you can offload without degrading the experience; otherwise you'd better not.</p><p>It is understandable, then, why they want their best numbers to go on the benchmarks. We will see if it is considered reasonable or cheating.</p>
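The trade-off described here can be sketched as a tiny decision rule. This is purely illustrative; the thresholds are made-up values, not anything from Intel's actual driver:

```python
def should_offload(gpu_util, cpu_util, cpu_headroom=0.25):
    """Toy version of the offload trade-off described above.

    Moving vertex work onto the CPU only helps when the GPU is the
    bottleneck AND the CPU has spare capacity, because the offloaded
    work competes with the game's own CPU load. Both thresholds are
    hypothetical illustrative values.
    """
    gpu_bound = gpu_util >= 0.95
    cpu_has_room = cpu_util <= 1.0 - cpu_headroom
    return gpu_bound and cpu_has_room

print(should_offload(0.99, 0.40))  # GPU-bound, idle CPU -> True
print(should_offload(0.99, 0.90))  # CPU already loaded -> False
print(should_offload(0.60, 0.30))  # GPU not the bottleneck -> False
```

A whitelist is the crude static substitute for a rule like this: instead of measuring load, the driver just assumes known fullscreen games leave the CPU idle enough.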
	</htmltext>
<tokenext>The question that should be asked is : What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders ? Offloading to the CPU has the disadvantage of using the CPU .
If you are using one program only ( usually the case with fullscreen games ) and it is not clogging the CPU by itself ( happens for specific games ) you can offload without degrading experience , otherwise you better not.Then it is understandable why they want their best numbers to go on the benchmarks .
We will see if it is considered reasonable or cheating .</tokentext>
<sentencetext>The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?Offloading to the CPU has the disadvantage of using the CPU.
If you are using one program only (usually the case with fullscreen games) and it is not clogging the CPU by itself (happens for specific games) you can offload without degrading experience, otherwise you better not.Then it is understandable why they want their best numbers to go on the benchmarks.
We will see if it is considered reasonable or cheating.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29740111</id>
	<title>Intel Corks Drivers - That's News?</title>
	<author>Nom du Keyboard</author>
	<datestamp>1255445580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So now it's Intel corking their drivers - and this is news?  Maybe they're just getting in their practice so that when it comes to Laughabee they'll actually try to make it look competitive.
<br> <br>
AMD bought ATI in a panic after they couldn't merge with Nvidia (there wasn't room for 2 giant egos in that single company) to combat Intel Core + Larrabee on the same MCM.  Now Larrabee is a no-show for months, if not years, to come and it's time for Intel to start panicking.  Core i7 may be good, but AMD + ATI on the same die might be the better fit for most people.</htmltext>
<tokenext>So now it 's Intel corking their drivers - and this is news ?
Maybe they 're just getting in their practice so that when it comes to Laughabee they 'll actually try to make it look competitive .
AMD bought ATI in a panic after they could n't merge with Nvidia ( there was n't room for 2 giant egos in that single company ) to combat Intel Core + Larrabee on the same MCM .
Now Larrabee is a no-show for months , if not years , to come and it 's time for Intel to start panicking .
Core i7 may be good , but AMD + ATI on the same die might be the better fit for most people .</tokentext>
<sentencetext>So now it's Intel corking their drivers - and this is news?
Maybe they're just getting in their practice so that when it comes to Laughabee they'll actually try to make it look competitive.
AMD bought ATI in a panic after they couldn't merge with Nvidia (there wasn't room for 2 giant egos in that single company) to combat Intel Core + Larrabee on the same MCM.
Now Larrabee is a no-show for months, if not years, to come and it's time for Intel to start panicking.
Core i7 may be good, but AMD + ATI on the same die might be the better fit for most people.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>Eil</author>
	<datestamp>1255370880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>But see also Intel's response on page 2:</p><blockquote><div><p>We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance. 3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes. We have used similar techniques with DX9 in previous products and drivers. The benefit to users is optimized performance based on best use of the hardware available in the system. Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.</p></div> </blockquote><p>And the rest of page 2 indicates that offloading some of the work to the CPU does, for certain games, improve performance significantly. Offhand, this doesn't necessarily seem like a bad thing. Intel is just trying to make the most of the hardware in the whole machine. One would also do well to bear in mind that the GPU in question is an integrated graphics chipset: they're not out to compete against a modern gaming video adapter and thus have little incentive to pump their numbers in a synthetic benchmark. Nobody buys a motherboard based on the capabilities of the integrated graphics.</p><p>The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?</p>
	</htmltext>
<tokenext>But see also Intel 's response on page 2 : We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing , the CPU can assist with DX10 geometry processing to enhance overall performance .
3DMarkVantage is one of those workloads , as are Call of Juarez , Crysis , Lost Planet : Extreme Conditions , and Company of Heroes .
We have used similar techniques with DX9 in previous products and drivers .
The benefit to users is optimized performance based on best use of the hardware available in the system .
Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers .
And the rest of page 2 indicates that offloading some of the work to the CPU does , for certain games , improve performance significantly .
Offhand , this does n't necessarily seem like a bad thing .
Intel is just trying to make the most out of the hardware of the whole machine .
Also , one would also do well to bear in mind that the GPU in question is an integrated graphics chipset : they 're not out to compete against a modern gaming video adapter and thus have little incentive to pump their numbers in a synthetic benchmark .
Nobody buys a motherboard based on the capabilities of the integrated graphics.The question that should be asked is : What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders ?</tokentext>
<sentencetext>But see also Intel's response on page 2:We have engineered intelligence into our 4 series graphics driver such that when a workload saturates graphics engine with pixel and vertex processing, the CPU can assist with DX10 geometry processing to enhance overall performance.
3DMarkVantage is one of those workloads, as are Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes.
We have used similar techniques with DX9 in previous products and drivers.
The benefit to users is optimized performance based on best use of the hardware available in the system.
Our driver is currently in the certification process with Futuremark and we fully expect it will pass their certification as did our previous DX9 drivers.
And the rest of page 2 indicates that offloading some of the work to the CPU does, for certain games, improve performance significantly.
Offhand, this doesn't necessarily seem like a bad thing.
Intel is just trying to make the most out of the hardware of the whole machine.
Also, one would also do well to bear in mind that the GPU in question is an integrated graphics chipset: they're not out to compete against a modern gaming video adapter and thus have little incentive to pump their numbers in a synthetic benchmark.
Nobody buys a motherboard based on the capabilities of the integrated graphics.The question that should be asked is: What is the technical reason for the drivers singling out only a handful of games and one benchmark utility instead of performing these optimizations on all 3D scenes that the chipset renders?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29741969</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>phntm</author>
	<datestamp>1255511160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I sure bought my laptop mainly because it DIDN'T have Intel graphics, and my main comparison between models was the 3DMark results of the different graphics cards.<br>I've had an Intel chipset before, and it was the bottleneck for the entire computer.</p></htmltext>
<tokenext>I sure bought my laptop mainly because it DID N'T have intel graphics , and my main comparative between models was 3d mark results of the different graphics cards.I 've had an intel chipset before and it was the bottleneck for the entire computer .</tokentext>
<sentencetext>I sure bought my laptop mainly because it DIDN'T have intel graphics, and my main comparative between models was 3d mark results of the different graphics cards.I've had an intel chipset before and it was the bottleneck for the entire computer.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730613</id>
	<title>I hope that new apple systems don't get stuck with</title>
	<author>Joe The Dragon</author>
	<datestamp>1255441320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I hope the new Apple systems don't get stuck with this crap video plus a dual-core CPU. It's a new iMac, thinner than ever, with an Intel Core i3 CPU and half-as-fast video, starting at $1200. To get a real video card, the starting price is $1800.</p><p>The Mac mini with the slowest Core i3 and 2GB of RAM starts at $500-$600.</p><p>Apple, if you plan to pull that crap, at least offer a real desktop at $800-$1500+.</p></htmltext>
<tokenext>I hope that new apple systems do n't get stuck with this carp video + a dual core cpu .
It 's the new imac thinner then even with intel core i3 cpu and half as fast video starting at $ 1200 .
To get a real video card starting price is $ 1800.Mac mini with the slowest corei3 and 2gb of ram starting at $ 500- $ 600.APPLE IF you plan to pull that carp at least have a real desktop at $ 800- $ 1500 + .</tokentext>
<sentencetext>I hope that new apple systems don't get stuck with this carp video + a dual core cpu.
It's the new imac thinner then even with intel core i3 cpu and half as fast video starting at $1200.
To get a real video card starting price is $1800.Mac mini with the slowest corei3 and 2gb of ram starting at $500-$600.APPLE IF you plan to pull that carp at least have a real desktop at $800-$1500+.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729441</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>Anonymous</author>
	<datestamp>1255424940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Are you daft?<br>"Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes" are GAMES.</p><p>3DMarkVantage is not a GAME.</p></htmltext>
<tokenext>Are you daft ?
" Call of Juarez , Crysis , Lost Planet : Extreme Conditions , and Company of Heroes " are GAMES.3DMarkVantage ........ is not a GAME .</tokentext>
<sentencetext>Are you daft?
"Call of Juarez, Crysis, Lost Planet: Extreme Conditions, and Company of Heroes" are GAMES.3DMarkVantage ........    is not a GAME.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369</id>
	<title>Eh?</title>
	<author>Anonymous</author>
	<datestamp>1255367220000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>I thought offloading graphics computations to the CPU was the whole *point* of integrated video.</p></htmltext>
<tokenext>I thought offloading graphics computations to the CPU was the whole * point * of integrated video .</tokentext>
<sentencetext>I thought offloading graphics computations to the CPU was the whole *point* of integrated video.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728927</id>
	<title>Re:Doesn't 3DMark cheat too?</title>
	<author>pantherace</author>
	<datestamp>1255374300000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>You may be thinking of changing the CPUID on Via chips to GenuineIntel vs AuthenticAMD vs CentaurHauls.</p><p>There's one of the 'big' benchmark suites where the chip's score is roughly the same on AuthenticAMD and CentaurHauls, but gets a boost on GenuineIntel. Via's chips are the only ones with a (user-)changeable CPUID, so we don't know how differently-IDed AMD or Intel chips would do, but it's still interesting.</p><p>(First Googled link talking about it.)<br><a href="http://www.maximumpc.com/article/news/pcmark_memory_benchmark_favors_genuineintel_over_authenticamd" title="maximumpc.com" rel="nofollow">http://www.maximumpc.com/article/news/pcmark_memory_benchmark_favors_genuineintel_over_authenticamd</a> [maximumpc.com]</p></htmltext>
<tokenext>You may be thinking of changing the CPUID on Via chips to GenuineIntel vs AuthenticAMD vs CentaurHauls.There 's one of the 'big ' benchmark suites where the chip 's score is roughly the same on AuthenticAMD and CentaurHauls , but gets a boost on GenuineIntel .
Via 's chips are the only ones with ( user ) changeable cpuid , so we do n't know how differently IDed AMD or Intel do , but still interesting .
( First google 'd link talking about it .
) http : //www.maximumpc.com/article/news/pcmark \ _memory \ _benchmark \ _favors \ _genuineintel \ _over \ _authenticamd [ maximumpc.com ]</tokentext>
<sentencetext>You may be thinking of changing the CPUID on Via chips to GenuineIntel vs AuthenticAMD vs CentaurHauls.There's one of the 'big' benchmark suites where the chip's score is roughly the same on AuthenticAMD and CentaurHauls, but gets a boost on GenuineIntel.
Via's chips are the only ones with (user) changeable cpuid, so we don't know how differently IDed AMD or Intel do, but still interesting.
(First google'd link talking about it.
)http://www.maximumpc.com/article/news/pcmark\_memory\_benchmark\_favors\_genuineintel\_over\_authenticamd [maximumpc.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459</parent>
</comment>
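The vendor-string favoritism the comment above suspects can be sketched in a few lines. The vendor strings below are the real 12-byte CPUID identifiers, but the dispatch function and its behavior are purely a hypothetical illustration of the alleged pattern, not anything confirmed about PCMark:

```python
# Hypothetical sketch of the dispatch pattern the linked article alleges:
# selecting a code path from the CPUID vendor string rather than from the
# CPU's actual feature flags. Only the vendor strings here are real.
def pick_code_path(vendor_id: str) -> str:
    if vendor_id == "GenuineIntel":
        return "tuned"    # fast path granted on vendor name alone
    return "generic"      # "AuthenticAMD", "CentaurHauls", etc.
```

On a Via chip with a user-changeable CPUID, flipping the vendor string from "CentaurHauls" to "GenuineIntel" would be enough to move between branches, which is exactly the experiment the comment describes.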
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728565</id>
	<title>Re:Eh?</title>
	<author>parallel_prankster</author>
	<datestamp>1255369200000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>Effectively dividing tasks among CPUs is not the issue here. They want to benchmark the GPU, and they want to make sure you don't enable optimizations targeted specifically at the benchmark, which is exactly what Intel was shamelessly doing.</htmltext>
<tokenext>Effectively dividing tasks among CPUs is not the issue here .
They want to benchmark the GPU and they wan na make sure you do n't enable optimizations that are targeted specifically for the benchmark which Intel was doing shamelessly .</tokentext>
<sentencetext>Effectively dividing tasks among CPUs is not the issue here.
They want to benchmark the GPU and they wanna make sure you don't enable optimizations that are targeted specifically for the benchmark which Intel was doing shamelessly.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729121</id>
	<title>Cheating is mandatory for corporate thugs...</title>
	<author>Bob_Who</author>
	<datestamp>1255377300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>But they're expected not to get caught.  The truth will screw up the inflated stock values.  Shareholders get rabid, which makes the lawyers have to work slightly longer than an hour.  Just weed out the inferior ones who fail at lying and stealing and cheating like a professional capitalist, and send them off to Radio Shack in Moldova.</htmltext>
<tokenext>But they 're expected not to get caught .
The truth will screw up the inflated stock values .
Shareholders get rabid , which makes the lawyers have to work slightly longer than an hour .
Just weed out the inferior ones who fail at lying and stealing and cheating like a professional capitalist , and send them off to Radio Shack in Moldova .</tokentext>
<sentencetext>But they're expected not to get caught.
The truth will screw up the inflated stock values.
Shareholders get rabid, which makes the lawyers have to work slightly longer than an hour.
Just weed out the inferior ones who fail at lying and stealing and cheating like a professional capitalist, and send them off to Radio Shack in Moldova.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732477</id>
	<title>I don't understand...</title>
	<author>joetomato</author>
	<datestamp>1255451520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Wouldn't it be trivial to have the benchmark app randomly rename itself every time it runs? It would be far less trivial to optimize for djdusah89efhsl123d.exe...</htmltext>
<tokenext>Would n't it be trivial to have the benchmark app randomly rename itself every time it runs ?
It would be far less trivial to optimize for djdusah89efhsl123d.exe.. .</tokentext>
<sentencetext>Wouldn't it be trivial to have the benchmark app randomly rename itself every time it runs?
It would be far less trivial to optimize for djdusah89efhsl123d.exe...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451</id>
	<title>If you're too lazy to RTFA...</title>
	<author>Jonboy X</author>
	<datestamp>1255367760000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>Just look at the pics. Changing the name of the executable changed the results dramatically. The driver is apparently detecting when it's running 3DMark (or certain other specific apps) and switching to another mode to boost its scores/FPS numbers.</p></htmltext>
<tokenext>Just look at the pics .
Changing the name of the executable changed the results dramatically .
The driver is apparently detecting when it 's running a 3DMark ( or some other specific apps ) and switches to some other mode to boost its scores/FPS markings .</tokentext>
<sentencetext>Just look at the pics.
Changing the name of the executable changed the results dramatically.
The driver is apparently detecting when it's running a 3DMark (or some other specific apps) and switches to some other mode to boost its scores/FPS markings.</sentencetext>
</comment>
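The executable-name detection described in the comment above can be sketched minimally. The whitelist contents and function name below are illustrative guesses, not Intel's actual driver internals:

```python
# Illustrative sketch of name-based detection: a driver keeping a hardcoded
# whitelist of executables and enabling its CPU-offload path only on a match.
# This list is a guess for illustration, not the contents of Intel's INF.
OFFLOAD_WHITELIST = {"3dmarkvantage.exe", "crysis.exe", "relichoh.exe"}

def offload_enabled(exe_name: str) -> bool:
    return exe_name.lower() in OFFLOAD_WHITELIST
```

Renaming the binary is enough to fall off the list, which is why The Tech Report's renamed 3DMark executable produced such different scores: under this sketch, `offload_enabled("djdusah89efhsl123d.exe")` is False.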
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729669</id>
	<title>Detecting CPU consumption</title>
	<author>flux</author>
	<datestamp>1255429440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The article isn't loading for me, but: can't they simply measure the amount of CPU used during the benchmark and factor that into the score? I don't think this kind of offloading is inherently evil (except that in this case the 3DMark rules forbid using empirical data about the benchmark to optimize performance; then again, I would imagine many other pieces of software get the same treatment without any bad effect on quality or the game experience). Detecting the situation dynamically, though, would definitely be complicated, and it might sometimes give the wrong answer.</p><p>One pretty useful heuristic for this kind of optimization would be: if CPU usage is already high without offloading GPU work to the CPU, don't do it. The drivers could even have a 'profiling' mode, which would perhaps slow performance at first but figure out the optimal parameters for running the program.</p></htmltext>
<tokenext>The article is n't loading for me , but : ca n't they simply measure the amount of CPU used during the benchmark and use that information in the benchmark ?
I do n't think it 's basically evil to perform that kind of offloading ( except in this case when the rules of 3DMark forbid using empirical data on it to optimize performance ; but then again , I would imagine many other pieces of software also get this treatment without bad effects on quality or game experience ) , but dynamically detecting the situation would definitely be complicated ; and it might even sometimes give the wrong answer.One pretty useful heuristic for this kind of optimization would however be " is the CPU usage high without offloading GPU work to CPU : if so , do n't do it " .
Hey , maybe the drivers could have a 'profiling'-mode , which would perhaps slow the performance but figure out the optimal parameters for running the program .</tokentext>
<sentencetext>The article isn't loading for me, but: can't they simply measure the amount of CPU used during the benchmark and use that information in the benchmark?
I don't think it's basically evil to perform that kind of offloading (except in this case when the rules of 3DMark forbid using empirical data on it to optimize performance; but then again, I would imagine many other pieces of software also get this treatment without bad effects on quality or game experience), but dynamically detecting the situation would definitely be complicated; and it might even sometimes give the wrong answer.One pretty useful heuristic for this kind of optimization would however be "is the CPU usage high without offloading GPU work to CPU: if so, don't do it".
Hey, maybe the drivers could have a 'profiling'-mode, which would perhaps slow the performance but figure out the optimal parameters for running the program.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730717</id>
	<title>Re:If you're too lazy to RTFA...</title>
	<author>hattig</author>
	<datestamp>1255442280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Intelligence" isn't a (small) hardcoded list of executable names that enables this optimisation, a list which happens to include the industry-leading benchmarking application.</p><p>This tweak gives Intel integrated graphics a 20% higher score than ATI integrated graphics, despite the Intel parts performing half to one third as fast in real-world gaming tests.</p><p>Who is cheating? Intel.</p><p>The benchmark guidelines also forbid it. Intel should add the required "intelligence" properly and make it generic throughout their drivers.</p></htmltext>
<tokenext>" intelligence " is n't a ( small ) hardcoded list of executable names to enable this optimisation , including the industry leading benchmarking application.This tweak gives Intel Integrated Graphics 20 \ % higher score than ATI Integrated Graphics , despite performing half to one third as fast in real-world gaming tests.Who is cheating ?
Intel.Also the benchmark guidelines forbid it .
Intel should add in the required " intelligence " and make it generic throughout their drivers .</tokentext>
<sentencetext>"intelligence" isn't a (small) hardcoded list of executable names to enable this optimisation, including the industry leading benchmarking application.This tweak gives Intel Integrated Graphics 20\% higher score than ATI Integrated Graphics, despite performing half to one third as fast in real-world gaming tests.Who is cheating?
Intel.Also the benchmark guidelines forbid it.
Intel should add in the required "intelligence" and make it generic throughout their drivers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507</id>
	<title>Re:Eh?</title>
	<author>The MAZZTer</author>
	<datestamp>1255368540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext>And here I thought the whole point of not doing video on the CPU was to offload it to a dedicated chip!</htmltext>
<tokenext>And here I thought the whole point of not doing video on the CPU was to offload it to a dedicated chip !</tokentext>
<sentencetext>And here I thought the whole point of not doing video on the CPU was to offload it to a dedicated chip!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729401</id>
	<title>Re:Which would make sense...</title>
	<author>buchner.johannes</author>
	<datestamp>1255424520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They should remarket it as "application-targeted profiles" and sell it as an optimization feature.</p></htmltext>
<tokenext>They should remarket it as " application-targeted profiles " and sell it as a optimization feature .</tokentext>
<sentencetext>They should remarket it as "application-targeted profiles" and sell it as a optimization feature.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728599</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732657</id>
	<title>Where's the cheating?</title>
	<author>TomRC</author>
	<datestamp>1255452300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The "rules" for 3DMark are basically a statement of which drivers Futuremark will or will not approve for use with 3DMark.<br>The driver in question is not approved for 3DMark. Where's the cheating?<br>If anything, Intel should provide a GUI to let users enable CPU enhancement for games that aren't listed in the INF by default.</p></htmltext>
<tokenext>The " rules " for 3dMark are basically a statement of what drivers they will/will not approve for use with 3dMark.The driver in question is not approved for 3dMark .
Where 's the cheating ? If anything , they should provide a GUI to let users enable CPU enhancement for a game that 's not listed in the INF by default .</tokentext>
<sentencetext>The "rules" for 3dMark are basically a statement of what drivers they will/will not approve for use with 3dMark.The driver in question is not approved for 3dMark.
Where's the cheating?If anything, they should provide a GUI to let users enable CPU enhancement for a game that's not listed in the INF by default.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355</id>
	<title>Good reporting there Ric</title>
	<author>Anonymous</author>
	<datestamp>1255367100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Thanks for telling all of us that the best measure of hardware's in-game performance is... to benchmark it with a game.</p></htmltext>
<tokenext>Thanks for telling all of us that the best measure of hardware 's performance ingame is... to benchmark it with a game .</tokentext>
<sentencetext>Thanks for telling all of us that the best measure of hardware's performance ingame is... to benchmark it with a game.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729339</id>
	<title>Re:Hmm...</title>
	<author>Anonymous</author>
	<datestamp>1255466940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>if in fact that is the way the world works, and intel can make the top N most common applications work better through manual tweaking, I can't really say that that is a bad thing</p></div><p>It's definitely not the way it should be. Nvidia and ATI are capable of generic drivers/chips. Intel, however, is notorious for shoddy graphics chips and drivers; they are horrible hacks. Whenever a game player or developer (even of a less graphically demanding title) has a problem and lists some Intel graphics chip, the only response you can usually give is "Oh yeah, that's Intel for you. Get a real graphics card."</p><p>If anything, those cheaters (not only this, but also their claimed 3D API support, which is a joke) are a serious threat to PC gaming, as they suggest 3D capabilities in Intel computers that are sub-standard.</p>
	</htmltext>
<tokenext>if in fact that is the way the world works , and intel can make the top N most common applications work better through manual tweaking , I 'm ca n't really say that that is a bad thingIt 's definitely not the way it should be .
Nvidia and ATI are capable of generic drivers/chips .
However Intel is notorious for shoddy graphics chips and drivers .
They are horrible hacks .
Whenever a game player/developer ( even with less graphically demanding titles ) has a problem and lists some Intel graphics chip , the only response you can usually give is " Oh yea , that 's Intel for you .
Get a real graphics card .
" .If anything those cheaters ( not only this but also claiming 3D API support that is a joke ) are a serious threat to PC gaming as they suggest 3D capabilities of Intel computers that is sub-standard .</tokentext>
<sentencetext>if in fact that is the way the world works, and intel can make the top N most common applications work better through manual tweaking, I'm can't really say that that is a bad thingIt's definitely not the way it should be.
Nvidia and ATI are capable of generic drivers/chips.
However Intel is notorious for shoddy graphics chips and drivers.
They are horrible hacks.
Whenever a game player/developer (even with less graphically demanding titles) has a problem and lists some Intel graphics chip, the only response you can usually give is "Oh yea, that's Intel for you.
Get a real graphics card.
".If anything those cheaters (not only this but also claiming 3D API support that is a joke) are a serious threat to PC gaming as they suggest 3D capabilities of Intel computers that is sub-standard.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728663</id>
	<title>Re:Doesn't 3DMark cheat too?</title>
	<author>Idiomatick</author>
	<datestamp>1255370340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Uh, Google doesn't turn up anything, so I'll go with no.</htmltext>
<tokenext>Uh google does n't give anything so I 'll go with no .</tokentext>
<sentencetext>Uh google doesn't give anything so I'll go with no.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730671
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729289
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728745
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730857
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729901
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29737431
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728661
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729271
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728927
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728795
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728623
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729509
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729441
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728501
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29731939
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728981
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728739
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29741969
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729325
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728663
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728965
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732897
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729401
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728599
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29735357
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728565
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729339
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_12_2341240_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730717
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730613
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728485
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728459
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728927
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728663
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728531
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728623
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728795
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29735357
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728451
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728661
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728739
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728697
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29741969
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729441
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730717
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730857
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29737431
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728355
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728601
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728745
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729325
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728981
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29732897
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728501
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728445
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729289
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729115
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729509
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29731939
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729339
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730425
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730569
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728561
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728383
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728549
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728965
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728843
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728369
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728507
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29730671
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729271
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729901
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728565
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728599
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29729401
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_12_2341240.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_12_2341240.29728449
</commentlist>
</conversation>
