<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article10_03_16_1346258</id>
	<title>Blazing Fast Password Recovery With New ATI Cards</title>
	<author>CmdrTaco</author>
	<datestamp>1268751300000</datestamp>
	<htmltext>An anonymous reader writes <i>"ElcomSoft accelerates the recovery of Wi-Fi passwords and password-protected iPhone and iPod backups by using ATI video cards. The support of ATI Radeon 5000 series video accelerators allows ElcomSoft to perform <a href="http://www.net-security.org/secworld.php?id=9021">password recovery up to 20 times faster compared to Intel top of the line quad-core CPUs</a>, and up to two times faster compared to enterprise-level NVIDIA Tesla solutions. Benchmarks performed by ElcomSoft demonstrate that ATI Radeon HD5970 accelerated password recovery works up to 20 times faster than Core i7-960, Intel's current top of the line CPU unit."</i></htmltext>
<tokentext>An anonymous reader writes " ElcomSoft accelerates the recovery of Wi-Fi passwords and password-protected iPhone and iPod backups by using ATI video cards .
The support of ATI Radeon 5000 series video accelerators allows ElcomSoft to perform password recovery up to 20 times faster compared to Intel top of the line quad-core CPUs , and up to two times faster compared to enterprise-level NVIDIA Tesla solutions .
Benchmarks performed by ElcomSoft demonstrate that ATI Radeon HD5970 accelerated password recovery works up to 20 times faster than Core i7-960 , Intel 's current top of the line CPU unit .
"</tokentext>
<sentencetext>An anonymous reader writes "ElcomSoft accelerates the recovery of Wi-Fi passwords and password-protected iPhone and iPod backups by using ATI video cards.
The support of ATI Radeon 5000 series video accelerators allows ElcomSoft to perform password recovery up to 20 times faster compared to Intel top of the line quad-core CPUs, and up to two times faster compared to enterprise-level NVIDIA Tesla solutions.
Benchmarks performed by ElcomSoft demonstrate that ATI Radeon HD5970 accelerated password recovery works up to 20 times faster than Core i7-960, Intel's current top of the line CPU unit.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496830</id>
	<title>Re:Out of curiosity...</title>
	<author>ShadowRangerRIT</author>
	<datestamp>1268758740000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>I run the <a href="http://folding.stanford.edu/" title="stanford.edu">Folding@home</a> [stanford.edu] <a href="http://folding.stanford.edu/English/FAQ-NVIDIA" title="stanford.edu">GPU client</a> [stanford.edu] on my GeForce 8800 GTX. On Vista and later OSes (pre-Vista, the driver model wasn't well adapted to GPGPU and this leads to a polling driven communication scheme which is really inefficient), the effect on resources is unnoticeable aside from during games (where I kill the client to reduce jerkiness); the GPGPU work is lower priority and gets shunted aside from rendering, though the latency involved is a problem for graphics intensive games. For less demanding work and general usage, it's unnoticeable; the GPU is perfectly capable of drawing the screen and curing Alzheimer's at the same time.<nobr> <wbr></nobr>:-)</htmltext>
<tokentext>I run the Folding @ home [ stanford.edu ] GPU client [ stanford.edu ] on my GeForce 8800 GTX .
On Vista and later OSes ( pre-Vista , the driver model was n't well adapted to GPGPU and this leads to a polling driven communication scheme which is really inefficient ) , the effect on resources is unnoticeable aside from during games ( where I kill the client to reduce jerkiness ) ; the GPGPU work is lower priority and gets shunted aside from rendering , though the latency involved is a problem for graphics intensive games .
For less demanding work and general usage , it 's unnoticeable ; the GPU is perfectly capable of drawing the screen and curing Alzheimer 's at the same time .
: - )</tokentext>
<sentencetext>I run the Folding@home [stanford.edu] GPU client [stanford.edu] on my GeForce 8800 GTX.
On Vista and later OSes (pre-Vista, the driver model wasn't well adapted to GPGPU and this leads to a polling driven communication scheme which is really inefficient), the effect on resources is unnoticeable aside from during games (where I kill the client to reduce jerkiness); the GPGPU work is lower priority and gets shunted aside from rendering, though the latency involved is a problem for graphics intensive games.
For less demanding work and general usage, it's unnoticeable; the GPU is perfectly capable of drawing the screen and curing Alzheimer's at the same time.
:-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497016</id>
	<title>ElcomSoft updates their Slashvertisements?</title>
	<author>InvisiBill</author>
	<datestamp>1268759340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><a href="http://it.slashdot.org/story/09/01/15/1334222/GPUs-Used-To-Crack-WiFi-Passwords-Faster" title="slashdot.org">http://it.slashdot.org/story/09/01/15/1334222/GPUs-Used-To-Crack-WiFi-Passwords-Faster</a> [slashdot.org]
<p>This seems to be an update of last year's story, just to mention that the HD5000 series is now supported, and it's faster on the newer, faster video cards.</p></htmltext>
<tokentext>http : //it.slashdot.org/story/09/01/15/1334222/GPUs-Used-To-Crack-WiFi-Passwords-Faster [ slashdot.org ] This seems to be an update of last year 's story , just to mention that the HD5000 series is now supported , and it 's faster on the newer , faster video cards .</tokentext>
<sentencetext>http://it.slashdot.org/story/09/01/15/1334222/GPUs-Used-To-Crack-WiFi-Passwords-Faster [slashdot.org]
This seems to be an update of last year's story, just to mention that the HD5000 series is now supported, and it's faster on the newer, faster video cards.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496204</id>
	<title>Re:My password is safe</title>
	<author>Anonymous</author>
	<datestamp>1268756520000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Dude, haven't you heard? It's really insecure to use such a short password. And yours is surely the shortest EVAR.</htmltext>
<tokentext>Dude , have n't you heard ?
It 's really insecure to use such a short password .
And yours is surely the shortest EVAR .</tokentext>
<sentencetext>Dude, haven't you heard?
It's really insecure to use such a short password.
And yours is surely the shortest EVAR.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495948</id>
	<title>I like the use of the word "Recovery"</title>
	<author>wisebabo</author>
	<datestamp>1268755560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think we all know what they really mean.<nobr> <wbr></nobr>;)</p><p>(Anyway, I'm also impressed by the power shown by the GPUs.  Its a good demonstration that some of the new technologies (CULA? CUDA?) that allow "regular" programmers to use this power actually will really speed up some things.)</p></htmltext>
<tokentext>I think we all know what they really mean .
; ) ( Anyway , I 'm also impressed by the power shown by the GPUs .
Its a good demonstration that some of the new technologies ( CULA ?
CUDA ? ) that allow " regular " programmers to use this power actually will really speed up some things .
)</tokentext>
<sentencetext>I think we all know what they really mean.
;)(Anyway, I'm also impressed by the power shown by the GPUs.
Its a good demonstration that some of the new technologies (CULA?
CUDA?) that allow "regular" programmers to use this power actually will really speed up some things.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496088</id>
	<title>80 Streams vs 4 Cores.</title>
	<author>Anonymous</author>
	<datestamp>1268756160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I should hope they could do 20 times better.</p></htmltext>
<tokentext>I should hope they could do 20 times better .</tokentext>
<sentencetext>I should hope they could do 20 times better.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496676</id>
	<title>Re:Slashvertisement</title>
	<author>elrous0</author>
	<datestamp>1268758260000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>Agreed, looks more like the kind of "story" we'd see posted by kdawson, not Taco.</htmltext>
<tokentext>Agreed , looks more like the kind of " story " we 'd see posted by kdawson , not Taco .</tokentext>
<sentencetext>Agreed, looks more like the kind of "story" we'd see posted by kdawson, not Taco.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496122</id>
	<title>Re:GPUs</title>
	<author>imgod2u</author>
	<datestamp>1268756220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>To some level, CPU's have been moving to be more GPU like for a long time. SIMD (SSE, AltiVec, NEON) are GPU features that made their way to CPU's. Ditto for parallel, long pipelines. Remember the Pentium 4? That was a huge step in the GPU direction.</p><p>There are two problems with that approach:</p><p>1. Code that isn't pure number-crunching doesn't run well on such a compute model.<br>2. The model is almost entirely memory-starved. GPU's have up to a GB of high-speed, dedicated RAM on the card itself. CPU's have to live with high-density (relatively) slot-loaded memory.</p><p>AMD is moving in a direction where the GPU compute parts are fed by the CPU front-end. As we move forward, I suspect we'll see more of a "fusion" if you will (don't sue me) of the two compute models.</p></htmltext>
<tokentext>To some level , CPU 's have been moving to be more GPU like for a long time .
SIMD ( SSE , AltiVec , NEON ) are GPU features that made their way to CPU 's .
Ditto for parallel , long pipelines .
Remember the Pentium 4 ?
That was a huge step in the GPU direction.There are two problems with that approach : 1 .
Code that is n't pure number-crunching does n't run well on such a compute model.2 .
The model is almost entirely memory-starved .
GPU 's have up to a GB of high-speed , dedicated RAM on the card itself .
CPU 's have to live with high-density ( relatively ) slot-loaded memory.AMD is moving in a direction where the GPU compute parts are fed by the CPU front-end .
As we move forward , I suspect we 'll see more of a " fusion " if you will ( do n't sue me ) of the two compute models .</tokentext>
<sentencetext>To some level, CPU's have been moving to be more GPU like for a long time.
SIMD (SSE, AltiVec, NEON) are GPU features that made their way to CPU's.
Ditto for parallel, long pipelines.
Remember the Pentium 4?
That was a huge step in the GPU direction.There are two problems with that approach:1.
Code that isn't pure number-crunching doesn't run well on such a compute model.2.
The model is almost entirely memory-starved.
GPU's have up to a GB of high-speed, dedicated RAM on the card itself.
CPU's have to live with high-density (relatively) slot-loaded memory.AMD is moving in a direction where the GPU compute parts are fed by the CPU front-end.
As we move forward, I suspect we'll see more of a "fusion" if you will (don't sue me) of the two compute models.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502850</id>
	<title>Re:GPUs</title>
	<author>BOFHelsinki</author>
	<datestamp>1268741580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i> Remember the Pentium 4? That was a huge step in the GPU direction. </i> <br> <br>

What? <br> <br>

(And try DSP for a better example of a CPU's SIMD goal rather than a GPU.)</htmltext>
<tokentext>Remember the Pentium 4 ?
That was a huge step in the GPU direction .
What ? ( And try DSP for a better example of a CPU 's SIMD goal rather than a GPU .
)</tokentext>
<sentencetext> Remember the Pentium 4?
That was a huge step in the GPU direction.
What?  

(And try DSP for a better example of a CPU's SIMD goal rather than a GPU.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496122</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497774</id>
	<title>john the cracker and pdfcrack, other apps?</title>
	<author>Anonymous</author>
	<datestamp>1268762040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>ok, I understand the differences between linear processing and parallel processing.  And the differences between cpus and gpus.  And I understand that with some coding, nVidia/Tesla and ATI gpu cards can be used for parallel processing with applications the software (Tesla?) ports to, but <b>will apps like john the cracker and pdfcrack be able to take advantage of the gpu card sitting in my desktop or laptop?</b>  Will it take some coding by the maintainers of john and pdfcrack to gain this ability, or is it too complicated?  Can other apps take advantage of gpu processing also (I'm thinking apache (multiworker, increased processes instead of increased threads, etc), databases, ethernet, etc.
<br> <br>
Better yet, can there be a middle layer, like (God forbid) java, modperl, php's engine or other that can be written as sort of a plugin to nVidia/Tesla or ATI which automates the process of enabling the parallel processing ability of the gpu cards to be used with any app as long as the middle layer ports the gpu abilities to a standard?
<br> <br>
Nice to dream about it anyway.  Can't wait for 8/16 cores and 8/16 GB standard ram shipping in laptops and low end desktops.</htmltext>
<tokentext>ok , I understand the differences between linear processing and parallel processing .
And the differences between cpus and gpus .
And I understand that with some coding , nVidia/Tesla and ATI gpu cards can be used for parallel processing with applications the software ( Tesla ?
) ports to , but will apps like john the cracker and pdfcrack be able to take advantage of the gpu card sitting in my desktop or laptop ?
Will it take some coding by the maintainers of john and pdfcrack to gain this ability , or is it too complicated ?
Can other apps take advantage of gpu processing also ( I 'm thinking apache ( multiworker , increased processes instead of increased threads , etc ) , databases , ethernet , etc .
Better yet , can there be a middle layer , like ( God forbid ) java , modperl , php 's engine or other that can be written as sort of a plugin to nVidia/Tesla or ATI which automates the process of enabling the parallel processing ability of the gpu cards to be used with any app as long as the middle layer ports the gpu abilities to a standard ?
Nice to dream about it anyway .
Ca n't wait for 8/16 cores and 8/16 GB standard ram shipping in laptops and low end desktops .</tokentext>
<sentencetext>ok, I understand the differences between linear processing and parallel processing.
And the differences between cpus and gpus.
And I understand that with some coding, nVidia/Tesla and ATI gpu cards can be used for parallel processing with applications the software (Tesla?
) ports to, but will apps like john the cracker and pdfcrack be able to take advantage of the gpu card sitting in my desktop or laptop?
Will it take some coding by the maintainers of john and pdfcrack to gain this ability, or is it too complicated?
Can other apps take advantage of gpu processing also (I'm thinking apache (multiworker, increased processes instead of increased threads, etc), databases, ethernet, etc.
Better yet, can there be a middle layer, like (God forbid) java, modperl, php's engine or other that can be written as sort of a plugin to nVidia/Tesla or ATI which automates the process of enabling the parallel processing ability of the gpu cards to be used with any app as long as the middle layer ports the gpu abilities to a standard?
Nice to dream about it anyway.
Can't wait for 8/16 cores and 8/16 GB standard ram shipping in laptops and low end desktops.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760</id>
	<title>My password is safe</title>
	<author>Anonymous</author>
	<datestamp>1268755020000</datestamp>
	<modclass>Informative</modclass>
	<modscore>0</modscore>
	<htmltext><p>Because it's in my pants!</p></htmltext>
<tokentext>Because it 's in my pants !</tokentext>
<sentencetext>Because it's in my pants!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31507414</id>
	<title>Re:Stop with the advertising</title>
	<author>Anonymous</author>
	<datestamp>1268835600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>as an IT security guy, if you're so focused on what attacks are practical \_now\_ then you've failed, remind me not to put any data I care in your company services.</htmltext>
<tokentext>as an IT security guy , if you 're so focused on what attacks are practical \ _now \ _ then you 've failed , remind me not to put any data I care in your company services .</tokentext>
<sentencetext>as an IT security guy, if you're so focused on what attacks are practical \_now\_ then you've failed, remind me not to put any data I care in your company services.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499672</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</id>
	<title>GPUs</title>
	<author>Thyamine</author>
	<datestamp>1268755500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>This isn't the first story about how crazy fast GPUs are for crunching.  I know very little about that level of hardware, but why aren't we incorporating these types of things into CPUs?  Is the coding/assembly so different that it doesn't translate?  Do they only do certain kinds of processing really well (it is a GPU after all), so it couldn't handle other more 'mundane' OS needs?</htmltext>
<tokentext>This is n't the first story about how crazy fast GPUs are for crunching .
I know very little about that level of hardware , but why are n't we incorporating these types of things into CPUs ?
Is the coding/assembly so different that it does n't translate ?
Do they only do certain kinds of processing really well ( it is a GPU after all ) , so it could n't handle other more 'mundane ' OS needs ?</tokentext>
<sentencetext>This isn't the first story about how crazy fast GPUs are for crunching.
I know very little about that level of hardware, but why aren't we incorporating these types of things into CPUs?
Is the coding/assembly so different that it doesn't translate?
Do they only do certain kinds of processing really well (it is a GPU after all), so it couldn't handle other more 'mundane' OS needs?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</id>
	<title>Out of curiosity...</title>
	<author>Anonymous</author>
	<datestamp>1268756100000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I keep hearing stories about using GPUs for non-GPU computations, but has anybody here tried it?</p><p>What does your screen look like while a program like this is running?</p></htmltext>
<tokentext>I keep hearing stories about using GPUs for non-GPU computations , but has anybody here tried it ? What does your screen look like while a program like this is running ?</tokentext>
<sentencetext>I keep hearing stories about using GPUs for non-GPU computations, but has anybody here tried it?What does your screen look like while a program like this is running?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497670</id>
	<title>It is not ATI, it is AMD</title>
	<author>postmortem</author>
	<datestamp>1268761680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>ATi is sub-brand of AMD.</p><p>Then it sounds even better, we all like when underdog beats 10-times bigger Co.</p></htmltext>
<tokentext>ATi is sub-brand of AMD.Then it sounds even better , we all like when underdog beats 10-times bigger Co .</tokentext>
<sentencetext>ATi is sub-brand of AMD.Then it sounds even better, we all like when underdog beats 10-times bigger Co.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496698</id>
	<title>Re:GPUs</title>
	<author>0123456</author>
	<datestamp>1268758320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I know very little about that level of hardware, but why aren't we incorporating these types of things into CPUs?</p></div><p>Because most people don't want their CPU consuming 300W of power when idle?</p></htmltext>
<tokentext>I know very little about that level of hardware , but why are n't we incorporating these types of things into CPUs ? Because most people do n't want their CPU consuming 300W of power when idle ?</tokentext>
<sentencetext>I know very little about that level of hardware, but why aren't we incorporating these types of things into CPUs?Because most people don't want their CPU consuming 300W of power when idle?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496934</id>
	<title>Re:Stop with the advertising</title>
	<author>Anonymous</author>
	<datestamp>1268759040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>My Via Nano does this same thing about 27\% faster than the fastest Intel part.<br> <br>So there!</htmltext>
<tokentext>My Via Nano does this same thing about 27 \ % faster than the fastest Intel part .
So there !</tokentext>
<sentencetext>My Via Nano does this same thing about 27\% faster than the fastest Intel part.
So there!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502542</id>
	<title>Re:GPUs</title>
	<author>BOFHelsinki</author>
	<datestamp>1268739840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Nvidia and ATI use confusing nomenclature -- those "cores" are nothing like CPU cores. It's easier to count the ALUs: the new Radeons have 3200 per chip. That's obviously a humongous amount. But there are extremely imposing limitations on what can be run when. The datasets must be of quite specific and distinct type to extract the "oomph". And regardless of data parallelism, any branchy code will simply suck -- the huge inherent latencies dictate that.</htmltext>
<tokentext>Nvidia and ATI use confusing nomenclature -- those " cores " are nothing like CPU cores .
It 's easier to count the ALUs : the new Radeons have 3200 per chip .
That 's obviously a humongous amount .
But there are extremely imposing limitations on what can be run when .
The datasets must be of quite specific and distinct type to extract the " oomph " .
And regardless of data parallelism , any branchy code will simply suck -- the huge inherent latencies dictate that .</tokentext>
<sentencetext>Nvidia and ATI use confusing nomenclature -- those "cores" are nothing like CPU cores.
It's easier to count the ALUs: the new Radeons have 3200 per chip.
That's obviously a humongous amount.
But there are extremely imposing limitations on what can be run when.
The datasets must be of quite specific and distinct type to extract the "oomph".
And regardless of data parallelism, any branchy code will simply suck -- the huge inherent latencies dictate that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499392</id>
	<title>MaLDiTo\_TeXuGo</title>
	<author>Anonymous</author>
	<datestamp>1268768280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>How much faster is ATI 5000 compared to a PS3 Cell BE?</p></htmltext>
<tokentext>How much faster is ATI 5000 compared to a PS3 Cell BE ?</tokentext>
<sentencetext>How much faster is ATI 5000 compared to a PS3 Cell BE?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496916</id>
	<title>Re:Stop with the advertising</title>
	<author>drachenstern</author>
	<datestamp>1268759040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>well, I've not RTFA but if they can get double the performance of a Tesla system using much cheaper (as I recall it's expensive, which isn't saying much ~ I refuse to google if I won't RTFA) video cards isn't that something to talk about?</p><p>BAH, now you've got me bothered to RTFA... guess I should go do work instead?</p></htmltext>
<tokentext>well , I 've not RTFA but if they can get double the performance of a Tesla system using much cheaper ( as I recall it 's expensive , which is n't saying much ~ I refuse to google if I wo n't RTFA ) video cards is n't that something to talk about ? BAH , now you 've got me bothered to RTFA... guess I should go do work instead ?</tokentext>
<sentencetext>well, I've not RTFA but if they can get double the performance of a Tesla system using much cheaper (as I recall it's expensive, which isn't saying much ~ I refuse to google if I won't RTFA) video cards isn't that something to talk about?BAH, now you've got me bothered to RTFA... guess I should go do work instead?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496674</id>
	<title>Re:Stop with the advertising</title>
	<author>Anonymous</author>
	<datestamp>1268758200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>And a bit of an and underhanded advert for ATI.  'Password recovery' is an inherently parallel problem that really likes the sort of math gpus do, and not so much the sort CPU's do.  The ATI 5000 series are the fastest GPU's available at retail right now, doesn't take a genius to put 2 and 2 together here.  Anyone who knows anything about NVIDIA's workstation parts knows they are not radical departures from their current retail chips so saying your new fancy retail part is twice as fast as the workstation version of the other guys last gen part is stating the obvious.</p></htmltext>
<tokentext>And a bit of an and underhanded advert for ATI .
'Password recovery ' is an inherently parallel problem that really likes the sort of math gpus do , and not so much the sort CPU 's do .
The ATI 5000 series are the fastest GPU 's available at retail right now , does n't take a genius to put 2 and 2 together here .
Anyone who knows anything about NVIDIA 's workstation parts knows they are not radical departures from their current retail chips so saying your new fancy retail part is twice as fast as the workstation version of the other guys last gen part is stating the obvious .</tokentext>
<sentencetext>And a bit of an and underhanded advert for ATI.
'Password recovery' is an inherently parallel problem that really likes the sort of math gpus do, and not so much the sort CPU's do.
The ATI 5000 series are the fastest GPU's available at retail right now, doesn't take a genius to put 2 and 2 together here.
Anyone who knows anything about NVIDIA's workstation parts knows they are not radical departures from their current retail chips so saying your new fancy retail part is twice as fast as the workstation version of the other guys last gen part is stating the obvious.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497814</id>
	<title>Someone should revoke Taco's privileges...</title>
	<author>thefuz</author>
	<datestamp>1268762160000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext>This is a blatant advertisement.  Who's responsible for letting junk like this through?  Has your account been hacked, CmdrTaco (or should we now call you CmdrSPAM)?  It's bad enough stories are often duplicates and days/weeks old.  This is just sh*tty spam.</htmltext>
<tokentext>This is a blatant advertisement .
Who 's responsible for letting junk like this through ?
Has your account been hacked , CmdrTaco ( or should we now call you CmdrSPAM ) ?
It 's bad enough stories are often duplicates and days/weeks old .
This is just sh * tty spam .</tokentext>
<sentencetext>This is a blatant advertisement.
Who's responsible for letting junk like this through?
Has your account been hacked, CmdrTaco (or should we now call you CmdrSPAM)?
It's bad enough stories are often duplicates and days/weeks old.
This is just sh*tty spam.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496874</id>
	<title>It's pretty cool.</title>
	<author>pavon</author>
	<datestamp>1268758860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There have been several <a href="http://www.filmroster.com/2009/06/hacking-in-movies/" title="filmroster.com">documentaries about hacking</a> [filmroster.com] over the years that demonstrate the use of GPU-based computations. It is soo bad.</p></htmltext>
<tokentext>There have been several documentaries about hacking [ filmroster.com ] over the years that demonstrate the use of GPU-based computations .
It is soo bad .</tokentext>
<sentencetext>There have been several documentaries about hacking [filmroster.com] over the years that demonstrate the use of GPU-based computations.
It is soo bad.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497628</id>
	<title>Re:Out of curiosity...</title>
	<author>Waffle Iron</author>
	<datestamp>1268761500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>What does your screen look like while a program like this is running?</p></div><p>Well I haven't kept up with the latest developments, but if it's anything like the Sinclair ZX80 I'm posting from, the screen goes blank gray when you start actively computing. Then it returns to normal when the answer is ready.</p></htmltext>
<tokentext>What does your screen look like while a program like this is running ?
Well I have n't kept up with the latest developments , but if it 's anything like the Sinclair ZX80 I 'm posting from , the screen goes blank gray when you start actively computing .
Then it returns to normal when the answer is ready .</tokentext>
<sentencetext>What does your screen look like while a program like this is running?
Well I haven't kept up with the latest developments, but if it's anything like the Sinclair ZX80 I'm posting from, the screen goes blank gray when you start actively computing.
Then it returns to normal when the answer is ready.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497450</id>
	<title>Re:Out of curiosity...</title>
	<author>Bobfrankly1</author>
	<datestamp>1268760780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>What does your screen look like while a program like this is running?</p></div><p>Why do you assume that the screen looks different.</p><p>He is still running a Voodoo series add-on card that takes over the video output when it is in use?</p>
	</htmltext>
<tokentext>What does your screen look like while a program like this is running ?
Why do you assume that the screen looks different .
He is still running a Voodoo series add-on card that takes over the video output when it is in use ?</tokentext>
<sentencetext>What does your screen look like while a program like this is running?
Why do you assume that the screen looks different.
He is still running a Voodoo series add-on card that takes over the video output when it is in use?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496006</id>
	<title>Huh?</title>
	<author>blackjackshellac</author>
	<datestamp>1268755740000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Is this supposed to be a good thing?  Sounds like someone's password encryption algorithm needs some upgrading to me.</p></htmltext>
<tokentext>Is this supposed to be a good thing ?
Sounds like someone 's password encryption algorithm needs some upgrading to me .</tokentext>
<sentencetext>Is this supposed to be a good thing?
Sounds like someone's password encryption algorithm needs some upgrading to me.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496042</id>
	<title>Re:Portrayal</title>
	<author>Anonymous</author>
	<datestamp>1268755920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>but, we wouldn't be on your lawn.</p></htmltext>
<tokentext>but , we would n't be on your lawn .</tokentext>
<sentencetext>but, we wouldn't be on your lawn.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496214</id>
	<title>Re:GPUs</title>
	<author>John Napkintosh</author>
	<datestamp>1268756580000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>The last sentence nails it. They only do certain types of operations well, and the frequency with which I upgrade GPUs compared to CPUs - or more specifically, the fact that I very rarely replace both at the same time - leads me to believe I'm better off having them separate. Maybe there are <i>parts</i> of the GPU which could be incorporated into the CPU, and I think that might be what the Core i3/5/7 processors are doing with GMA integration.</p></htmltext>
<tokentext>The last sentence nails it .
They only do certain types of operations well , and the frequency with which I upgrade GPUs compared to CPUs - or more specifically , the fact that I very rarely replace both at the same time - leads me to believe I 'm better off having them separate .
Maybe there are parts of the GPU which could be incorporated into the CPU , and I think that might be what the Core i3/5/7 processors are doing with GMA integration .</tokentext>
<sentencetext>The last sentence nails it.
They only do certain types of operations well, and the frequency with which I upgrade GPUs compared to CPUs - or more specifically, the fact that I very rarely replace both at the same time - leads me to believe I'm better off having them separate.
Maybe there are parts of the GPU which could be incorporated into the CPU, and I think that might be what the Core i3/5/7 processors are doing with GMA integration.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496372</id>
	<title>Re:GPUs</title>
	<author>SuperMog2002</author>
	<datestamp>1268757360000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>Is the coding/assembly so different that it doesn't translate? Do they only do certain kinds of processing really well (it is a GPU after all), so it couldn't handle other more 'mundane' OS needs?</p></div><p>Yes, exactly.  CPUs are built from the ground up to do scalar math really, really fast.  That lends itself well to doing tasks that must be performed in sequence, such as running an individual thread.  However, they've only recently gained the ability to do more than one thing at a time (dual core processors), and even now high end CPUs can only do six calculations at once (6 core processors).</p><p>Meanwhile, GPUs are built to do vector math really, really fast.  They can't do individual adds anywhere near as fast as a CPU can, but they can do dozens of them at the same time.</p><p>Which type of processor is best for which job depends entirely on the nature of the math involved and how parallelizable the task is.  In the case of 3D graphics, drawing a frame involves tons of vector arithmetic work, which is why your 1 GHz GPU will run circles around your 3 GHz CPU for that task (and is also where the GPU gets its name from).  In the case mentioned in the article, password cracking is highly parallelizable: you've gotta run 100 million tests, and the outcome of any one test has zero influence on the other tests, so the more you can run at the same time, the better.  By running it on the GPU, each individual test will take a bit longer than running it on the CPU would, but you'll be able to run dozens simultaneously instead of just a few, and will thus get your results much faster.</p><p>CPUs certainly have their place, though.  Some tasks simply must be done in sequence and cannot be easily divided up into separate parallel tasks.  The CPU will get these done much faster, since running them on the GPU would incur the speed penalty without realizing any benefit.</p><p>I've simplified it a bit for the sake of explanation, but that's the gist of it.  Hope that helps!</p>
	</htmltext>
<tokentext>Is the coding/assembly so different that it does n't translate ?
Do they only do certain kinds of processing really well ( it is a GPU after all ) , so it could n't handle other more 'mundane ' OS needs ?
Yes , exactly .
CPUs are built from the ground up to do scalar math really , really fast .
That lends itself well to doing tasks that must be performed in sequence , such as running an individual thread .
However , they 've only recently gained the ability to do more than one thing at a time ( dual core processors ) , and even now high end CPUs can only do six calculations at once ( 6 core processors ) .
Meanwhile , GPUs are built to do vector math really , really fast .
They ca n't do individual adds anywhere near as fast as a CPU can , but they can do dozens of them at the same time .
Which type of processor is best for which job depends entirely on the nature of the math involved and how parallelizable the task is .
In the case of 3D graphics , drawing a frame involves tons of vector arithmetic work , which is why your 1 GHz GPU will run circles around your 3 GHz CPU for that task ( and is also where the GPU gets its name from ) .
In the case mentioned in the article , password cracking is highly parallelizable : you 've got ta run 100 million tests , and the outcome of any one test has zero influence on the other tests , so the more you can run at the same time , the better .
By running it on the GPU , each individual test will take a bit longer than running it on the CPU would , but you 'll be able to run dozens simultaneously instead of just a few , and will thus get your results much faster .
CPUs certainly have their place , though .
Some tasks simply must be done in sequence and can not be easily divided up in to separate parallel tasks .
The CPU will get these done much faster , since running them on the GPU would incur the speed penalty without realizing any benefit .
I 've simplified it a bit for the sake of explanation , but that 's the gist of it .
Hope that helps !</tokentext>
<sentencetext>Is the coding/assembly so different that it doesn't translate?
Do they only do certain kinds of processing really well (it is a GPU after all), so it couldn't handle other more 'mundane' OS needs?
Yes, exactly.
CPUs are built from the ground up to do scalar math really, really fast.
That lends itself well to doing tasks that must be performed in sequence, such as running an individual thread.
However, they've only recently gained the ability to do more than one thing at a time (dual core processors), and even now high end CPUs can only do six calculations at once (6 core processors).
Meanwhile, GPUs are built to do vector math really, really fast.
They can't do individual adds anywhere near as fast as a CPU can, but they can do dozens of them at the same time.
Which type of processor is best for which job depends entirely on the nature of the math involved and how parallelizable the task is.
In the case of 3D graphics, drawing a frame involves tons of vector arithmetic work, which is why your 1 GHz GPU will run circles around your 3 GHz CPU for that task (and is also where the GPU gets its name from).
In the case mentioned in the article, password cracking is highly parallelizable: you've gotta run 100 million tests, and the outcome of any one test has zero influence on the other tests, so the more you can run at the same time, the better.
By running it on the GPU, each individual test will take a bit longer than running it on the CPU would, but you'll be able to run dozens simultaneously instead of just a few, and will thus get your results much faster.
CPUs certainly have their place, though.
Some tasks simply must be done in sequence and cannot be easily divided up into separate parallel tasks.
The CPU will get these done much faster, since running them on the GPU would incur the speed penalty without realizing any benefit.
I've simplified it a bit for the sake of explanation, but that's the gist of it.
Hope that helps!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496574</id>
	<title>ObRokicki</title>
	<author>michaelmalak</author>
	<datestamp>1268757900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>From <a href="http://aminet.net/package/misc/fish/fish-0031" title="aminet.net">1986</a> [aminet.net]:<p> <i>Executes the cellular automata game of LIFE in the blitter chip. Uses a 318 by 188 display and runs at 19.8 generations per second. Author: Tomas Rokicki</i></p></htmltext>
<tokentext>From 1986 [ aminet.net ] : Executes the cellular automata game of LIFE in the blitter chip .
Uses a 318 by 188 display and runs at 19.8 generations per second .
Author : Tomas Rokicki</tokentext>
<sentencetext>From 1986 [aminet.net]: Executes the cellular automata game of LIFE in the blitter chip.
Uses a 318 by 188 display and runs at 19.8 generations per second.
Author: Tomas Rokicki</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496756</id>
	<title>Re:Out of curiosity...</title>
	<author>Anonymous</author>
	<datestamp>1268758500000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>The display buffer for a 1920x1200 screen with 24-bit colour takes less than 7MB.  Even a fairly low-end graphics card will have at least 128MB of memory.  In other words, there's plenty of memory for a program running on a GPU without needing to piss on the display buffer.</p><p>If your screen is just displaying a bunch of 2D windows, then the 100s of cores in your GPU will be sitting idle.  Again, computations running on the GPU will have no impact on what you see.</p></htmltext>
<tokentext>The display buffer for a 1920x1200 screen with 24-bit colour takes less than 7MB .
Even a fairly low-end graphics card will have at least 128MB of memory .
In other words , there 's plenty of memory for a program running on a GPU without needing to piss on the display buffer .
If your screen is just displaying a bunch of 2D windows , then the 100s of cores in your GPU will be sitting idle .
Again , computations running on the GPU will have no impact on what you see .</tokentext>
<sentencetext>The display buffer for a 1920x1200 screen with 24-bit colour takes less than 7MB.
Even a fairly low-end graphics card will have at least 128MB of memory.
In other words, there's plenty of memory for a program running on a GPU without needing to piss on the display buffer.
If your screen is just displaying a bunch of 2D windows, then the 100s of cores in your GPU will be sitting idle.
Again, computations running on the GPU will have no impact on what you see.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496130</id>
	<title>Re:GPUs</title>
	<author>TheKidWho</author>
	<datestamp>1268756280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes GPUs are very different, they are designed to do a lot of very similar calculations to an extremely large set of vector data.  That's also pretty much all they do, they aren't nearly as good for logic like a traditional CPU is.</p></htmltext>
<tokentext>Yes GPUs are very different , they are designed to do a lot of very similar calculations to an extremely large set of vector data .
That 's also pretty much all they do , they are n't nearly as good for logic like a traditional CPU is .</tokentext>
<sentencetext>Yes GPUs are very different, they are designed to do a lot of very similar calculations to an extremely large set of vector data.
That's also pretty much all they do, they aren't nearly as good for logic like a traditional CPU is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497474</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268760900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I wonder how GPUs would work as SSL accelerator for servers? I assume that if they are good for wifi password cracking it would be good for SSL acceleration, though I don't have deep knowledge in SSL so not sure.</p></htmltext>
<tokentext>I wonder how GPUs would work as SSL accelerator for servers ?
I assume that if they are good for wifi password cracking it would be good for SSL acceleration , though I do n't have deep knowledge in SSL so not sure .</tokentext>
<sentencetext>I wonder how GPUs would work as SSL accelerator for servers?
I assume that if they are good for wifi password cracking it would be good for SSL acceleration, though I don't have deep knowledge in SSL so not sure.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496932</id>
	<title>Re:GPUs</title>
	<author>Orgasmatron</author>
	<datestamp>1268759040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>3D rendering involves lots of integer math, and there are huge portions of any given render that do not depend on each other.  For example, the scene may involve calculating the vectors from thousands of vertices and faces of polygons towards hundreds of light sources.  That is millions of operations that are essentially independent.  Another phase of a render will require calculating the intersection of each view vector (and more if you use FSAA) with a polygon in the scene.</p><p>So, modern GPUs are a special case of a CPU.  They have many cores, each of which has many integer and/or vector units.  Their sole purpose in life is to perform those millions of parallel operations as fast as possible.  Modern GPUs can perform hundreds or thousands of operations per cycle, at speeds gradually approaching CPU speeds.</p><p>Technically, you can run arbitrary code on a GPU, just like your keyboard or SCSI controller is Turing Complete, but they are so optimized for their job that it would be impractical.  Unless, of course, you have a massively parallel integer math problem you want to solve, like brute forcing a password...</p></htmltext>
<tokentext>3D rendering involves lots of integer math , and there are huge portions of any given render that do not depend on each other .
For example , the scene may involve calculating the vectors from thousands of vertices and faces of polygons towards hundreds of light sources .
That is millions of operations that are essentially independent .
Another phase of a render will require calculating the intersection of each view vector ( and more if you use FSAA ) with a polygon in the scene .
So , modern GPUs are a special case of a CPU .
They have many cores , each of which has many integer and/or vector units .
Their sole purpose in life is to perform those millions of parallel operations as fast as possible .
Modern GPUs can perform hundreds or thousands of operations per cycle , at speeds gradually approaching CPU speeds .
Technically , you can run arbitrary code on a GPU , just like your keyboard or SCSI controller is Turing Complete , but they are so optimized for their job that it would be impractical .
Unless , of course , you have a massively parallel integer math problem you want to solve , like brute forcing a password ...</tokentext>
<sentencetext>3D rendering involves lots of integer math, and there are huge portions of any given render that do not depend on each other.
For example, the scene may involve calculating the vectors from thousands of vertices and faces of polygons towards hundreds of light sources.
That is millions of operations that are essentially independent.
Another phase of a render will require calculating the intersection of each view vector (and more if you use FSAA) with a polygon in the scene.
So, modern GPUs are a special case of a CPU.
They have many cores, each of which has many integer and/or vector units.
Their sole purpose in life is to perform those millions of parallel operations as fast as possible.
Modern GPUs can perform hundreds or thousands of operations per cycle, at speeds gradually approaching CPU speeds.
Technically, you can run arbitrary code on a GPU, just like your keyboard or SCSI controller is Turing Complete, but they are so optimized for their job that it would be impractical.
Unless, of course, you have a massively parallel integer math problem you want to solve, like brute forcing a password...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496794</id>
	<title>Re:Portrayal</title>
	<author>Yvanhoe</author>
	<datestamp>1268758620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Yeah, they may be people who just moved in and who want to have *gasp* internet access! The vicious bastardly devils! The thieves of non-thievable property! The... the... TERRO-PIRATES!</htmltext>
<tokentext>Yeah , they may be people who just moved in and who want to have * gasp * internet access !
The vicious bastardly devils !
The thieves of non-thievable property !
the... the... TERRO-PIRATES !</tokentext>
<sentencetext>Yeah, they may be people who just moved in and who want to have *gasp* internet access !
The vicious bastardly devils !
The thieves of non-thievable property !
the... the... TERRO-PIRATES !</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246</id>
	<title>Re:Portrayal</title>
	<author>ElectricTurtle</author>
	<datestamp>1268756640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I tend to 'recover my password' for my wireless APs with my little friend the paper clip. You're absolutely right that the more common use of something like this is going to be cracking.</htmltext>
<tokentext>I tend to 'recover my password ' for my wireless APs with my little friend the paper clip .
You 're absolutely right that the more common use of something like this is going to be cracking .</tokentext>
<sentencetext>I tend to 'recover my password' for my wireless APs with my little friend the paper clip.
You're absolutely right that the more common use of something like this is going to be cracking.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496222</id>
	<title>Re:GPUs</title>
	<author>wvmarle</author>
	<datestamp>1268756580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They tend to be specialised processors, designed specifically for graphics related tasks. Those tasks happen to be computationally very similar to other tasks such as protein folding. Though they will be poor performers or possibly totally incapable of certain tasks your CPU has to do.
</p><p>That said I'm waiting for the first CPU to build in a GPU so we don't even need a separate graphics chip on our motherboards any more for the already integrated graphics output.</p></htmltext>
<tokentext>They tend to be specialised processors , designed specifically for graphics related tasks .
Those tasks happen to be computationally very similar to other tasks such as protein folding .
Though they will be poor performers or possibly totally incapable of certain tasks your CPU has to do .
That said I 'm waiting for the first CPU to build in a GPU so we do n't even need a separate graphics chip on our motherboards any more for the already integrated graphics output .</tokentext>
<sentencetext>They tend to be specialised processors, designed specifically for graphics related tasks.
Those tasks happen to be computationally very similar to other tasks such as protein folding.
Though they will be poor performers or possibly totally incapable of certain tasks your CPU has to do.
That said I'm waiting for the first CPU to build in a GPU so we don't even need a separate graphics chip on our motherboards any more for the already integrated graphics output.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497356</id>
	<title>Re:Stop with the advertising</title>
	<author>jank1887</author>
	<datestamp>1268760480000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>come on. It CLEARLY states that "An anonymous reader" wrote that summary.</p></htmltext>
<tokentext>come on .
It CLEARLY states that " An anonymous reader " wrote that summary .</tokentext>
<sentencetext>come on.
It CLEARLY states that "An anonymous reader" wrote that summary.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495868</id>
	<title>My password.</title>
	<author>Anonymous</author>
	<datestamp>1268755320000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>1 - 2 - 3 - 4 - 5</p></htmltext>
<tokentext>1 - 2 - 3 - 4 - 5</tokentext>
<sentencetext>1 - 2 - 3 - 4 - 5</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497438</id>
	<title>What's difference?</title>
	<author>ukemike</author>
	<datestamp>1268760780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>What's the difference between "recovering a password" and hacking into a phone?  Shouldn't the summary read "use GPU to break into stolen smart phones"?</htmltext>
<tokentext>What 's the difference between " recovering a password " and hacking into a phone ?
Should n't the summary read " use GPU to break into stolen smart phones .
"</tokentext>
<sentencetext>What's the difference between "recovering a password" and hacking into a phone?
Shouldn't the summary read "use GPU to break into stolen smart phones.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497706</id>
	<title>Re:My password is safe</title>
	<author>MiniMike</author>
	<datestamp>1268761800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Does that mean it's "dicktionary" based?</p><p>Obviously there's also no minimum length requirement...</p></htmltext>
<tokentext>Does that mean it 's " dicktionary " based ?
Obviously there 's also no minimum length requirement ...</tokentext>
<sentencetext>Does that mean it's "dicktionary" based?
Obviously there's also no minimum length requirement...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496176</id>
	<title>Re:GPUs</title>
	<author>jonbryce</author>
	<datestamp>1268756400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The coding / assembly is so different that it doesn't translate, and they only do certain kinds of processing well.</p></htmltext>
<tokentext>The coding / assembly is so different that it does n't translate , and they only do certain kinds of processing well .</tokentext>
<sentencetext>The coding / assembly is so different that it doesn't translate, and they only do certain kinds of processing well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496460</id>
	<title>Re:Out of curiosity...</title>
	<author>Verunks</author>
	<datestamp>1268757600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Why do you assume that the screen looks different.</p></div><p>because when you run a cpu intensive application your pc becomes really slow, so if you use instead your gpu the screen should become "slower" too, but probably you wouldn't even notice</p>
	</htmltext>
<tokentext>Why do you assume that the screen looks different .
because when you run a cpu intensive application your pc becomes really slow , so if you use instead your gpu the screen should become " slower " too , but probably you would n't even notice</tokentext>
<sentencetext>Why do you assume that the screen looks different.
because when you run a cpu intensive application your pc becomes really slow, so if you use instead your gpu the screen should become "slower" too, but probably you wouldn't even notice
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212</id>
	<title>Re:Out of curiosity...</title>
	<author>Anonymous</author>
	<datestamp>1268756520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I keep hearing stories about using GPUs for non-GPU computations, but has anybody here tried it?</p></div><p>Yes many people do it and have for years.</p><div class="quote"><p>What does your screen look like while a program like this is running?</p></div><p>Why do you assume that the screen looks different.</p>
	</htmltext>
<tokentext>I keep hearing stories about using GPUs for non-GPU computations , but has anybody here tried it ?
Yes many people do it and have for years .
What does your screen look like while a program like this is running ?
Why do you assume that the screen looks different .</tokentext>
<sentencetext>I keep hearing stories about using GPUs for non-GPU computations, but has anybody here tried it?
Yes many people do it and have for years.
What does your screen look like while a program like this is running?
Why do you assume that the screen looks different.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496984</id>
	<title>Re:Portrayal</title>
	<author>Sir_Lewk</author>
	<datestamp>1268759220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I like the way crowbar makers advertise their product in a positive light.  As if anyone, upon realizing that they need to pull hundreds of deeply sunken nails, is going to go out to the store and buy a heavy 2 foot crowbar.  I think the typical usage for crowbars like this is of a more nefarious sort.</p></htmltext>
<tokentext>I like the way crowbar makers advertise their product in a positive light .
As if anyone , upon realizing that they need to pull hundreds of deeply sunken nails , is going to go out to the store and buy a heavy 2 foot crowbar .
I think the typical usage for crowbars like this is of a more nefarious sort .</tokentext>
<sentencetext>I like the way crowbar makers advertise their product in a positive light.
As if anyone, upon realizing that they need to pull hundreds of deeply sunken nails, is going to go out to the store and buy a heavy 2 foot crowbar.
I think the typical usage for crowbars like this is of a more nefarious sort.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496370</id>
	<title>Re:Portrayal</title>
	<author>Minwee</author>
	<datestamp>1268757360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>a person [...] is going to go out and buy one of these video cards, install it in a machine capable of supporting it (PSU wattage, bus speed, OS, etc), purchase the proprietary "password breaker" software (sold by the company that authored this "story"), all just to recover their password. I think the typical usage for this type of setup is of a more nefarious sort.</p></div></blockquote><p>I think you're right.  Someone could use this kind of setup to play Crysis.</p>
	</htmltext>
<tokenext>a person [ ... ] is going to go out and buy one of these video cards , install it in a machine capable of supporting it ( PSU wattage , bus speed , OS , etc ) , purchase the proprietary " password breaker " software ( sold by the company that authored this " story " ) , all just to recover their password .
I think the typical usage for this type of setup is of a more nefarious sort . I think you 're right .
Someone could use this kind of setup to play Crysis .</tokentext>
<sentencetext>a person [...] is going to go out and buy one of these video cards, install it in a machine capable of supporting it (PSU wattage, bus speed, OS, etc), purchase the proprietary "password breaker" software (sold by the company that authored this "story"), all just to recover their password.
I think the typical usage for this type of setup is of a more nefarious sort. I think you're right.
Someone could use this kind of setup to play Crysis.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496152</id>
	<title>Re:Portrayal</title>
	<author>wvmarle</author>
	<datestamp>1268756400000</datestamp>
	<modclass>Troll</modclass>
	<modscore>1</modscore>
	<htmltext>It all depends on your point of view. <br>One man's "password recovery" is another man's "password cracking". <br>Just like the same person being a "freedom fighter" and "terrorist/insurgent" at the same time. <br>It all depends on your point of view.</htmltext>
<tokenext>It all depends on your point of view .
One man 's " password recovery " is another man 's " password cracking " .
Just like the same person being a " freedom fighter " and " terrorist/insurgent " at the same time .
It all depends on your point of view .</tokentext>
<sentencetext>It all depends on your point of view.
One man's "password recovery" is another man's "password cracking".
Just like the same person being a "freedom fighter" and "terrorist/insurgent" at the same time.
It all depends on your point of view.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496114</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268756220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>GPUs are better at doing certain calculations generally, and are very good at parallel processing, seeing as graphics can be broken down to be processed in parallel very quickly.  For this, GPUs have a ton of cores.  So in a way processors are indeed starting to follow with multicore systems, but it is nowhere near the number GPUs use.  High-end GPUs now have 480+ processor cores on a card these days, that's a lot more than Intel's 4 cores ;).  But if you had a ton of cores on the processor, each additional one doesn't add too much to actual CPU power, as most things must be done linearly, not in parallel.  Just helps with multitasking really.  Which is why a few cores are useful, but the overall power of each core matters more than having a ton of them.  Graphics cards go with a ton of lower-speed cores.</p></htmltext>
<tokenext>GPUs are better at doing certain calculations generally , and are very good at parallel processing , seeing as graphics can be broken down to be processed in parallel very quickly .
For this , GPUs have a ton of cores .
So in a way processors are indeed starting to follow with multicore systems , but it is nowhere near the number GPUs use .
High-end GPUs now have 480 + processor cores on a card these days , that 's a lot more than Intel 's 4 cores ; ) .
But if you had a ton of cores on the processor , each additional one does n't add too much to actual CPU power , as most things must be done linearly , not in parallel .
Just helps with multitasking really .
Which is why a few cores are useful , but the overall power of each core matters more than having a ton of them .
Graphics cards go with a ton of lower-speed cores .</tokentext>
<sentencetext>GPUs are better at doing certain calculations generally, and are very good at parallel processing, seeing as graphics can be broken down to be processed in parallel very quickly.
For this, GPUs have a ton of cores.
So in a way processors are indeed starting to follow with multicore systems, but it is nowhere near the number GPUs use.
High-end GPUs now have 480+ processor cores on a card these days, that's a lot more than Intel's 4 cores ;).
But if you had a ton of cores on the processor, each additional one doesn't add too much to actual CPU power, as most things must be done linearly, not in parallel.
Just helps with multitasking really.
Which is why a few cores are useful, but the overall power of each core matters more than having a ton of them.
Graphics cards go with a ton of lower-speed cores.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31498362</id>
	<title>What about....?</title>
	<author>hesaigo999ca</author>
	<datestamp>1268764380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I am wondering, if the CPU cycling is better than the quad-core Intel chips we use in PCs today, could we not just use the CPUs on video cards instead of the CPUs on the motherboard to do stuff with? Then I could buy 2 cards: one which could be used as a real video card, and the other to replace the CPU of the machine. Technically, we could also use this for backwards compatibility; say, if we used it in a P3 computer, it would definitely improve performance greatly, seeing as now you have a kickass CPU instead of the old P3...?</p></htmltext>
<tokenext>I am wondering , if the CPU cycling is better than the quad-core Intel chips we use in PCs today , could we not just use the CPUs on video cards instead of the CPUs on the motherboard to do stuff with ?
Then I could buy 2 cards : one which could be used as a real video card , and the other to replace the CPU of the machine .
Technically , we could also use this for backwards compatibility ; say , if we used it in a P3 computer , it would definitely improve performance greatly , seeing as now you have a kickass CPU instead of the old P3 ... ?</tokentext>
<sentencetext>I am wondering, if the CPU cycling is better than the quad-core Intel chips we use in PCs today, could we not just use the CPUs on video cards instead of the CPUs on the motherboard to do stuff with?
Then I could buy 2 cards: one which could be used as a real video card, and the other to replace the CPU of the machine.
Technically, we could also use this for backwards compatibility; say, if we used it in a P3 computer, it would definitely improve performance greatly, seeing as now you have a kickass CPU instead of the old P3...?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496318</id>
	<title>Re:Out of curiosity...</title>
	<author>Anonymous</author>
	<datestamp>1268757060000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>He probably assumes the screen looks different because he assumes video cards are nothing but raw memory-mapped video framebuffers -- which hasn't been the case since 1990 or so.</p></htmltext>
<tokenext>He probably assumes the screen looks different because he assumes video cards are nothing but raw memory-mapped video framebuffers -- which has n't been the case since 1990 or so .</tokentext>
<sentencetext>He probably assumes the screen looks different because he assumes video cards are nothing but raw memory-mapped video framebuffers -- which hasn't been the case since 1990 or so.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31498400</id>
	<title>Re:Portrayal</title>
	<author>Nutria</author>
	<datestamp>1268764560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>You're absolutely right that the more common use of something like this is going to be cracking.</i></p><p>Since you need a USD1000 video card to go along with the s/w, it won't be your run of the mill Joe High School Geek, it'll be Serious Black Hats.</p></htmltext>
<tokenext>You 're absolutely right that the more common use of something like this is going to be cracking.Since you need a USD1000 video card to go along with the s/w , it wo n't be your run of the mill Joe High School Geek , it 'll be Serious Black Hats .</tokentext>
<sentencetext>You're absolutely right that the more common use of something like this is going to be cracking.Since you need a USD1000 video card to go along with the s/w, it won't be your run of the mill Joe High School Geek, it'll be Serious Black Hats.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500944</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268731800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>However, they've only recently gained the ability to do more than one thing at a time</p></div><p>MMX (1996), 3dNow! (1998), SSE (1999), hyper-threading (2002)</p><p>You were saying?</p>
	</htmltext>
<tokenext>However , they 've only recently gained the ability to do more than one thing at a timeMMX ( 1996 ) , 3dNow !
( 1998 ) , SSE ( 1999 ) , hyper-threading ( 2002 ) You were saying ?</tokentext>
<sentencetext>However, they've only recently gained the ability to do more than one thing at a time.
MMX (1996), 3dNow! (1998), SSE (1999), hyper-threading (2002).
You were saying?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496372</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496072</id>
	<title>Re:Portrayal</title>
	<author>berny@work</author>
	<datestamp>1268756040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yeah like selling one time password solutions to IT bosses when someone gets ahold of their SAM.....</p></htmltext>
<tokenext>Yeah like selling one time password solutions to IT bosses when someone gets ahold of their SAM.... .</tokentext>
<sentencetext>Yeah like selling one time password solutions to IT bosses when someone gets ahold of their SAM.....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496992</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268759280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Cell processors, anyone?</p></htmltext>
<tokenext>Cell processors , anyone ?</tokentext>
<sentencetext>Cell processors, anyone?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002</id>
	<title>Slashvertisement</title>
	<author>Anonymous</author>
	<datestamp>1268755740000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Hey Editors,</p><p>You forgot a link to the <a href="http://www.elcomsoft.com/purchase/purchase.php?product=eprb2&amp;additional=ELCOM\_PRODUCTS\_PAGE.ELCOM.35768473.30229789.EU.4.N.X.X.X&amp;currency=EUR" title="elcomsoft.com" rel="nofollow">buying page </a> [elcomsoft.com]<br>For as low as 1.399,- &euro; you can start cracking^Wrecovering passwords today.</p></htmltext>
<tokenext>Hey Editors , You forgot a link to the buying page [ elcomsoft.com ] . For as low as 1.399,- € you can start cracking^Wrecovering passwords today .</tokentext>
<sentencetext>Hey Editors, You forgot a link to the buying page [elcomsoft.com]. For as low as 1.399,- € you can start cracking^Wrecovering passwords today.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499286</id>
	<title>Re:103000 passwords per second. So?</title>
	<author>Revenger75</author>
	<datestamp>1268767860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>True, a simple brute force will take nearly forever.  However, if you have a good large wordlist with, say, the top 10 million passwords, then you should be able to "recover" most passwords in much less than a day.  Furthermore, if you wish to do a bruteforce method, I would suggest John the Ripper.  It will pump out wordlists using frequency tables.  Thus your more common alphanumeric passwords will be tested before, say, *()3s3Ag+\%&amp;c.  Useful since most induhviduals don't like to memorize random passwords.
<br>
<br>
And I didn't RTFA, but this slashvertisement sounds strikingly similar to say pyrit/aircrack-ng/jtr available on Backtrack.</htmltext>
<tokenext>True , a simple brute force will take nearly forever .
However , if you have a good large wordlist with say the top 10 million passwords , then you should be able to " recover " most passwords in much less than a day .
Furthermore , if you wish to do a bruteforce method , I would suggest John the Ripper .
It will pump out wordlists using frequency tables .
Thus your more common alphanumeric passwords will be tested before , say , * ( ) 3s3Ag + \ % &amp;c .
Useful since most induhviduals do n't like to memorize random passwords .
And I did n't RTFA , but this slashvertisement sounds strikingly similar to say pyrit/aircrack-ng/jtr available on Backtrack .</tokentext>
<sentencetext>True, a simple brute force will take nearly forever.
However, if you have a good large wordlist with say the top 10 million passwords, then you should be able to "recover" most passwords in much less than a day.
Furthermore, if you wish to do a bruteforce method, I would suggest John the Ripper.
It will pump out wordlists using frequency tables.
Thus your more common alphanumeric passwords will be tested before, say, *()3s3Ag+\%&amp;c.
Useful since most induhviduals don't like to memorize random passwords.
And I didn't RTFA, but this slashvertisement sounds strikingly similar to say pyrit/aircrack-ng/jtr available on Backtrack.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192</parent>
</comment>
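The wordlist-first approach described in the comment above can be sketched in a few lines. This is illustrative only: MD5 and the tiny candidate list are stand-ins, not what John the Ripper or the ElcomSoft product actually uses.

```python
import hashlib

# Illustrative wordlist "recovery": hash each candidate (most common
# first) and compare against the target digest. MD5 is a stand-in hash.
def wordlist_recover(target_hex, candidates):
    for pw in candidates:
        if hashlib.md5(pw.encode("utf-8")).hexdigest() == target_hex:
            return pw
    return None  # not in the wordlist; fall back to brute force
```

Real tools add mangling rules and frequency-ordered candidate generation on top of this same loop.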
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497354</id>
	<title>Re:My password is safe</title>
	<author>rickb928</author>
	<datestamp>1268760480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yep, and you can be sure no one else will be using it...</p><p>In fact, no one will have any interest in it at all.  Your 'secret' is safe!</p></htmltext>
<tokenext>Yep , and you can be sure no one else will be using it ... In fact , no one will have any interest in it at all .
Your 'secret ' is safe !</tokentext>
<sentencetext>Yep, and you can be sure no one else will be using it... In fact, no one will have any interest in it at all.
Your 'secret' is safe!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497400</id>
	<title>not recovering</title>
	<author>mjwalshe</author>
	<datestamp>1268760660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's about breaking passwords, obviously targeting Jennifer the free wifi lady as a potential market</htmltext>
<tokenext>It 's about breaking passwords , obviously targeting Jennifer the free wifi lady as a potential market</tokenext>
<sentencetext>It's about breaking passwords, obviously targeting Jennifer the free wifi lady as a potential market</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496210</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268756520000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>GPUs are ridiculously parallel SIMD style processors. They are good at performing massive amounts of calculations in parallel, but for it to be effective these calculations have to be the same across all cores. GPUs don't have a huge amount of true CPU-style cores; rather, they can run one or a few algorithms over many instances of data in parallel. This works great for certain scientific and brute-force calculations such as these (and for 3D games), but it doesn't really work for regular programs. Also, GPU programs usually need to be written in a specific programming language (usually a derivative of C) and with this parallelism in mind.</p><p>CPUs already have something like this (SIMD instructions), and they help for many workloads, but massive parallelism like this only really works for GPU-type tasks, not your average OS/apps.</p></htmltext>
<tokenext>GPUs are ridiculously parallel SIMD style processors .
They are good at performing massive amounts of calculations in parallel , but for it to be effective these calculations have to be the same across all cores .
GPUs do n't have a huge amount of true CPU-style cores ; rather , they can run one or a few algorithms over many instances of data in parallel .
This works great for certain scientific and brute-force calculations such as these ( and for 3D games ) , but it does n't really work for regular programs .
Also , GPU programs usually need to be written in a specific programming language ( usually a derivative of C ) and with this parallelism in mind . CPUs already have something like this ( SIMD instructions ) , and they help for many workloads , but massive parallelism like this only really works for GPU-type tasks , not your average OS/apps .</tokentext>
<sentencetext>GPUs are ridiculously parallel SIMD style processors.
They are good at performing massive amounts of calculations in parallel, but for it to be effective these calculations have to be the same across all cores.
GPUs don't have a huge amount of true CPU-style cores; rather, they can run one or a few algorithms over many instances of data in parallel.
This works great for certain scientific and brute-force calculations such as these (and for 3D games), but it doesn't really work for regular programs.
Also, GPU programs usually need to be written in a specific programming language (usually a derivative of C) and with this parallelism in mind. CPUs already have something like this (SIMD instructions), and they help for many workloads, but massive parallelism like this only really works for GPU-type tasks, not your average OS/apps.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
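The "one algorithm over many data instances" pattern the comment above describes can be shown in miniature; plain Python here stands in for what a GPU does across thousands of hardware lanes.

```python
import hashlib

# Same operation applied to every element of a batch, in lockstep.
# A GPU kernel is this idea in hardware: one program, many data lanes.
def batch_sha1(candidates):
    return [hashlib.sha1(c.encode("utf-8")).hexdigest() for c in candidates]
```

The restriction the comment mentions follows directly: the pattern only pays off when every element takes the same code path.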
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31508616</id>
	<title>Re:Stop with the advertising</title>
	<author>AmiMoJo</author>
	<datestamp>1268841180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm sceptical as to how useful this actually is anyway. When using the GPU you can only do a brute-force attack, not a dictionary attack. At 100,000 passwords per second you could search all 8 letter passwords containing all lowercase letters in about 24 days. That bumps to 21 months at 9 characters and 45 years at 10.</p><p>If you want to do mixed case and numbers then a 7 character password will take about a year, an 8 character one about 70 years.</p><p>A dictionary attack is much more feasible but can't be GPU accelerated.</p></htmltext>
<tokenext>I 'm sceptical as to how useful this actually is anyway .
When using the GPU you can only do a brute-force attack , not a dictionary attack .
At 100,000 passwords per second you could search all 8 letter passwords containing all lowercase letters in about 24 days .
That bumps to 21 months at 9 characters and 45 years at 10 . If you want to do mixed case and numbers then a 7 character password will take about a year , an 8 character one about 70 years . A dictionary attack is much more feasible but ca n't be GPU accelerated .</tokentext>
<sentencetext>I'm sceptical as to how useful this actually is anyway.
When using the GPU you can only do a brute-force attack, not a dictionary attack.
At 100,000 passwords per second you could search all 8 letter passwords containing all lowercase letters in about 24 days.
That bumps to 21 months at 9 characters and 45 years at 10. If you want to do mixed case and numbers then a 7 character password will take about a year, an 8 character one about 70 years. A dictionary attack is much more feasible but can't be GPU accelerated.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496674</parent>
</comment>
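The timing figures in the comment above are plain keyspace arithmetic; a minimal sketch, assuming the same flat 100,000 guesses per second and exhausting the whole space (a real attack succeeds on average halfway through):

```python
# Days to exhaust a full password keyspace at a flat guess rate
# (default: the 100,000 passwords/second figure quoted above).
def crack_days(charset_size, length, rate=1e5):
    return charset_size ** length / rate / 86400

print(round(crack_days(26, 8)))       # 8 lowercase letters: ~24 days
print(round(crack_days(26, 9) / 30))  # 9 characters: ~21 months
```

Extending the same arithmetic reproduces the comment's other figures: roughly 45 years at 10 lowercase characters, and about 70 years for 8 characters of mixed case plus digits (a 62-symbol alphabet).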
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496570</id>
	<title>Re:Slashvertisement</title>
	<author>Anonymous</author>
	<datestamp>1268757900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>nono, <a href="http://thepiratebay.org/search/elcomsoft" title="thepiratebay.org" rel="nofollow">it's actually free</a> [thepiratebay.org]</htmltext>
<tokenext>nono , it 's actually free [ thepiratebay.org ]</tokentext>
<sentencetext>nono, it's actually free [thepiratebay.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918</id>
	<title>Portrayal</title>
	<author>Dan East</author>
	<datestamp>1268755440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>I like the way this is portrayed in a totally positive light, as if a person, upon forgetting the password to their device, is going to go out and buy one of these video cards, install it in a machine capable of supporting it (PSU wattage, bus speed, OS, etc), purchase the proprietary "password breaker" software (sold by the company that authored this "story"), all just to recover their password. I think the typical usage for this type of setup is of a more nefarious sort.</p></htmltext>
<tokenext>I like the way this is portrayed in a totally positive light , as if a person , upon forgetting the password to their device , is going to go out and buy one of these video cards , install it in a machine capable of supporting it ( PSU wattage , bus speed , OS , etc ) , purchase the proprietary " password breaker " software ( sold by the company that authored this " story " ) , all just to recover their password .
I think the typical usage for this type of setup is of a more nefarious sort .</tokentext>
<sentencetext>I like the way this is portrayed in a totally positive light, as if a person, upon forgetting the password to their device, is going to go out and buy one of these video cards, install it in a machine capable of supporting it (PSU wattage, bus speed, OS, etc), purchase the proprietary "password breaker" software (sold by the company that authored this "story"), all just to recover their password.
I think the typical usage for this type of setup is of a more nefarious sort.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</id>
	<title>Stop with the advertising</title>
	<author>Anonymous</author>
	<datestamp>1268755320000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>This isn't really about GPUs, it's an advert for ElcomSoft products. The whole summary is in marketing-speak for crying out loud.</htmltext>
<tokenext>This is n't really about GPUs , it 's an advert for ElcomSoft products .
The whole summary is in marketing-speak for crying out loud .</tokentext>
<sentencetext>This isn't really about GPUs, it's an advert for ElcomSoft products.
The whole summary is in marketing-speak for crying out loud.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500162</id>
	<title>AMD Fusion</title>
	<author>CityZen</author>
	<datestamp>1268771400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's exactly what the AMD Fusion project is all about (integrating the GPU &amp; CPU together).  Google it.</p></htmltext>
<tokenext>That 's exactly what the AMD Fusion project is all about ( integrating the GPU &amp; CPU together ) .
Google it .</tokentext>
<sentencetext>That's exactly what the AMD Fusion project is all about (integrating the GPU &amp; CPU together).
Google it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496888</id>
	<title>Re:GPUs</title>
	<author>blackchiney</author>
	<datestamp>1268758920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's all about IP. It wouldn't be horribly difficult to put a GPU and CPU on the same die. BUT, Intel doesn't want GPU manufacturers getting into the x86 business and GPU makers certainly aren't going to give Intel any of their technology and get cut out of the market. Intel's attempts at GPUs have been less than spectacular. Good enough for Word and Excel. Not good at modern gaming.</p><p>So for the foreseeable future CPUs and GPUs will be treated as separate entities.</p></htmltext>
<tokenext>It 's all about IP .
It would n't be horribly difficult to put a GPU and CPU on the same die .
BUT , Intel does n't want GPU manufacturers getting into the x86 business and GPU makers certainly are n't going to give Intel any of their technology and get cut out of the market .
Intel 's attempts at GPUs have been less than spectacular .
Good enough for Word and Excel .
Not good at modern gaming . So for the foreseeable future CPUs and GPUs will be treated as separate entities .</tokentext>
<sentencetext>It's all about IP.
It wouldn't be horribly difficult to put a GPU and CPU on the same die.
BUT, Intel doesn't want GPU manufacturers getting into the x86 business and GPU makers certainly aren't going to give Intel any of their technology and get cut out of the market.
Intel's attempts at GPUs have been less than spectacular.
Good enough for Word and Excel.
Not good at modern gaming. So for the foreseeable future CPUs and GPUs will be treated as separate entities.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496656</id>
	<title>Re:103000 passwords per second. So?</title>
	<author>maxume</author>
	<datestamp>1268758200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So all you have to do to save 190 million years is buy two of them.</p><p>Excellent.</p></htmltext>
<tokenext>So all you have to do to save 190 million years is buy two of them . Excellent .</tokentext>
<sentencetext>So all you have to do to save 190 million years is buy two of them. Excellent.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497986</id>
	<title>Passwords are over.</title>
	<author>blair1q</author>
	<datestamp>1268762880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It's clear that it is no longer sufficient to use a string of text entered by a human being as a secure key.</p><p>Biometric or physical-token security should be a mandatory peripheral on all computing equipment sold.</p><p>For remote access, keys should start in the thousands of bits, and be locked on the client side by biometrics or tokens.</p><p>Short of that, failure to secure your data is your own fault.</p></htmltext>
<tokenext>It 's clear that it is no longer sufficient to use a string of text entered by a human being as a secure key . Biometric or physical-token security should be a mandatory peripheral on all computing equipment sold . For remote access , keys should start in the thousands of bits , and be locked on the client side by biometrics or tokens . Short of that , failure to secure your data is your own fault .</tokentext>
<sentencetext>It's clear that it is no longer sufficient to use a string of text entered by a human being as a secure key. Biometric or physical-token security should be a mandatory peripheral on all computing equipment sold. For remote access, keys should start in the thousands of bits, and be locked on the client side by biometrics or tokens. Short of that, failure to secure your data is your own fault.</sentencetext>
</comment>
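The gap the comment above points at can be made concrete with a quick entropy estimate, assuming a fully random 8-character password drawn from the ~95 printable ASCII symbols (already generous for human-chosen passwords):

```python
import math

# Bits of entropy in a random 8-character password over 95 printable
# ASCII symbols, versus the "thousands of bits" the comment asks for.
password_bits = 8 * math.log2(95)
print(round(password_bits))  # ~53 bits
```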
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496102</id>
	<title>boo</title>
	<author>Anonymous</author>
	<datestamp>1268756160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>boo slashvertisement</p></htmltext>
<tokenext>boo slashvertisement</tokentext>
<sentencetext>boo slashvertisement</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31530554</id>
	<title>Re:103000 passwords per second. So?</title>
	<author>Anonymous</author>
	<datestamp>1268914560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><i> Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863 </i> <br> <br>

Why on earth did you need Google to say that? They bought math?</htmltext>
<tokenext>Google says this : ( 2 ^ 80 / 10 ^ 5 ) / ( 3600 * 24 * 365 * 1000 ) = 383 347 863 Why on earth did you need Google to say that ?
They bought math ?</tokentext>
<sentencetext> Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863   

Why on earth did you need Google to say that?
They bought math?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192</id>
	<title>103000 passwords per second.  So?</title>
	<author>roman_mir</author>
	<datestamp>1268756520000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>On that one ATI board that gets 103K passwords per second, versus only 4K on the latest quad-core Intel (which, by the way, is almost 26, not just 20, times faster.)</p><p>So that's wonderful.  How many passwords are there in 1024-bit SSL encryption?  A 1024-bit asymmetric key is equivalent to an 80-bit symmetric one, so that's like 2^80 passwords, right?</p><p>Let's say 100,000 passwords per second, that's 10^5.</p><p>Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863</p><p>383.3 million years to go through every password in 2^80 possibilities.</p><p>In reality, of course, not every combination is used, many passwords can be eliminated by heuristics, and it also helps to have a good dictionary file handy, from which to generate the most likely password combinations.  That probably cuts that down from 383 million years to something much more ATI friendly.  Of course we need to use a stronger cipher.</p><p>As a final note: at last I understand why Hugh Jackman needed the 7 monitor setup, each one must have been used as an output device for the video card it was connected to.  Obviously the video cards were the actual power behind all that hacking!</p></htmltext>
<tokenext>On that one ATI board that gets 103K passwords per second , versus only 4K on the latest quad-core Intel ( which , by the way , is almost 26 , not just 20 , times faster . )
So that 's wonderful .
How many passwords are there in 1024-bit SSL encryption ?
A 1024-bit asymmetric key is equivalent to an 80-bit symmetric one , so that 's like 2 ^ 80 passwords , right ?
Let 's say 100,000 passwords per second , that 's 10 ^ 5 .
Google says this : ( 2 ^ 80 / 10 ^ 5 ) / ( 3600 * 24 * 365 * 1000 ) = 383 347 863
383.3 million years to go through every password in 2 ^ 80 possibilities .
In reality , of course , not every combination is used , many passwords can be eliminated by heuristics , and it also helps to have a good dictionary file handy , from which to generate the most likely password combinations .
That probably cuts that down from 383 million years to something much more ATI friendly .
Of course we need to use a stronger cipher .
As a final note : at last I understand why Hugh Jackman needed the 7 monitor setup , each one must have been used as an output device for the video card it was connected to .
Obviously the video cards were the actual power behind all that hacking !</tokentext>
<sentencetext>On that one ATI board that gets 103K passwords per second, versus only 4K on the latest quad-core Intel (which, by the way, is almost 26, not just 20, times faster.)
So that's wonderful.
How many passwords are there in 1024-bit SSL encryption?
A 1024-bit asymmetric key is equivalent to an 80-bit symmetric one, so that's like 2^80 passwords, right?
Let's say 100,000 passwords per second, that's 10^5.
Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863
383.3 million years to go through every password in 2^80 possibilities.
In reality, of course, not every combination is used, many passwords can be eliminated by heuristics, and it also helps to have a good dictionary file handy, from which to generate the most likely password combinations.
That probably cuts that down from 383 million years to something much more ATI friendly.
Of course we need to use a stronger cipher.
As a final note: at last I understand why Hugh Jackman needed the 7 monitor setup, each one must have been used as an output device for the video card it was connected to.
Obviously the video cards were the actual power behind all that hacking!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499672</id>
	<title>Re:Stop with the advertising</title>
	<author>Anonymous</author>
	<datestamp>1268769420000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>As an IT security guy, I found this to be informative, actually. When analyzing the security of a system or organization, I need to know not just what is theoretically possible, but what can be done with already-existing software and hardware.</p><p>This article gives me some idea as to what attacks are currently practical (and for what key lengths).</p><p>When research or engineering achievements come from the commercial (rather than academic) sector, it isn't really reasonable to expect an academic tone. They're tooting their own horn, but they are doing it about something important.</p></htmltext>
<tokenext>As an IT security guy , I found this to be informative , actually .
When analyzing the security of a system or organization , I need to know not just what is theoretically possible , but what can be done with already-existing software and hardware .
This article gives me some idea as to what attacks are currently practical ( and for what key lengths ) .
When research or engineering achievements come from the commercial ( rather than academic ) sector , it is n't really reasonable to expect an academic tone .
They 're tooting their own horn , but they are doing it about something important .</tokentext>
<sentencetext>As an IT security guy, I found this to be informative, actually.
When analyzing the security of a system or organization, I need to know not just what is theoretically possible, but what can be done with already-existing software and hardware.
This article gives me some idea as to what attacks are currently practical (and for what key lengths).
When research or engineering achievements come from the commercial (rather than academic) sector, it isn't really reasonable to expect an academic tone.
They're tooting their own horn, but they are doing it about something important.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497520</id>
	<title>Re:Portrayal</title>
	<author>hatten</author>
	<datestamp>1268761140000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext>Wow, I didn't know Clippy could do password cracking!</htmltext>
<tokenext>Wow , I did n't know Clippy could do password cracking !</tokentext>
<sentencetext>Wow, I didn't know Clippy could do password cracking!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499310</id>
	<title>Re:Portrayal</title>
	<author>networkBoy</author>
	<datestamp>1268767920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I found crowbars to be too messy for kneecaps and skulls.  Baseball bat manufacturers... Those are the real nefarious ones because their tools are less obvious when openly carried...</p></htmltext>
<tokenext>I found crowbars to be too messy for kneecaps and skulls .
Baseball bat manufacturers ... Those are the real nefarious ones because their tools are less obvious when openly carried ... .</tokentext>
<sentencetext>I found crowbars to be too messy for kneecaps and skulls.
Baseball bat manufacturers... Those are the real nefarious ones because their tools are less obvious when openly carried...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496984</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502096</id>
	<title>Re:Stop with the advertising</title>
	<author>node 3</author>
	<datestamp>1268737380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Having skimmed TFA (actually, TF Press Release) it doesn't sound like there's anything really interesting here other than GPUs are faster at parallel calculations than CPUs. This is already known.</p><p>Cracking WPA and iPod/iPhone backups is still not a feasible task. Instead of 20 billion years (or whatever), it'll now only take 1 billion? Saying "20 times faster" makes it sound like you can already reliably crack these things, and now instead of a few hours, it's only a few minutes. But unless I missed it (and I certainly could have), that's not the case. It's just Moore's Law continuing on, in this case on the GPU instead of the CPU. We already know newer chips will be able to try more keys per second, but we're a *long* way from it being something to have any reasonable level of concern over.</p><p>It strikes me as odd that they actually have a product for this. It may be useful for short key lengths, but not for the things listed in the headline. It's like saying the hydrogen bomb can destroy Jupiter 100 times faster than an atom bomb. It may be technically true, but it's not a practical solution.</p></htmltext>
<tokenext>Having skimmed TFA ( actually , TF Press Release ) it does n't sound like there 's anything really interesting here other than GPUs are faster at parallel calculations than CPUs .
This is already known .
Cracking WPA and iPod/iPhone backups is still not a feasible task .
Instead of 20 billion years ( or whatever ) , it 'll now only take 1 billion ?
Saying " 20 times faster " makes it sound like you can already reliably crack these things , and now instead of a few hours , it 's only a few minutes .
But unless I missed it ( and I certainly could have ) , that 's not the case .
It 's just Moore 's Law continuing on , in this case on the GPU instead of the CPU .
We already know newer chips will be able to try more keys per second , but we 're a * long * way from it being something to have any reasonable level of concern over .
It strikes me as odd that they actually have a product for this .
It may be useful for short key lengths , but not for the things listed in the headline .
It 's like saying the hydrogen bomb can destroy Jupiter 100 times faster than an atom bomb .
It may be technically true , but it 's not a practical solution .</tokentext>
<sentencetext>Having skimmed TFA (actually, TF Press Release) it doesn't sound like there's anything really interesting here other than GPUs are faster at parallel calculations than CPUs.
This is already known.
Cracking WPA and iPod/iPhone backups is still not a feasible task.
Instead of 20 billion years (or whatever), it'll now only take 1 billion?
Saying "20 times faster" makes it sound like you can already reliably crack these things, and now instead of a few hours, it's only a few minutes.
But unless I missed it (and I certainly could have), that's not the case.
It's just Moore's Law continuing on, in this case on the GPU instead of the CPU.
We already know newer chips will be able to try more keys per second, but we're a *long* way from it being something to have any reasonable level of concern over.
It strikes me as odd that they actually have a product for this.
It may be useful for short key lengths, but not for the things listed in the headline.
It's like saying the hydrogen bomb can destroy Jupiter 100 times faster than an atom bomb.
It may be technically true, but it's not a practical solution.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496186</id>
	<title>Re:GPUs</title>
	<author>Anonymous</author>
	<datestamp>1268756460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>YES.</p><p>These cards are fast in certain applications due to the extreme parallelism, this simply doesn't work in the majority of day-to-day computing.</p></htmltext>
<tokenext>YES .
These cards are fast in certain applications due to the extreme parallelism , this simply does n't work in the majority of day-to-day computing .</tokentext>
<sentencetext>YES.
These cards are fast in certain applications due to the extreme parallelism, this simply doesn't work in the majority of day-to-day computing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496864</id>
	<title>Password Recovery</title>
	<author>MrTripps</author>
	<datestamp>1268758860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Password recovery" is about the same swarthy euphemism as "waste management" or "escort service." Why did an advertisement for hacking passwords get on<nobr> <wbr></nobr>/.? Aren't there IRC channels for that sort of thing?</htmltext>
<tokenext>" Password recovery " is about the same swarthy euphemism as " waste management " or " escort service .
" Why did an advertisement for hacking passwords get on /. ?
Are n't there IRC channels for that sort of thing ?</tokentext>
<sentencetext>"Password recovery" is about the same swarthy euphemism as "waste management" or "escort service.
" Why did an advertisement for hacking passwords get on /.?
Aren't there IRC channels for that sort of thing?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499722</id>
	<title>Re:103000 passwords per second. So?</title>
	<author>xded</author>
	<datestamp>1268769540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863</p><p>383.3 million years to go through every password in 2^80 possibilities.</p></div><p>Try this: <a href="http://www.google.com/search?q=2%5E80+%2F+(+10%5E5+%2F+s+)+in+millennia" title="google.com" rel="nofollow">2^80 / ( 10^5 / s ) in millennia</a> [google.com]. Or try it with <a href="http://www.google.com/search?q=(4.3+GB)+%2F+(100+(Mbit+%2F+s))" title="google.com" rel="nofollow">bandwidth calculations</a> [google.com].</p><p>ded</p>
	</htmltext>
<tokenext>Google says this : ( 2 ^ 80 / 10 ^ 5 ) / ( 3600 * 24 * 365 * 1000 ) = 383 347 863
383.3 million years to go through every password in 2 ^ 80 possibilities .
Try this : 2 ^ 80 / ( 10 ^ 5 / s ) in millennia [ google.com ] .
Or try it with bandwidth calculations [ google.com ] .
ded</tokentext>
<sentencetext>Google says this: (2^80 / 10^5 ) / (3600 *24 *365*1000) = 383 347 863
383.3 million years to go through every password in 2^80 possibilities.
Try this: 2^80 / ( 10^5 / s ) in millennia [google.com].
Or try it with bandwidth calculations [google.com].
ded
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497984</id>
	<title>Re:Out of curiosity...</title>
	<author>VGPowerlord</author>
	<datestamp>1268762880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Newer windowing systems no longer draw the screen as a single 2D object.  This includes X (Compiz), OSX, and Windows.</p></htmltext>
<tokenext>Newer windowing systems no longer draw the screen as a single 2D object .
This includes X ( Compiz ) , OSX , and Windows .</tokentext>
<sentencetext>Newer windowing systems no longer draw the screen as a single 2D object.
This includes X (Compiz), OSX, and Windows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496756</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502096
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502542
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497520
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496460
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496130
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496176
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31508616
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496674
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497706
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496318
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496656
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496888
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497628
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496874
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496370
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499722
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496222
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496698
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499310
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496984
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31498400
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496916
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497450
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31507414
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499672
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496794
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499286
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31530554
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496932
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496830
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496676
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500162
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496152
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502850
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496122
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496210
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497474
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496214
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497354
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496992
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500944
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496372
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497984
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496756
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496570
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496042
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496186
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496204
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497356
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_10_03_16_1346258_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496934
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495946
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496698
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496186
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496176
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496114
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502542
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496122
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502850
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496992
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496214
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500162
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496888
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496210
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496222
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497474
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496372
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31500944
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496130
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496932
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496574
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496192
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499722
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499286
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496656
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31530554
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497774
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495918
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496246
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497520
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31498400
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496042
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496370
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496794
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496072
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496152
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496984
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499310
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31498362
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496088
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495760
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497706
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496204
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497354
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495872
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496916
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496934
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31502096
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497356
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31499672
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31507414
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496674
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31508616
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31495868
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496086
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497628
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496212
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496460
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497450
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496318
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496874
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496756
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31497984
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496830
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496102
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation10_03_16_1346258.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496002
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment10_03_16_1346258.31496676
</commentlist>
</conversation>
