<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_12_23_191207</id>
	<title>An Open Source Compiler From CUDA To X86-Multicore</title>
	<author>timothy</author>
	<datestamp>1261595160000</datestamp>
	<htmltext>Gregory Diamos writes <i>"An open source project, <a href="http://code.google.com/p/gpuocelot/">Ocelot</a>, has recently released a just-in-time compiler for CUDA, allowing the same programs to be run on NVIDIA GPUs or x86 CPUs and providing an alternative to OpenCL.  A description of the compiler was recently posted on the <a href="http://forums.nvidia.com/index.php?showtopic=153448">NVIDIA forums</a>. The compiler works by translating GPU instructions to LLVM and then generating native code for any LLVM target. It has been validated against over 100 CUDA applications. All of the code is available under the New BSD license."</i></htmltext>
<tokentext>Gregory Diamos writes " An open source project , Ocelot , has recently released a just-in-time compiler for CUDA , allowing the same programs to be run on NVIDIA GPUs or x86 CPUs and providing an alternative to OpenCL .
A description of the compiler was recently posted on the NVIDIA forums .
The compiler works by translating GPU instructions to LLVM and then generating native code for any LLVM target .
It has been validated against over 100 CUDA applications .
All of the code is available under the New BSD license .
"</tokentext>
<sentencetext>Gregory Diamos writes "An open source project, Ocelot, has recently released a just-in-time compiler for CUDA, allowing the same programs to be run on NVIDIA GPUs or x86 CPUs and providing an alternative to OpenCL.
A description of the compiler was recently posted on the NVIDIA forums.
The compiler works by translating GPU instructions to LLVM and then generating native code for any LLVM target.
It has been validated against over 100 CUDA applications.
All of the code is available under the New BSD license.
"</sentencetext>
</article>
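The pipeline the summary describes (PTX → LLVM → native x86) comes down to executing a data-parallel GPU kernel with ordinary loops over the thread index space. A minimal C++ sketch of that idea — illustrative only, not Ocelot's actual code, which operates on PTX rather than C++ source:

```cpp
#include <cstddef>
#include <vector>

// A CUDA-style kernel written as a plain function of its thread index.
// (Hypothetical example for illustration.)
void saxpy_kernel(std::size_t tid, float a,
                  const std::vector<float>& x, std::vector<float>& y) {
    if (tid < y.size())            // bounds guard, as a real kernel would have
        y[tid] = a * x[tid] + y[tid];
}

// "Launching" on a CPU: the GPU's grid of threads becomes nested loops.
void launch_on_cpu(std::size_t grid, std::size_t block, float a,
                   const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t b = 0; b < grid; ++b)
        for (std::size_t t = 0; t < block; ++t)
            saxpy_kernel(b * block + t, a, x, y);
}
```

Each loop iteration plays the role of one GPU thread; a multicore build would split the outer (block) loop across cores.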
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30552052</id>
	<title>Re:Doesn't sound like a compiler</title>
	<author>Anonymous</author>
	<datestamp>1261771680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p> <i>There is no LLVM backend for AMD/ATI cards.</i> </p><p>Watch the LLVM tree carefully over the next few months. There may be some interesting checkins on the way.</p></htmltext>
<tokentext>There is no LLVM backend for AMD/ATI cards .
Watch the LLVM tree carefully over the next few months .
There may be some interesting checkins on the way .</tokentext>
<sentencetext> There is no LLVM backend for AMD/ATI cards.
Watch the LLVM tree carefully over the next few months.
There may be some interesting checkins on the way.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538904</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30542168</id>
	<title>Re:just-in-time compiler?</title>
	<author>slimjim8094</author>
	<datestamp>1259781540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No, it means compiled just in time. Shocking, I know.</p></htmltext>
<tokentext>No , it means compiled just in time .
Shocking , I know .</tokentext>
<sentencetext>No, it means compiled just in time.
Shocking, I know.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539112</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539112</id>
	<title>just-in-time compiler?</title>
	<author>Anonymous</author>
	<datestamp>1259749920000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>Doesn't that mean "not compiled at all"?</p></htmltext>
<tokentext>Does n't that mean " not compiled at all "
<sentencetext>Doesn't that mean "not compiled at all"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30551588</id>
	<title>Re:Alternative?</title>
	<author>badkarmadayaccount</author>
	<datestamp>1261765500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Same reason people stick to Flash: superior development tools. But there is a catch - LLVM has been romancing vector support, and I believe clang is used as an OpenCL frontend, so anything with an LLVM backend == supports OpenCL</htmltext>
<tokentext>Same reason people stick to Flash , superior development tools .
But there is a catch - LLVM has been romancing vector support , and I believe clang is used as a opencl frontend , so anything with a llvm backend = = supports opencl</tokentext>
<sentencetext>Same reason people stick to Flash, superior development tools.
But there is a catch - LLVM has been romancing vector support, and I believe clang is used as a opencl frontend, so anything with a llvm backend == supports opencl</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538038</id>
	<title>Re:Alternative?</title>
	<author>Pinky's Brain</author>
	<datestamp>1259786460000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>I've seen feature requests suggesting they are considering it, but at the moment too much information is lost in the PTX-&gt;LLVM step to be able to generate CAL or OpenCL.</p></htmltext>
<tokentext>I 've seen feature requests suggesting they are considering it , but at the moment too much information is lost in the PTX- &gt; LLVM step to be able to generate CAL or OpenCL .</tokentext>
<sentencetext>I've seen feature requests suggesting they are considering it, but at the moment too much information is lost in the PTX-&gt;LLVM step to be able to generate CAL or OpenCL.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538152</id>
	<title>Re:Alternative?</title>
	<author>Icegryphon</author>
	<datestamp>1259787000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>What possible reason could you have to want to be locked into one GPU vendor?</p></div><p>Hardware, libraries, and toolkit.<br>
CUDA was usable way before anything else.<br>
At the time CUDA came out, AMD was using <a href="http://en.wikipedia.org/wiki/Close\_to\_Metal" title="wikipedia.org">CTM</a> [wikipedia.org],<br>
which is absolutely painful to use.</p>
	</htmltext>
<tokentext>What possible reason could you have to want to be locked into one GPU vendor ? Hardware , libraries , and Toolkit .
Cuda was useable way before anything else At the Time Cuda came out AMD was using CTM [ wikipedia.org ] .
Which is absolutely Painful to use .</tokentext>
<sentencetext>What possible reason could you have to want to be locked into one GPU vendor?Hardware, libraries, and Toolkit.
Cuda was useable way before anything else
At the Time Cuda came out AMD was using CTM [wikipedia.org].
Which is absolutely Painful to use.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537852</id>
	<title>Re:Alternative?</title>
	<author>Yvan256</author>
	<datestamp>1259785200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>When did AMD drop the ATI brand?</p></htmltext>
<tokentext>When did AMD drop the ATI brand ?</tokentext>
<sentencetext>When did AMD drop the ATI brand?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538036</id>
	<title>Re:Wait wut?</title>
	<author>SpinyNorman</author>
	<datestamp>1259786400000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>I can think of a couple of reasons it may be useful on x86:</p><p>- Better debugging tools<br>- Allows CUDA development without buying specialized hardware up-front (a lesson I've learnt - don't buy hardware until the software is ready)</p><p>It's also another option for multi-core programming. If the CUDA API is good, maybe it's an efficient way to develop certain types of parallel apps even if you never intend to use them on a GPU.</p></htmltext>
<tokentext>I can think of a couple of reasons it may be useful on x86 : - Better debugging tools- Allows CUDA development without buying specialized hardware up-front ( a lesson I 've learnt - do n't buy hardware until the software is ready ) It 's also another option for multi-core programming .
If the CUDA API is good , maybe it 's an efficient way to develop certain types of parallel apps even if you never intend to use it on a GPU .</tokentext>
<sentencetext>I can think of a couple of reasons it may be useful on x86 :- Better debugging tools- Allows CUDA development without buying specialized hardware up-front (a lesson I've learnt - don't buy hardware until the software is ready)It's also another option for multi-core programming.
If the CUDA API is good, maybe it's an efficient way to develop certain types of parallel apps even if you never intend to use it on a GPU.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537880</id>
	<title>Re:Wait wut?</title>
	<author>Anonymous</author>
	<datestamp>1259785380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Suppose you have working CUDA code but your dataset is relatively small, say a block of 1000 floating point numbers.  Then the overhead of delegating the work to the GPU isn't necessarily worth the trouble.</p></htmltext>
<tokentext>Suppose you have working CUDA code but your dataset is relatively small , say a block of 1000 floating point numbers .
Then the overhead of delegating the work to the GPU is n't necessarily worth the trouble .</tokentext>
<sentencetext>Suppose you have working CUDA code but your dataset is relatively small, say a block of 1000 floating point numbers.
Then the overhead of delegating the work to the GPU isn't necessarily worth the trouble.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808</parent>
</comment>
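The parent's point about small datasets can be made concrete with a back-of-envelope cost model. All constants below are illustrative assumptions, not measurements of any real GPU:

```cpp
#include <cstddef>

// Offloading pays only when the per-element speedup amortises the fixed
// transfer/launch overhead. The numbers in the test are made up for illustration.
double cpu_time(std::size_t n, double cpu_per_elem) {
    return n * cpu_per_elem;
}
double gpu_time(std::size_t n, double gpu_per_elem, double overhead) {
    return overhead + n * gpu_per_elem;
}
bool offload_wins(std::size_t n, double cpu_pe, double gpu_pe, double overhead) {
    return gpu_time(n, gpu_pe, overhead) < cpu_time(n, cpu_pe);
}
```

With, say, 100 µs of fixed overhead and a 10x per-element speedup, the break-even point sits above ten thousand elements - so a 1000-float block is faster left on the CPU, which is exactly where an x86 backend helps.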
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808</id>
	<title>Wait wut?</title>
	<author>Anonymous</author>
	<datestamp>1259784960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Why would you go from CUDA (fast floating point) to x86 (slower floating point)?<br>
Is there support yet for double-precision floating point on Nvidia cards?<br>
This makes as much sense as a Wookiee on the planet Endor.<br>
Unless the point is portability, but then why write it in CUDA to begin with?</htmltext>
<tokentext>Why would you go from CUDA ( Fast Floating-points ) to x86 ( slower Floating-points ) ?
Is there support yet for double-precision floating points yet on Nvidia cards ?
This makes as much sense as a Wookiee on the planet Endor .
Unless the Point is portability but , then why write it in Cuda to begin with ?</tokentext>
<sentencetext>Why would you go from CUDA(Fast Floating-points) to x86(slower Floating-points)?
Is there support yet for double-precision floating points yet on Nvidia cards?
This makes as much sense as a Wookiee on the planet Endor.
Unless the Point is portability but, then why write it in Cuda to begin with?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30541194</id>
	<title>Why?</title>
	<author>Gregory Diamos</author>
	<datestamp>1259766780000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>So there seem to be several questions as to why people would want to use CUDA when an open standard (OpenCL) exists for the same thing.</p><p>Well, honestly, the reason I wrote this is that when I started, OpenCL did not exist.</p><p>
I have heard the following reasons why some people prefer CUDA over OpenCL:
</p><ul> <li>The toolchains for OpenCL are still immature.  They are getting better, but are not yet as bug-free or as high-performance as CUDA's.</li>
<li>CUDA has more desirable features.  For example, CUDA supports many C++ features, such as templates and classes in device code, that are not part of the OpenCL specification.</li>
</ul><p>Additionally, I would like to see a programming model like CUDA or OpenCL replace the most widespread models in industry (threads, OpenMP, MPI, etc.).  CUDA and OpenCL are both examples of <a href="http://en.wikipedia.org/wiki/Bulk\_synchronous\_parallel" title="wikipedia.org" rel="nofollow">Bulk Synchronous Parallel</a> [wikipedia.org] models, which are explicitly designed around the idea that communication latency and core count will increase over time.  Although I think it is a long shot, I would like to see more applications written in these languages so there is a migration path for developers who do not want to write specialized applications for GPUs, but can instead write an application for a CPU that can take advantage of future CPUs with more cores, or of GPUs with a large degree of fine-grained parallelism.</p><p>Most of the codebase for Ocelot could be re-used for OpenCL.  The intermediate representation for each language is very similar, with the main differences being in the runtime.</p><p>
Please try to tear down these arguments; it really does help.</p></htmltext>
<tokentext>So there seem to be several questions as to why people would want to use CUDA when an open standard exists for the same thing ( OpenCL ) .Well , honestly , the reason why I wrote this was because when I started , OpenCL did not exist .
I have heard the following reasons why some people prefer CUDA over OpenCL : The toolchains for OpenCL are still immature .
They are getting better , but are not quite as bug-free and high performance as CUDA at this point .
CUDA has more desirable features .
For example , CUDA supports many C + + features such as templates and classes in device code that are not part of the OpenCL specification .
Additionally I would like to see a programming model like CUDA or OpenCL replace the most widespread models in industry ( threads , openmp , mpi , etc... ) .
CUDA and OpenCL are each examples of Bulk Synchronous Parallel [ wikipedia.org ] models , which explicitly are designed with the idea that communication latency and core count will increase over time .
Although I think that it is a long shot , I would like to see more applications written in these languages so there is a migration path for developers who do not want to write specialized applications for GPUs , but can instead write an application for a CPU that can take advantage of future CPUs with multiple cores , or GPUs with a large degree of fine-grained parallelism .
Most of the codebase for Ocelot could be re-used for OpenCL .
The intermediate representation for each language is very similar , with the main differences being in the runtime .
Please try to tear down these arguments , it really does help .</tokentext>
<sentencetext>So there seem to be several questions as to why people would want to use CUDA when an open standard exists for the same thing (OpenCL).Well, honestly, the reason why I wrote this was because when I started, OpenCL did not exist.
I have heard the following reasons why some people prefer CUDA over OpenCL:
 The toolchains for OpenCL are still immature.
They are getting better, but are not quite as bug-free and high performance as CUDA at this point.
CUDA has more desirable features.
For example, CUDA supports many C++ features such as templates and classes in device code that are not part of the OpenCL specification.
Additionally I would like to see a programming model like CUDA or OpenCL replace the most widespread models in industry (threads, openmp, mpi, etc...).
CUDA and OpenCL are each examples of Bulk Synchronous Parallel [wikipedia.org] models, which explicitly are designed with the idea that communication latency and core count will increase over time.
Although I think that it is a long shot, I would like to see more applications written in these languages so there is a migration path for developers who do not want to write specialized applications for GPUs, but can instead write an application for a CPU that can take advantage of future CPUs with multiple cores, or GPUs with a large degree of fine-grained parallelism.
Most of the codebase for Ocelot could be re-used for OpenCL.
The intermediate representation for each language is very similar, with the main differences being in the runtime.
Please try to tear down these arguments, it really does help.</sentencetext>
</comment>
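The C++-features point above can be illustrated with plain host-side C++ (a sketch of the pattern, not actual CUDA device code): one templated kernel body serves many element types and operators, something OpenCL C of that era could not express.

```cpp
#include <vector>

// Templated "kernel" body: the same code is instantiated at compile time for
// int/float/any operator, the way CUDA device-code templates are.
template <typename T, typename Op>
T reduce_kernel(const std::vector<T>& data, T init, Op op) {
    T acc = init;
    for (const T& v : data)
        acc = op(acc, v);
    return acc;
}
```

In OpenCL C you would instead write (or string-generate) one kernel per type/operator combination.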
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538344</id>
	<title>Re:Doesn't sound like a compiler</title>
	<author>beelsebob</author>
	<datestamp>1259745000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><em>Seems to be just a front-end for LLVM. And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards?</em><br>Because OpenCL already does that job just fine.  The only possible use for this is to have legacy CUDA apps actually run while people port them to use OpenCL instead.</p></htmltext>
<tokentext>Seems to be just a front-end for LLVM .
And if it is just a front-end for LLVM , then why does n't it support ATI graphics cards ? Because OpenCL already does that job just fine .
The only possible use for this is to have legacy CUDA apps actually run while people port them to use OpenCL instead .</tokentext>
<sentencetext>Seems to be just a front-end for LLVM.
And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards?Because OpenCL already does that job just fine.
The only possible use for this is to have legacy CUDA apps actually run while people port them to use OpenCL instead.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30547364</id>
	<title>The worlds only Ocelot joke</title>
	<author>Lardmonster</author>
	<datestamp>1261653300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Q: How do you titillate an ocelot?<br>A: Oscillate its tits a lot.</p><p>I thank you.</p></htmltext>
<tokentext>Q : How do you titilate an ocelot ? A : Oscillate it 's tits a lot.I thank you .</tokentext>
<sentencetext>Q: How do you titilate an ocelot?A: Oscillate it's tits a lot.I thank you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539608</id>
	<title>larrabee</title>
	<author>Anonymous</author>
	<datestamp>1259753520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Had Larrabee turned into a product this Xmas, I think a lot of people would have been interested in CUDA to x86.<br>I'm sure the people still working on it will be interested in it.</p><p>Next step: CUDA to ATI...</p></htmltext>
<tokentext>Had Larrabee turned into a product this xmas , I think alot of people would have been interested in CUDA to x86.I 'm sure the people still working on it will be interested in it.Next step CUDA to ATI.. .</tokentext>
<sentencetext>Had Larrabee turned into a product this xmas, I think alot of people would have been interested in CUDA to x86.I'm sure the people still working on it will be interested in it.Next step CUDA to ATI...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539248</id>
	<title>OpenCL not a magic bullet</title>
	<author>Anonymous</author>
	<datestamp>1259750940000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>A bit off topic, but since I'm seeing posts about OpenCL and portability...</p><p>OpenCL will indeed get you portability between processors; however, OpenCL does not make any guarantees about how well that portable code will run.  In the end, to get optimum performance you still have to code to the particular architecture on which your code is going to run.  For example, performance on Nvidia chips is extremely sensitive to memory access patterns.  You could write OpenCL code that runs very well on Nvidia chips but runs poorly on a different architecture.</p><p>Not saying that portability isn't a good thing, but a lot of people seem to think that OpenCL will solve all your portability problems.  It won't.  It will only let code run on multiple architectures.  You'll still have to more or less hand-optimize for the architecture.</p></htmltext>
<tokentext>A bit off topic , but since I 'm seeing posts about OpenCL and portability...OpenCL will indeed get you portability between processors , however OpenCL does not make any guarantees about how well that portable code will run .
In the end , to get optimum performance you still have to code to the particular architecture on which that your code is going to run .
For example , performance on Nvidia chips is extremely sensitive to memory access patterns .
You could write OpenCL code that runs very well on Nvidia chips , but runs poorly on a different architecture.Not saying that portability is n't a good thing , but a lot of people seem to be thinking that OpenCL will solve all your portability problems .
It wo n't .
It only will let code run on multiple architectures .
You 'll still have to more or less hand optimize to the architecture .</tokentext>
<sentencetext>A bit off topic, but since I'm seeing posts about OpenCL and portability...OpenCL will indeed get you portability between processors, however OpenCL does not make any guarantees about how well that portable code will run.
In the end, to get optimum performance you still have to code to the particular architecture on which that your code is going to run.
For example, performance on Nvidia chips is extremely sensitive to memory access patterns.
You could write OpenCL code that runs very well on Nvidia chips, but runs poorly on a different architecture.Not saying that portability isn't a good thing, but a lot of people seem to be thinking that OpenCL will solve all your portability problems.
It won't.
It only will let code run on multiple architectures.
You'll still have to more or less hand optimize to the architecture.</sentencetext>
</comment>
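The memory-access-pattern sensitivity described above can be seen from the addresses alone. A simplified sketch (real coalescing rules depend on the hardware generation): adjacent GPU threads should touch adjacent words, and the same logical kernel with a transposed index mapping touches widely strided ones instead.

```cpp
#include <cstddef>

// Word address touched by thread `t` at step `s` under two index mappings.
// Coalesced: neighbouring threads hit neighbouring words (one wide transaction
// on Nvidia hardware).
std::size_t coalesced_addr(std::size_t t, std::size_t s, std::size_t nthreads) {
    return s * nthreads + t;
}
// Strided: neighbouring threads are `steps` words apart (many transactions).
std::size_t strided_addr(std::size_t t, std::size_t s, std::size_t steps) {
    return t * steps + s;
}
```

Both mappings compute the same result; only the second one tanks on coalescing-sensitive chips, which is why portable OpenCL code is not automatically performance-portable.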
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537920</id>
	<title>Metal Gear Solid Joke Here</title>
	<author>Anonymous</author>
	<datestamp>1259785680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ocelot!</p></htmltext>
<tokentext>Ocelot !</tokentext>
<sentencetext>Ocelot!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</id>
	<title>Alternative?</title>
	<author>Anonymous</author>
	<datestamp>1259784660000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards. In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.</p><p>What possible reason could you have to want to be locked into one GPU vendor?</p></htmltext>
<tokentext>This is n't an alternative to CUDA ; it lets CUDA code run on x86 , but still does n't do anything for AMD graphics cards .
In other words , your choices as a developer are to use OpenCL and have your code run everywhere ( AMD , nVidia , x86 slowly ) , or use CUDA and have your code run on nVidia or x86 slowly.What possible reason could you have to want to be locked into one GPU vendor ?</tokentext>
<sentencetext>This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards.
In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.What possible reason could you have to want to be locked into one GPU vendor?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540140</id>
	<title>We need the opposite...</title>
	<author>Anonymous</author>
	<datestamp>1259757480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>An (Open Source) Compiler From  X86-Multicore To CUDA.... This way, the ION3 could completely miss the Atom part of the equation, and we would get one more player in the x86 field.</p></htmltext>
<tokentext>An ( Open Source ) Compiler From X86-Multicore To CUDA.... This way , the ION3 could completely miss the Atom part of the equation , and we would get one more player in the x86 field .</tokentext>
<sentencetext>An (Open Source) Compiler From  X86-Multicore To CUDA.... This way, the ION3 could completely miss the Atom part of the equation, and we would get one more player in the x86 field.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846</id>
	<title>Doesn't sound like a compiler</title>
	<author>gnasher719</author>
	<datestamp>1259785200000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>Seems to be just a front-end for LLVM. And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards? That would actually make it useful; there is no need for a second CUDA compiler for NVidia cards.</htmltext>
<tokentext>Seems to be just a front-end for LLVM .
And if it is just a front-end for LLVM , then why does n't it support ATI graphics cards ?
That would actually make it useful ; there is no need for a second CUDA compiler for NVidia cards .</tokentext>
<sentencetext>Seems to be just a front-end for LLVM.
And if it is just a front-end for LLVM, then why doesn't it support ATI graphics cards?
That would actually make it useful; there is no need for a second CUDA compiler for NVidia cards.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538716</id>
	<title>Re:Alternative?</title>
	<author>TheRaven64</author>
	<datestamp>1259747400000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards</p></div><p>Actually, it does.  It lets CUDA code run on any processor that has an LLVM back end.  The open source Radeon drivers have an experimental LLVM back end and use LLVM for optimising shader code.</p>
	</htmltext>
<tokentext>it lets CUDA code run on x86 , but still does n't do anything for AMD graphics cardsActually , it does .
It lets CUDA code run on any processor that has an LLVM back end .
The open source Radeon drivers have an experimental LLVM back end and use LLVM for optimising shader code .</tokentext>
<sentencetext> it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cardsActually, it does.
It lets CUDA code run on any processor that has an LLVM back end.
The open source Radeon drivers have an experimental LLVM back end and use LLVM for optimising shader code.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30543172</id>
	<title>Re:OpenCL not a magic bullet</title>
	<author>complete loony</author>
	<datestamp>1261661700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>This is one of the strengths of LLVM. If your hardware performs better with some specific tweaks to the code, then write an optimising pass that makes the appropriate transformations. Then you can keep your back-end machine code generator as simple as possible. Even better, write your optimiser in a generic way so anyone else tackling a similar problem can reuse your work. Heck, if you're lucky, someone else has already done so.</htmltext>
<tokentext>This is one of the strengths of LLVM .
If your hardware performs better with some specific tweaks to the code , then write an optimising pass that makes the appropriate transformations .
Then you can keep your back end machine code generator as simple as possible .
Even better , write your optimiser in a generic way so anyone else tackling a similar problem can reuse your work .
Heck if you 're lucky someone else has already done so .</tokentext>
<sentencetext>This is one of the strengths of LLVM.
If your hardware performs better with some specific tweaks to the code, then write an optimising pass that makes the appropriate transformations.
Then you can keep your back end machine code generator as simple as possible.
Even better, write your optimiser in a generic way so anyone else tackling a similar problem can reuse your work.
Heck if you're lucky someone else has already done so.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539248</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539802</id>
	<title>Re:Alternative?</title>
	<author>CDeity</author>
	<datestamp>1259755080000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>The greatest challenges lie in accommodating arbitrary control flow among threads within a cooperative thread array. NVIDIA GPUs are SIMD multiprocessors, but they include a thread activity stack that enables serialization of threads when they reach diverging branches. Without hardware support, this kind of thing becomes difficult on SIMD processors, which is why Ocelot doesn't include support for SSE yet. It is also one of the obstacles to supporting AMD/ATI IL at the moment, though solutions are in order.</p><p>Translation from PTX to LLVM to multicore x86 does not necessarily throw away information concerning the PTX thread hierarchy initially. The first step is to express a PTX kernel using LLVM instructions and intrinsic function calls. This phase is [theoretically] invertible, and no information concerning correctness or parallelism is lost.</p><p>To get to multicore from here, a second phase of transformations inserts loops around blocks of code within the kernel to implement fine-grain multithreading. This is the part that isn't necessarily invertible or easy to translate back to GPU architectures, and it is what is referenced in the note you are citing.</p><p>Disclosure: I'm one of the core contributors to the Ocelot project.</p></htmltext>
<tokenext>The greatest challenges lie in accommodating arbitrary control flow among threads within a cooperative thread array .
NVIDIA GPUs are SIMD multiprocessors , but they include a thread activity stack that enables serialization of threads when they reach diverging branches .
Without hardware support , this kind of thing becomes difficult on SIMD processors which is why Ocelot does n't include support for SSE yet .
It is also one of the obstacles for supporting AMD/ATI IL at the moment , though solutions are in order .
Translation from PTX to LLVM to multicore x86 does not necessarily throw away information concerning the PTX thread hierarchy initially .
The first step is to express a PTX kernel using LLVM instructions and intrinsic function calls .
This phase is [ theoretically ] invertible and no information concerning correctness or parallelism is lost .
To get to multicore from here , a second phase of transformations insert loops around blocks of code within the kernel to implement fine-grain multithreading .
This is the part that is n't necessarily invertible or easy to translate back to GPU architectures and is what is referenced in the note you are citing .
Disclosure : I 'm one of the core contributors to the Ocelot project .</tokentext>
<sentencetext>The greatest challenges lie in accommodating arbitrary control flow among threads within a cooperative thread array.
NVIDIA GPUs are SIMD multiprocessors, but they include a thread activity stack that enables serialization of threads when they reach diverging branches.
Without hardware support, this kind of thing becomes difficult on SIMD processors which is why Ocelot doesn't include support for SSE yet.
It is also one of the obstacles for supporting AMD/ATI IL at the moment, though solutions are in order.
Translation from PTX to LLVM to multicore x86 does not necessarily throw away information concerning the PTX thread hierarchy initially.
The first step is to express a PTX kernel using LLVM instructions and intrinsic function calls.
This phase is [theoretically] invertible and no information concerning correctness or parallelism is lost.
To get to multicore from here, a second phase of transformations insert loops around blocks of code within the kernel to implement fine-grain multithreading.
This is the part that isn't necessarily invertible or easy to translate back to GPU architectures and is what is referenced in the note you are citing.
Disclosure: I'm one of the core contributors to the Ocelot project.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538038</parent>
</comment>
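<!--
The loop-insertion phase described in the comment above (wrapping kernel code in loops over logical thread IDs to get fine-grain multithreading on a CPU) can be sketched in miniature. This is an illustrative model only, not Ocelot's actual implementation; the function names (saxpy_kernel, launch_on_cpu) are hypothetical.

```python
# Illustrative sketch (not Ocelot code): serializing a SIMT-style kernel
# on a CPU by wrapping its body in an explicit loop over thread IDs,
# analogous to the loop-insertion transformation described above.

def saxpy_kernel(tid, a, x, y, out):
    # Body a GPU would execute once per thread, indexed by threadIdx.
    out[tid] = a * x[tid] + y[tid]

def launch_on_cpu(kernel, num_threads, *args):
    # The inserted "thread loop": run each logical thread in sequence.
    # Valid as-is only for kernels with no barriers; a real translator
    # must in general split the kernel at synchronization points first.
    for tid in range(num_threads):
        kernel(tid, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(x)
launch_on_cpu(saxpy_kernel, len(x), 2.0, x, y, out)
# out == [12.0, 24.0, 36.0, 48.0]
```

As the comment notes, this direction of the translation is hard to invert: once the thread loop is fused into the kernel body, recovering the original per-thread parallelism for a GPU target is no longer straightforward.
-->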
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540880</id>
	<title>Re:Alternative?</title>
	<author>triso</author>
	<datestamp>1259763300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>What possible reason could you have to want to be locked into one GPU vendor?</p></div><p>Only that the other GPU Vendor, AMD/ATI, doesn't have a working Linux driver for 3-d, proprietary or open. In addition, there isn't much support for their older cards.</p>
	</htmltext>
<tokenext>What possible reason could you have to want to be locked into one GPU vendor ? Only that the other GPU Vendor , AMD/ATI , does n't have a working Linux driver for 3-d , proprietary or open .
In addition there is n't much support for their older cards ,</tokentext>
<sentencetext>What possible reason could you have to want to be locked into one GPU vendor?Only that the other GPU Vendor, AMD/ATI, doesn't have a working Linux driver for 3-d, proprietary or open.
In addition there isn't much support for their older cards,
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538594</id>
	<title>CUDA is only fast on some computers</title>
	<author>Anonymous</author>
	<datestamp>1259746800000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Why would you go from CUDA(Fast Floating-points) to x86(slower Floating-points)?</p></div></blockquote><p>Because if you don't have the right hardware, CUDA isn't fast floats.  It's a program that doesn't run at all.</p>
	</htmltext>
<tokenext>Why would you go from CUDA ( Fast Floating-points ) to x86 ( slower Floating-points ) ? Because if you do n't have the right hardware , CUDA is n't fast floats .
It 's a program that does n't run at all .</tokentext>
<sentencetext>Why would you go from CUDA(Fast Floating-points) to x86(slower Floating-points)?Because if you don't have the right hardware, CUDA isn't fast floats.
It's a program that doesn't run at all.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539838</id>
	<title>Re:Alternative?</title>
	<author>Anonymous</author>
	<datestamp>1259755380000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards. In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.</p><p>What possible reason could you have to want to be locked into one GPU vendor?</p></div><p>Because the hardware doesn't suck?</p>
	</htmltext>
<tokenext>This is n't an alternative to CUDA ; it lets CUDA code run on x86 , but still does n't do anything for AMD graphics cards .
In other words , your choices as a developer are to use OpenCL and have your code run everywhere ( AMD , nVidia , x86 slowly ) , or use CUDA and have your code run on nVidia or x86 slowly .
What possible reason could you have to want to be locked into one GPU vendor ?
Because the hardware does n't suck ?</tokentext>
<sentencetext>This isn't an alternative to CUDA; it lets CUDA code run on x86, but still doesn't do anything for AMD graphics cards.
In other words, your choices as a developer are to use OpenCL and have your code run everywhere (AMD, nVidia, x86 slowly), or use CUDA and have your code run on nVidia or x86 slowly.
What possible reason could you have to want to be locked into one GPU vendor?
Because the hardware doesn't suck?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537990</id>
	<title>Re:Alternative?</title>
	<author>raftpeople</author>
	<datestamp>1259786220000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><i>What possible reason could you have to want to be locked into one GPU vendor?</i> <br> <br>
The reason is that today CUDA has a head start and is more mature.  Eventually things will probably shift to OpenCL, but that takes time, and people don't want to sacrifice features today.</htmltext>
<tokenext>What possible reason could you have to want to be locked into one GPU vendor ?
The reason is that today CUDA has a headstart and is more mature .
Eventually things will probably shift to OpenCL but that takes time and people do n't want to sacrifice features today .</tokentext>
<sentencetext>What possible reason could you have to want to be locked into one GPU vendor?
The reason is that today CUDA has a headstart and is more mature.
Eventually things will probably shift to OpenCL but that takes time and people don't want to sacrifice features today.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537790</id>
	<title>Performance?</title>
	<author>pablodiazgutierrez</author>
	<datestamp>1259784840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I wonder how the performance of the open source solution compares to that of the proprietary compiler by NVidia. If it's good enough, they might be scared.</p></htmltext>
<tokenext>I wonder how the performance of the open source solution is compared to the proprietary compiler by NVidia .
If it 's good enough , they might be scared .</tokentext>
<sentencetext>I wonder how the performance of the open source solution is compared to the proprietary compiler by NVidia.
If it's good enough, they might be scared.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537722</id>
	<title>Open Source poll: AVR or PIC?</title>
	<author>Anonymous</author>
	<datestamp>1259784480000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Which one do you prefer, and why?</p></htmltext>
<tokenext>Which one do you prefer , and why ?</tokentext>
<sentencetext>Which one do you prefer, and why?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538198</id>
	<title>Re:Alternative?</title>
	<author>Guspaz</author>
	<datestamp>1259787300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Progressively more and more.</p><p>Example: Go to "ati.com" and you get redirected to the regular amd.com front page. Go to desktop graphics products and you get a page titled "AMD Graphics for Desktop PCs" inviting you to shop for "AMD Desktop Graphics Cards".</p><p>The actual cards themselves have as product name "ATI Radeon", but describing an "ATI Radeon" as an "AMD graphics card" is accurate.</p></htmltext>
<tokenext>Progressively more and more.Example : Go to " ati.com " and you get redirected to the regular amd.com front page .
Go to desktop graphics products and you get a page titled " AMD Graphics for Desktop PCs " inviting you to shop for " AMD Desktop Graphics Cards " .The actual cards themselves have as product name " ATI Radeon " , but describing an " ATI Radeon " as an " AMD graphics card " is accurate .</tokentext>
<sentencetext>Progressively more and more.Example: Go to "ati.com" and you get redirected to the regular amd.com front page.
Go to desktop graphics products and you get a page titled "AMD Graphics for Desktop PCs" inviting you to shop for "AMD Desktop Graphics Cards".The actual cards themselves have as product name "ATI Radeon", but describing an "ATI Radeon" as an "AMD graphics card" is accurate.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537852</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538160</id>
	<title>I'm betting..</title>
	<author>RightSaidFred99</author>
	<datestamp>1259787060000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>NVidia isn't real happy about this.  No Christmas cards for those guys!  In fact the developers should expect some insipid, obvious, and unfunny cartoons will be drawn about them.</htmltext>
<tokenext>NVidia is n't real happy about this .
No Christmas cards for those guys !
In fact the developers should expect some insipid , obvious , and unfunny cartoons will be drawn about them .</tokentext>
<sentencetext>NVidia isn't real happy about this.
No Christmas cards for those guys!
In fact the developers should expect some insipid, obvious, and unfunny cartoons will be drawn about them.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540068</id>
	<title>Because</title>
	<author>Groo Wanderer</author>
	<datestamp>1259756820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"What possible reason could you have to want to be locked into one GPU vendor?"</p><p>Perhaps because you are sick and tired of GPUs that don't die an early death, and love sitting on the phone and being told that it isn't covered by warranty by HP, Dell, Apple, Sony, and the rest.</p><p>
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; -Charlie</p></htmltext>
<tokenext>" What possible reason could you have to want to be locked into one GPU vendor ?
" Perhaps because you are sick and tired of GPUs that do n't die an early death , and love sitting on the phone and being told that it is n't covered by warranty by HP , Dell , Apple , Sony , and the rest .
                -Charlie</tokentext>
<sentencetext>"What possible reason could you have to want to be locked into one GPU vendor?
"Perhaps because you are sick and tired of GPUs that don't die an early death, and love sitting on the phone and being told that it isn't covered by warranty by HP, Dell, Apple, Sony, and the rest.
                -Charlie</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538904</id>
	<title>Re:Doesn't sound like a compiler</title>
	<author>MostAwesomeDude</author>
	<datestamp>1259748540000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>There is no LLVM backend for AMD/ATI cards. Of the few of us that actually understand ATI hardware, most of us are working on other things besides GPGPU. Sorry.</p></htmltext>
<tokenext>There is no LLVM backend for AMD/ATI cards .
Of the few of us that actually understand ATI hardware , most of us are working on other things besides GPGPU .
Sorry .</tokentext>
<sentencetext>There is no LLVM backend for AMD/ATI cards.
Of the few of us that actually understand ATI hardware, most of us are working on other things besides GPGPU.
Sorry.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30541266</id>
	<title>Re:Performance?</title>
	<author>Gregory Diamos</author>
	<datestamp>1259767980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Here's a graph of <a href="http://gdiamos.net/files/cpusAndGpus.png" title="gdiamos.net" rel="nofollow">performance</a> [gdiamos.net].  The GPU version uses NVIDIA's JIT to generate native instructions for a particular GPU, so the GPU results here should be more or less the same as if the program was compiled with NVIDIA's static compiler.</htmltext>
<tokenext>Here 's a graph performance [ gdiamos.net ] .
The GPU version uses NVIDIA 's JIT to generate native instructions for a particular GPU so the GPU results here should be more or less the same as if the program was compiled with NVIDIA 's static compiler .</tokentext>
<sentencetext>Here's a graph performance [gdiamos.net].
The GPU version uses NVIDIA's JIT to generate native instructions for a particular GPU so the GPU results here should be more or less the same as if the program was compiled with NVIDIA's static compiler.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537790</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30542168
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539112
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537990
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538198
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537852
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538152
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539802
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538038
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538344
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30552052
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538904
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537880
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30541266
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537790
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538036
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540068
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30543172
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539248
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539838
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30551588
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537920
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538716
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_12_23_191207_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538594
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537790
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30541266
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538594
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538036
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539248
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30543172
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30551588
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537920
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540068
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30540880
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538038
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539802
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537852
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538198
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538716
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539838
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537990
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538152
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30537846
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538904
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30552052
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30538344
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_12_23_191207.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30539112
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_12_23_191207.30542168
</commentlist>
</conversation>
