<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_05_1735225</id>
	<title>Ryan Gordon Ends FatELF Universal Binary Effort</title>
	<author>Soulskill</author>
	<datestamp>1257445800000</datestamp>
<htmltext>recoiledsnake writes <i>"A few years after the <a href="http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm">Con Kolivas fiasco</a>, the FatELF project to implement the 'universal binaries' feature for Linux that allows a single binary file to run on multiple hardware platforms <a href="http://icculus.org/cgi-bin/finger/finger.pl?user=icculus&amp;date=2009-11-03&amp;time=19-08-04"> has been grounded</a>. Ryan C. Gordon, who has ported a <a href="http://en.wikipedia.org/wiki/Ryan_C._Gordon#Ported_Titles">number of popular games</a> and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches. Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.' The launch of the project was <a href="http://linux.slashdot.org/story/09/10/25/0450232/Ryan-Gordon-Wants-To-Bring-Universal-Binaries-To-Linux?from=rss#">recently discussed here</a>.  The <a href="http://icculus.org/fatelf/">FatELF project page and FAQ</a> are still up."</i></htmltext>
<tokentext>recoiledsnake writes " A few years after the Con Kolivas fiasco , the FatELF project to implement the 'universal binaries ' feature for Linux that allows a single binary file to run on multiple hardware platforms has been grounded .
Ryan C. Gordon , who has ported a number of popular games and game servers to Linux , has this to say : 'It looks like the Linux kernel maintainers are frowning on the FatELF patches .
Some got the idea and disagreed , some did n't seem to hear what I was saying , and some showed up just to be rude .
' The launch of the project was recently discussed here .
The FatELF project page and FAQ are still up .
"</tokentext>
<sentencetext>recoiledsnake writes "A few years after the Con Kolivas fiasco, the FatELF project to implement the 'universal binaries' feature for Linux that allows a single binary file to run on multiple hardware platforms  has been grounded.
Ryan C. Gordon, who has ported a number of popular games and game servers to Linux, has this to say: 'It looks like the Linux kernel maintainers are frowning on the FatELF patches.
Some got the idea and disagreed, some didn't seem to hear what I was saying, and some showed up just to be rude.
' The launch of the project was recently discussed here.
The FatELF project page and FAQ are still up.
"</sentencetext>
</article>
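For readers new to the idea, the following toy shell sketch models what a "universal binary" buys you: several per-arch payloads travel in one shippable file, and the one matching the host is picked at run time. This is NOT the real FatELF on-disk format (FatELF is a dedicated ELF container, not a tar archive), and every file name below is made up for illustration.

```shell
# Toy model of the universal-binary idea (not the real FatELF format).
# Build step: bundle one payload per architecture into a single file.
mkdir -p fat.d
printf 'echo amd64 build\n' > fat.d/x86_64
printf 'echo ia32 build\n'  > fat.d/i686
tar -cf myapp.fat -C fat.d .          # the single "fat" file to ship

# Run step: extract and run the payload matching this machine.
arch=$(uname -m)
if tar -xf myapp.fat "./$arch" 2>/dev/null; then
    sh "./$arch"
else
    echo "no payload for $arch in myapp.fat"
fi
```

The point of FatELF proper was to move that selection step into the kernel's ELF loader so no unpacking or wrapper logic is needed.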
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002906</id>
	<title>Why would there be a need?</title>
	<author>thatotherguy007</author>
	<datestamp>1257439920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Honestly, I can't understand why a fat ELF would be needed. Just wrap a small shell script around two or more binaries. All it would have to do is detect the OS (uname anyone?) and unpack the right segment. It is messier, somewhat, but it works.</htmltext>
<tokentext>Honestly , I ca n't understand why a fat ELF would be needed .
Just wrap a small shell script around two or more binaries .
All it would have to do is detect the OS ( uname anyone ?
) and unpack the right segment .
It is messier , somewhat , but it works .</tokentext>
<sentencetext>Honestly, I can't understand why a fat ELF would be needed.
Just wrap a small shell script around two or more binaries.
All it would have to do is detect the OS (uname anyone?
) and unpack the right segment.
It is messier, somewhat, but it works.</sentencetext>
</comment>
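The wrapper the comment above describes can be sketched in a few lines of shell. The `./myapp.*` file names and the `select_bin` helper are hypothetical; note that for picking between hardware platforms the relevant probe is `uname -m` (machine type) rather than the OS name.

```shell
# Minimal sketch of the wrapper-script approach: pick a per-arch
# binary by inspecting `uname -m`. Binary names are hypothetical.
select_bin() {
    case "$1" in
        x86_64)    echo "./myapp.amd64" ;;
        i?86)      echo "./myapp.ia32"  ;;
        ppc|ppc64) echo "./myapp.ppc"   ;;
        *)         echo "unsupported"   ;;
    esac
}
select_bin "$(uname -m)"
```

A real wrapper would end with `exec "$(select_bin "$(uname -m)")" "$@"` instead of echoing the path.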
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002146</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Anonymous</author>
	<datestamp>1257430260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>FatElf requires the distros to support it.  That means you have to get the crap compiled for every arch THEN joined into a single binary, which means any package needs to wait for every arch auto-builder to get its job done before it is uploaded (forget about cross-compiling, it doesn't work nearly as well as you might think).</p><p>Just for ia32 and amd64, which I suppose many people think are the only arches a distro supports, it would double the file sizes, which doubles download size.  Now, bandwidth is *extremely* expensive.  We are not talking about your el-cheap-o ADSL line here, we're talking about the bandwidth used by a mirror network of over a hundred hosts all over the world, sometimes in places where bandwidth is 10x more expensive than anywhere in the USA.  If that bandwidth is donated, it is even more valuable, because you lose the entire mirror site if it gets too hard for them to bear.</p><p>It is not going to happen.  We don't take extra complexity for stuff that won't happen.  It is that simple.</p></htmltext>
<tokentext>FatElf requires the distros to support it .
That means you have to get the crap compiled for every arch THEN joined into a single binary , which means any package needs to wait for every arch auto-builder to get its job done before it is uploaded ( forget about cross-compiling , it does n't work nearly as well as you might think ) . Just for ia32 and amd64 , which I suppose many people think are the only arches a distro supports , it would double the file sizes , which doubles download size .
Now , bandwidth is * extremely * expensive .
We are not talking about your el-cheap-o ADSL line here , we 're talking about the bandwidth used by a mirror network over a hundred hosts all over the world , sometimes in places where bandwidth is 10x more expensive than anywhere in the USA .
If that bandwidth is donated , it is even more valuable , because you lose the entire mirror site if it gets too hard for them to bear . It is not going to happen .
We do n't take extra complexity for stuff that wo n't happen .
It is that simple .</tokentext>
<sentencetext>FatElf requires the distros to support it.
That means you have to get the crap compiled for every arch THEN joined into a single binary, which means any package needs to wait for every arch auto-builder to get its job done before it is uploaded (forget about cross-compiling, it doesn't work nearly as well as you might think). Just for ia32 and amd64, which I suppose many people think are the only arches a distro supports, it would double the file sizes, which doubles download size.
Now, bandwidth is *extremely* expensive.
We are not talking about your el-cheap-o ADSL line here, we're talking about the bandwidth used by a mirror network over a hundred hosts all over the world, sometimes in places where bandwidth is 10x more expensive than anywhere in the USA.
If that bandwidth is donated, it is even more valuable, because you lose the entire mirror site if it gets too hard for them to bear. It is not going to happen.
We don't take extra complexity for stuff that won't happen.
It is that simple.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003720</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>JohnFluxx</author>
	<datestamp>1257498480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How did this get modded up to +5 without a single bit of support for his claim that the kernel developers were calling him stupid?</p></htmltext>
<tokentext>How did this get modded up to + 5 without a single bit of support for his claim that the kernel developers were calling him stupid ?</tokentext>
<sentencetext>How did this get modded up to +5 without a single bit of support for his claim that the kernel developers were calling him stupid?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000966</id>
	<title>This is not Java - it's more like LLVM</title>
	<author>thaig</author>
	<datestamp>1257421320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://llvm.org/" title="llvm.org">http://llvm.org/</a> [llvm.org]</p></htmltext>
<tokentext>http : //llvm.org/ [ llvm.org ]</tokentext>
<sentencetext>http://llvm.org/ [llvm.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998066</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Anonymous</author>
	<datestamp>1257451920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit. </i> <br>
You should look at a ppc or sparc distro: pretty much everything is ... 32 bit.
That's because 32-bit is more efficient than 64-bit (less code/data size). But on x86 it's different, because it's not the same arch at all (not the same registers/features, ...).</htmltext>
<tokentext>Especially since on a 64 bit distro pretty much everything , with very few exceptions is 64 bit .
You should look at ppc and sparc distro : pretty much everything is ... 32 bit .
That 's because 32-bit is more efficient than 64-bit ( less code/data size ) .
But on x86 it 's different because it 's not the same arch at all ( not the same registers/features , ... ) .</tokentext>
<sentencetext>Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit.
You should look at a ppc or sparc distro: pretty much everything is ... 32 bit.
That's because 32-bit is more efficient than 64-bit (less code/data size).
But on x86 it's different because it's not the same arch at all (not the same registers/features, ...).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004960</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>Anonymous</author>
	<datestamp>1257518100000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Mod parent up. I've done the exact same thing he outlines many times and he's precisely correct. The only caveat is using libraries that have an explicit GPL license: GPL rather than LGPL is applied on purpose to some libraries to force any software made with those libraries to be released as open source simply by linking UNLESS you obtain a separate license. This is actually a very good thing, and I should note most libraries I've dealt with that were GPL when we needed a separate license were obtained for a cheaper licensing fee than their straight up commercial counterparts. Closed source commercial software is easily a possibility on Linux, and it is not much harder to achieve than open sourced commercial software.</p></htmltext>
<tokentext>Mod parent up .
I 've done the exact same thing he outlines many times and he 's precisely correct .
The only caveat is using libraries that have an explicit GPL license , GPL rather than LGPL is applied on purpose to some libraries to force any software made with those libraries to be released as open source simply by linking UNLESS you obtain a separate license .
This is actually a very good thing , and I should note most libraries I 've dealt with that were GPL when we needed a separate license were obtained for a cheaper licensing fee than their straight up commercial counterparts .
Closed source commercial software is easily a possibility on Linux , and it is not much harder to achieve than open sourced commercial software .</tokentext>
<sentencetext>Mod parent up.
I've done the exact same thing he outlines many times and he's precisely correct.
The only caveat is using libraries that have an explicit GPL license, GPL rather than LGPL is applied on purpose to some libraries to force any software made with those libraries to be released as open source simply by linking UNLESS you obtain a separate license.
This is actually a very good thing, and I should note most libraries I've dealt with that were GPL when we needed a separate license were obtained for a cheaper licensing fee than their straight up commercial counterparts.
Closed source commercial software is easily a possibility on Linux, and it is not much harder to achieve than open sourced commercial software.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001036</id>
	<title>Re:Rejecting solutions to problems</title>
	<author>Anonymous</author>
	<datestamp>1257421620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><blockquote><div><p>The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types. You cannot solve that with package management because you need all packages installed at the same time [...] it just does not work because the binaries will conflict.</p></div></blockquote><p>"FatElf", for when appending ${NFS_PATH}/$(/bin/uname -m)/ to the user's path at login is too complex?</p><p>Sorry, I've yet to see a single compelling or rational argument for fat binaries on Linux.</p>
	</htmltext>
<tokentext>The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types .
You can not solve that with package management because you need all packages installed at the same time [ ... ] it just does not work because the binaries will conflict .
" FatElf " , for when appending $ { NFS \ _PATH } / $ ( /bin/uname -m ) / to the users path at login is too complex ? Sorry , I 've yet to see a single compelling or rational argument for fat binaries on linux .</tokentext>
<sentencetext>The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types.
You cannot solve that with package management because you need all packages installed at the same time [...] it just does not work because the binaries will conflict.
"FatElf", for when appending ${NFS\_PATH}/$(/bin/uname -m)/ to the users path at login is too complex?Sorry, I've yet to see a single compelling or rational argument for fat binaries on linux.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998532</parent>
</comment>
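The per-arch PATH trick this comment alludes to is roughly the following. `NFS_PATH` and the `/net/apps` layout are hypothetical, and in practice the lines would live in a login script such as /etc/profile:

```shell
# Prepend a per-architecture bin directory to PATH at login, so each
# workstation resolves the build matching its own hardware.
NFS_PATH=/net/apps                      # hypothetical shared NFS mount
PATH="$NFS_PATH/$(uname -m)/bin:$PATH"
export PATH
echo "$PATH"
```

With one subtree per architecture (`/net/apps/x86_64/bin`, `/net/apps/sparc64/bin`, ...), every machine picks up its own binaries from the same shared disk without any fat-binary support.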
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997452</id>
	<title>fp</title>
	<author>Anonymous</author>
	<datestamp>1257449460000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>first post</p></htmltext>
<tokenext>first post</tokentext>
<sentencetext>first post</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002276</id>
	<title>Cut the SFLC some slack</title>
	<author>Qubit</author>
	<datestamp>1257431580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>It's all moot anyhow: the Software Freedom Law Center never replied to my<br>
&nbsp; request about the software patent thing. I suppose they still might; it's<br>
&nbsp; only been a few days, but for some reason, I fully expect to never hear from<br>
&nbsp; them.</p></div><p>Other people have covered other points, so I'm just going to talk about the <a href="http://www.softwarefreedom.org/" title="softwarefreedom.org">SFLC</a> [softwarefreedom.org]. It sounds like you haven't communicated with them before, so please cut them some slack. Just yesterday <a href="http://identi.ca/notice/13778471" title="identi.ca">Bradley Kuhn dented</a> [identi.ca]:</p><p><div class="quote"><p>FLOSS ppls: !sflc is a charity w/ limited resources. It can take up to a week for us to answer general contact email. Pls give us a break!</p></div>
	</htmltext>
<tokentext>It 's all moot anyhow : the Software Freedom Law Center never replied to my   request about the software patent thing .
I suppose they still might ; it 's   only been a few days , but for some reason , I fully expect to never hear from   them . Other people have covered other points , so I 'm just going to talk about the SFLC [ softwarefreedom.org ] .
It sounds like you have n't communicated with them before , so please cut them some slack .
Just yesterday Bradley Kuhn dented [ identi.ca ] : FLOSS ppls : ! sflc is a charity w/ limited resources .
It can take up to a week for us to answer general contact email .
Pls give us a break !</tokentext>
<sentencetext>It's all moot anyhow: the Software Freedom Law Center never replied to my
  request about the software patent thing.
I suppose they still might; it's
  only been a few days, but for some reason, I fully expect to never hear from
  them. Other people have covered other points, so I'm just going to talk about the SFLC [softwarefreedom.org].
It sounds like you haven't communicated with them before, so please cut them some slack.
Just yesterday Bradley Kuhn dented [identi.ca]: FLOSS ppls: !sflc is a charity w/ limited resources.
It can take up to a week for us to answer general contact email.
Pls give us a break!
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998692</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>Anonymous</author>
	<datestamp>1257454500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><blockquote><div><p>This guy worked in the closed-source world of video games</p></div></blockquote><p>What are you talking about? Doom and Nethack are both open-source. What other games are worth playing?</p>
	</htmltext>
<tokentext>This guy worked in the closed-source world of video games . What are you talking about ?
Doom and Nethack are both open-source .
What other games are worth playing ?</tokentext>
<sentencetext>This guy worked in the closed-source world of video gamesWhat are you talking about?
Doom and Nethack are both open-source.
What other games are worth playing?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007576</id>
	<title>Re:32 bit processes make sense on 32 bit OSs!</title>
	<author>Grishnakh</author>
	<datestamp>1257535200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>On the AMD64 architecture, it's much faster to run a 64-bit binary rather than a 32-bit one, because in 64-bit mode there's a lot more registers.  x86 is a register-starved architecture.  This problem obviously doesn't apply to non-Intel architectures.</p></htmltext>
<tokentext>On the AMD64 architecture , it 's much faster to run a 64-bit binary rather than a 32-bit one , because in 64-bit mode there 's a lot more registers .
x86 is a register-starved architecture .
This problem obviously does n't apply to non-Intel architectures .</tokentext>
<sentencetext>On the AMD64 architecture, it's much faster to run a 64-bit binary rather than a 32-bit one, because in 64-bit mode there's a lot more registers.
x86 is a register-starved architecture.
This problem obviously doesn't apply to non-Intel architectures.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999674</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999838</id>
	<title>Re:Bloat-ELF</title>
	<author>OrangeTide</author>
	<datestamp>1257416100000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
<htmltext><p>Fat binaries are still smaller than the bloated .jar nightmare that Java creates. And we have real world examples of where fat binaries work, on OSX. Adding 300K or so to a package instead of having two or three different versions of a file saves disk space because a pretty large percentage of people just mirror everything they see (packrat human behavior).</p><p>The data files don't need to be duplicated, just the executables and libraries. And what is even nicer is that fat binaries let you strip the extraneous crap off your binaries to save disk space, if you so choose.</p><p>Being a person who has to maintain cross-compile tools for multiple architectures and point different people to different shared directories for those tools has me wishing for some real fat binary support on Linux. Maybe it isn't useful to a Java lover like yourself, but frankly it's bullshit that you think your way is the only way to do development. And that kind of attitude is one of the many reasons system programmers don't like Java/.NET programmers.</p></htmltext>
<tokentext>Fat binaries are still smaller than the bloated .jar nightmare that Java creates .
And we have real world examples of where fat binaries work , on OSX .
Adding 300K or so to a package instead of having two or three different versions of a file saves disk space because a pretty large percentage of people just mirror everything they see ( packrat human behavior ) . The data files do n't need to be duplicated , just the executables and libraries .
And what is even nicer is that fat binaries let you strip the extraneous crap off your binaries to save disk space , if you so choose . Being a person who has to maintain cross-compile tools for multiple architectures and point different people to different shared directories for those tools has me wishing for some real fat binary support on Linux .
Maybe it is n't useful to a Java lover like yourself , but frankly it 's bullshit that you think your way is the only way to do development .
And that kind of attitude is one of the many reasons system programmers do n't like Java/.NET programmers .</tokentext>
<sentencetext>Fat binaries are still smaller than the bloated .jar nightmare that Java creates.
And we have real world examples of where fat binaries work, on OSX.
Adding 300K or so to a package instead of having two or three different versions of a file saves disk space because a pretty large percentage of people just mirror everything they see (packrat human behavior). The data files don't need to be duplicated, just the executables and libraries.
And what is even nicer is that fat binaries let you strip the extraneous crap off your binaries to save disk space, if you so choose. Being a person who has to maintain cross-compile tools for multiple architectures and point different people to different shared directories for those tools has me wishing for some real fat binary support on Linux.
Maybe it isn't useful to a Java lover like yourself, but frankly it's bullshit that you think your way is the only way to do development.
And that kind of attitude is one of the many reasons system programmers don't like Java/.NET programmers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998330</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002034</id>
	<title>Re:Forget fat binaries for Linux</title>
	<author>Anonymous</author>
	<datestamp>1257429120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Various vendors have been shipping fat binaries for integrated microcontrollers for years. This sort of thing tends to be useful when you have various controllers integrated in a single block and wish to treat the codec for the block as an all-encompassing thing. Relocation takes care of mapping the various bits to the proper controllers. While it does make distribution easier, it also just encourages people to develop abstract binary blobs for poorly or completely undocumented hardware blocks, and it's almost always been more trouble than it was ever worth. TI's dynamic coff is a good example of this.</p><p>Also note that none of the ELF stuff would apply to your microcontroller examples since none of those will handle COW, and are thus unable to support ELF outside of something like FDPIC.</p></htmltext>
<tokentext>Various vendors have been shipping fat binaries for integrated microcontrollers for years .
This sort of thing tends to be useful when you have various controllers integrated in a single block and wish to treat the codec for the block as an all-encompassing thing .
Relocation takes care of mapping the various bits to the proper controllers .
While it does make distribution easier , it also just encourages people to develop abstract binary blobs for poorly or completely undocumented hardware blocks , and it 's almost always been more trouble than it was ever worth .
TI 's dynamic coff is a good example of this . Also note that none of the ELF stuff would apply to your microcontroller examples since none of those will handle COW , and are thus unable to support ELF outside of something like FDPIC .</tokentext>
<sentencetext>Various vendors have been shipping fat binaries for integrated microcontrollers for years.
This sort of thing tends to be useful when you have various controllers integrated in a single block and wish to treat the codec for the block as an all-encompassing thing.
Relocation takes care of mapping the various bits to the proper controllers.
While it does make distribution easier, it also just encourages people to develop abstract binary blobs for poorly or completely undocumented hardware blocks, and it's almost always been more trouble than it was ever worth.
TI's dynamic coff is a good example of this. Also note that none of the ELF stuff would apply to your microcontroller examples since none of those will handle COW, and are thus unable to support ELF outside of something like FDPIC.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998092</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696</id>
	<title>Story of binary compatibility is short and tragic</title>
	<author>Anonymous</author>
	<datestamp>1257450480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>
In the entire forked-up mess of the unix tree, there was only one thing that anybody &amp; everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week. For a good part of 4 years, I worked on portable.net, which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs. It worked nearly everywhere too.
</p><p>
Binary compatibility never took off because there is so little stuff that can be shared between binary platforms. Sure, the same <i>file</i> could run on multiple archs, but in reality that is no different from a zip file with six binaries in them. Indeed, it needed someone to build 'em all in one place to actually end up with one of these. Which is actually more effort than actually letting each distro arch-maintainer do a build whenever they please. OS X build tools ship with the right cross-compilers in XCode and they have more of a monoculture in library versions, looking backwards.
</p><p>
Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah...yeah), is somehow a solution with no real problem attached. Unless you can make the default build-package workflow do this automatically, this simple step means a hell of a lot of work for the guy doing the build.
</p><p>
And that's just the problems with getting a universal binary. Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches. But colour me uninterested in this particular problem he's trying to solve. If he manages to convince me that it's a real advantage over 4 binaries that I pick &amp; choose to download, hell ... I'll change my opinion so quickly, it'll leave you spinning.
</p></htmltext>
<tokentext>In the entire forked-up mess of the unix tree , there was only one thing that anybody &amp; everybody cared about - source compatibility .
C99 , POSIX , SuS v3 , so many ways you could ensure that your code would compile everywhere , with whatever compiler was popular that week .
For a good part of 4 years , I worked on portable.net , which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs .
It worked nearly everywhere too .
Binary compatibility never took off because there is so little stuff that can be shared between binary platforms .
Sure , the same file could run on multiple archs , but in reality that is no different from a zip file with six binaries in them .
Indeed , it needed someone to build 'em all in one place to actually end up with one of these .
Which is actually more effort than actually letting each distro arch-maintainer do a build whenever they please .
OS X build tools ship with the right cross-compilers in XCode and they have more of a monoculture in library versions , looking backwards .
Attempting this in a world where even an x86 binary would n't work on all x86-linux-pc boxes ( static linking , yeah...yeah ) , is somehow a solution with no real problem attached .
Unless you can make the default build-package workflow do this automatically , this simple step means a hell of a lot of work for the guy doing the build .
And that 's just the problems with getting a universal binary .
Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches .
But colour me uninterested in this particular problem he 's trying to solve .
If he manages to convince me that it 's a real advantage over 4 binaries that I pick &amp; choose to download , hell ... I 'll change my opinion so quickly , it 'll leave you spinning .</tokentext>
<sentencetext>
In the entire forked-up mess of the unix tree, there was only one thing that anybody &amp; everybody cared about - source compatibility.
C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week.
For a good part of 4 years, I worked on portable.net, which had a support/ directory full of ifdefs and a configure script full of AC_DEFINEs.
It worked nearly everywhere too.
Binary compatibility never took off because there is so little stuff that can be shared between binary platforms.
Sure, the same file could run on multiple archs, but in reality that is no different from a zip file with six binaries in them.
Indeed, it needed someone to build 'em all in one place to actually end up with one of these.
Which is actually more effort than actually letting each distro arch-maintainer do a build whenever they please.
OS X build tools ship with the right cross-compilers in XCode and they have more of a monoculture in library versions, looking backwards.
Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah...yeah), is somehow a solution with no real problem attached.
Unless you can make the default build-package workflow do this automatically, this simple step means a hell of a lot of work for the guy doing the build.
And that's just the problems with getting a universal binary.
Further problems await as you try to run the created binaries ... I like the idea and the fact that the guy is talking with his patches.
But colour me uninterested in this particular problem he's trying to solve.
If he manages to convince me that it's a real advantage over 4 binaries that I pick &amp; choose to download, hell ... I'll change my opinion so quickly, it'll leave you spinning.
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007996</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>ogdenk</author>
	<datestamp>1257537720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This in particular seems like a solution in search of a problem to me. Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit.</p><p>You're not quite getting it.  YOU may not have other arches around but I do.  This isn't about 32-bit vs 64-bit.  It's about SPARC vs ARM vs Intel vs MIPS, etc.  I would love to be able to use the same binaries.  This is especially useful when you don't have source and can't recompile.</p><p>This would also make it easier to have commercial software supported on more than 1 CPU architecture.  I get REAL pissed about Intel-only Linux and BSD software.  There's no reason for it.  And now that ARM may catch some steam in netbooks, this is pretty important to me.</p><p>I don't want to hear that "well, just run RMS-approved free software" or the "comershul softwear iz teh evil" crap either.</p></htmltext>
<tokentext>This in particular seems like a solution in search of a problem to me .
Especially since on a 64 bit distro pretty much everything , with very few exceptions is 64 bit.You 're not quite getting it .
YOU may not have other arches around but I do .
This is n't about 32-bit vs 64-bit .
It 's about SPARC vs ARM vs Intel vs MIPS , etc .
I would love to be able to use the same binaries .
This is especially useful when you do n't have source and ca n't recompile.This would also make it easier to have commercial software supported on more than 1 CPU architecture .
I get REAL pissed about Intel-only Linux and BSD software .
There 's no reason for it .
And now that ARM may catch some steam in netbooks , this is pretty important to me.I do n't want to hear that " well , just run RMS-approved free software " or the " comershul softwear iz teh evil " crap either .</tokentext>
<sentencetext>This in particular seems like a solution in search of a problem to me.
Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit.You're not quite getting it.
YOU may not have other arches around but I do.
This isn't about 32-bit vs 64-bit.
It's about SPARC vs ARM vs Intel vs MIPS, etc.
I would love to be able to use the same binaries.
This is especially useful when you don't have source and can't recompile.This would also make it easier to have commercial software supported on more than 1 CPU architecture.
I get REAL pissed about Intel-only Linux and BSD software.
There's no reason for it.
And now that ARM may catch some steam in netbooks, this is pretty important to me.I don't want to hear that "well, just run RMS-approved free software" or the "comershul softwear iz teh evil" crap either.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997878</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>cheesybagel</author>
	<datestamp>1257451140000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>It just shows Ryan isn't used to contributing free software to someone else's project. I once had to wait months before I got my code accepted into a free software project and it wasn't the kernel. If the maintainers add every submission to a project, it will end up in an unstable, unmaintainable mess. Code can last a long time and someone will have to maintain the code even after he's lost interest in it. I am especially leery of code that touches a lot of different places at the same time, as is undoubtedly the case here.</htmltext>
<tokentext>It just shows Ryan is n't used to contributing free software to someone else 's project .
I once had to wait months before I got my code accepted into a free software project and it was n't the kernel .
If the maintainers add every submission to a project , it will end up in an unstable , unmaintainable mess .
Code can last a long time and someone will have to maintain the code even after he 's lost interest in it .
I am especially leery of code that touches a lot of different places at the same time , as is undoubtedly the case here .</tokentext>
<sentencetext>It just shows Ryan isn't used to contributing free software to someone else's project.
I once had to wait months before I got my code accepted into a free software project and it wasn't the kernel.
If the maintainers add every submission to a project, it will end up in an unstable, unmaintainable mess.
Code can last a long time and someone will have to maintain the code even after he's lost interest in it.
I am especially leery of code that touches a lot of different places at the same time, as is undoubtedly the case here.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003218</id>
	<title>Re:Structure should be at the filesystem level</title>
	<author>sjames</author>
	<datestamp>1257446280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Agreed, but there are some important semantic issues to work out. For example, if I access a directory as a file and for example scp some_directory somewhere-else:, what happens? Does it do it file by file? Does it get marshaled into some sort of archive like a tar or cpio? Do we really want rm to imply -r?</p><p>How far do we take that? An ELF binary is a bunch of separate sections glued together, should it be a directory?</p><p>All answerable questions, but they need to be answered before we jump off the deep end.</p></htmltext>
<tokentext>Agreed , but there are some important semantic issues to work out .
For example , if I access a directory as a file and for example scp some_directory somewhere-else : , what happens ?
Does it do it file by file ?
Does it get marshaled into some sort of archive like a tar or cpio ?
Do we really want rm to imply -r ? How far do we take that ?
An ELF binary is a bunch of separate sections glued together , should it be a directory ? All answerable questions , but they need to be answered before we jump off the deep end .</tokentext>
<sentencetext>Agreed, but there are some important semantic issues to work out.
For example, if I access a directory as a file and for example scp some_directory somewhere-else:, what happens?
Does it do it file by file?
Does it get marshaled into some sort of archive like a tar or cpio?
Do we really want rm to imply -r?How far do we take that?
An ELF binary is a bunch of separate sections glued together, should it be a directory?All answerable questions, but they need to be answered before we jump off the deep end.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003144</id>
	<title>Summary of LKML reaction</title>
	<author>sjames</author>
	<datestamp>1257444720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>First, Why? That's what package managers and directories are for. Just like having lib and lib64, just have multiple bin directories and link the correct ones while booting.</p><p>Or if you're just doing one package and not a whole system, just have a script detect the correct arch and run the appropriate binary.</p><p>Second, do we know for a fact that there are no Apple patents on this? [**crickets**]</p><p>So, not really needed and potentially dangerous legally. Two really good reasons not to do it.</p></htmltext>
<tokentext>First , Why ?
That 's what package managers and directories are for .
Just like having lib and lib64 , just have multiple bin directories and link the correct ones while booting.Or if you 're just doing one package and not a whole system , just have a script detect the correct arch and run the appropriate binary.Second , do we know for a fact that there are no Apple patents on this ?
[ * * crickets * * ] So , not really needed and potentially dangerous legally .
Two really good reasons not to do it .</tokentext>
<sentencetext>First, Why?
That's what package managers and directories are for.
Just like having lib and lib64, just have multiple bin directories and link the correct ones while booting.Or if you're just doing one package and not a whole system, just have a script detect the correct arch and run the appropriate binary.Second, do we know for a fact that there are no Apple patents on this?
[**crickets**]So, not really needed and potentially dangerous legally.
Two really good reasons not to do it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000680</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257419880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I've seen far more of "You are stupid" type comments in the Linux/insert-other-open-source-here than I have of "That's a stupid idea". So much so I think some members of the communities become members just to berate others and let themselves feel high and mighty about it.</p><p>This is why, for me personally, after some time I left. I even stopped using/following major open source projects. Not worth it at all IMO. I've personally told people who wanted to contribute to various known projects not to unless you like being burned at the stake for no reason.</p><p>Oh well, right?</p></htmltext>
<tokentext>I 've seen far more of " You are stupid " type comments in the Linux/insert-other-open-source-here than I have of " That 's a stupid idea " .
So much so I think some members of the communities become members just to berate others and let themselves feel high and mighty about it.This is why , for me personally , after some time I left .
I even stopped using/following major open source projects .
Not worth it at all IMO .
I 've personally told people who wanted to contribute to various known projects not to unless you like being burned at the stake for no reason.Oh well , right ?</tokentext>
<sentencetext>I've seen far more of "You are stupid" type comments in the Linux/insert-other-open-source-here then I have of "That's a stupid idea".
So much so I think some members of the communities become members just to berate others and let themselves feel high and mighty about it.This is why, for me personally, after some time I left.
I even stopped using/following major open source projects.
Not worth it at all IMO.
I've personally told people who wanted to contribute to various known projects not to unless you like being burned at the stake for no reason.Oh well, right?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998594</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>hairyfeet</author>
	<datestamp>1257454140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't forget one of the problems I've complained about Linux for ages, only to get told "researching your ass off" is actually a better "feature"...Driver CDs. Maybe if you had fat binaries it would finally give Linux a stable ABI and we would finally see a "fat penguin" on the box of devices at the local Walmart/Best Buy/Staples and retailers like me could finally carry your product.</p><p>As it is shopping for Linux devices at retail is a fricking nightmare, as by the time anything ends up on some approved list somewhere it usually isn't sold in stores anymore, or Deity help you if the approved device has firmware A and they have moved on to firmware G without changing the box (which of course they do all the time) as you can enjoy your new paperweight. It is just too damned hard to sell Linux when your customers are Joe Normals that shop at Walmart. With Windows all I have to do is tell them to look for the logo and my after sale device support costs drop to zero.</p><p>

So if fat binaries would have finally allowed devices to show up with a fat penguin on the box I would have been dancing in the streets, of course it's dead now so we'll never know. But until Linux is as easy to shop for as Windows and OSX at retail, which I truly believe won't ever happen without a stable ABI and easy to place "Linux 32/64" drivers on driver discs, well then Linux will always be the OS for geeks with a single digit adoption rate. Because Joe Normal isn't gonna study his ass off doing research just to buy a device at the Wally World. Just ain't gonna happen, and without Joe there will NEVER be a "year of the Linux desktop".</p></htmltext>
<tokentext>Do n't forget one of the problems I 've complained about Linux for ages , only to get told " researching your ass off " is actually a better " feature " ...Driver CDs .
Maybe if you had fat binaries it would finally give Linux a stable ABI and we would finally see a " fat penguin " on the box of devices at the local Walmart/Best Buy/Staples and retailers like me could finally carry your product.As it is shopping for Linux devices at retail is a fricking nightmare , as by the time anything ends up on some approved list somewhere it usually is n't sold in stores anymore , or Deity help you if the approved device has firmware A and they have moved on to firmware G without changing the box ( which of course they do all the time ) as you can enjoy your new paperweight .
It is just too damned hard to sell Linux when your customers are Joe Normals that shop at Walmart .
With Windows all I have to do is tell them to look for the logo and my after sale device support costs drop to zero .
So if fat binaries would have finally allowed devices to show up with a fat penguin on the box I would have been dancing in the streets , of course it 's dead now so we 'll never know .
But until Linux is as easy to shop for as Windows and OSX at retail , which I truly believe wo n't ever happen without a stable ABI and easy to place " Linux 32/64 " drivers on driver discs , well then Linux will always be the OS for geeks with a single digit adoption rate .
Because Joe Normal is n't gon na study his ass off doing research just to buy a device at the Wally World .
Just ai n't gon na happen , and without Joe there will NEVER be a " year of the Linux desktop " .</tokentext>
<sentencetext>Don't forget one of the problems I've complained about Linux for ages, only to get told "researching your ass off" is actually a better "feature"...Driver Cds.
Maybe if you had fat binaries it would finally give Linux a stable ABI and we would finally see a "fat penguin" on the box of devices at the local Walmart/Best Buy/Staples and retailers like me could finally carry your product.As it is shopping for Linux devices at retail is a fricking nightmare, as by the time anything ends up on some approved list somewhere it usually isn't sold in stores anymore, or Deity help you if the approved device has firmware A and they have moved on to firmware G without changing the box (which of course they do all the time) as you can enjoy your new paperweight.
It is just too damned hard to sell Linux when your customers are Joe Normals that shop at Walmart.
With Windows all I have to do is tell them to look for the logo and my after sale device support costs drop to zero.
So if fat binaries would have finally allowed devices to show up with a fat penguin on the box I would have been dancing in the streets, of course it's dead now so we'll never know.
But until Linux is as easy to shop for as Windows and OSX at retail, which I truly believe won't ever happen without a stable ABI and easy to place "Linux 32/64" drivers on driver discs, well then Linux will always be the OS for geeks with a single digit adoption rate.
Because Joe Normal isn't gonna study his ass off doing research just to buy a device at the Wally World.
Just ain't gonna happen, and without Joe there will NEVER be a "year of the Linux desktop".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998090</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348</id>
	<title>Re:Solution in search of a problem</title>
	<author>nxtw</author>
	<datestamp>1257453000000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p>The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian drug its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).</p></div></blockquote><p>I disagree.  Solaris and Mac OS X are the only operating systems I would say handle it well.</p><p>OS X 10.6 includes i386 and x86_64 versions of almost everything.  By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64.  It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.</p><p>One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.</p><p>Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries.  It includes 32- and 64-bit kernels as well, all in the same installation media.</p>
	</htmltext>
<tokentext>The 32-bit vs. 64-bit split is handled pretty well on Linux ( well , Debian drug its heels a bit on multiarch handling in packages , but even they seem to be getting with the programme ) .I disagree .
Solaris and Mac OS X are the only operating systems I would say handle it well.OS X 10.6 includes i386 and x86_64 versions of almost everything .
By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64 .
It runs the i386 kernel by default , but the OS X i386 kernel is capable of running 64 bit processes.One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries .
It includes 32- and 64-bit kernels as well , all in the same installation media .</tokentext>
<sentencetext>The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian drug its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).I disagree.
Solaris and Mac OS X are the only operating systems I would say handle it well.OS X 10.6 includes i386 and x86_64 versions of almost everything.
By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64.
It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.One can reuse the same OS X installation from a system with a 64-bit CPU on a system with a 32-bit CPU.Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries.
It includes 32- and 64-bit kernels as well, all in the same installation media.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998226</id>
	<title>Re:Kind of broken by design</title>
	<author>Daniel_Staal</author>
	<datestamp>1257452640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just wanted to say that MacOS for a while there supported 4 architectures: i386 and PPC, both in 32 and 64 bit.  Very few apps actually shipped with all four, since there were also fallbacks in place.  (A few high-end apps did, but only a few.)</p><p>And even then there were a half-dozen utilities out there for 'cleaning' the architectures you didn't need out of the files.  Which could get back a fair amount of disk space.</p></htmltext>
<tokentext>Just wanted to say that MacOS for a while there supported 4 architectures : i386 and PPC , both in 32 and 64 bit .
Very few apps actually shipped with all four , since there were also fallbacks in place .
( A few high-end apps did , but only a few .
) And even then there were a half-dozen utilities out there for 'cleaning ' the architectures you did n't need out of the files .
Which could get back a fair amount of disk space .</tokentext>
<sentencetext>Just wanted to say that MacOS for a while there supported 4 architectures: i386 and PPC, both in 32 and 64 bit.
Very few apps actually shipped with all four, since there were also fallbacks in place.
(A few high-end apps did, but only a few.
)And even then there were a half-dozen utilities out there for 'cleaning' the architectures you didn't need out of the files.
Which could get back a fair amount of disk space.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997484</id>
	<title>He needs thicker skin</title>
	<author>Anonymous</author>
	<datestamp>1257449640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>He needs thicker skin if he's going to deal with the LKML crowd. I wouldn't give up just because it's not merged into the official tree.</p></htmltext>
<tokentext>He needs thicker skin if he 's going to deal with the LKML crowd .
I would n't give up just because it 's not merged into the official tree .</tokentext>
<sentencetext>He needs thicker skin if he's going to deal with the LKML crowd.
I wouldn't give up just because it's not merged into the official tree.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000352</id>
	<title>too big or not enough?</title>
	<author>Anonymous</author>
	<datestamp>1257418440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Yes. It is a "solution" which adds costs in many, many places for a problem that doesn't exist. I don't see why people even spend a second thinking about this.<br>-- Ulrich Drepper (2009)</p><p>640K ought to be enough for anybody.<br>-- Bill Gates (unattributed)</p></htmltext>
<tokentext>Yes .
It is a " solution " which adds costs in many , many places for a problem that does n't exist .
I do n't see why people even spend a second thinking about this.-- Ulrich Drepper ( 2009 ) 640K ought to be enough for anybody.-- Bill Gates ( unattributed )</tokentext>
<sentencetext>Yes.
It is a "solution" which adds costs in many, many places for a problem that doesn't exist.
I don't see why people even spend a second thinking about this.-- Ulrich Drepper (2009)640K ought to be enough for anybody.-- Bill Gates (unattributed)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002076</id>
	<title>stupid idea</title>
	<author>jipn4</author>
	<datestamp>1257429600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>FatELF is a stupid implementation of a stupid idea.  I.e., even if you want fat binaries, modifying the ELF format is the wrong way of doing it.</p><p>Yeah for the Linux kernel developers for keeping this kind of crap out of the kernel.</p></htmltext>
<tokentext>FatELF is a stupid implementation of a stupid idea .
I.e. , even if you want fat binaries , modifying the ELF format is the wrong way of doing it.Yeah for the Linux kernel developers for keeping this kind of crap out of the kernel .</tokentext>
<sentencetext>FatELF is a stupid implementation of a stupid idea.
I.e., even if you want fat binaries, modifying the ELF format is the wrong way of doing it.Yeah for the Linux kernel developers for keeping this kind of crap out of the kernel.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998538</id>
	<title>Re:a better idea..</title>
	<author>jaclu</author>
	<datestamp>1257453780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is already a method for supporting multiple binary formats.</p><p>It's called source code</p></htmltext>
<tokentext>There is already a method for supporting multiple binary formats.It 's called source code</tokentext>
<sentencetext>There is already a method for supporting multiple binary formats.It's called source code</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999100</id>
	<title>Re:a better idea..</title>
	<author>cheesybagel</author>
	<datestamp>1257413040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I agree that this is the best option. This is one of the major reasons Java and .NET get so much exposure. You shouldn't need to use a special language to have this facility, you should be able to code in any language including C, which is the language most free software is written in.</htmltext>
<tokentext>I agree that this is the best option .
This is one of the major reasons Java and .NET get so much exposure .
You should n't need to use a special language to have this facility , you should be able to code in any language including C , which is the language most free software is written in .</tokentext>
<sentencetext>I agree that this is the best option.
This is one of the major reasons Java and .NET get so much exposure.
You shouldn't need to use a special language to have this facility, you should be able to code in any language including C, which is the language most free software is written in.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999190</id>
	<title>Re:Kind of broken by design</title>
	<author>Anonymous</author>
	<datestamp>1257413460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This problem's been solved for decades now:</p><p>On the clients:</p><blockquote><div><p> <tt>export PATH=/net/server/tools/`uname -m`/bin:$PATH</tt></p></div> </blockquote><p>On the server, create:</p><blockquote><div><p><tt>/blah/tools/i386/bin<br>/blah/tools/i686/bin</tt></p></div> </blockquote><p>and so on.</p>
	</htmltext>
<tokentext>This problem 's been solved for decades now : On the clients : export PATH = /net/server/tools/ ` uname -m ` /bin : $ PATH On the server , create : /blah/tools/i386/bin/blah/tools/i686/bin and so on .</tokentext>
<sentencetext>This problem's been solved for decades now:On the clients: export PATH=/net/server/tools/`uname -m`/bin:$PATH On the server, create: /blah/tools/i386/bin/blah/tools/i686/bin and so on.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003590</id>
	<title>ZOMG! No more Christmas!</title>
	<author>funwithBSD</author>
	<datestamp>1257538920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>They killed the jolly old fatELF.</p></htmltext>
<tokentext>They killed the jolly old fatELF .</tokentext>
<sentencetext>They killed the jolly old fatELF.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003386</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257449340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>How is this insightful? I read the lkml thread <a href="http://lkml.org/lkml/2009/10/19/147" title="lkml.org" rel="nofollow">here</a> [lkml.org] and no one said anyone was stupid. In fact there wasn't a single insult delivered unless you consider someone saying your idea isn't very good to be insulting.</p></htmltext>
<tokentext>How is this insightful ?
I read the lkml thread here [ lkml.org ] and no one said anyone was stupid .
In fact there was n't a single insult delivered unless you consider someone saying your idea is n't very good to be insulting .</tokentext>
<sentencetext>How is this insightful?
I read the lkml thread here [lkml.org] and no one said anyone was stupid.
In fact there wasn't a single insult delivered unless you consider someone saying your idea isn't very good to be insulting.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998092</id>
	<title>Forget fat binaries for Linux</title>
	<author>Anonymous</author>
	<datestamp>1257452040000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>I want fat binaries for microcontrollers! Give me binaries that can run on PIC16F88, eZ80 and 68HC11!</p><p>There's nothing worse than having to replace a 0.50$ chip with another that costs 0.51$!</p></htmltext>
<tokentext>I want fat binaries for microcontrollers !
Give me binaries that can run on PIC16F88 , eZ80 and 68HC11 ! There 's nothing worse than having to replace a 0.50 $ chip with another that costs 0.51 $ !</tokentext>
<sentencetext>I want fat binaries for microcontrollers!
Give me binaries that can run on PIC16F88, eZ80 and 68HC11!There's nothing worse than having to replace a 0.50$ chip with another that costs 0.51$!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997998</id>
	<title>Re:Kind of broken by design</title>
	<author>Anonymous</author>
	<datestamp>1257451560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>Back in the OpenStep day, some fat executables were 68000, x86, sparc, and pa-risc.
<p>
Also consider x64 is x64 (sure, there are amd/intel differences, but gcc doesn't go there).  But a 386 is very different than a core 2 duo running in 32-bit mode.  x86 binaries are often targeted at the 486, which eliminates a lot of new opcodes and goodies in the pentium and beyond.</p></htmltext>
<tokenext>Back in the OpenStep day , some fat executables were 68000 , x86 , sparc , and pa-risc .
Also consider x64 is x64 ( sure , there are amd/intel differences , but gcc does n't go there ) .
But a 386 is very different than a core 2 duo running in 32-bit mode .
x86 binaries are often targetted at the 486 , which eliminates a lot of new opcodes and goodies in the pentium and beyond .</tokentext>
<sentencetext>Back in the OpenStep day, some fat executables were 68000, x86, sparc, and pa-risc.
Also consider x64 is x64 (sure, there are amd/intel differences, but gcc doesn't go there).
But a 386 is very different from a core 2 duo running in 32-bit mode.
x86 binaries are often targeted at the 486, which eliminates a lot of new opcodes and goodies in the pentium and beyond.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</id>
	<title>Wait, what does Con Kolivas have to do with this?</title>
	<author>Anonymous</author>
	<datestamp>1257450240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>I don't get the point in bringing it up.</p><p>Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.</p><p>This in particular seems like a solution in search of a problem to me. Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit. In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually. So to me there doesn't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs.</p></htmltext>
<tokenext>I do n't get the point in bringing it up.Things get rejected from the kernel all the time -- because not all things are good , useful , well coded , or solve a problem that needs solving .
It 's not new in any way.This in particular seems like a solution in search of a problem to me .
Especially since on a 64 bit distro pretty much everything , with very few exceptions is 64 bit .
In fact I do n't think 64 bit distributions contain any 32 bit software except for closed source that ca n't be ported , and compatibility libraries for any applications the user would like to install manually .
So to me there does n't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs .</tokentext>
<sentencetext>I don't get the point in bringing it up.
Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving.
It's not new in any way.
This in particular seems like a solution in search of a problem to me.
Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit.
In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually.
So to me there doesn't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004540</id>
	<title>What's the point again?</title>
	<author>Hurricane78</author>
	<datestamp>1257513180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I mean despite the obvious bloat.</p><p>I think it's way simpler to just compile one version for each architecture and put it on the package mirror. Less download time, smaller installation media, less disk space used for no reason, faster startup times...</p><p>The idea of a multi-arch binary only makes sense for closed source that does not allow re-compilation. And even there, you can offer different binaries for different architectures.</p><p>Hell, if you *have* to get a "universal binary", just bzip all the different binaries into one executable archive that runs whichever binary inside fits the architecture.</p><p>But I see no point in it; I would just unpack all the binaries for my arch on the first run and be done with it.</p></htmltext>
<tokenext>I mean despite the obvious bloat.I think it 's way simpler to just compile one version for each architecture and put it on the package mirror .
Less download time , smaller installation media , less disk space used for no reason , faster startup times...The idea of a multi-arch binary makes sense for closed source that does not allow re-compilation only .
And even there , you can offer different binaries for different architectures.Hell if you * have * to get a " universal binary , just bzip all the different binaries into one executable archive that runs whichever binary inside fits the architecture.But I see not point in it , would just unpack all the binaries for my arch on the first run , and be done with it .</tokentext>
<sentencetext>I mean despite the obvious bloat.
I think it's way simpler to just compile one version for each architecture and put it on the package mirror.
Less download time, smaller installation media, less disk space used for no reason, faster startup times...
The idea of a multi-arch binary only makes sense for closed source that does not allow re-compilation.
And even there, you can offer different binaries for different architectures.
Hell, if you *have* to get a "universal binary", just bzip all the different binaries into one executable archive that runs whichever binary inside fits the architecture.
But I see no point in it; I would just unpack all the binaries for my arch on the first run and be done with it.</sentencetext>
</comment>
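The "executable archive that runs whichever binary inside fits the architecture" idea above can be sketched in a few lines. This is only an illustration of the commenter's suggestion, not anything FatELF specifies: the bundle layout and the `bin-<machine>` naming scheme are hypothetical.

```python
import os
import platform
from typing import Optional

def pick_slice(bundle_dir: str, machine: Optional[str] = None) -> str:
    """Select the right executable from a 'poor man's fat binary': a
    directory (e.g. an unpacked archive) holding one real binary per
    architecture, named bin-<machine> after `uname -m`-style strings
    such as x86_64 or i686. The naming scheme is hypothetical.
    """
    machine = machine or platform.machine()
    candidate = os.path.join(bundle_dir, "bin-" + machine)
    if not os.path.isfile(candidate):
        raise FileNotFoundError("no binary for architecture " + machine)
    return candidate

# A wrapper script would then replace itself with the selected binary:
#   os.execv(pick_slice(bundle), [bundle] + sys.argv[1:])
```

As the commenter notes, this buys single-file distribution without any kernel support, at the cost of shipping every slice to every user.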
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000778</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>RiotingPacifist</author>
	<datestamp>1257420300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Links or it never happened!</p></htmltext>
<tokenext>Links or it never happened !</tokentext>
<sentencetext>Links or it never happened!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998082</id>
	<title>Universal binaries? How about universal installs</title>
	<author>Anonymous</author>
	<datestamp>1257451980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Instead of pushing for universal binaries (which means interpreted code across different CPU architectures), how about a universal installer (instead of RPM, DEB, .TAR.GZ)...and speeding up the install process instead of having to recompile every file?</p></htmltext>
<tokenext>Instead of pushing for universal binaries ( which means interpreted code across different CPU architectures ) , how about a universal installer ( instead of RPM , DEB , .TAR.GZ ) ...and speeding up the install process instead of having to recompile every file ?</tokentext>
<sentencetext>Instead of pushing for universal binaries (which means interpreted code across different CPU architectures), how about a universal installer (instead of RPM, DEB, .TAR.GZ)...and speeding up the install process instead of having to recompile every file?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001352</id>
	<title>Re:Solution in search of a problem</title>
	<author>Blakey Rat</author>
	<datestamp>1257423240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming. To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC. Plus possibly MIPS, SPARC, and Itanium. Most of those in 32-bit and 64-bit flavours.</i></p><p>So, to summarize: we shouldn't do it because it's hard.</p><p>To which I reply: everything worth doing is hard, the easy things have already all been done.</p><p>Or alternatively: aw, poor babies! Want your blankie and pacifier?</p></htmltext>
<tokenext>Real multi-arch could be useful , but the number of arches on Linux is just too overwhelming .
To get somewhat decent coverage for Linux binaries , they 'd have to run on x86 , ARM , and PPC .
Plus possibly MIPS , SPARC , and Itanium .
Most of those in 32-bit and 64-bit flavours.So , to summarize : we should n't do it because it 's hard.To which I reply : everything worth doing is hard , the easy things have already all been done.Or alternatively : aw , poor babies !
Want your blankie and pacifier ?</tokentext>
<sentencetext>Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming.
To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC.
Plus possibly MIPS, SPARC, and Itanium.
Most of those in 32-bit and 64-bit flavours.
So, to summarize: we shouldn't do it because it's hard.
To which I reply: everything worth doing is hard, the easy things have already all been done.
Or alternatively: aw, poor babies!
Want your blankie and pacifier?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998400</id>
	<title>Re:The wrong Solution to the problem.</title>
	<author>morgauxo</author>
	<datestamp>1257453300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>OK,
<br> <br>
I really don't care as I use ebuilds.  I wonder though, wouldn't the logical conclusion of your argument be to get rid of one of either rpm or deb and standardize on the other?
<br> <br>
Actually come to think of it... if that happened, if binary packages were run-anywhere I MIGHT not even use ebuilds anymore.  Hmm...  Of course, there are so many choices to be made at compile time.  I realize most of the optimization is not worth an adult's time but choosing optional features can be.</htmltext>
<tokenext>OK , I really do n't care as I use ebuilds .
I wonder though , would n't the logical conclusion of your argument be to get rid of one of either rpm or deb and standardize on the other ?
Actually come to think of it... if that happened , if binary packages were run-anywhere I MIGHT not even use ebuilds anymore .
Hmm... Of course , there are so many choices to be made at compile time .
I realize most of the optimization is not worth an adult 's time but choosing optional features can be .</tokentext>
<sentencetext>OK,
 
I really don't care as I use ebuilds.
I wonder though, wouldn't the logical conclusion of your argument be to get rid of one of either rpm or deb and standardize on the other?
Actually come to think of it... if that happened, if binary packages were run-anywhere I MIGHT not even use ebuilds anymore.
Hmm...  Of course, there are so many choices to be made at compile time.
I realize most of the optimization is not worth an adult's time but choosing optional features can be.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978</id>
	<title>The wrong Solution to the problem.</title>
	<author>Anonymous</author>
	<datestamp>1257451560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>FatELF was the wrong solution to the problem. In the Linux community, we do have a cross distribution application issue. But it's one of pure stubbornness.</p><p>What do I mean? Suse has its way of setting up RPMs, Mandriva has its, RedHat (Fedora) has its. The three big names in RPM all fight each other over stupid things like RPM Macros, when RPMs are all 95\% the same. We can't decide how to classify anything, so we fight over stuff like Amusements/Arcade vs. Games/Arcade. To some degree the same issue exists between mainline Ubuntu and Debian. Then we have the wonderful: I refuse to use DEB or RPM. e.g. Gentoo, Slackware.</p><p>We have propaganda circulating that RPM is proprietary. We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!"</p><p>It should be policy that application makers UPSTREAM should provide a Source RPM AND a Source DEB.</p><p>It should be the case that I should be able to install any RedHat Fedora package, or any Suse Package on my Mandriva box. The people who make these decisions should be locked in a room together until they can come to a consensus on how to solve this dispute. The same should be done on the DEB side. I'm tired of having to take Suse or Fedora Packages and "converting" them by hand to make them acceptable and vice versa. This headache can be resolved if we all sit down and play nice together.</p></htmltext>
<tokenext>FatELF was the wrong solution to the problem .
In the Linux community , we do have a cross distribution application issue .
But its one of pure stubbornness.What do I mean ?
Suse has its way of setting up RPMs , Mandriva has its , RedHat ( Fedora ) has its .
The three big names in RPM all fight each other over stupid things like RPM Macros , when RPMs are all 95 \ % the same .
We ca n't decide what to classify anything , so we fight over stuff like Amusements/Arcade vs. Games/Arcade. To some degree the same issue exists between mainline Ubuntu and Debian .
Then we have the wonderful : I refuse to use DEB or RPM .
e.g. Gentoo , Slackware.We have propaganda circulating that RPM is proprietary .
We have application makers who provide a binary installer for the Windows platform , yet hand Linux users a completely unpackaged BZ2 Type Tarball and say " Good luck !
" It should be policy that application makers UPSTREAM should provide an Source RPM AND a Source DEB.It should be the case that I should be able to install any RedHat Fedora package , or any Suse Package on my Mandriva box .
The people who make these decisions should be locked in a room together until they can come to a consensus how to solve this dispute .
The same should be done on the DEB Side .
I 'm tired of having to take Suse or Fedora Packages and " converting " them by hand to make them acceptable and vice versa .
This headache can be resolved if we all sit down and play nice together .</tokentext>
<sentencetext>FatELF was the wrong solution to the problem.
In the Linux community, we do have a cross distribution application issue.
But it's one of pure stubbornness.
What do I mean?
Suse has its way of setting up RPMs, Mandriva has its, RedHat (Fedora) has its.
The three big names in RPM all fight each other over stupid things like RPM Macros, when RPMs are all 95\% the same.
We can't decide what to classify anything, so we fight over stuff like Amusements/Arcade vs. Games/Arcade. To some degree the same issue exists between mainline Ubuntu and Debian.
Then we have the wonderful: I refuse to use DEB or RPM.
e.g. Gentoo, Slackware.
We have propaganda circulating that RPM is proprietary.
We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!"
It should be policy that application makers UPSTREAM should provide a Source RPM AND a Source DEB.
It should be the case that I should be able to install any RedHat Fedora package, or any Suse Package on my Mandriva box.
The people who make these decisions should be locked in a room together until they can come to a consensus how to solve this dispute.
The same should be done on the DEB Side.
I'm tired of having to take Suse or Fedora Packages and "converting" them by hand to make them acceptable and vice versa.
This headache can be resolved if we all sit down and play nice together.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004890</id>
	<title>Re:Solution in search of a problem</title>
	<author>Ash-Fox</author>
	<datestamp>1257517560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Well, since I've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linux</p></div></blockquote><p>Whoever built it was not very good at building. <a href="http://slashdot.org/comments.pl?sid=1432870&amp;cid=29998686" title="slashdot.org">It's not hard to build a 32-bit application that works cross-distribution and relies only on itself</a> [slashdot.org]. Said application would run on 64-bit Linux fine. From the behavior of the original developer (of that proprietary application), I don't see how the developer would have built the application to even work with FatELF.</p>
	</htmltext>
<tokenext>Well , since I 've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linuxWhoever built it was not very good at building .
It 's not hard to build a 32bit application that works cross distribution and relies on it 's self [ slashdot.org ] .
Said application would run on 64bit Linux fine .
From the behavior of the original developer ( of that proprietary application ) , I do n't see how the developer would have built the application to even work with fatelf .</tokentext>
<sentencetext>Well, since I've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linux
Whoever built it was not very good at building.
It's not hard to build a 32-bit application that works cross-distribution and relies only on itself [slashdot.org].
Said application would run on 64bit Linux fine.
From the behavior of the original developer (of that proprietary application), I don't see how the developer would have built the application to even work with fatelf.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999342</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997856</id>
	<title>Isn't someone going to ask ...</title>
	<author>Anonymous</author>
	<datestamp>1257451080000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>...who the hell distributes Linux binaries anyway?  On OSX, most software you get is a binary.  As you said, 2 platforms (one dying), universal binaries sort of make sense just so vendors can put things in a single box, use a single icon, and not care about writing detection code for something so fundamental.</p><p>On Linux the main binaries you get are either from your distribution (which already knows all about your architecture, so why bother), or <i>maybe</i> the occasional third party (nvidia? who already maintains all this separately).  Maintaining N binaries is, as you said, difficult if not impossible for Linux.  As is distributing binaries <i>anyway</i>.</p><p>As a poster above said: solution in search of a problem.</p></htmltext>
<tokenext>...who the hell distributes Linux binaries anyway ?
On OSX , most software you get is a binary .
As you said , 2 platforms ( one dying ) , universal binaries sortof make sense just so vendors can put things in a single box , use a single icon , and not care about writing detection code for something so fundamental.On Linux the main binaries you get are either from your distribution ( which already knows all about your architecture , so why bother ) , or maybe the occasional third party ( nvidia ?
who already maintains all this separately ) .
Maintaining N binaries is , as you said , difficult if not impossible for Linux .
As is distributing binaries anyway.As a poster above said : solution in search of a problem .</tokentext>
<sentencetext>...who the hell distributes Linux binaries anyway?
On OSX, most software you get is a binary.
As you said, 2 platforms (one dying), universal binaries sortof make sense just so vendors can put things in a single box, use a single icon, and not care about writing detection code for something so fundamental.
On Linux the main binaries you get are either from your distribution (which already knows all about your architecture, so why bother), or maybe the occasional third party (nvidia? who already maintains all this separately).
Maintaining N binaries is, as you said, difficult if not impossible for Linux.
As is distributing binaries anyway.
As a poster above said: solution in search of a problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</id>
	<title>"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257452460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>The issue wasn't that there were lots of people saying "That's a stupid idea" or "That's a stupid implementation of an otherwise good idea."</p><p>The issue was lots of people saying "You are stupid."</p><p>There is a big difference.</p><p>I'd weighed in on this, because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is, ideally, the same software load. (Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of code doesn't make them any less embedded.) People called ME stupid - not "That's a stupid design" or "That's a stupid reason to want FatELF", but "You are stupid."</p><p>Yes, developing a thick skin, so that when somebody says "That's a stupid idea" you realize that it is the IDEA, and not YOU, that they are criticizing, is important to any engineer.</p><p>But at the same time, saying to somebody "You are stupid" just because you don't like their idea, or don't see how it applies to your needs, is immature and unprofessional.</p></htmltext>
<tokenext>The issue was n't that there were lots of people saying " That 's a stupid idea " or " That 's a stupid implementation of an otherwise good idea .
" The issue was lots of people saying " You are stupid .
" There is a big difference.I 'd weighed in on this , because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is , ideally , the same software load .
( Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of code does n't make them any less embedded .
) People called ME stupid - not " That 's a stupid design " or " That 's a stupid reason to want FatELF " , but " You are stupid .
" Yes , developing a thick skin , so that when somebody says " That 's a stupid idea " you realize that it is the IDEA , and not YOU , that they are criticizing , is important to any engineer.But at the same time , saying to somebody " You are stupid " just because you do n't like their idea , or do n't see how it applies to your needs , is immature and unprofessional .</tokentext>
<sentencetext>The issue wasn't that there were lots of people saying "That's a stupid idea" or "That's a stupid implementation of an otherwise good idea."
The issue was lots of people saying "You are stupid."
There is a big difference.
I'd weighed in on this, because in the embedded systems I design this actually would have been useful - I have to support different processor types with what is, ideally, the same software load.
(Just because MY embedded systems are much larger than some 4-bit microcontroller running 16K of code doesn't make them any less embedded.)
People called ME stupid - not "That's a stupid design" or "That's a stupid reason to want FatELF", but "You are stupid."
Yes, developing a thick skin, so that when somebody says "That's a stupid idea" you realize that it is the IDEA, and not YOU, that they are criticizing, is important to any engineer.
But at the same time, saying to somebody "You are stupid" just because you don't like their idea, or don't see how it applies to your needs, is immature and unprofessional.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998372</id>
	<title>Re:Kind of broken by design</title>
	<author>volsung</author>
	<datestamp>1257453180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Minor nit: Mac OS X (until Snow Leopard) had to deal with 4 architectures

$ file /usr/lib/libbz2.dylib
/usr/lib/libbz2.dylib: Mach-O universal binary with 4 architectures
/usr/lib/libbz2.dylib (for architecture ppc7400):	Mach-O dynamically linked shared library ppc
/usr/lib/libbz2.dylib (for architecture ppc64):	Mach-O 64-bit dynamically linked shared library ppc64
/usr/lib/libbz2.dylib (for architecture i386):	Mach-O dynamically linked shared library i386
/usr/lib/libbz2.dylib (for architecture x86\_64):	Mach-O 64-bit dynamically linked shared library x86\_64</htmltext>
<tokenext>Minor nit : Mac OS X ( until Snow Leopard ) had to deal with 4 architectures $ file /usr/lib/libbz2.dylib /usr/lib/libbz2.dylib : Mach-O universal binary with 4 architectures /usr/lib/libbz2.dylib ( for architecture ppc7400 ) : Mach-O dynamically linked shared library ppc /usr/lib/libbz2.dylib ( for architecture ppc64 ) : Mach-O 64-bit dynamically linked shared library ppc64 /usr/lib/libbz2.dylib ( for architecture i386 ) : Mach-O dynamically linked shared library i386 /usr/lib/libbz2.dylib ( for architecture x86 \ _64 ) : Mach-O 64-bit dynamically linked shared library x86 \ _64</tokentext>
<sentencetext>Minor nit: Mac OS X (until Snow Leopard) had to deal with 4 architectures

$ file /usr/lib/libbz2.dylib
/usr/lib/libbz2.dylib: Mach-O universal binary with 4 architectures
/usr/lib/libbz2.dylib (for architecture ppc7400):	Mach-O dynamically linked shared library ppc
/usr/lib/libbz2.dylib (for architecture ppc64):	Mach-O 64-bit dynamically linked shared library ppc64
/usr/lib/libbz2.dylib (for architecture i386):	Mach-O dynamically linked shared library i386
/usr/lib/libbz2.dylib (for architecture x86\_64):	Mach-O 64-bit dynamically linked shared library x86\_64</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
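The `file` output above reflects a very small on-disk structure: a Mach-O universal binary starts with a big-endian fat header (magic 0xCAFEBABE and a slice count), followed by one cputype/cpusubtype/offset/size/align record per embedded architecture. A minimal parser sketch of just that header, operating on raw bytes rather than any real system file:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic at offset 0 of a universal binary

def parse_fat_header(data: bytes):
    """Return one (cputype, cpusubtype, offset, size, align) tuple per
    architecture slice embedded in a Mach-O universal binary."""
    magic, nfat_arch = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a universal (fat) binary")
    # Each fat_arch record is five big-endian 32-bit fields (20 bytes),
    # starting right after the 8-byte fat_header.
    return [struct.unpack_from(">5I", data, 8 + 20 * i)
            for i in range(nfat_arch)]
```

FatELF proposed essentially the same table-of-slices idea for ELF; the kernel and loader then pick the slice whose cputype matches the running machine.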
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000480</id>
	<title>Re:Kind of broken by design</title>
	<author>RiotingPacifist</author>
	<datestamp>1257418920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The user shouldn't worry; you should. Separate /usr/bin and /usr/lib is all you need, and FatELF won't even save you much space.</p></htmltext>
<tokenext>The user should n't worry you should , separate /usr/bin and /usr/lib is all you need , FatELF wo n't even save you much space .</tokentext>
<sentencetext>The user shouldn't worry you should, separate /usr/bin and /usr/lib is all you need, FatELF won't even save you much space.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999276</id>
	<title>Re:Solution in search of a problem</title>
	<author>Nimey</author>
	<datestamp>1257413760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Is the 10.6 kernel really i386 or have they done the Right Thing &amp; compiled it for i686?  The weakest Intel CPUs they sold were IIRC Core Solos, which are an evolutionary improvement on the i686.</p></htmltext>
<tokenext>Is the 10.6 kernel really i386 or have they done the Right Thing &amp; compiled it for i686 ?
The weakest Intel CPUs they sold were IIRC Core Solos , which are an evolutionary improvement on the i686 .</tokentext>
<sentencetext>Is the 10.6 kernel really i386 or have they done the Right Thing &amp; compiled it for i686?
The weakest Intel CPUs they sold were IIRC Core Solos, which are an evolutionary improvement on the i686.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</id>
	<title>Structure should be at the filesystem level</title>
	<author>spitzak</author>
	<datestamp>1257450900000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>My objection is that any such hierarchy of data could be stored as files.</p><p>Linux needs tools so that a directory can be manipulated as a file more easily. For instance, cp/mv/etc. should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default. Then programs are free to treat any point in the hierarchy as a "file". A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself. We also need filesystems designed for huge numbers of very small files, and to make such manipulations efficient.</p><p>We need the tools to be advanced into the next century, not the workarounds of the previous ones as currently practiced on Unix and Windows.</p></htmltext>
<tokenext>My objection is that any such hierarchy of data could be stored as files.Linux needs tools so that a directory can be manipulated as a file more easily .
For instance cp/mv/etc should pretty much act like -r/-a is on all the time , and such recursive operations should be provided by libc and the kernel by default .
Then programs are free to treat any point in the hierarchy as a " file " .
A fat binary would just be a bunch of binaries stuck in the same directory , and you would run it by exec of the directory itself .
Also need filesystems designed for huge numbers of very small files and to make such manipulations efficient.We need the tools to be advanced into the next century .
Not use the workarounds of the previous ones as currently practiced on Unix and Windows .</tokentext>
<sentencetext>My objection is that any such hierarchy of data could be stored as files.
Linux needs tools so that a directory can be manipulated as a file more easily.
For instance cp/mv/etc should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default.
Then programs are free to treat any point in the hierarchy as a "file".
A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself.
Also need filesystems designed for huge numbers of very small files and to make such manipulations efficient.
We need the tools to be advanced into the next century.
Not use the workarounds of the previous ones as currently practiced on Unix and Windows.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999342</id>
	<title>Re:Solution in search of a problem</title>
	<author>Mattsson</author>
	<datestamp>1257414060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, since I've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linux, I'd really have appreciated at least a unified binary for both 64 and 32 bit x86...</p></htmltext>
<tokenext>Well , since I 've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linux , I 'd really have appreciated at least a unified binary for both 64 and 32 bit x86 ...</tokenext>
<sentencetext>Well, since I've recently had to wrestle with getting a binary-only proprietary 32-bit application to run on a 64-bit linux, I'd really have appreciated at least a unified binary for both 64 and 32 bit x86...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998174</id>
	<title>You want people to quit whining about RPM?</title>
	<author>Anonymous</author>
	<datestamp>1257452400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!"</i></p><p>That's because you don't have to descend into the hell that is rpmbuild, which was a pile of rotting dingo fetuses ten years ago and hasn't gotten one bit better since.</p><p>It's long past time that they gave up that ghastly binary blob and defined a new "rpmx" format that would look kind of like this:</p><p>A gzipped or bzipped tarball, containing:</p><p>1. a directory "common", containing roughly what you currently get from rpm2cpio.<br>2. a file "common.files", containing processed and massaged \%files<br>3. a file "common.pre", containing \%pre<br>4. a file "common.post", containing \%post... etc<br>N. a directory "i386" and "i386.files" etc, containing the platform-specific x86 stuff<br>N+1. same for "x86\_64".</p><p>No weird binary format. No spec file full of a decade and a half of broken historical cruft. No requirement that you set up a chrooted environment to be sure you've got a clean environment for building the frigging RPM. Build it in perl or your scripting language of choice (even with a Makefile and shell scripts if you're old-school).</p></htmltext>
<tokenext>We have application makers who provide a binary installer for the Windows platform , yet hand Linux users a completely unpackaged BZ2 Type Tarball and say " Good luck !
" That 's because you do n't have to descend into the hell that is rpmbuild , which was a pile of rotting dingo fetuses ten years ago and has n't gotten one bit better since .
It 's long past time that they gave up that ghastly binary blob and defined a new " rpmx " format , that would look kind of like this : A gzipped or bzipped tarball , containing :
1. a directory " common " , containing roughly what you currently get from rpm2cpio .
2. a file " common.files " , containing processed and massaged \ % files
3. a file " common.pre " , containing \ % pre
4. a file " common.post " , containing \ % post ... etc
N. a directory " i386 " and " i386.files " etc , containing the platform specific x86 stuff
N+1 . same for " x86 \ _64 " .
No weird binary format .
No spec file full of a decade and a half of broken historical cruft .
No requirement that you set up a chrooted environment to be sure you 've got a clean environment for building the frigging RPM .
Build it in perl or your scripting language of choice ( even with a Makefile and shell scripts if you 're old-school ) .</tokentext>
<sentencetext>We have application makers who provide a binary installer for the Windows platform, yet hand Linux users a completely unpackaged BZ2 Type Tarball and say "Good luck!
"That's because you don't have to descend into the hell that is rpmbuild, which was a pile of rotting dingo fetuses ten years ago and hasn't gotten one bit better since.
It's long past time that they gave up that ghastly binary blob and defined a new "rpmx" format, that would look kind of like this: A gzipped or bzipped tarball, containing:
1. a directory "common", containing roughly what you currently get from rpm2cpio.
2. a file "common.files", containing processed and massaged \%files
3. a file "common.pre", containing \%pre
4. a file "common.post", containing \%post ... etc
N. a directory "i386" and "i386.files" etc, containing the platform specific x86 stuff
N+1. same for "x86\_64".
No weird binary format.
No spec file full of a decade and a half of broken historical cruft.
No requirement that you set up a chrooted environment to be sure you've got a clean environment for building the frigging RPM.
Build it in perl or your scripting language of choice (even with a Makefile and shell scripts if you're old-school).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978</parent>
</comment>
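The "rpmx" layout the commenter proposes is just a plain tarball, so packing one needs nothing beyond stock archive tools. Here is a minimal sketch of building that layout; the file names follow the comment's outline, while all paths and contents in `example` are made up for the demonstration (this is not a real package format or tool).

```python
# Sketch of the proposed "rpmx" layout: an ordinary gzipped tarball with a
# "common" payload, metadata files (common.files, common.post, ...), and
# per-architecture directories. Payload contents here are illustrative only.
import io
import tarfile


def build_rpmx(payload: dict[str, bytes]) -> bytes:
    """Pack {archive_path: content} into a gzipped tarball, returned as bytes."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in sorted(payload.items()):
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()


example = {
    "common/usr/bin/hello": b"#!/bin/sh\necho hello\n",
    "common.files": b"/usr/bin/hello\n",      # stands in for a massaged %files
    "common.post": b"echo configured\n",      # stands in for %post
    "i386/usr/lib/libhello.so": b"32-bit stub",
    "x86_64/usr/lib/libhello.so": b"64-bit stub",
}
```

Any scripting language could do the same, which is the commenter's point: no binary blob and no spec-file machinery are needed to express this structure.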
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998090</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>ejtttje</author>
	<datestamp>1257451980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>There are places where this would be very useful.  Anytime we're distributing binaries to users, hosting binaries on a network file share, or carrying portable media, it's a big pain in the butt to maintain completely separate architecture trees.  In some cases it wastes a lot of space too, if there are significant data files along with the executables, because we generally wind up replicating those in each arch install tree.<br>
<br>
I've definitely appreciated OS X's universal binaries in the past; it's a shame to lose an opportunity for having that on Linux.  Guess I'm not going to see bundled, versioned libraries like OS X Frameworks anytime soon either, sigh.</htmltext>
<tokenext>There are places this would be very useful to have .
Anytime we 're distributing binaries to users , hosting binaries on a network file share , or carrying portable media , it 's a big pain in the butt to maintain completely separate architecture trees .
In some cases it wastes a lot of space too if there 's significant data files along with the executables , because we generally wind up replicating that in each arch install tree .
I 've definitely appreciated OS X 's universal binaries in the past , it 's a shame to lose an opportunity for having that on Linux .
Guess I 'm not going to see bundled , versioned libraries like OS X Frameworks anytime either , sigh .</tokentext>
<sentencetext>There are places this would be very useful to have.
Anytime we're distributing binaries to users, hosting binaries on a network file share, or carrying portable media, it's a big pain in the butt to maintain completely separate architecture trees.
In some cases it wastes a lot of space too if there's significant data files along with the executables, because we generally wind up replicating that in each arch install tree.
I've definitely appreciated OS X's universal binaries in the past, it's a shame to lose an opportunity for having that on Linux.
Guess I'm not going to see bundled, versioned libraries like OS X Frameworks anytime either, sigh.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998232</id>
	<title>Why did he even talk to the kernel people?</title>
	<author>pclminion</author>
	<datestamp>1257452640000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>I don't understand what the kernel has to do with any of this. Fat binaries can be (almost) completely implemented at the userspace level by extending the dynamic loader (ld-linux.so). The way this would work is that the fat binary would have a boilerplate ELF header that contains just enough information to convince the kernel to load it and launch its interpreter program, which could piggyback on the standard dynamic loader. The fat binary interpreter would locate the correct architecture within the fat binary, map its ELF header into memory, then call out to the regular dynamic loader to finish the job. The only hitch is that a 64-bit kernel will refuse to load a 32-bit ELF, and vice versa, so you would need an EXTREMELY minor patch to the kernel to allow it to happen. I mean like a one-liner.</htmltext>
<tokenext>I do n't understand what the kernel has to do with any of this .
Fat binaries can be ( almost ) completely implemented at the userspace level by extending the dynamic loader ( ld-linux.so ) .
The way this would work is that the fat binary would have a boilerplate ELF header that contains just enough information to convince the kernel to load it and launch its interpreter program , which could piggyback on the standard dynamic loader .
The fat binary interpreter would locate the correct architecture within the fat binary , map its ELF header into memory , then call out to the regular dynamic loader to finish the job .
The only hitch is that a 64-bit kernel will refuse to load a 32-bit ELF , and vice-versa , so you would need an EXTREMELY minor patch to the kernel to allow it to happen .
I mean like a one-liner .</tokentext>
<sentencetext>I don't understand what the kernel has to do with any of this.
Fat binaries can be (almost) completely implemented at the userspace level by extending the dynamic loader (ld-linux.so).
The way this would work is that the fat binary would have a boilerplate ELF header that contains just enough information to convince the kernel to load it and launch its interpreter program, which could piggyback on the standard dynamic loader.
The fat binary interpreter would locate the correct architecture within the fat binary, map its ELF header into memory, then call out to the regular dynamic loader to finish the job.
The only hitch is that a 64-bit kernel will refuse to load a 32-bit ELF, and vice-versa, so you would need an EXTREMELY minor patch to the kernel to allow it to happen.
I mean like a one-liner.</sentencetext>
</comment>
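The core of the userspace scheme described above is slice selection: a container whose header tells the loader where each per-architecture ELF image lives, so the interpreter can pick the one matching the running machine. This sketch invents a trivial container format (a count, then fixed-size records of architecture name, offset, and size) purely for illustration; it is not the actual FatELF header layout.

```python
# Sketch of fat-container slice selection: the step the proposed interpreter
# would perform before handing the chosen ELF image to the regular dynamic
# loader. The header format here (count + 16-byte arch name, offset, size
# records) is invented for this demo, not FatELF's real on-disk format.
import io
import struct

RECORD = struct.Struct("<16sQQ")  # arch name (NUL-padded), offset, size


def pack_fat(slices: dict[str, bytes]) -> bytes:
    """Build a demo fat container from {arch_name: elf_image_bytes}."""
    header_size = 4 + RECORD.size * len(slices)
    out = io.BytesIO()
    out.write(struct.pack("<I", len(slices)))
    offset = header_size
    for arch, body in slices.items():
        out.write(RECORD.pack(arch.encode().ljust(16, b"\0"), offset, len(body)))
        offset += len(body)
    for body in slices.values():
        out.write(body)
    return out.getvalue()


def select_slice(fat: bytes, machine: str) -> bytes:
    """Return the embedded image for `machine` (e.g. platform.machine())."""
    (count,) = struct.unpack_from("<I", fat, 0)
    for i in range(count):
        name, off, size = RECORD.unpack_from(fat, 4 + i * RECORD.size)
        if name.rstrip(b"\0").decode() == machine:
            return fat[off:off + size]
    raise LookupError(f"no slice for {machine!r}")
```

A real interpreter would then mmap the selected image and jump into the normal dynamic loader, which is exactly where the commenter's "one-liner" kernel caveat about cross-bitness ELF loading comes in.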
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001954</id>
	<title>The rude comment isn't an isolated event</title>
	<author>Dudeman\_Jones</author>
	<datestamp>1257428040000</datestamp>
	<modclass>Troll</modclass>
	<modscore>0</modscore>
	<htmltext><p>I remember back when I was trying to make the full switch to linux, and I updated my kernel and drivers and to my surprise my video card had decent drivers out of the box.  I went onto the local irc channel to ask if anyone knew anything about it out of curiosity.  I think my question was worded something like, "Yea I was impressed, I didn't have to jump through the usual hoops to get my video card working fully this time.  It was as easy as when I install Windows."  I suddenly found myself being berated by both the chatters and the IRC mod at the time.  The one comment that sticks out in my mind was, "You deserve to use Windows."  All I was doing was asking a simple damn question, in praise of my latest linux install working out of the box!  I can't help but think that this same bigoted mindset helped to doom this fairly admirable project, because after attempting to deal with people such as this, it's not even a stretch for me to imagine some linux flavor's project manager going, "Why would we want to implement a universal standard with Windows?  Micro$oft should just do what we do.  Windows is a crappy operating system anyway and if you are trying to enable its use then you deserve it too."</p></htmltext>
<tokenext>I remember back when I was trying to make the full switch to linux , and I updated my kernel and drivers and to my surprise my video card had decent drivers out of the box .
I went onto the local irc channel to ask if anyone knew anything about it out of curiosity .
I think my question was worded something like , " Yea I was impressed , I did n't have to jump through the usual hoops to get my video card working fully this time .
It was as easy as when I install Windows .
" I suddenly found myself being berated by both the chatters and the IRC mod at the time .
The one comment that sticks out in my mind was , " You deserve to use Windows .
" All I was doing was asking a simple damn question , in praise of my latest linux install working out of the box !
I ca n't help but think that this same bigoted mindset helped to doom this fairly admirable project , because after attempting to deal with people such as this , it 's not even a stretch for me to imagine some linux flavor 's project manager going , " Why would we want to implement a universal standard with Windows ?
Micro $ oft should just do what we do .
Windows is a crappy operating system anyway and if you are trying to enable its use then you deserve it too .
"</tokentext>
<sentencetext>I remember back when I was trying to make the full switch to linux, and I updated my kernel and drivers and to my surprise my video card had decent drivers out of the box.
I went onto the local irc channel to ask if anyone knew anything about it out of curiosity.
I think my question was worded something like, "Yea I was impressed, I didn't have to jump through the usual hoops to get my video card working fully this time.
It was as easy as when I install Windows.
"  I suddenly found myself being berated by both the chatters and the IRC mod at the time.
The one comment that sticks out in my mind was, "You deserve to use Windows.
"  All I was doing was asking a simple damn question, in praise of my latest linux install working out of the box!
I can't help but think that this same bigoted mindset helped to doom this fairly admirable project, because after attempting to deal with people such as this, it's not even a stretch for me to imagine some linux flavor's project manager going, "Why would we want to implement a universal standard with Windows?
Micro$oft should just do what we do.
Windows is a crappy operating system anyway and if you are trying to enable its use then you deserve it too.
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998908</id>
	<title>Re:Solution in search of a problem</title>
	<author>Stu22</author>
	<datestamp>1257412380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Someone running SPARC and Itanium can probably cope without FatELF; however, people running Ubuntu who don't even know their computer has an architecture could be helped when they, unbeknownst to them, upgrade from 32 to 64 bit, or from one architecture to another.</htmltext>
<tokenext>Someone running SPARC and Itanium can probably cope without FatELF , however , people running Ubuntu , that do n't even know their computer has an architecture could be helped when they , unbeknownst to them , upgrade from 32 to 64 bit , or from one architecture to another .</tokentext>
<sentencetext>Someone running SPARC and Itanium can probably cope without FatELF, however, people running Ubuntu, that don't even know their computer has an architecture could be helped when they, unbeknownst to them, upgrade from 32 to 64 bit, or from one architecture to another.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998124</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>Anonymous</author>
	<datestamp>1257452220000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I understand the problem they are trying to solve.  I ran into it myself.  Downloaded a 64 bit x86 distro.  For my 32 bit computer.  However, if I had spent like 2 seconds reading the page I would have realized my error 4 hours earlier...</p><p>The reality is, if you are distributing the source code, 'fat' binaries do not make a lot of sense.  A distro should be lowest common denominator.  Then in the background automatically recompile the code for the current computer with optimizations cranked up for it.  Now that is an interesting project...  Or do like the apple/.net guys and JIT it.</p></htmltext>
<tokenext>I understand the problem they are trying to solve .
I ran into it myself .
Downloaded a 64 bit x86 distro .
For my 32 bit computer .
However , if I had spent like 2 seconds reading the page I would have realized my error 4 hours earlier ...
The reality is , if you are distributing the source code , 'fat ' binaries do not make a lot of sense .
A distro should be lowest common denominator .
Then in the background automatically recompile the code for the current computer with optimizations cranked out for it .
Now that is an interesting project... Or do like the apple/.net guys and JIT it .</tokentext>
<sentencetext>I understand the problem they are trying to solve.
I ran into it myself.
Downloaded a 64 bit x86 distro.
For my 32 bit computer.
However, if I had spent like 2 seconds reading the page I would have realized my error 4 hours earlier...
The reality is, if you are distributing the source code, 'fat' binaries do not make a lot of sense.
A distro should be lowest common denominator.
Then in the background automatically recompile the code for the current computer with optimizations cranked out for it.
Now that is an interesting project...  Or do like the apple/.net guys and JIT it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000564</id>
	<title>Re:Kind of broken by design</title>
	<author>PeterBrett</author>
	<datestamp>1257419400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>True, but the ability to handle such things can come in handy. As an example, suppose you've got a setup where you're running apps off a server. You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at. A fat binary is pretty much the only way to solve that problem.</p></div><p>The correct solution to this problem is <a href="http://modules.sourceforge.net/" title="sourceforge.net">environment modules</a> [sourceforge.net]. One of their many applications is setting up the scheme you describe -- totally transparently to the user, and in an easily maintainable way for the administrator, and I've seen them used successfully company-wide at a large semiconductor engineering corporation I worked for in the past.</p>
	</htmltext>
<tokenext>True , but the ability to handle such things can come in handy .
As an example , suppose you 've got a setup where you 're running apps off a server .
You 've got several different hardware platforms going , but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at .
A fat binary is pretty much the only way to solve that problem .
The correct solution to this problem is environment modules [ sourceforge.net ] .
One of their many applications is setting up the scheme you describe -- totally transparently to the user , and in an easily maintainable way for the administrator , and I 've seen them used successfully company-wide at a large semiconductor engineering corporation I worked for in the past .
<sentencetext>True, but the ability to handle such things can come in handy.
As an example, suppose you've got a setup where you're running apps off a server.
You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at.
A fat binary is pretty much the only way to solve that problem.
The correct solution to this problem is environment modules [sourceforge.net].
One of their many applications is setting up the scheme you describe -- totally transparently to the user, and in an easily maintainable way for the administrator, and I've seen them used successfully company-wide at a large semiconductor engineering corporation I worked for in the past.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001980</id>
	<title>Re:Structure should be at the filesystem level</title>
	<author>Anonymous</author>
	<datestamp>1257428400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You would like mv and rm to have the -R flag enabled by default?  I think that is a poor idea.</p></htmltext>
<tokenext>You would like mv and rm to have the -R flag enabled by default ?
I think that is a poor idea .</tokentext>
<sentencetext>You would like mv and rm to have the -R flag enabled by default?
I think that is a poor idea.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999528</id>
	<title>Re:Solution in search of a problem</title>
	<author>squallbsr</author>
	<datestamp>1257414780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries. It includes 32- and 64-bit kernels as well, all in the same installation media.</p></div><p>I have been thinking for a while now that Solaris would benefit greatly from FatELF binaries.  They already do some goofy magic with their execution of binaries to figure out which version to run (the one from /usr/bin -or- the one from /usr/bin/amd64)?</p>
	</htmltext>
<tokenext>Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries .
It includes 32- and 64-bit kernels as well , all in the same installation media . I have been thinking for a while now that Solaris would benefit greatly from FatELF binaries .
They already do some goofy magic with their execution of binaries to figure out which version to run ( the one from /usr/bin -or- the one from /usr/bin/amd64 ) ?</tokentext>
<sentencetext>Solaris includes 32-bit binaries for most applications but includes 32- and 64-bit libraries.
It includes 32- and 64-bit kernels as well, all in the same installation media. I have been thinking for a while now that Solaris would benefit greatly from FatELF binaries.
They already do some goofy magic with their execution of binaries to figure out which version to run (the one from /usr/bin -or- the one from /usr/bin/amd64)?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>adisakp</author>
	<datestamp>1257452160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>In the entire forked-up mess of the unix tree, there was only one thing that anybody &amp; everybody cared about - source compatibility. C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week.</p></div><p>This guy worked in the closed-source world of video games, where it's often not even legal to share your source code (due to middleware licensing and trade secrets), and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc.).  It's exactly for this reason that high-end, cutting-edge games and other closed-source software will <b>NEVER</b> be viable on Linux unless there are major changes to the entire model of gaming development.</p>
	</htmltext>
<tokenext>In the entire forked-up mess of the unix tree , there was only one thing that anybody &amp; everybody cared about - source compatibility .
C99 , POSIX , SuS v3 , so many ways you could ensure that your code would compile everywhere , with whatever compiler was popular that week .
This guy worked in the closed-source world of video games where it 's often not even legal to share your source code ( due to middle-ware licensing and trade secrets ) and even when it is legal , it 's often not feasible for business or gameplay reasons ( competitive coding advantage , preventing cheating hacks , disallowing " free content " mods , etc ) .
It 's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entire model of gaming development .</tokentext>
<sentencetext>In the entire forked-up mess of the unix tree, there was only one thing that anybody &amp; everybody cared about - source compatibility.
C99, POSIX, SuS v3, so many ways you could ensure that your code would compile everywhere, with whatever compiler was popular that week.
This guy worked in the closed-source world of video games where it's often not even legal to share your source code (due to middle-ware licensing and trade secrets) and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc).
It's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entire model of gaming development.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000578</id>
	<title>Re:Solution in search of a problem</title>
	<author>chowdahhead</author>
	<datestamp>1257419460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You're confusing multiarch that Debian has been developing for quite some time with AMD64 running 32-bit binaries. It's not the same <a href="https://wiki.ubuntu.com/MultiarchSpec" title="ubuntu.com" rel="nofollow">https://wiki.ubuntu.com/MultiarchSpec</a> [ubuntu.com]</htmltext>
<tokenext>You 're confusing multiarch that Debian has been developing for quite some time with AMD64 running 32-bit binaries .
It 's not the same https : //wiki.ubuntu.com/MultiarchSpec [ ubuntu.com ]</tokentext>
<sentencetext>You're confusing multiarch that Debian has been developing for quite some time with AMD64 running 32-bit binaries.
It's not the same https://wiki.ubuntu.com/MultiarchSpec [ubuntu.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998510</id>
	<title>Re:Isn't someone going to ask ...</title>
	<author>Jaysyn</author>
	<datestamp>1257453660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ryan Gordon does.  He's ported a metric buttload of commercial apps to Linux.  See Loki Games &amp; icculus.org for more information.</p></htmltext>
<tokenext>Ryan Gordon does .
He 's ported a metric buttload of commercial apps for Linux .
See Loki games &amp; icculus.org for more information .</tokentext>
<sentencetext>Ryan Gordon does.
He's ported a metric buttload of commercial apps for Linux.
See Loki games &amp; icculus.org for more information.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997856</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001734</id>
	<title>Re:Solution in search of a problem</title>
	<author>pembo13</author>
	<datestamp>1257426300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I would imagine that package management for FatELF packages would suck, as there would be no clean and simple way to specify the architecture.</p></htmltext>
<tokenext>I would imagine that package management for FatELF packages would suck , as there would be no clean and simple way to specify the architecture .</tokenext>
<sentencetext>I would imagine that package management for FatELF packages would suck, as there would be no clean and simple way to specify the architecture.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002374</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257432480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Of course they insulted you! Your reputation precedes you, Wowbagger. The Great Prophet Zarquon's gonna put you in your place, just as soon he gets back. Aaaaaaany day now.</p></htmltext>
<tokenext>Of course they insulted you !
Your reputation precedes you , Wowbagger .
The Great Prophet Zarquon 's gon na put you in your place , just as soon he gets back .
Aaaaaaany day now .</tokentext>
<sentencetext>Of course they insulted you!
Your reputation precedes you, Wowbagger.
The Great Prophet Zarquon's gonna put you in your place, just as soon he gets back.
Aaaaaaany day now.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998314</id>
	<title>You FAIL 1t...</title>
	<author>Anonymous</author>
	<datestamp>1257452820000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext>Bought the farm... To the original Achieve any of the Indecision and</htmltext>
<tokenext>Bought the farm... To the original Achieve any of the Indecision and</tokentext>
<sentencetext>Bought the farm... To the original Achieve any of the Indecision and</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001694</id>
	<title>Not worth the performance hit?</title>
	<author>w0mprat</author>
	<datestamp>1257425820000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Programming languages are already far too high level, which incurs a performance hit. We should all be coding in assembler. Personally, for fast-executing binaries I prefer to tap the bits into the hard drive platter with a magnetized needle.</htmltext>
<tokenext>Programming languages are already far two high level which incurrs a performance hit .
We should all be coding in assembler .
Personally , for fast executing binaries I prefer to tap the bits into the hard drive platter with magnetized needle .</tokentext>
<sentencetext>Programming languages are already far two high level which incurrs a performance hit.
We should all be coding in assembler.
Personally, for fast executing binaries I prefer to tap the bits into the hard drive platter with magnetized needle.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999056</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Anonymous</author>
	<datestamp>1257412860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>FWIW, the driver situation seems almost by choice to be designed such that it works more smoothly when you have the source code for the drivers.<br>
<br>
Which, as far as I'm concerned, device manufacturers are crazy not to do... the best a driver can hope for is to simply not get in the way, yet hardware manufacturers seem to have a really hard time writing decent drivers that work correctly.  It's not a competitive advantage; all it does is screw themselves over when I can't use their device.  Why they want to turn down help from users who could fix their drivers for them is beyond me.<br>
<br>
So fat binaries would be of limited help for drivers... it's less an issue of an installer picking the right architecture to copy into place than that the Linux design philosophy is simply better suited to open source drivers. (which could be transparently compiled by the installer during installation for the target system)</htmltext>
<tokenext>FWIW , the driver situation seems almost by choice to be designed such that it works more smoothly when you have the source code for the drivers .
Which as far as I 'm concerned device manufacturers are crazy not to do... the best a driver can hope for is to simply not get in the way , yet hardware manufacturers seem to have a really hard time writing decent drivers that work correctly .
It 's not a competitive advantage , all it does is screw themselves over when I ca n't use their device .
Why they want to turn down help from users who can fix their drivers for them is beyond me .
So fat binaries would be of limited help for drivers... it 's less of an issue for an installer to pick the right architecture to copy into place , as it is an issue that the Linux design philosophy is simply better suited for open source drivers .
( which could be transparently compiled by the installer during installation for the target system )</tokentext>
<sentencetext>FWIW, the driver situation seems almost by choice to be designed such that it works more smoothly when you have the source code for the drivers.
Which as far as I'm concerned device manufacturers are crazy not to do... the best a driver can hope for is to simply not get in the way, yet hardware manufacturers seem to have a really hard time writing decent drivers that work correctly.
It's not a competitive advantage, all it does is screw themselves over when I can't use their device.
Why they want to turn down help from users who can fix their drivers for them is beyond me.
So fat binaries would be of limited help for drivers... it's less of an issue for an installer to pick the right architecture to copy into place, as it is an issue that the Linux design philosophy is simply better suited for open source drivers.
(which could be transparently compiled by the installer during installation for the target system)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998594</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003794</id>
	<title>Why FatELF is not a good idea</title>
	<author>Anonymous</author>
	<datestamp>1257500160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There is a very thorough blog post about why nobody really wants FatELF at</p><p><a href="http://blog.flameeyes.eu/2009/11/04/elf-should-rather-be-on-a-diet" title="flameeyes.eu" rel="nofollow">http://blog.flameeyes.eu/2009/11/04/elf-should-rather-be-on-a-diet</a> [flameeyes.eu].</p></htmltext>
<tokenext>There is a very thorough blog post about why nobody really wants FatELF athttp : //blog.flameeyes.eu/2009/11/04/elf-should-rather-be-on-a-diet [ flameeyes.eu ] .</tokentext>
<sentencetext>There is a very thorough blog post about why nobody really wants FatELF athttp://blog.flameeyes.eu/2009/11/04/elf-should-rather-be-on-a-diet [flameeyes.eu].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998936</id>
	<title>"some showed up just to be rude..."</title>
	<author>Anita Coney</author>
	<datestamp>1257412440000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><p>Rude linuxheads?!  I find <i>that</i> hard to believe.</p><p>Btw, modding me down, as you most certainly will, only proves my sarcasm was justified.</p></htmltext>
<tokenext>Rude linuxheads ? !
I find that hard to believe.Btw , modding me down , as you most certainly will , only proves my sarcasm was justified .</tokentext>
<sentencetext>Rude linuxheads?!
I find that hard to believe.Btw, modding me down, as you most certainly will, only proves my sarcasm was justified.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000046</id>
	<title>Binary Vs Source</title>
	<author>Anonymous</author>
	<datestamp>1257417000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Doesn't anyone else see the polar opposites here?</p><p>I mean, Linux is all about open source, and here is a guy pushing in the opposite direction by encouraging binary packaging that makes it possible to lock up source. LKML wasn't too kind to him, and he is surprised/upset? This may not be the only reason he and his stuff weren't taken seriously, but I am positive it should be reason enough.</p></htmltext>
<tokenext>Does n't anyone else see the polar opposites here ? I mean , Linux is all about open source and there is a guy pushing in the opposite direction by encouraging binary packaging making it possible to lock up source and LKML was n't too kind to him , and he is surprized/upset ?
This may not be the only reason he and his stuff was n't taken seriously , but I am positive this should be reason enough .</tokentext>
<sentencetext>Doesn't anyone else see the polar opposites here?I mean, Linux is all about open source and there is a guy pushing in the opposite direction by encouraging binary packaging making it possible to lock up source and LKML wasn't too kind to him, and he is surprized/upset?
This may not be the only reason he and his stuff wasn't taken seriously, but I am positive this should be reason enough.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998868</id>
	<title>Re:Structure should be at the filesystem level</title>
	<author>Anonymous</author>
	<datestamp>1257412200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Umm.. 'mv' does work the same for files and directories (at least as the source), and is supported for directories directly in the kernel and libc.</p></htmltext>
<tokenext>Umm.. 'mv ' does work the same for files and directories ( at least as the source ) , and is supported for directories directly in the kernel and libc .</tokentext>
<sentencetext>Umm.. 'mv' does work the same for files and directories (at least as the source), and is supported for directories directly in the kernel and libc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000616</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>PeterBrett</author>
	<datestamp>1257419580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah...yeah)</p></div><p>Laugh it up, but my <a href="ftp://ftp.idsoftware.com/idstuff/quake4/linux/" title="idsoftware.com">Quake 4 binaries</a> [idsoftware.com] I downloaded in 2007 work absolutely flawlessly on my mid-2009 Linux distro.</p>
	</htmltext>
<tokenext>Attempting this in a world where even an x86 binary would n't work on all x86-linux-pc boxes ( static linking , yeah...yeah ) Laugh it up , but my Quake 4 binaries [ idsoftware.com ] I downloaded in 2007 work absolutely flawlessly on my mid-2009 Linux distro .</tokentext>
<sentencetext>Attempting this in a world where even an x86 binary wouldn't work on all x86-linux-pc boxes (static linking, yeah...yeah)Laugh it up, but my Quake 4 binaries [idsoftware.com] I downloaded in 2007 work absolutely flawlessly on my mid-2009 Linux distro.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001976</id>
	<title>Re:Kind of broken by design</title>
	<author>Yaztromo</author>
	<datestamp>1257428340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>This idea is kind of broken for Linux. On MacOS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to data.</p></div><p>Mac OS X has more than two architectures to worry about.  In my somewhat outdated version of Xcode 3, I have six options that can be included when creating fat binaries: i386, x86_64, ppc, ppc64, ppc7400, and ppc970.  And this doesn't include any of the ARM types used for the iPhone/iPod touch.
</p><p>Even within a single processor family, it's sometimes desirable to target optimizations at certain specific processors.
</p><p>Yaz.</p>
	</htmltext>
<tokenext>This idea is kind of broken for Linux .
On MacOS , with 2 architectures , it makes some sense , since the actual executable code is not huge compared to data.Mac OS X has more than two architectures to worry about .
In my somewhat outdated version of XCode 3 , I have six options that can be included when creating fat binaries : i386 , x86 \ _64 , ppc , ppc64 , ppc7400 , and ppc970 .
And this does n't include any of the ARM types used for the iPhone/iPod touch .
Even within a single processor family , sometimes it 's desirable to target optimizations in certain specific processors .
Yaz .</tokentext>
<sentencetext>This idea is kind of broken for Linux.
On MacOS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to data.Mac OS X has more than two architectures to worry about.
In my somewhat outdated version of XCode 3, I have six options that can be included when creating fat binaries:  i386, x86_64, ppc, ppc64, ppc7400, and ppc970.
And this doesn't include any of the ARM types used for the iPhone/iPod touch.
Even within a single processor family, sometimes it's desirable to target optimizations in certain specific processors.
Yaz.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001072</id>
	<title>Was there a point to this idea?</title>
	<author>Crosseyed &amp; Painless</author>
	<datestamp>1257421800000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>I mean, really, Apple did it back in the day because their customers were too stupid to know what a CPU was-- I mean, were too busy creating and thinking differently to care whether they had a 68K or a PowerPC computer.</p><p>But why now?  And for Linux?!!?  Sweet fancy Moses, if you can't figure out what type of binary you need, you're just not going to get too far with the average Linux distribution.</p><p>The fact that this guy got as far down the development path as he did, before he noticed all the people screaming "GO AWAY!  WE DON'T NEED THIS!!" is a clear sign that he's got some kind of cognitive issue.</p><p>Ryan-- dude-- go solve a real problem.  This wasn't one.</p></htmltext>
<tokenext>I mean , really , Apple did it back in the day because their customers were too stupid to know what a CPU was-- I mean , were too busy creating and thinking differently to care whether they had a 68K or a PowerPC computer.But why now ?
And for Linux ? ! ! ?
Sweet fancy Moses , if you ca n't figure out what type of binary you need , you 're just not going to get too far with the average Linux distribution.The fact that this guy got as far down the development path as he did , before he noticed all the people screaming " GO AWAY !
WE DO N'T NEED THIS ! !
" is a clear sign that he 's got some kind of cognitive issue.Ryan-- dude-- go solve a real problem .
This was n't one .</tokentext>
<sentencetext>I mean, really, Apple did it back in the day because their customers were too stupid to know what a CPU was-- I mean, were too busy creating and thinking differently to care whether they had a 68K or a PowerPC computer.But why now?
And for Linux?!!?
Sweet fancy Moses, if you can't figure out what type of binary you need, you're just not going to get too far with the average Linux distribution.The fact that this guy got as far down the development path as he did, before he noticed all the people screaming "GO AWAY!
WE DON'T NEED THIS!!
" is a clear sign that he's got some kind of cognitive issue.Ryan-- dude-- go solve a real problem.
This wasn't one.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002698</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>npsimons</author>
	<datestamp>1257436380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.</p></div></blockquote><p>This is so true, and people don't seem to make the connection - in order to have high quality, you have to discriminate.  All these people hear how wonderful Linux is, and they think, "I've got a great idea to make it better!".  Then when they get turned away, they whine about it like they're the only ones it ever happened to.  Perhaps their idea just wasn't well designed, or it's badly coded, or it doesn't solve a problem.  Or maybe it does just plain suck.  Sorry, but Linux didn't get to be good by accepting every hare-brained non-solution to a non-problem that came along.  People who whine about having their submissions rejected from the Linux kernel are probably the same types that would try to start a business with a "great idea", then whine when no one buys it.</p>
	</htmltext>
<tokenext>Things get rejected from the kernel all the time -- because not all things are good , useful , well coded , or solve a problem that needs solving .
It 's not new in any way.This is so true and people do n't seem to make the connection - in order to have high quality , you have to discriminate .
All these people hear how wonderful Linux is , and they think , " I 've got a great idea to make it better ! " .
Then when they get turned away , they whine about it like they 're the only ones it ever happened to .
Perhaps their idea just was n't well designed , or it 's badly coded , or it does n't solve a problem .
Or maybe it does just plain suck .
Sorry , but Linux did n't get to be good by accepting every hair-brained non-solution to a non-problem that came along .
People who whine about having their submissions rejected from the Linux kernel are probably the same types that would try to start a business with a " great idea " , then whine when no one buys it .</tokentext>
<sentencetext>Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving.
It's not new in any way.This is so true and people don't seem to make the connection - in order to have high quality, you have to discriminate.
All these people hear how wonderful Linux is, and they think, "I've got a great idea to make it better!".
Then when they get turned away, they whine about it like they're the only ones it ever happened to.
Perhaps their idea just wasn't well designed, or it's badly coded, or it doesn't solve a problem.
Or maybe it does just plain suck.
Sorry, but Linux didn't get to be good by accepting every hair-brained non-solution to a non-problem that came along.
People who whine about having their submissions rejected from the Linux kernel are probably the same types that would try to start a business with a "great idea", then whine when no one buys it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</id>
	<title>Solution in search of a problem</title>
	<author>amorsen</author>
	<datestamp>1257450060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian dragged its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).</p><p>Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming. To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC. Plus possibly MIPS, SPARC, and Itanium. Most of those in 32-bit and 64-bit flavours. Those elves are going to be very fat indeed.</p></htmltext>
<tokenext>The 32-bit vs. 64-bit split is handled pretty well on Linux ( well , Debian drug its heels a bit on multiarch handling in packages , but even they seem to be getting with the programme ) .Real multi-arch could be useful , but the number of arches on Linux is just too overwhelming .
To get somewhat decent coverage for Linux binaries , they 'd have to run on x86 , ARM , and PPC .
Plus possibly MIPS , SPARC , and Itanium .
Most of those in 32-bit and 64-bit flavours .
Those elves are going to be very fat indeed .</tokentext>
<sentencetext>The 32-bit vs. 64-bit split is handled pretty well on Linux (well, Debian drug its heels a bit on multiarch handling in packages, but even they seem to be getting with the programme).Real multi-arch could be useful, but the number of arches on Linux is just too overwhelming.
To get somewhat decent coverage for Linux binaries, they'd have to run on x86, ARM, and PPC.
Plus possibly MIPS, SPARC, and Itanium.
Most of those in 32-bit and 64-bit flavours.
Those elves are going to be very fat indeed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30010876</id>
	<title>Re:Kind of broken by design</title>
	<author>Anonymous</author>
	<datestamp>1257508200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>....On MacOS, with 2 architectures,....</p></div><p>More like 6, I've lost count.</p>
	</htmltext>
<tokenext>....On MacOS , with 2 architectures,....More like 6 , I 've lost count .</tokentext>
<sentencetext> ....On MacOS, with 2 architectures,....More like 6, I've lost count.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998848</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>crispytwo</author>
	<datestamp>1257412140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I disagree.</p><p>For the most part, the game engine is quite separate from the game script in modern game development and can be compiled independently for each system and run the same scripts.</p><p>I understand what Ryan was trying to do, but, to be fair, it is a low-level solution to something a higher-level solution could handle instead... i.e. make it a desktop environment fix rather than a kernel fix.</p><p>Closed-source software is viable on Linux from the hardware/software point of view.... it seems it is the user end that is missing from the purchasing stream. I've purchased and played (many of) the games Ryan has helped port to Linux. I'm glad he did them.</p><p>As far as rudeness and smugness go, there's no need for that. I think it is quite sad that there is so much disrespect floating around.</p></htmltext>
<tokenext>I disagreeFor the most part , the game engine is quite separate from the game-script in modern game development and can be compiled independently for each system , and run the same scripts.I understand what Ryan was trying to do , but , to be fair , is a low-level solution that a higher level solution could do instead... i.e. make it a desktop environment fix rather than a kernel fix.closed-source software is viable on Linux from the hardware/software point of view.... it seems it is the user-end that is missing from the purchasing stream .
I 've purchased and played ( many of ) the games Ryan has helped port to Linux .
I 'm glad he did them.As far as rudeness and smugness goes , there 's no need for that .
I think it is quite sad that there is so much disrespect floating around .</tokentext>
<sentencetext>I disagreeFor the most part, the game engine is quite separate from the game-script in modern game development and can be compiled independently for each system, and run the same scripts.I understand what Ryan was trying to do, but, to be fair, is a low-level solution that a higher level solution could do instead... i.e. make it a desktop environment fix rather than a kernel fix.closed-source software is viable on Linux from the hardware/software point of view.... it seems it is the user-end that is missing from the purchasing stream.
I've purchased and played (many of) the games Ryan has helped port to Linux.
I'm glad he did them.As far as rudeness and smugness goes, there's no need for that.
I think it is quite sad that there is so much disrespect floating around.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998746</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257411600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>[citation needed]</p></htmltext>
<tokenext>[ citation needed ]</tokentext>
<sentencetext>[citation needed]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998992</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>epine</author>
	<datestamp>1257412620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving. It's not new in any way.</p></div></blockquote><p>There's a "father knows best" view of the world.  And I'm sure it's true much of the time, or the project would have foundered long ago.</p><p>I'm also sure that worthwhile patches fall through the cracks, personalities matter in how the decisions are made, and the communication of the decisions made often falls far short of the ideal, wasting valuable time and energy of people who wished to make a positive contribution.</p><p>That said, Con is not my personal poster child for the unfairly rejected: he seems to understand neither software engineering nor macroeconomics in the large.  For example, while Con's patches might improve things for the desktop user in the short term, if that comes at the cost of continued enterprise support for the people who continue to develop the kernel, the victory will be short-lived; in this scenario, at the end of the day, *everyone* loses.  Fair enough, one might say, at least it's equitable.</p><p>The moral of this story as I read it is that he should have chosen his battle better in the first place, such as becoming a founding developer for Haiku, who likely would have embraced his audio-glitch-free window dragging aspirations as a founding precept of digital justice.  From the Linux perspective, that enterprise machine that didn't see Con's reported glitches now looks like next year's entry model (welcome to Westmere).  As a software engineer, one must cast an extremely critical eye on the carrying cost of modifications designed to avert a problem where there is already a great deal of writing on the wall.</p><p>I'd say there's a good chance the same logic applies to fat binaries.</p>
	</htmltext>
<tokenext>Things get rejected from the kernel all the time -- because not all things are good , useful , well coded , or solve a problem that needs solving .
It 's not new in any way.There 's a view the world according to father knows best .
And I 'm sure it 's true much of the time or the project would have foundered long ago.I 'm also sure that worthwhile patches fall through the cracks , personalities matter in how the decisions are made , and the communication of the decisions made often falls far short of the ideal , wasting valuable time and energy of people who wished to make a positive contribution.That said , Con is not my personal poster child for the unfairly rejected : he seems to understand neither software engineering nor macro economics in the large .
For example , while Con 's patches might improve things for the desktop user in the short term , if it comes at the cost of continued enterprise support for the people who continue to develop the kernel , the victory will be short lived ; in this scenario , at the end of the day , * everyone * loses .
Fair enough , one might say , at least it 's equitable.The moral of this story as I read it is that he should have chosen his battle better in the first place , such as becoming a founding developer for Haiku , who likely would have embraced his audio-glitch-free window dragging aspirations as a founding precept of digital justice .
From the Linux perspective , that enterprise machine that did n't see Con 's reported glitches now looks like next year 's entry model ( welcome to Westmere ) .
As a software engineer , one must cast an extremely critical eye on the carrying cost of modifications designed to avert a problem where there is already a great deal of writing on the wall.I 'd say there 's a good chance that same logic applies to fat binaries .</tokentext>
<sentencetext>Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving.
It's not new in any way.There's a view the world according to father knows best.
And I'm sure it's true much of the time or the project would have foundered long ago.I'm also sure that worthwhile patches fall through the cracks, personalities matter in how the decisions are made, and the communication of the decisions made often falls far short of the ideal, wasting valuable time and energy of people who wished to make a positive contribution.That said, Con is not my personal poster child for the unfairly rejected: he seems to understand neither software engineering nor macro economics in the large.
For example, while Con's patches might improve things for the desktop user in the short term, if it comes at the cost of continued enterprise support for the people who continue to develop the kernel, the victory will be short lived; in this scenario, at the end of the day, *everyone* loses.
Fair enough, one might say, at least it's equitable.The moral of this story as I read it is that he should have chosen his battle better in the first place, such as becoming a founding developer for Haiku, who likely would have embraced his audio-glitch-free window dragging aspirations as a founding precept of digital justice.
From the Linux perspective, that enterprise machine that didn't see Con's reported glitches now looks like next year's entry model (welcome to Westmere).
As a software engineer, one must cast an extremely critical eye on the carrying cost of modifications designed to avert a problem where there is already a great deal of writing on the wall.I'd say there's a good chance that same logic applies to fat binaries.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998946</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>ckaminski</author>
	<datestamp>1257412500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No, the major reason is publisher effort, and the inconsistency of video driver support.  And the first will never happen without the latter.  Oh, and most games being written in DirectX.<br><br>Kudos to anyone still writing in OpenGL.</htmltext>
<tokenext>No , that major reason is publisher effort , and the inconsistency of video driver support .
And the first will never happen without the latter .
Oh , and most games being written in DirectX.Kudos to anyone still writing in OpenGL .</tokentext>
<sentencetext>No, that major reason is publisher effort, and the inconsistency of video driver support.
And the first will never happen without the latter.
Oh, and most games being written in DirectX.Kudos to anyone still writing in OpenGL.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998000</id>
	<title>Petty fiefdoms and not invented here...</title>
	<author>sbeckstead</author>
	<datestamp>1257451620000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Petty fiefdoms and not-invented-here syndrome will continue to torpedo any chance for a decent Linux on the desktop.  Until Linux has a single binary and a universal installation strategy, it will continue to be mostly harmless and largely irrelevant to the desktop market at large.</htmltext>
<tokenext>Petty fiefdoms and not invented here syndrome will continue to torpedo any chance for a decent Linux on the desktop .
Until Linux has a single binary and a universal installation strategy they will continue to be mostly harmless and largely irrelevant to the desktop market at large .</tokentext>
<sentencetext>Petty fiefdoms and not invented here syndrome will continue to torpedo any chance for a decent Linux on the desktop.
Until Linux has a single binary and a universal installation strategy they will continue to be mostly harmless and largely irrelevant to the desktop market at large.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998532</id>
	<title>Rejecting solutions to problems</title>
	<author>Anonymous</author>
	<datestamp>1257453780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Actually, not a solution in search of a problem. The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types. You cannot solve that with package management because you need all packages installed at the same time -- i.e., your Emacs binary must run on x86, amd64, sparc, whatever and you simply cannot install three different packages from three different architectures onto the same file server and then mount that share as the /usr/local share of your workstation network, it just does not work because the binaries will conflict. The issue is that this is only a problem if you are wanting to deploy Linux on the desktop in a network installation similar to the old Unix networked workstations of yore, and the desktop is a place where the current Linux developers don't really care (see XKCD #619). It is extremely frustrating to me to see Linux developers reject the experience that we Unix old-timers have regarding how to reduce the management and maintenance costs of large networked deployments of workstations simply because a) it was Not Invented Here (in the insular incestuous Linux world), and b) because they don't care about the workstation in the first place other than perhaps as a stand-alone workstation at home, certainly not corporate deployments of workstations, which bore them utterly.
<p>
It's the same reason why Android is a user interface disaster compared to the iPhone and Palm Pre -- geeks thinking like geeks, instead of geeks thinking like users. The package management thing is a hack, a hack which is useful only on stand-alone servers or stand-alone workstations and utterly useless at getting workstation administrative costs down, which requires a networked software installation and where fat binaries mean you install *one* package rather than needing several different filesystem shares with multiple package installations that are largely identical. But workstation administration costs, while a concern for users of Linux, aren't a concern for core Linux developers because they don't know, understand, or care about the workstation other than their personal development machine at home. So it goes.</p></htmltext>
<tokenext>Actually , not a solution in search of a problem .
The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types .
You can not solve that with package management because you need all packages installed at the same time -- i.e. , your Emacs binary must run on x86 , amd64 , sparc , whatever and you simply can not install three different packages from three different architectures onto the same file server and then mount that share as the /usr/local share of your workstation network , it just does not work because the binaries will conflict .
The issue is that this is only a problem if you are wanting to deploy Linux on the desktop in a network installation similar to the old Unix networked workstations of yore , and the desktop is a place where the current Linux developers do n't really care ( see XKCD # 619 ) .
It is extremely frustrating to me to see Linux developers reject the experience that we Unix old-timers have regarding how to reduce the management and maintenance costs of large networked deployments of workstations simply because a ) it was Not Invented Here ( in the insular incestuous Linux world ) , and b ) because they do n't care about the workstation in the first place other than perhaps as a stand-alone workstation at home , certainly not corporate deployments of workstations , which bore them utterly .
It 's the same reason why Android is a user interface disaster compared to the iPhone and Palm Pre -- geeks thinking like geeks , instead of geeks thinking like users .
The package management thing is a hack , a hack which is useful only on stand-alone servers or stand-alone workstations and utterly useless at getting workstation administrative costs down , which requires a networked software installation and where fat binaries mean you install * one * package rather than needing several different filesystem shares with multiple package installations that are largely identical .
But workstation administration costs , while a concern for users of Linux , are n't a concern for core Linux developers because they do n't know , understand , or care about the workstation other than their personal development machine at home .
So it goes .</tokentext>
<sentencetext>Actually, not a solution in search of a problem.
The fundamental problem is allowing program installation on a shared disk for use by networked workstations despite the various systems using that disk being of varying types.
You cannot solve that with package management because you need all packages installed at the same time -- i.e., your Emacs binary must run on x86, amd64, sparc, whatever and you simply cannot install three different packages from three different architectures onto the same file server and then mount that share as the /usr/local share of your workstation network, it just does not work because the binaries will conflict.
The issue is that this is only a problem if you are wanting to deploy Linux on the desktop in a network installation similar to the old Unix networked workstations of yore, and the desktop is a place where the current Linux developers don't really care (see XKCD #619).
It is extremely frustrating to me to see Linux developers reject the experience that we Unix old-timers have regarding how to reduce the management and maintenance costs of large networked deployments of workstations simply because a) it was Not Invented Here (in the insular incestuous Linux world), and b) because they don't care about the workstation in the first place other than perhaps as a stand-alone workstation at home, certainly not corporate deployments of workstations, which bore them utterly.
It's the same reason why Android is a user interface disaster compared to the iPhone and Palm Pre -- geeks thinking like geeks, instead of geeks thinking like users.
The package management thing is a hack, a hack which is useful only on stand-alone servers or stand-alone workstations and utterly useless at getting workstation administrative costs down, which requires a networked software installation and where fat binaries mean you install *one* package rather than needing several different filesystem shares with multiple package installations that are largely identical.
But workstation administration costs, while a concern for users of Linux, aren't a concern for core Linux developers because they don't know, understand, or care about the workstation other than their personal development machine at home.
So it goes.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606</parent>
</comment>
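The shared-disk conflict described above comes down to a single header field: every ELF binary is stamped with one architecture tag (e_machine), so one path on the file server can only ever hold one architecture's build. A minimal, illustrative Python sketch using hand-built headers rather than real binaries (the offset and constants come from the ELF specification; `fake_elf` is invented for the demo):

```python
import struct

# Why one shared /usr/local cannot serve mixed-architecture workstations:
# each ELF binary records a single e_machine value at byte offset 18 of its
# header (little-endian u16), so one path slot holds one architecture.
EM_NAMES = {2: "SPARC", 3: "i386", 40: "ARM", 62: "x86_64"}

def elf_machine(header: bytes) -> str:
    """Return the architecture name recorded in an ELF header."""
    assert header[:4] == b"\x7fELF", "not an ELF file"
    (e_machine,) = struct.unpack_from("<H", header, 18)
    return EM_NAMES.get(e_machine, f"unknown({e_machine})")

def fake_elf(e_machine: int) -> bytes:
    """Build just enough of a fake ELF header for this demo."""
    hdr = bytearray(64)
    hdr[0:4] = b"\x7fELF"
    struct.pack_into("<H", hdr, 18, e_machine)
    return bytes(hdr)

# Two builds of the same 'emacs' binary competing for one path on the share:
print(elf_machine(fake_elf(62)))  # x86_64
print(elf_machine(fake_elf(2)))   # SPARC
```

A FatELF-style container sidesteps the conflict by letting one file carry several such records at once.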
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</id>
	<title>Kind of broken by design</title>
	<author>bcmm</author>
	<datestamp>1257450420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>This idea is kind of broken for Linux. On MacOS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to data. On Linux, with a couple of dozen architectures, executable code *is* going to start to take relevant amounts of space, and the effort involved in preparing them will be nontrivial. If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32 bit) would need to check what archs a binary supported before downloading it, which is about as difficult as choosing which one to download would've been.</htmltext>
<tokenext>This idea is kind of broken for Linux .
On MacOS , with 2 architectures , it makes some sense , since the actual executable code is not huge compared to data .
On Linux , with a couple of dozen architectures , executable code * is * going to start to take relevant amounts of space , and the effort involved in preparing them will be nontrivial .
If this system were adopted , virtually no binaries would be made to support all available architectures , meaning that anyone not on x86 ( 32 bit ) would need to check what archs a binary supported before downloading it , which is about as difficult as choosing which one to download would 've been .</tokentext>
<sentencetext>This idea is kind of broken for Linux.
On MacOS, with 2 architectures, it makes some sense, since the actual executable code is not huge compared to data.
On Linux, with a couple of dozen architectures, executable code *is* going to start to take relevant amounts of space, and the effort involved in preparing them will be nontrivial.
If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32 bit) would need to check what archs a binary supported before downloading it, which is about as difficult as choosing which one to download would've been.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998672</id>
	<title>Silly</title>
	<author>wasabii</author>
	<datestamp>1257454380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This is silly. Nobody even distributes Linux binaries. They distribute Linux packages. Hell, even on Windows, the number of distributed .exe's has gone down. Most things get packaged into MSI. This is fine.</p><p>Maybe what he wants is an easier way for developers to package their stuff for many distros.</p></htmltext>
<tokenext>This is silly .
Nobody even distributes Linux binaries .
They distribute Linux packages .
Hell , even on Windows , the number of distributed .exe 's has gone down .
Most things get packaged into MSI .
This is fine .
Maybe what he wants is an easier way for developers to package their stuff for many distros .</tokentext>
<sentencetext>This is silly.
Nobody even distributes Linux binaries.
They distribute Linux packages.
Hell, even on Windows, the number of distributed .exe's has gone down.
Most things get packaged into MSI.
This is fine.
Maybe what he wants is an easier way for developers to package their stuff for many distros.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998686</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>Ash-Fox</author>
	<datestamp>1257454500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>This guy worked in the closed-source world of video games where it's often not even legal to share your source code (due to middle-ware licensing and trade secrets) and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc).</p></div></blockquote><p>I have built cross-distro binary-only applications before.</p><p>Some notes on doing so:</p><p>Make sure you compile and link against an old version of glibc; this prevents issues of running applications built against newer versions of glibc spitting out "undefined reference errors" on systems with older glibc (much like when you compile a windows program against the winvista platform sdk and try to run that program on XP).</p><p>If you must link against the C++ runtime library (libstdc++), then provide a custom version of it with your software. That's about all you need (much like how many Windows applications come with the msvc runtime dlls they were compiled against).</p><p>This is legal, no source requirement (outside of providing the source to libstdc++ when requested - which wouldn't reveal anything).</p><blockquote><div><p>It's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entire model of gaming development.</p></div></blockquote><p>I honestly don't see how this is a substantial difference from Windows, could you explain it better, please?</p>
	</htmltext>
<tokenext>This guy worked in the closed-source world of video games where it 's often not even legal to share your source code ( due to middle-ware licensing and trade secrets ) and even when it is legal , it 's often not feasible for business or gameplay reasons ( competitive coding advantage , preventing cheating hacks , disallowing " free content " mods , etc ) .
I have built cross-distro binary-only applications before .
Some notes on doing so :
Make sure you compile and link against an old version of glibc ; this prevents issues of running applications built against newer versions of glibc spitting out " undefined reference errors " on systems with older glibc ( much like when you compile a windows program against the winvista platform sdk and try to run that program on XP ) .
If you must link against the C + + runtime library ( libstdc + + ) , then provide a custom version of it with your software .
That 's about all you need ( much like how many Windows applications come with the msvc runtime dlls they were compiled against ) .
This is legal , no source requirement ( outside of providing the source to libstdc + + when requested - which would n't reveal anything ) .
It 's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entire model of gaming development .
I honestly do n't see how this is a substantial difference from Windows , could you explain it better , please ?</tokentext>
<sentencetext>This guy worked in the closed-source world of video games where it's often not even legal to share your source code (due to middle-ware licensing and trade secrets) and even when it is legal, it's often not feasible for business or gameplay reasons (competitive coding advantage, preventing cheating hacks, disallowing "free content" mods, etc).
I have built cross-distro binary-only applications before.
Some notes on doing so:
Make sure you compile and link against an old version of glibc; this prevents issues of running applications built against newer versions of glibc spitting out "undefined reference errors" on systems with older glibc (much like when you compile a windows program against the winvista platform sdk and try to run that program on XP).
If you must link against the C++ runtime library (libstdc++), then provide a custom version of it with your software.
That's about all you need (much like how many Windows applications come with the msvc runtime dlls they were compiled against).
This is legal, no source requirement (outside of providing the source to libstdc++ when requested - which wouldn't reveal anything).
It's exactly this reason that high-end cutting-edge games and other closed-source software will NEVER be viable on Linux unless there are major changes to the entire model of gaming development.
I honestly don't see how this is a substantial difference from Windows, could you explain it better, please?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114</parent>
</comment>
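The advice above about linking against an old glibc can be sanity-checked after the fact: a dynamically linked binary records the glibc symbol versions it needs as strings like GLIBC_2.17, and the highest one bounds the oldest system it will run on. A rough Python sketch, assuming a naive string scan over the file contents rather than a real parse of the ELF version tables (`max_glibc_version` is a hypothetical helper):

```python
import re

# Scan a binary blob for GLIBC_x.y version tags and report the newest one.
# Real tools read the .gnu.version_r section; a string scan approximates it.
def max_glibc_version(blob: bytes) -> tuple:
    tags = re.findall(rb"GLIBC_([0-9.]+)", blob)
    versions = [tuple(int(p) for p in t.split(b".")) for t in tags]
    return max(versions) if versions else ()

# A binary needing these versions will not start on systems older than 2.17,
# which is exactly why the parent advises building against an old glibc.
sample = b"\x00GLIBC_2.2.5\x00GLIBC_2.17\x00GLIBC_2.3.4\x00"
print(max_glibc_version(sample))  # (2, 17)
```

The same idea is what `objdump -T binary | grep GLIBC_` shows interactively.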
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003080</id>
	<title>Re:"That's a stupid idea" vs. "You are stupid"</title>
	<author>Anonymous</author>
	<datestamp>1257443580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>But someone smart will not bother the LKML with a stupid idea, therefore he must be stupid. Simple deduction.</p></htmltext>
<tokenext>But someone smart will not bother the LKML with a stupid idea , therefore he must be stupid .
Simple deduction .</tokentext>
<sentencetext>But someone smart will not bother the LKML with a stupid idea, therefore he must be stupid.
Simple deduction.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003956</id>
	<title>Re:Solution in search of a problem</title>
	<author>Carewolf</author>
	<datestamp>1257503040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>OS X 10.6 includes i386 and x86_64 versions of almost everything. By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64. It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.</p></div></blockquote><p>Ehhmm no.. An IA32 operating system can not run AMD64 processes, it is physically impossible. An AMD64 kernel can however run IA32 processes. No matter how awesome you think Apple are, they are still using the same CPUs, and 64bit mode is simply not available from 32bit mode (what would be the point?). Compatibility mode is however available from 64bit mode.</p>
	</htmltext>
<tokenext>OS X 10.6 includes i386 and x86_64 versions of almost everything .
By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64 .
It runs the i386 kernel by default , but the OS X i386 kernel is capable of running 64 bit processes .
Ehhmm no .. An IA32 operating system can not run AMD64 processes , it is physically impossible .
An AMD64 kernel can however run IA32 processes .
No matter how awesome you think Apple are , they are still using the same CPUs , and 64bit mode is simply not available from 32bit mode ( what would be the point ? ) .
Compatibility mode is however available from 64bit mode .</tokentext>
<sentencetext>OS X 10.6 includes i386 and x86_64 versions of almost everything.
By default it runs the x86_64 versions on compatible CPUs and compiles software as x86_64.
It runs the i386 kernel by default, but the OS X i386 kernel is capable of running 64 bit processes.
Ehhmm no.. An IA32 operating system can not run AMD64 processes, it is physically impossible.
An AMD64 kernel can however run IA32 processes.
No matter how awesome you think Apple are, they are still using the same CPUs, and 64bit mode is simply not available from 32bit mode (what would be the point?).
Compatibility mode is however available from 64bit mode.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998312</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>morgauxo</author>
	<datestamp>1257452820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't have a 64 bit system.  I do have multiple ARM devices and 32 bit systems.</htmltext>
<tokenext>I do n't have a 64 bit system .
I do have multiple ARM devices and 32 bit systems .</tokentext>
<sentencetext>I don't have a 64 bit system.
I do have multiple ARM devices and 32 bit systems.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998702</id>
	<title>Re:Kind of broken by design</title>
	<author>Andy Dodd</author>
	<datestamp>1257454560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's also the fact that the package management systems of most distributions have made architecture variants a non-issue.  They will automatically choose the appropriate package for their architecture.  For x86 vs x86_64, most distros have solved that problem with multilib approaches and multilib-aware package managers.  I know Ubuntu has.</p></htmltext>
<tokenext>There 's also the fact that the package management systems of most distributions have made architecture variants a non-issue .
They will automatically choose the appropriate package for their architecture .
For x86 vs x86_64 , most distros have solved that problem with multilib approaches and multilib-aware package managers .
I know Ubuntu has .</tokentext>
<sentencetext>There's also the fact that the package management systems of most distributions have made architecture variants a non-issue.
They will automatically choose the appropriate package for their architecture.
For x86 vs x86_64, most distros have solved that problem with multilib approaches and multilib-aware package managers.
I know Ubuntu has.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998462</id>
	<title>Re:The wrong Solution to the problem.</title>
	<author>morgauxo</author>
	<datestamp>1257453420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The differences go deeper than just how an RPM or DEB is set up (I assume you mean directory structure).  Actually some well-placed symbolic links can be a workaround for that problem.

The problem is all the libraries which are not binary compatible from version to version.  Even if you re-arrange all the files I'd only give you about 50% odds of being able to run a package from another distro w/o problems.</htmltext>
<tokenext>The differences go deeper than just how an RPM or DEB is set up ( I assume you mean directory structure ) .
Actually some well placed symbolic links can be a workaround for that problem .
The problem is all the libraries which are not binary compatible from version to version .
Even if you re-arrange all the files I 'd only give you about 50 % odds of being able to run a package from another distro w/o problems .
<sentencetext>The differences go deeper than just how an RPM or DEB is set up (I assume you mean directory structure).
Actually some well placed symbolic links can be a workaround for that problem.
The problem is all the libraries which are not binary compatible from version to version.
Even if you re-arrange all the files I'd only give you about 50% odds of being able to run a package from another distro w/o problems.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998964</id>
	<title>Whatever happened to "everything is a file"?</title>
	<author>zooblethorpe</author>
	<datestamp>1257412560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>My objection is that any such hierarchy of data could be stored as files.</p><p>Linux needs tools so that a directory can be manipulated as a file more easily. For instance cp/mv/etc should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default. Then programs are free to treat any point in the hierarchy as a "file". A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself. </p></div> </blockquote><p>I've often wondered about that -- what usefulness is there in trying to mv or cp a populated directory *without* the -r/-a flags?  Is this some Unix appendix, a leftover with no remaining useful function?  Or is there some residual utility in requiring the flags, that I'm simply unaware of?  Seriously, if anyone has any insight, by all means please post it.</p><p>And being able to set a directory itself as executable (such that the contents are run) sounds an awful lot like what Apple's done with their .app packages -- but there I have to wonder if there might not be some security implications.</p><p>Cheers,</p>
	</htmltext>
<tokenext>My objection is that any such hierarchy of data could be stored as files .
Linux needs tools so that a directory can be manipulated as a file more easily .
For instance cp/mv/etc should pretty much act like -r/-a is on all the time , and such recursive operations should be provided by libc and the kernel by default .
Then programs are free to treat any point in the hierarchy as a " file " .
A fat binary would just be a bunch of binaries stuck in the same directory , and you would run it by exec of the directory itself .
I 've often wondered about that -- what usefulness is there in trying to mv or cp a populated directory * without * the -r/-a flags ?
Is this some Unix appendix , a leftover with no remaining useful function ?
Or is there some residual utility in requiring the flags , that I 'm simply unaware of ?
Seriously , if anyone has any insight , by all means please post it .
And being able to set a directory itself as executable ( such that the contents are run ) sounds an awful lot like what Apple 's done with their .app packages -- but there I have to wonder if there might not be some security implications .
Cheers ,</tokentext>
<sentencetext>My objection is that any such hierarchy of data could be stored as files.
Linux needs tools so that a directory can be manipulated as a file more easily.
For instance cp/mv/etc should pretty much act like -r/-a is on all the time, and such recursive operations should be provided by libc and the kernel by default.
Then programs are free to treat any point in the hierarchy as a "file".
A fat binary would just be a bunch of binaries stuck in the same directory, and you would run it by exec of the directory itself.
I've often wondered about that -- what usefulness is there in trying to mv or cp a populated directory *without* the -r/-a flags?
Is this some Unix appendix, a leftover with no remaining useful function?
Or is there some residual utility in requiring the flags, that I'm simply unaware of?
Seriously, if anyone has any insight, by all means please post it.
And being able to set a directory itself as executable (such that the contents are run) sounds an awful lot like what Apple's done with their .app packages -- but there I have to wonder if there might not be some security implications.
Cheers,
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</parent>
</comment>
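The "exec a directory" idea in the comment above can be approximated in user space today: ship one sub-binary per architecture inside a bundle directory, .app-style, and have a tiny dispatcher pick the right one. A hypothetical Python sketch (`pick_binary` and the `myapp.fat` layout are invented for illustration):

```python
import os
from pathlib import Path

# Hypothetical "fat binary as a directory": the bundle holds one sub-binary
# per architecture and a dispatcher selects by machine name.
def pick_binary(bundle: str, machine: str = "") -> Path:
    # Fall back to the running machine's architecture, e.g. 'x86_64', 'aarch64'.
    machine = machine or os.uname().machine
    candidate = Path(bundle) / machine
    if not candidate.exists():
        raise FileNotFoundError(f"{bundle} has no build for {machine}")
    return candidate  # a real dispatcher would os.execv() this path

# Expected layout: myapp.fat/x86_64, myapp.fat/aarch64, ...
```

As the comment notes, making the kernel exec such a directory natively would raise its own security questions; the sketch keeps the dispatch in user space.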
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001370</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Anonymous</author>
	<datestamp>1257423420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving."</p><p>You forgot the other reasons: NIH, jealousy, personal issues, etc.</p></htmltext>
<tokenext>" Things get rejected from the kernel all the time -- because not all things are good , useful , well coded , or solve a problem that needs solving . "
You forgot the other reasons : NIH , jealousy , personal issues , etc .</tokentext>
<sentencetext>"Things get rejected from the kernel all the time -- because not all things are good, useful, well coded, or solve a problem that needs solving."
You forgot the other reasons: NIH, jealousy, personal issues, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998578</id>
	<title>Re:Kind of broken by design</title>
	<author>99BottlesOfBeerInMyF</author>
	<datestamp>1257454020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>On Linux, with a couple of dozen architectures, executable code *is* going to start to take relevant amounts of space, and the effort involved in preparing them will be nontrivial.</p></div><p>If disk space becomes a problem (not likely given how cheap disk is these days) you can always have your package manager or another tool delete unused binary parts, just like OS X users can. As for the difficulty of preparing them, if it is the norm, won't the tools you use to create software quickly automate the process?</p><p><div class="quote"><p>If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32 bit) would need to check what archs a binary supported before downloading it, which is about as difficult as choosing which one to download would've been.</p></div><p>No, the user would just assume everything works, which it should for most people downloading commercial apps where this provides a real advantage. Besides, the workflow of download it and run it and it works or doesn't is understandable to most users. The workflow of trying to figure out what architecture they are using and/or download each version and try it one at a time is a lot harder and more frustrating.</p>
	</htmltext>
<tokenext>On Linux , with a couple of dozen architectures , executable code * is * going to start to take relevant amounts of space , and the effort involved in preparing them will be nontrivial .
If disk space becomes a problem ( not likely given how cheap disk is these days ) you can always have your package manager or another tool delete unused binary parts , just like OS X users can .
As for the difficulty of preparing them , if it is the norm , wo n't the tools you use to create software quickly automate the process ?
If this system were adopted , virtually no binaries would be made to support all available architectures , meaning that anyone not on x86 ( 32 bit ) would need to check what archs a binary supported before downloading it , which is about as difficult as choosing which one to download would 've been .
No , the user would just assume everything works , which it should for most people downloading commercial apps where this provides a real advantage .
Besides , the workflow of download it and run it and it works or does n't is understandable to most users .
The workflow of trying to figure out what architecture they are using and/or download each version and try it one at a time is a lot harder and more frustrating .</tokentext>
<sentencetext>On Linux, withe a couple of dozen architectures, executable code *is* going to start to take relevant amounts of space, and the effort involved in preparing them will be nontrivial.If disk space becomes a problem (not likely given how cheap disk is these days) you can always have your package manager or another tool delete unused binary parts, just like OS X users can.
As for the difficulty of preparing them, if it is the norm, won't the tools you use to create software quickly automate the process?If this system were adopted, virtually no binaries would be made to support all available architectures, meaning that anyone not on x86 (32 bit) would need to check what archs a binary supported before downloading it, which is about as difficult as choosing which one to download would've been.No the user would just assume everything works which it should for most people downloading commercial apps where this provides a real advantage.
Besides, the workflow of download it and run it and it works or doesn't is understandable to most users.
The workflow of trying to figure out what architecture they are using and/or download each version and try it one at a time is a lot harder and more frustrating.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999290</id>
	<title>But it's so much harder to write exploits</title>
	<author>puddles</author>
	<datestamp>1257413820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Now instead of one easy target, your sploits have to work on 32- and 64-bit kernel, as well as on SPARC, ARM, PPC, MIPS.  Where does one find the time?!?</p></htmltext>
<tokenext>Now instead of one easy target , your sploits have to work on 32- and 64-bit kernel , as well as on SPARC , ARM , PPC , MIPS .
Where does one find the time ? ! ?</tokentext>
<sentencetext>Now instead of one easy target, your sploits have to work on 32- and 64-bit kernel, as well as on SPARC, ARM, PPC, MIPS.
Where does one find the time?!?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982</id>
	<title>Re:Kind of broken by design</title>
	<author>Anonymous</author>
	<datestamp>1257451560000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>True, but the ability to handle such things can come in handy.  As an example, suppose you've got a setup where you're running apps off a server.  You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at.  A fat binary is pretty much the only way to solve that problem.</p></htmltext>
<tokenext>True , but the ability to handle such things can come in handy .
As an example , suppose you 've got a setup where you 're running apps off a server .
You 've got several different hardware platforms going , but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at .
A fat binary is pretty much the only way to solve that problem .</tokentext>
<sentencetext>True, but the ability to handle such things can come in handy.
As an example, suppose you've got a setup where you're running apps off a server.
You've got several different hardware platforms going, but you want your users to be able to double click the server hosted apps without worrying about picking the right one for the computer they happen to be sitting at.
A fat binary is pretty much the only way to solve that problem.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997530</id>
	<title>Re:He needs thicker skin</title>
	<author>pak9rabid</author>
	<datestamp>1257449820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Exactly...given enough time, if enough people find fatELF binaries useful, they may just rethink its usefulness in the kernel source tree.</htmltext>
<tokenext>Exactly...given enough time , if enough people find fatELF binaries useful , they may just rethink its usefulness in the kernel source tree .</tokentext>
<sentencetext>Exactly...given enough time, if enough people find fatELF binaries useful, they may just rethink its usefulness in the kernel source tree.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997484</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004258</id>
	<title>Re:Universal binaries? How about universal install</title>
	<author>Anonymous</author>
	<datestamp>1257508200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>Instead of pushing for universal binaries (which means interpreted code across different CPU architectures)</p></div></blockquote><p>No it doesn't.</p><p>Fat binaries are when you compile your program for multiple CPU architectures, and then use a program that puts all those programs into one big ELF file.</p><p>Unlike the normal situation, where you compile your program for multiple CPU architectures, and then use a program that puts all of those into one big ZIP file.</p><p>With fat binaries, the kernel needs to know which parts of the ELF file it's supposed to load. Whereas with ZIP files (or tar.gz or Loki Installer .run), only the installer needs to care about which file is the right one.</p><p>The LKML people apparently believe that the job is best left to the installer.</p>
	</htmltext>
<tokenext>Instead of pushing for universal binaries ( which means interpreted code across different CPU architectures ) No it does n't . Fat binaries are when you compile your program for multiple CPU architectures , and then use a program that puts all those programs into one big ELF file . Unlike the normal situation , where you compile your program for multiple CPU architectures , and then use a program that puts all of those into one big ZIP file . With fat binaries , the kernel needs to know which parts of the ELF file it 's supposed to load .
Whereas with ZIP files ( or tar.gz or Loki Installer .run ) , only the installer needs to care about which file is the right one . The LKML people apparently believe that the job is best left to the installer .</tokentext>
<sentencetext>Instead of pushing for universal binaries (which means interpreted code across different CPU architectures) No it doesn't. Fat binaries are when you compile your program for multiple CPU architectures, and then use a program that puts all those programs into one big ELF file. Unlike the normal situation, where you compile your program for multiple CPU architectures, and then use a program that puts all of those into one big ZIP file. With fat binaries, the kernel needs to know which parts of the ELF file it's supposed to load.
Whereas with ZIP files (or tar.gz or Loki Installer .run), only the installer needs to care about which file is the right one. The LKML people apparently believe that the job is best left to the installer.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998082</parent>
</comment>
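The distinction this comment draws (a kernel-parsed fat container vs. an installer-chosen file) can be sketched in miniature. Below is a toy container format in Python — the magic bytes and record layout are invented for illustration and are not FatELF's actual on-disk format — showing the one operation the loader would need: read an index, then extract the slice matching the running architecture.

```python
import io
import struct

MAGIC = b"FAT!"  # made-up magic number, not FatELF's real one

def pack_fat(blobs):
    """blobs: dict of arch name -> machine code bytes. Returns one container."""
    index = []
    payload = b""
    # header: 4-byte magic + 4-byte count; each record: 16-byte name, offset, size
    base = 4 + 4 + 24 * len(blobs)
    for arch, code in blobs.items():
        index.append((arch.encode().ljust(16, b"\0"), base + len(payload), len(code)))
        payload += code
    out = io.BytesIO()
    out.write(MAGIC)
    out.write(struct.pack("<I", len(blobs)))
    for name, off, size in index:
        out.write(name + struct.pack("<II", off, size))
    out.write(payload)
    return out.getvalue()

def pick(fat, arch):
    """Return the blob matching `arch`, as a loader would."""
    assert fat[:4] == MAGIC
    (count,) = struct.unpack_from("<I", fat, 4)
    for i in range(count):
        name, off, size = struct.unpack_from("<16sII", fat, 8 + 24 * i)
        if name.rstrip(b"\0").decode() == arch:
            return fat[off:off + size]
    raise KeyError(arch)

fat = pack_fat({"x86_64": b"\x90" * 8, "armv7": b"\x00" * 4})
print(len(pick(fat, "armv7")))  # the loader extracts only the matching slice
```

The debated question is only *where* `pick` runs: FatELF put it in the kernel's ELF loader, while the LKML position was that a package manager or installer can do the same selection once, at install time.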
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30013476</id>
	<title>Re:Solution in search of a problem</title>
	<author>Anonymous</author>
	<datestamp>1257596400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"with a 64-bit Intel CPU on a system with a 32-bit Intel CPU"</p><p>Fixed that for ya!</p><p>so let's take a linux example where you have 32bit intel, arm, ppc, sparc and also 64bit variants.  If the original program size was 1MB, you've already got an 8MB executable, and that's just the program executable, what about the libraries it might depend upon? those have to be packaged up into the same package also, so instead of having a 10MB download, you might end up with a 100MB download or similar</p><p>I guess this is why people are against it, because actually, almost nobody needs this fatelf, software distribution with linux is radically different to that of MacOSX or windows, you basically have an internet connected machine, it's installed with a 32bit or 64bit cpu and you download and install packages onto it from whatever type you need.</p><p>the % of people who install, then require a 64bit version, is frighteningly small, only those who upgrade from 32bit to 64bit need to worry, most computers bought now are 64bit (intels in consumer devices). So these people never have to worry about it, those running a 32bit installation who want to take advantage of 64bit, well, it's a bit harder, but pushing the workload of that small % of people onto the great majority of people who DONT NEED IT is just asking for trouble, of course people would say no.</p><p>those people would just say "reinstall with 64bit versions" bingo, problem goes away without troubling the rest of us.</p><p>MacOSX runs like this because they had a transition period and their software distribution is based on DVDs and downloading files from websites, you can't ask the user to select 32 or 64, because they have no idea what that is, so the operating system is installed in 32 or 64 bit and macosx knows which you have, then when you download a program, macosx runs the correct version.</p><p>With linux, your operating system can tell directly which version of the software to download, so does so correctly and the problem never existed in the first place, only for those environments without internet, but in those environments, people are smart enough to figure out what they need to do, so the work is pushed onto the small % of those who can do it.</p><p>So I don't see a need for fatelf, nor a problem with people refusing it</p></htmltext>
<tokenext>" with a 64-bit Intel CPU on a system with a 32-bit Intel CPU " Fixed that for ya ! So let 's take a linux example where you have 32bit intel , arm , ppc , sparc and also 64bit variants .
If the original program size was 1MB , you 've already got an 8MB executable , that 's just the program executable , what about the libraries it might depend upon ?
those have to be packaged up into the same package also , instead of having a 10MB download , you might end up with a 100MB download or similar . I guess this is why people are against it , because actually , almost nobody needs this fatelf , software distribution with linux is radically different to that of MacOSX or windows , you basically have an internet connected machine , it 's installed with a 32bit or 64bit cpu and you download and install packages onto it from whatever type you need . The % of people who install , then require a 64bit version , is frighteningly small , only those who upgrade from 32bit to 64bit need to worry , most computers bought now are 64bit ( intels in consumer devices ) . So these people never have to worry about it , those running a 32bit installation who want to take advantage of 64bit , well , it 's a bit harder , but pushing the workload of that small % of people onto the great majority of people who DONT NEED IT is just asking for trouble , of course people would say no . Those people would just say " reinstall with 64bit versions " bingo , problem goes away without troubling the rest of us . MacOSX runs like this because they had a transition period and their software distribution is based on DVDs and downloading files from websites , you ca n't ask the user to select 32 or 64 , because they have no idea what that is , so the operating system is installed in 32 or 64 bit and macosx knows which you have , then when you download a program , macosx runs the correct version . With linux , your operating system can tell directly which version of the software to download , so does so correctly and the problem never existed in the first place , only for those environments without internet , but in those environments , people are smart enough to figure out what they need to do , so the work is pushed onto the small % of those who can do it . So I do n't see a need for fatelf , nor a problem with people refusing it</tokentext>
<sentencetext>"with a 64-bit Intel CPU on a system with a 32-bit Intel CPU" Fixed that for ya! So let's take a linux example where you have 32bit intel, arm, ppc, sparc and also 64bit variants.
If the original program size was 1MB, you've already got an 8MB executable, that's just the program executable, what about the libraries it might depend upon?
those have to be packaged up into the same package also, instead of having a 10MB download, you might end up with a 100MB download or similar. I guess this is why people are against it, because actually, almost nobody needs this fatelf, software distribution with linux is radically different to that of MacOSX or windows, you basically have an internet connected machine, it's installed with a 32bit or 64bit cpu and you download and install packages onto it from whatever type you need. The % of people who install, then require a 64bit version, is frighteningly small, only those who upgrade from 32bit to 64bit need to worry, most computers bought now are 64bit (intels in consumer devices). So these people never have to worry about it, those running a 32bit installation who want to take advantage of 64bit, well, it's a bit harder, but pushing the workload of that small % of people onto the great majority of people who DONT NEED IT is just asking for trouble, of course people would say no. Those people would just say "reinstall with 64bit versions" bingo, problem goes away without troubling the rest of us. MacOSX runs like this because they had a transition period and their software distribution is based on DVDs and downloading files from websites, you can't ask the user to select 32 or 64, because they have no idea what that is, so the operating system is installed in 32 or 64 bit and macosx knows which you have, then when you download a program, macosx runs the correct version. With linux, your operating system can tell directly which version of the software to download, so does so correctly and the problem never existed in the first place, only for those environments without internet, but in those environments, people are smart enough to figure out what they need to do, so the work is pushed onto the small % of those who can do it. So I don't see a need for fatelf, nor a problem with people refusing it</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998330</id>
	<title>Bloat-ELF</title>
	<author>Anonymous</author>
	<datestamp>1257452880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Someone should tell that guy to stop reinventing the wheel and use Java.<br>Virtual Machine? JIT? no... no... let's distribute EVERY architecture's native code in one big file instead.</p><p>His idea is idiotic and the kernel devs had every right to call him out.<br>Having a thought-out idea doesn't entitle you to JACK CRAP</p></htmltext>
<tokenext>Someone should tell that guy to stop reinventing the wheel and use Java . Virtual Machine ?
JIT ? no... no... let 's distribute EVERY architecture 's native code in one big file instead . His idea is idiotic and the kernel devs had every right to call him out . Having a thought-out idea does n't entitle you to JACK CRAP</tokentext>
<sentencetext>Someone should tell that guy to stop reinventing the wheel and use Java. Virtual Machine?
JIT? no... no... let's distribute EVERY architecture's native code in one big file instead. His idea is idiotic and the kernel devs had every right to call him out. Having a thought-out idea doesn't entitle you to JACK CRAP</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997648</id>
	<title>Rude?</title>
	<author>Profane MuthaFucka</author>
	<datestamp>1257450300000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>What a pussy. Those guys were tough, not rude.</p><p>You want an example of rude? If I wanted a fat executable I would put my COCK into it. Now fuck off.</p><p>See? The LKML guys did not say anything like that. Therefore, we're not impressed by your crying.</p></htmltext>
<tokenext>What a pussy .
Those guys were tough , not rude . You want an example of rude ?
If I wanted a fat executable I would put my COCK into it .
Now fuck off . See ?
The LKML guys did not say anything like that .
Therefore , we 're not impressed by your crying .</tokentext>
<sentencetext>What a pussy.
Those guys were tough, not rude. You want an example of rude?
If I wanted a fat executable I would put my COCK into it.
Now fuck off. See?
The LKML guys did not say anything like that.
Therefore, we're not impressed by your crying.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001500</id>
	<title>As a developer</title>
	<author>Anonymous</author>
	<datestamp>1257424260000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>i dont see anything that would streamline or help me distribute binaries on linux.<br>I would still need to crosscompile. If there were architecture specific hacks i would still need to maintain these just as much... and well, i would be doing exactly the same things i would to create a 32 and 64 bit binary, which are then merged into a fat binary. So, i guess it would help me just need to upload one file?</p><p>It's still not like i can test the 64bit code, unless i had a system to run it on (or 32bit if it was the other way around). And if i had, then i could easily just clone my repo and compile it on both machines.</p><p>What WOULD be nice would be to be able to release some llvm bitcode version, which is compiled instantly at runtime (or at install time). That would be the nice way to ship things (and it would have other nice features).</p></htmltext>
<tokenext>i dont see anything that would streamline or help me distribute binaries on linux . I would still need to crosscompile .
If there was architecture specific hacks i would still need to maintain these just as much... and well , i would be doing exactly the same things i would to create a 32 and 64 bit binary , which are then merged into a fat binary .
So , i guess it would help me just need to upload one file ? It 's still not like i can test the 64bit code , unless i had a system to run it on ( or 32bit if it was the other way around ) .
And if i had , then i could easily just clone my repo and compile it on both machines . What WOULD be nice would be to be able to release some llvm bitcode version , which is compiled instantly at runtime ( or at install time ) .
That would be the nice way to ship things ( and it would have other nice features ) .</tokentext>
<sentencetext>i dont see anything that would streamline or help me distribute binaries on linux. I would still need to crosscompile.
If there was architecture specific hacks i would still need to maintain these just as much... and well, i would be doing exactly the same things i would to create a 32 and 64 bit binary, which are then merged into a fat binary.
So, i guess it would help me just need to upload one file? It's still not like i can test the 64bit code, unless i had a system to run it on (or 32bit if it was the other way around).
And if i had, then i could easily just clone my repo and compile it on both machines. What WOULD be nice would be to be able to release some llvm bitcode version, which is compiled instantly at runtime (or at install time).
That would be the nice way to ship things (and it would have other nice features).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998020</id>
	<title>maybe the idea was just bad...</title>
	<author>xianthax</author>
	<datestamp>1257451680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>maybe its just me but i see 0 advantages for an executable with multiple binaries.</p><p>shouldn't this all be handled by the package manager? isn't including all these binaries just jacking up download sizes for no gain?</p><p>a boot CD that can run on multiple archs is the only real use i see for this, but i would have to think there is a better way to handle that than changing the fundamentals of executables and libraries.</p><p>maybe he received a less than warm reception from other devs because his idea provided virtually no benefit to the end user and required more work by the devs.</p></htmltext>
<tokenext>maybe its just me but i see 0 advantages for an executable with multiple binaries . Should n't this all be handled by the package manager ?
Is n't including all these binaries just jacking up download sizes for no gain ? A boot CD that can run on multiple archs is the only real use i see for this , but i would have to think there is a better way to handle that than changing the fundamentals of executables and libraries . Maybe he received a less than warm reception from other devs because his idea provided virtually no benefit to the end user and required more work by the devs .</tokentext>
<sentencetext>maybe its just me but i see 0 advantages for an executable with multiple binaries. Shouldn't this all be handled by the package manager?
Isn't including all these binaries just jacking up download sizes for no gain? A boot CD that can run on multiple archs is the only real use i see for this, but i would have to think there is a better way to handle that than changing the fundamentals of executables and libraries. Maybe he received a less than warm reception from other devs because his idea provided virtually no benefit to the end user and required more work by the devs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Auroch</author>
	<datestamp>1257451080000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>This in particular seems like a solution in search of a problem to me. Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit. In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually. So to me there doesn't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs.</p></div><p>EXACTLY! We don't want choice, we want it to just work! Damnit, force people to do things the way they ought to do them, don't give them choice, they'll just screw it up. <br> <br>Especially when that choice makes things EASY!</p>
	</htmltext>
<tokenext>This in particular seems like a solution in search of a problem to me .
Especially since on a 64 bit distro pretty much everything , with very few exceptions is 64 bit .
In fact I do n't think 64 bit distributions contain any 32 bit software except for closed source that ca n't be ported , and compatibility libraries for any applications the user would like to install manually .
So to me there does n't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs . EXACTLY !
We do n't want choice , we want it to just work !
Damnit , force people to do things the way they ought to do them , do n't give them choice , they 'll just screw it up .
Especially when that choice makes things EASY !</tokentext>
<sentencetext>This in particular seems like a solution in search of a problem to me.
Especially since on a 64 bit distro pretty much everything, with very few exceptions is 64 bit.
In fact I don't think 64 bit distributions contain any 32 bit software except for closed source that can't be ported, and compatibility libraries for any applications the user would like to install manually.
So to me there doesn't seem to be a point to try to solve a problem that exists less and less as the time passes and proprietary vendors make 64 bit versions of their programs. EXACTLY!
We don't want choice, we want it to just work!
Damnit, force people to do things the way they ought to do them, don't give them choice, they'll just screw it up.
Especially when that choice makes things EASY!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000098</id>
	<title>Re:Wait, what does Con Kolivas have to do with thi</title>
	<author>Gothmolly</author>
	<datestamp>1257417180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Do you work in government ?</p></htmltext>
<tokenext>Do you work in government ?</tokentext>
<sentencetext>Do you work in government ?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004292</id>
	<title>Let's do the accounting here...</title>
	<author>Anonymous</author>
	<datestamp>1257508920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let's do the accounting here.  From Ryan himself there were these groups:</p><p>1) Some got the idea and disagreed,</p><p>2) some didn't seem to hear what I was saying,</p><p>3) and some showed up just to be rude.</p><p>So if #1 were more than the number who liked the idea, the idea is generally unwanted and grounding it isn't wrong.</p><p>NOTHING ELSE MATTERS.</p><p>Who CARES if someone turned up to be rude or just gainsay everything? Ignore them and you still have a reason to fail the project. Group 2 can be construed as "they didn't understand me cos they're thick" but could also be explained by "I didn't sell the idea well". And group 3 seems to be "I'm being unfairly harangued". Guess what, kid? This is the internet. Live with it. People being rude is no reason for the project to fail, so why bring it up? Bring up "stop being rude" under "how to listen to ideas". Not "Why we're pulling the project" because the rudeness shouldn't have had an impact on it and if it did, then you're the problem there.</p></htmltext>
<tokenext>Let 's do the accounting here .
From Ryan himself there were these groups : 1 ) Some got the idea and disagreed , 2 ) some did n't seem to hear what I was saying , 3 ) and some showed up just to be rude . So if # 1 were more than the number who liked the idea , the idea is generally unwanted and grounding it is n't wrong . NOTHING ELSE MATTERS . Who CARES if someone turned up to be rude or just gainsay everything ?
Ignore them and you still have a reason to fail the project .
Group 2 can be construed as " they did n't understand me cos they 're thick " but could also be explained by " I did n't sell the idea well " .
And group 3 seems to be " I 'm being unfairly harangued " .
Guess what , kid ?
This is the internet .
Live with it .
People being rude is no reason for the project to fail , so why bring it up ?
Bring up " stop being rude " under " how to listen to ideas " .
Not " Why we 're pulling the project " because the rudeness should n't have had an impact on it and if it did , then you 're the problem there .</tokentext>
<sentencetext>Let's do the accounting here.
From Ryan himself there were these groups: 1) Some got the idea and disagreed, 2) some didn't seem to hear what I was saying, 3) and some showed up just to be rude. So if #1 were more than the number who liked the idea, the idea is generally unwanted and grounding it isn't wrong. NOTHING ELSE MATTERS. Who CARES if someone turned up to be rude or just gainsay everything?
Ignore them and you still have a reason to fail the project.
Group 2 can be construed as "they didn't understand me cos they're thick" but could also be explained by "I didn't sell the idea well".
And group 3 seems to be "I'm being unfairly harangued".
Guess what, kid?
This is the internet.
Live with it.
People being rude is no reason for the project to fail, so why bring it up?
Bring up "stop being rude" under "how to listen to ideas".
Not "Why we're pulling the project" because the rudeness shouldn't have had an impact on it and if it did, then you're the problem there.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000188</id>
	<title>Re:Structure should be at the filesystem level</title>
	<author>Ambush Commander</author>
	<datestamp>1257417600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You may be interested to know that AFS has implemented <a href="http://docs.openafs.org/Reference/1/fs\_sysname.html" title="openafs.org">a variant of this feature.</a> [openafs.org] The conceit is that filenames can contain a magic string @sys, which gets substituted with the "sysname" of a particular system.  This means if someone publishing software over AFS wants to have multi-platform support, they merely have to set up a directory divided by sysname and have compiled versions of the software for each system type they wish to support.</p></htmltext>
<tokenext>You may be interested to know that AFS has implemented a variant of this feature .
[ openafs.org ] The conceit is that filenames can contain a magic string @ sys , which gets substituted with the " sysname " of a particular system .
This means if someone publishing software over AFS wants to have multi-platform support , they merely have to set up a directory divided by sysname and have compiled versions of the software for each system type they wish to support .</tokentext>
<sentencetext>You may be interested to know that AFS has implemented a variant of this feature.
[openafs.org] The conceit is that filenames can contain a magic string @sys, which gets substituted with the "sysname" of a particular system.
This means if someone publishing software over AFS wants to have multi-platform support, they merely have to set up a directory divided by sysname and have compiled versions of the software for each system type they wish to support.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808</parent>
</comment>
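The @sys mechanism the comment describes amounts to a path rewrite before lookup. A minimal Python sketch of that substitution — the helper name and the default sysname guess are illustrative, not part of AFS; real clients maintain their sysname list via `fs sysname`:

```python
import platform

def resolve_at_sys(path, sysname=None):
    """Expand the AFS-style '@sys' magic path component.

    `sysname` defaults to a rough guess from the local machine; an actual
    AFS client uses its configured sysname (e.g. 'amd64_linux26').
    """
    if sysname is None:
        # crude stand-in for the client's real sysname determination
        sysname = f"{platform.machine()}_{platform.system().lower()}"
    return path.replace("@sys", sysname)

print(resolve_at_sys("/afs/example.org/software/@sys/bin/tool", "amd64_linux26"))
# -> /afs/example.org/software/amd64_linux26/bin/tool
```

The effect is the same per-architecture selection a fat binary provides, but done by the filesystem at lookup time rather than by the kernel's binary loader.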
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842</id>
	<title>a better idea..</title>
	<author>Eravnrekaree</author>
	<datestamp>1257451020000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
	<htmltext><p>Fatelf was never really a great idea in my opinion. Putting two binaries in a file is not a really good way to solve the problem as there are many more variations of CPU type, including all of the x86 variations, than one or two. It would be a better idea to do something similar to the AS/400: include an intermediate form in the file, such as a syntax tree, convert it to native at runtime on the users system, and then store the native code inside the file next to the intermediate code. If the binary is moved to a new system, the native code can be regenerated again from the intermediate code. This does not even require kernel support: the front of the file could put shell code to call the code generator installed on the system, generate the native code, and then run it. This way, things like various x86 extensions can also be supported and so on.</p></htmltext>
<tokenext>Fatelf was never really a great idea in my opinion .
Putting two binaries in a file is not a really good way to solve the problem as there are many more variations of CPU type , including all of the x86 variations , than one or two .
It would be a better idea to do something similar to the AS/400 : include an intermediate form in the file , such as a syntax tree , convert it to native at runtime on the users system , and then store the native code inside the file next to the intermediate code .
If the binary is moved to a new system , the native code can be regenerated again from the intermediate code .
This does not even require kernel support : the front of the file could put shell code to call the code generator installed on the system , generate the native code , and then run it .
This way , things like various x86 extensions can also be supported and so on .</tokentext>
<sentencetext>Fatelf was never really a great idea in my opinion.
Putting two binaries in a file is not a really good way to solve the problem as there are many more variations of CpU type including all of the x86 variation than one or two.
it would be a better idea to do something similar to the AS/400, include, an intermediate form in the file, such as a syntax tree, convert it to native at runtime on the users system, and then store the native code inside the file next to the intermediate code.
if the binary is moved to a new system, the native code can be regenerated again from the intermediate code.
This does not even requite kernel support, the front of the file put shell code to call the code generator installed on the system, and generate the native code, and then run it.
This way, things like various x86 extensions can also be supported and so on.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998310</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>Anonymous</author>
	<datestamp>1257452820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Think IT guy who works on fifteen different architectures, has a "universal" USB stick, and walks up to a random guy in his office without having to figure out what platform the random guy is on in order to fix his problem.  Do you think /he/ would like this?</p><p>If you Linux folks out there want broader acceptance, you have to recognize that your own forking across architecture, distribution, patch level, ABI compatibility, etc., just isn't cutting the mustard with many larger businesses.  Those that do, you will notice, generally standardize on a single platform (distro, arch, etc.), and are just as subject to vendor lock-in as anyone running Microsoft is, if only because of support costs.  Those minor differences in how various distros do package management are huge differences to IT departments who already don't have the bandwidth to meet their customer (business) needs.</p><p>That IT guy I mentioned above doesn't want to take the time to compile that source code or find the right USB stick for the guy's laptop, he just wants it to /work/.</p></htmltext>
<tokenext>Think IT guy who works on fifteen different architectures , has a " universal " USB stick , and walks up to a random guy in his office without having to figure out what platform the random guy is on in order to fix his problem .
Do you think /he/ would like this ?
If you Linux folks out there want broader acceptance , you have to recognize that your own forking across architecture , distribution , patch level , ABI compatibility , etc. , just is n't cutting the mustard with many larger businesses .
Those that do , you will notice , generally standardize on a single platform ( distro , arch , etc. ) , and are just as subject to vendor lock-in as anyone running Microsoft is , if only because of support costs .
Those minor differences in how various distros do package management are huge differences to IT departments who already do n't have the bandwidth to meet their customer ( business ) needs .
That IT guy I mentioned above does n't want to take the time to compile that source code or find the right USB stick for the guy 's laptop , he just wants it to /work/ .</tokentext>
<sentencetext>Think IT guy who works on fifteen different architectures, has a "universal" USB stick, and walks up to a random guy in his office without having to figure out what platform the random guy is on in order to fix his problem.
Do you think /he/ would like this?
If you Linux folks out there want broader acceptance, you have to recognize that your own forking across architecture, distribution, patch level, ABI compatibility, etc., just isn't cutting the mustard with many larger businesses.
Those that do, you will notice, generally standardize on a single platform (distro, arch, etc.), and are just as subject to vendor lock-in as anyone running Microsoft is, if only because of support costs.
Those minor differences in how various distros do package management are huge differences to IT departments who already don't have the bandwidth to meet their customer (business) needs.
That IT guy I mentioned above doesn't want to take the time to compile that source code or find the right USB stick for the guy's laptop, he just wants it to /work/.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004544</id>
	<title>Prove Ulrich wrong</title>
	<author>Anonymous</author>
	<datestamp>1257513300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Prove Ulrich wrong.</p><p>"When you drop an apple it will fall down"</p><p>-- Sir Isaac Newton</p><p>"640k ought to be enough for anybody."</p><p>-- Sir Bill Gates</p><p>(One is right, one is wrong. That one is wrong doesn't make the other wrong.)</p></htmltext>
<tokenext>Prove Ulrich wrong .
" When you drop an apple it will fall down " -- Sir Isaac Newton
" 640k ought to be enough for anybody . " -- Sir Bill Gates
( One is right , one is wrong . That one is wrong does n't make the other wrong . )</tokentext>
<sentencetext>Prove Ulrich wrong.
"When you drop an apple it will fall down" -- Sir Isaac Newton
"640k ought to be enough for anybody." -- Sir Bill Gates
(One is right, one is wrong. That one is wrong doesn't make the other wrong.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000352</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998606</id>
	<title>Re:Story of binary compatibility is short and trag</title>
	<author>jedidiah</author>
	<datestamp>1257454140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...except that guy only has to worry about ONE architecture because of the lock-in nature of commercial desktop software.</p><p>So the moment he leaves the server room he needs only ONE option.</p><p>If he plugged in such a USB drive inside the server room he might get immediately walked out of the building.</p></htmltext>
<tokenext>...except that guy only has to worry about ONE architecture because of the lock-in nature of commercial desktop software .
So the moment he leaves the server room he needs only ONE option .
If he plugged in such a USB drive inside the server room he might get immediately walked out of the building .</tokentext>
<sentencetext>...except that guy only has to worry about ONE architecture because of the lock-in nature of commercial desktop software.
So the moment he leaves the server room he needs only ONE option.
If he plugged in such a USB drive inside the server room he might get immediately walked out of the building.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998310</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999674</id>
	<title>32 bit processes make sense on 32 bit OSs!</title>
	<author>faragon</author>
	<datestamp>1257415380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Running 32-bit processes on a 64-bit OS makes sense, in order to save memory on "normal" software that doesn't have pointer optimizations in mind (e.g. all software that uses pointers instead of indexes).<br>
<br>
In my opinion, not only are 32-bit processes a "must", but having the tools built in 64-bit mode is a huge error. What's the problem with a 64-bit OS running most processes in 32-bit mode? In the end, there are just a few processes that could need more than 2/3GB of memory, and the extra CPU registers don't make the difference, yet (<a href="http://slashdot.org/comments.pl?sid=1344897&amp;cid=29164637" title="slashdot.org">1</a> [slashdot.org], <a href="http://slashdot.org/comments.pl?sid=1344897&amp;cid=29165951" title="slashdot.org">2</a> [slashdot.org]).</htmltext>
<tokenext>Running 32-bit processes on a 64-bit OS makes sense , in order to save memory on " normal " software that does n't have pointer optimizations in mind ( e.g. all software that uses pointers instead of indexes ) .
In my opinion , not only are 32-bit processes a " must " , but having the tools built in 64-bit mode is a huge error .
What 's the problem with a 64-bit OS running most processes in 32-bit mode ?
In the end , there are just a few processes that could need more than 2/3GB of memory , and the extra CPU registers do n't make the difference , yet ( 1 [ slashdot.org ] , 2 [ slashdot.org ] ) .</tokentext>
<sentencetext>Running 32-bit processes on a 64-bit OS makes sense, in order to save memory on "normal" software that doesn't have pointer optimizations in mind (e.g. all software that uses pointers instead of indexes).
In my opinion, not only are 32-bit processes a "must", but having the tools built in 64-bit mode is a huge error.
What's the problem with a 64-bit OS running most processes in 32-bit mode?
In the end, there are just a few processes that could need more than 2/3GB of memory, and the extra CPU registers don't make the difference, yet (1 [slashdot.org], 2 [slashdot.org]).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998400
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000188
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003218
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30010876
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998946
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999528
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999190
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000616
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998082
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004258
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998992
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997484
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997530
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000966
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_59</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000480
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000098
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998532
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001036
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003386
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998312
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003720
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999276
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_54</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999342
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004890
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_56</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997998
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_61</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998174
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001976
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001980
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999674
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007576
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001370
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002698
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998310
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998606
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998908
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998090
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998594
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999056
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000778
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997878
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998868
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002374
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998848
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998066
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001734
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001352
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998330
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999838
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000352
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004544
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998124
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000564
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_55</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998462
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_57</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998538
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003080
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_60</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002034
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30013476
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000680
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004960
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998692
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000578
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_58</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998964
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998746
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999100
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998372
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998702
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003956
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997856
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002146
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998226
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998578
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_05_1735225_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007996
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997484
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997530
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998936
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998082
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004258
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998672
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997686
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997856
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998510
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998702
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001976
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998372
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30010876
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998578
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997982
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000564
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999190
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000480
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997998
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998226
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000352
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004544
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997696
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998124
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998310
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998606
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000616
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998114
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998946
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998692
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998686
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004960
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998848
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998330
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999838
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998194
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998746
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003386
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000680
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000778
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003720
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002374
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003080
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997634
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998090
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998594
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999056
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997878
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002698
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001370
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998312
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999674
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007576
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998066
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30007996
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998992
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997866
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000098
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002146
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997842
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998538
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000966
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999100
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998092
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30002034
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997978
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998462
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998174
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998400
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997606
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001734
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998532
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001036
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001352
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999342
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30004890
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000578
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998348
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999528
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003956
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30013476
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29999276
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998908
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997648
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_05_1735225.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29997808
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998868
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30001980
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.29998964
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30000188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_05_1735225.30003218
</commentlist>
</conversation>
