<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_10_25_0450232</id>
	<title>Ryan Gordon Wants To Bring Universal Binaries To Linux</title>
	<author>timothy</author>
	<datestamp>1256472720000</datestamp>
	<htmltext>wisesifu writes <i>"One of the interesting features of Mac OS X is its 'universal binaries' feature that allows a single binary file to run natively on both PowerPC and Intel x86 platforms. While this comes at the cost of a larger binary file, it's convenient for end users and for software vendors distributing their applications. While Linux has lacked such support for fat binaries, <a href="http://www.phoronix.com/scan.php?page=news_item&amp;px=NzYyNQ">Ryan Gordon has decided this should be changed</a>."</i></htmltext>
<tokentext>wisesifu writes " One of the interesting features of Mac OS X is its 'universal binaries ' feature that allows a single binary file to run natively on both PowerPC and Intel x86 platforms .
While this comes at the cost of a larger binary file , it 's convenient for end users and for software vendors distributing their applications .
While Linux has lacked such support for fat binaries , Ryan Gordon has decided this should be changed .
"</tokentext>
<sentencetext>wisesifu writes "One of the interesting features of Mac OS X is its 'universal binaries' feature that allows a single binary file to run natively on both PowerPC and Intel x86 platforms.
While this comes at the cost of a larger binary file, it's convenient for end users and for software vendors distributing their applications.
While Linux has lacked such support for fat binaries, Ryan Gordon has decided this should be changed.
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864801</id>
	<title>Re:Only useful for non-free applications</title>
	<author>icebraining</author>
	<datestamp>1256488320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Distributions no longer need to have separate downloads for various platforms. Given enough disc space, there's no reason you couldn't have one DVD .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system, doing the right thing at boot time. You can remove all the confusing text from your website about "which installer is right for me?"</p></div></blockquote><p>Yeah. On the other hand, the DVD will only have 1/7 of the apps it has right now, because the other 6/7 will be full of useless binaries.</p>
	</htmltext>
<tokentext>Distributions no longer need to have separate downloads for various platforms .
Given enough disc space , there 's no reason you could n't have one DVD .iso that installs an x86-64 , x86 , PowerPC , SPARC , and MIPS system , doing the right thing at boot time .
You can remove all the confusing text from your website about " which installer is right for me ? "
Yeah .
On the other hand , the DVD will only have 1/7 of the apps it has right now , because the other 6/7 will be full of useless binaries .</tokentext>
<sentencetext>Distributions no longer need to have separate downloads for various platforms.
Given enough disc space, there's no reason you couldn't have one DVD .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system, doing the right thing at boot time.
You can remove all the confusing text from your website about "which installer is right for me?"
Yeah.
On the other hand, the DVD will only have 1/7 of the apps it has right now, because the other 6/7 will be full of useless binaries.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864001</id>
	<title>Universal Source?</title>
	<author>obarthelemy</author>
	<datestamp>1256480400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm already amazed we have a universal x86 binary. With the architectural differences between an Atom and a Core7 or 9... I dare not think of all the inefficiencies this creates.</p><p>Wouldn't it be better to shoot for a Universal Source, with the install step integrating a compile+link step? I know Gentoo does this, but Gentoo is marginal within the marginality that is Linux, on the desktop.</p><p>I'm amazed you can do real-time x86 emulation on non-x86 CPUs, but still can't have a Universal Source.</p></htmltext>
<tokentext>I 'm already amazed we have a universal x86 binary .
With the architectural differences between an Atom and a Core7 or 9... I dare not think of all the inefficiencies this creates .
Would n't it be better to shoot for a Universal Source , with the install step integrating a compile + link step ?
I know Gentoo does this , but Gentoo is marginal within the marginality that is Linux , on the desktop .
I 'm amazed you can do real-time x86 emulation on non-x86 CPUs , but still ca n't have a Universal Source .</tokentext>
<sentencetext>I'm already amazed we have a universal x86 binary.
With the architectural differences between an Atom and a Core7 or 9... I dare not think of all the inefficiencies this creates.
Wouldn't it be better to shoot for a Universal Source, with the install step integrating a compile+link step?
I know Gentoo does this, but Gentoo is marginal within the marginality that is Linux, on the desktop.
I'm amazed you can do real-time x86 emulation on non-x86 CPUs, but still can't have a Universal Source.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867457</id>
	<title>Re:Apple dropped it</title>
	<author>Anonymous</author>
	<datestamp>1256469240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wait, if an application requires a feature/API/library which is only available in the latest version of the OS, and the latest version of the OS is not available for your platform, how, exactly, do you build a Universal binary for a feature or library which doesn't exist for the target platform?</p><p>I'm sure Apple still supports Universal binaries, but I'm pretty certain this must be a problem for new application versions? (Note, I'm not a Mac user or developer, just trying to think logically here)?</p></htmltext>
<tokentext>Wait , if an application requires a feature/API/library which is only available in the latest version of the OS , and the latest version of the OS is not available for your platform , how , exactly , do you build a Universal binary for a feature or library which does n't exist for the target platform ?
I 'm sure Apple still supports Universal binaries , but I 'm pretty certain this must be a problem for new application versions ?
( Note , I 'm not a Mac user or developer , just trying to think logically here ) ?</tokentext>
<sentencetext>Wait, if an an application requires a feature/API/library which is only available in the latest version of the OS, and the latest version of the OS is not available for your platform, how, exactly, do you build a Universal binary for a feature or library which doesn't exist for the target platform?I'm sure Apple still supports Universal binaries, but I'm pretty certain this must a problem for new application versions?
(Note, I'm not a Mac user or developer, just trying to think logically here)?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864245</id>
	<title>Re:Apple dropped it</title>
	<author>Nimey</author>
	<datestamp>1256482620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>However, I noticed that Snow Leopard doesn't install Rosetta by default.  At least this was so when I updated my work Macbook last week.</p><p>Seems a bit silly, since the installer claimed Rosetta took up only about 1.5MB of disk space.</p></htmltext>
<tokentext>However , I noticed that Snow Leopard does n't install Rosetta by default .
At least this was so when I updated my work Macbook last week .
Seems a bit silly , since the installer claimed Rosetta took up only about 1.5MB of disk space .</tokentext>
<sentencetext>However, I noticed that Snow Leopard doesn't install Rosetta by default.
At least this was so when I updated my work Macbook last week.
Seems a bit silly, since the installer claimed Rosetta took up only about 1.5MB of disk space.
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863777</id>
	<title>Re:Gee, just 14 years</title>
	<author>Anonymous</author>
	<datestamp>1256477940000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>NextStep isn't dead, it just got a new name when NeXT told Apple to buy them....</htmltext>
<tokentext>NextStep is n't dead , it just got a new name when NeXT told Apple to buy them... .</tokentext>
<sentencetext>NextStep isnt dead, it just got a new name when Next told Apple to buy them....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29880133</id>
	<title>Re:Unix (OSF) tried it with ANDF</title>
	<author>lennier</author>
	<datestamp>1256569860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It almost seems like we could use LLVM for that nowadays. Hmm.</p></htmltext>
<tokentext>It almost seems like we could use LLVM for that nowadays .
Hmm .</tokentext>
<sentencetext>It almost seems like we could use LLVM for that nowadays.
Hmm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866761</id>
	<title>Re:Apple dropped it</title>
	<author>drsmithy</author>
	<datestamp>1256461380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>personally I wish MSFT would do the same thing.</i>
</p><p>They do.  The difference is that a "generation" to Apple is on the order of 12-18 months, whereas to Microsoft it's more like 7-8 *years*.
</p><p> <i>MSFT lets them stay in the previous century and use bare metal knife switches to turn on the lights.</i>
</p><p>So do most enterprise-level vendors - but Apple is far more interested in forced upgrades and milking its customers.</p></htmltext>
<tokentext>personally I wish MSFT would do the same thing .
They do .
The difference is that a " generation " to Apple is on the order of 12-18 months , whereas to Microsoft it 's more like 7-8 * years * .
MSFT lets them stay in the previous century and use bare metal knife switches to turn on the lights .
So do most enterprise-level vendors - but Apple is far more interested in forced upgrades and milking its customers .</tokentext>
<sentencetext> personally I wish MSFT would do the same thing.
They do.
The difference is that a "generation" to Apple is on the order of 12-18 months, whereas to Microsoft it's more like 7-8 *years*.
MSFT lets them stay in the previous century and use bare metal knife switches to turn on the lights.
So do most enterprise-level vendors - but Apple is far more interested in forced upgrades and milking its customers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864419</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29868163</id>
	<title>This won't fix installation media. Also: LLVM?</title>
	<author>TD-Linux</author>
	<datestamp>1256478300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>First off, I would like to point out that I don't support the idea of making binaries universal by packing them in the same<nobr> <wbr></nobr>.elf. Sure, it would be neat to have a DVD that can install to different architectures, it's currently not possible because the bootloader \_has\_ to be native. You'd probably end up with a bunch of boot floppies for a single installation media. In addition, is there any difference to having a package for each architecture on this DVD rather than universal packages? With separate packages the same amount of DVD space is consumed but far less is consumed on the target hard disk.

<br> <br>In addition, this isn't going to make companies support uncommon architectures any faster. Maintaining multiple packages is really easy, and keeping the code working across different architectures is not so hard. I think it's mostly programmer laziness, not bothering to compile other architecture packages - and there is no reason fatELF is going to decrease this laziness.

<br> <br>If you _really_ want architecture independence (which universal binaries don't really provide, they still only support architectures that the author had in mind), you'll need a recompiler of some sort. LLVM is designed to be suited to this task, so why not use it? Apple already does. Yes, I know it's a bit slower, but a small price to pay if you need your app to run on every system under the sun.

<br> <br>Of course, there is always the option of web apps...</htmltext>
<tokentext>First off , I would like to point out that I do n't support the idea of making binaries universal by packing them in the same .elf .
Sure , it would be neat to have a DVD that can install to different architectures , but it 's currently not possible because the bootloader _has_ to be native .
You 'd probably end up with a bunch of boot floppies for a single installation media .
In addition , is there any difference to having a package for each architecture on this DVD rather than universal packages ?
With separate packages the same amount of DVD space is consumed but far less is consumed on the target hard disk .
In addition , this is n't going to make companies support uncommon architectures any faster .
Maintaining multiple packages is really easy , and keeping the code working across different architectures is not so hard .
I think it 's mostly programmer laziness , not bothering to compile other architecture packages - and there is no reason fatELF is going to decrease this laziness .
If you _really_ want architecture independence ( which universal binaries do n't really provide , they still only support architectures that the author had in mind ) , you 'll need a recompiler of some sort .
LLVM is designed to be suited to this task , so why not use it ?
Apple already does .
Yes , I know it 's a bit slower , but a small price to pay if you need your app to run on every system under the sun .
Of course , there is always the option of web apps.. .</tokentext>
<sentencetext>First off, I would like to point out that I don't support the idea of making binaries universal by packing them in the same .elf.
Sure, it would be neat to have a DVD that can install to different architectures, but it's currently not possible because the bootloader _has_ to be native.
You'd probably end up with a bunch of boot floppies for a single installation media.
In addition, is there any difference to having a package for each architecture on this DVD rather than universal packages?
With separate packages the same amount of DVD space is consumed but far less is consumed on the target hard disk.
In addition, this isn't going to make companies support uncommon architectures any faster.
Maintaining multiple packages is really easy, and keeping the code working across different architectures is not so hard.
I think it's mostly programmer laziness, not bothering to compile other architecture packages - and there is no reason fatELF is going to decrease this laziness.
If you _really_ want architecture independence (which universal binaries don't really provide, they still only support architectures that the author had in mind), you'll need a recompiler of some sort.
LLVM is designed to be suited to this task, so why not use it?
Apple already does.
Yes, I know it's a bit slower, but a small price to pay if you need your app to run on every system under the sun.
Of course, there is always the option of web apps...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866387</id>
	<title>Bandwidth is not unlimited</title>
	<author>tepples</author>
	<datestamp>1256501400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Given enough disc space, there's no reason you couldn't have one DVD .iso</p></div><p>I don't want a DVD image, which would burn through the vast majority of a 5 GB/month transfer allowance. I want a CD image, which burns through one-seventh of that.</p><p><div class="quote"><p>the otherwise unchanged hundreds of megabytes of data.</p></div><p>How is byte-swapped data "otherwise unchanged"?</p><p><div class="quote"><p>One hard drive partition can be booted on different machines with different CPU architectures, for development and experimentation.</p></div><p>Different machines likely have different layouts for the master boot record.</p>
	</htmltext>
<tokentext>Given enough disc space , there 's no reason you could n't have one DVD .iso
I do n't want a DVD image , which would burn through the vast majority of a 5 GB/month transfer allowance .
I want a CD image , which burns through one-seventh of that .
the otherwise unchanged hundreds of megabytes of data .
How is byte-swapped data " otherwise unchanged " ?
One hard drive partition can be booted on different machines with different CPU architectures , for development and experimentation .
Different machines likely have different layouts for the master boot record .</tokentext>
<sentencetext>Given enough disc space, there's no reason you couldn't have one DVD .isoI don't want a DVD image, which would burn through the vast majority of a 5 GB/month transfer allowance.
I want a CD image, which burns through one-seventh of that.the otherwise unchanged hundreds of megabytes of data.How is byte-swapped data "otherwise unchanged"?One hard drive partition can be booted on different machines with different CPU architectures, for development and experimentation.Different machines likely have different layouts for the master boot record.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865939</id>
	<title>What's with the hate?</title>
	<author>IntergalacticWalrus</author>
	<datestamp>1256498220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Fat binaries are awesome, they're what makes Mac OS X the mainstream operating system with the most painless 64-bit transition. Whereas Windows and Linux use messy hacks to allow 32-bit apps to live in a 64-bit environment, in OS X it "just works" because all libraries are multi-architecture in a completely transparent way thanks to fat binaries. It also made the transition from PPC to x86 relatively easy, too.</p><p>I feel sorry for people who believe Linux doesn't need fat binaries. They don't understand all the advantages this system brings (and not just to non-free software).</p></htmltext>
<tokentext>Fat binaries are awesome , they 're what makes Mac OS X the mainstream operating system with the most painless 64-bit transition .
Whereas Windows and Linux use messy hacks to allow 32-bit apps to live in a 64-bit environment , in OS X it " just works " because all libraries are multi-architecture in a completely transparent way thanks to fat binaries .
It also made the transition from PPC to x86 relatively easy , too .
I feel sorry for people who believe Linux does n't need fat binaries .
They do n't understand all the advantages this system brings ( and not just to non-free software ) .</tokentext>
<sentencetext>Fat binaries are awesome, they're what makes Mac OS X the mainstream operating system with the most painless 64-bit transition.
Whereas Windows and Linux use messy hacks to allow 32-bit apps to live in a 64-bit environment, in OS X it "just works" because all libraries are multi-architecture in a completely transparent way thanks to fat binaries.
It also made the transition from PPC to x86 relatively easy, too.
I feel sorry for people who believe Linux doesn't need fat binaries.
They don't understand all the advantages this system brings (and not just to non-free software).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865727</id>
	<title>It's not a bad idea, actually.. or is it?</title>
	<author>tjstork</author>
	<datestamp>1256496420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The benefit would be that it would make data more transportable at first blush.  The problem is clear enough: the whole process of installing an operating system and shipping data to it is a huge waste.  Being able to take a drive and transplant it into a newer machine without having to re-install anything is an absolute time saver.  I'm still loving Linux that it let me do that without too much fallout, but why couldn't I take a brick and put it on a faster processor, or a lower power processor, or in a friend's virtual machine, or anything... I don't need to be married to CPU architecture...</p><p>And that's really where it all falls apart, because you can't possibly ship a computer that has every CPU architecture in every binary...  But maybe you could have a bootstrapper / kernel that always gets all of the possible CPUs, just in case, for enough to be able to boot itself, mount its own file system and get to a network.  Then, the operating system would replace the rest of the binaries with new versions, as part of your transplant process, and your computer would just work.</p></htmltext>
<tokentext>The benefit would be that it would make data more transportable at first blush .
The problem is clear enough : the whole process of installing an operating system and shipping data to it is a huge waste .
Being able to take a drive and transplant it into a newer machine without having to re-install anything is an absolute time saver .
I 'm still loving Linux that it let me do that without too much fallout , but why could n't I take a brick and put it on a faster processor , or a lower power processor , or in a friend 's virtual machine , or anything... I do n't need to be married to CPU architecture...
And that 's really where it all falls apart , because you ca n't possibly ship a computer that has every CPU architecture in every binary...
But maybe you could have a bootstrapper / kernel that always gets all of the possible CPUs , just in case , for enough to be able to boot itself , mount its own file system and get to a network .
Then , the operating system would replace the rest of the binaries with new versions , as part of your transplant process , and your computer would just work .</tokentext>
<sentencetext>The benefit would be that it would make data more transportable at first blush.
The problem is clear enough: the whole process of installing an operating system and shipping data to it is a huge waste.
Being able to take a drive and transplant it into a newer machine without having to re-install anything is an absolute time saver.
I'm still loving Linux that it let me do that without too much fallout, but why couldn't I take a brick and put it on a faster processor, or a lower power processor, or in a friend's virtual machine, or anything... I don't need to be married to CPU architecture...
And that's really where it all falls apart, because you can't possibly ship a computer that has every CPU architecture in every binary...
But maybe you could have a bootstrapper / kernel that always gets all of the possible CPUs, just in case, for enough to be able to boot itself, mount its own file system and get to a network.
Then, the operating system would replace the rest of the binaries with new versions, as part of your transplant process, and your computer would just work.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864323</id>
	<title>we already have this just not binary</title>
	<author>Murdoch5</author>
	<datestamp>1256483340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>We can already do this.  Just run a source-based distro and then you can easily port the code to any target you want.  It's not a hard job or even difficult.    The best case is Gentoo: just change the CHOST and CFLAGS to accept a PPC target and get a cross toolchain running with gcc etc....  Either way the problem is not hard to overcome and it wouldn't be hard to fix.</htmltext>
<tokentext>We can already do this .
Just run a source-based distro and then you can easily port the code to any target you want .
It 's not a hard job or even difficult .
The best case is Gentoo : just change the CHOST and CFLAGS to accept a PPC target and get a cross toolchain running with gcc etc... .
Either way the problem is not hard to overcome and it would n't be hard to fix .</tokentext>
<sentencetext>We can already do this.
Just run a source-based distro and then you can easily port the code to any target you want.
It's not a hard job or even difficult.
The best case is Gentoo: just change the CHOST and CFLAGS to accept a PPC target and get a cross toolchain running with gcc etc....
Either way the problem is not hard to overcome and it wouldn't be hard to fix.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864047</id>
	<title>Re:Only useful for non-free applications</title>
	<author>selven</author>
	<datestamp>1256480940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Non-open-source? That's a pretty convoluted way to say "closed-source".</p></htmltext>
<tokentext>Non-open-source ?
That 's a pretty convoluted way to say " closed-source " .</tokentext>
<sentencetext>Non-open-source?
That's a pretty convoluted way to say "closed-source".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864927</id>
	<title>Why not fat packages?</title>
	<author>Vellmont</author>
	<datestamp>1256489520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The underlying problem is that end users don't want to be bothered by having to know if they need the 64-bit version or the 32-bit version (or rarely some other platform).</p><p>We already have wildly successful packaging systems for Linux that handle this class of problem rather well.  So why not extend the packaging system to support multiple binaries in the same package?  You'd certainly save on HD space.  It also seems a bit cleaner.</p></htmltext>
<tokentext>The underlying problem is that end users do n't want to be bothered by having to know if they need the 64-bit version or the 32-bit version ( or rarely some other platform ) .
We already have wildly successful packaging systems for Linux that handle this class of problem rather well .
So why not extend the packaging system to support multiple binaries in the same package ?
You 'd certainly save on HD space .
It also seems a bit cleaner .</tokentext>
<sentencetext>The underlying problem is that end users don't want to be bothered by having to know if they need the 64 bit version or the 32 bit version (or rarely some other platform).We already have wildly successful packaging systems for linux that handle this class of problem rather well.
So why not extend the packaging system to support multiple binaries in the same package?
You'd certainly save on HD space.
It also seems a bit cleaner.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867085</id>
	<title>Re:Unix (OSF) tried it with ANDF</title>
	<author>eggnoglatte</author>
	<datestamp>1256464560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That is not what the Mac does, though. On MacOS, you literally have the different platform binaries stored in <b>a single</b> file. For example:</p><p># file /Applications/iMovie.app/Contents/MacOS/iMovie<br>/Applications/iMovie.app/Contents/MacOS/iMovie: Mach-O universal binary with 2 architectures<br>/Applications/iMovie.app/Contents/MacOS/iMovie (for architecture ppc): Mach-O executable ppc<br>/Applications/iMovie.app/Contents/MacOS/iMovie (for architecture i386): Mach-O executable i386</p><p>Now, this may not sound super useful for a single machine, but it makes it so easy to share an Application folder on a common file server, for example. Just imagine - PowerPC, Intel 32, Intel 64, all sharing the same mount point and applications with the same path. Brilliant.</p></htmltext>
<tokentext>That is not what the Mac does , though .
On MacOS , you literally have the different platform binaries stored in a single file .
For example : # file /Applications/iMovie.app/Contents/MacOS/iMovie
/Applications/iMovie.app/Contents/MacOS/iMovie : Mach-O universal binary with 2 architectures
/Applications/iMovie.app/Contents/MacOS/iMovie ( for architecture ppc ) : Mach-O executable ppc
/Applications/iMovie.app/Contents/MacOS/iMovie ( for architecture i386 ) : Mach-O executable i386
Now , this may not sound super useful for a single machine , but it makes it so easy to share an Application folder on a common file server , for example .
Just imagine - PowerPC , Intel 32 , Intel 64 , all sharing the same mount point and applications with the same path .
Brilliant .</tokentext>
<sentencetext>That is not what the Mac does, though.
On MacOS, you literally have the different platform binaries stored in a single file.
For example:
# file /Applications/iMovie.app/Contents/MacOS/iMovie
/Applications/iMovie.app/Contents/MacOS/iMovie: Mach-O universal binary with 2 architectures
/Applications/iMovie.app/Contents/MacOS/iMovie (for architecture ppc): Mach-O executable ppc
/Applications/iMovie.app/Contents/MacOS/iMovie (for architecture i386): Mach-O executable i386
Now, this may not sound super useful for a single machine, but it makes it so easy to share an Application folder on a common file server, for example.
Just imagine - PowerPC, Intel 32, Intel 64, all sharing the same mount point and applications with the same path.
Brilliant.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866667</id>
	<title>Re:Linking problems</title>
	<author>Anonymous</author>
	<datestamp>1256503800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think Apple had the right idea with their PEF versioning, which allowed libraries to declare their own compatibility ranges.
</p><p>The idea was each built library advertises the oldest version whose api it supports, and its current version.  When you link against a library for your own build, the result is by declaration of the library implementor compatible with any version in that range.  At runtime, so long as an available implementation's compatibility range overlaps the one you built against, it'll work &mdash; again, by declaration of the library implementor.  You don't even have to know, which helps you and, more to the point, the local admin.
</p><p>So the consequent rule was: when building a library, reset its oldest-supported-version to the current version only when you remove or change the behavior of an existing api &mdash; adding a call or defining new selectors on an existing call was fine &mdash; and otherwise leave it alone.  Once you understood it, it was simple.</p></htmltext>
<tokentext>I think Apple had the right idea with their PEF versioning , which allowed libraries to declare their own compatibility ranges .
The idea was each built library advertises the oldest version whose api it supports , and its current version .
When you link against a library for your own build , the result is by declaration of the library implementor compatible with any version in that range .
At runtime , so long as an available implementation 's compatibility range overlaps the one you built against , it 'll work — again , by declaration of the library implementor .
You do n't even have to know , which helps you and , more to the point , the local admin .
So the consequent rule was : when building a library , reset its oldest-supported-version to the current version only when you remove or change the behavior of an existing api — adding a call or defining new selectors on an existing call was fine — and otherwise leave it alone .
Once you understood it , it was simple .</tokentext>
<sentencetext>I think Apple had the right idea with their PEF versioning, which allowed libraries to declare their own compatibility ranges.
The idea was each built library advertises the oldest version whose api it supports, and its current version.
When you link against a library for your own build, the result is by declaration of the library implementor compatible with any version in that range.
At runtime, so long as an available implementation's compatibility range overlaps the one you built against, it'll work — again, by declaration of the library implementor.
You don't even have to know, which helps you and, more to the point, the local admin.
So the consequent rule was: when building a library, reset its oldest-supported-version to the current version only when you remove or change the behavior of an existing api — adding a call or defining new selectors on an existing call was fine — and otherwise leave it alone.
Once you understood it, it was simple.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863701</parent>
</comment>
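<!--
The overlap rule described in the comment above reduces to a few lines of code. A minimal C sketch, assuming each library build carries a pair (oldest API version it still supports, version it currently implements) and the client records the pair of the library it linked against; the struct and function names here are illustrative, not the actual PEF/CFM data structures.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical version pair: the oldest API version this build still
   supports, and the version it currently implements. */
typedef struct {
    uint32_t oldest_supported;
    uint32_t current;
} version_range;

/* Runtime rule as described above: the installed library is usable so
   long as its compatibility range overlaps the range the client was
   linked against. */
static int ranges_overlap(version_range linked, version_range installed)
{
    return linked.oldest_supported <= installed.current &&
           installed.oldest_supported <= linked.current;
}

int main(void)
{
    version_range linked    = { 3, 5 };  /* built against v5, supports back to v3 */
    version_range installed = { 4, 7 };  /* installed library: v7, supports back to v4 */
    printf("compatible: %d\n", ranges_overlap(linked, installed));  /* prints 1 */
    return 0;
}

The "reset oldest_supported only on a breaking change" rule then makes this overlap test exactly the compatibility question.
-->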
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29870211</id>
	<title>Re:Apple Universal Binary is kinda of a joke.</title>
	<author>Guy Harris</author>
	<datestamp>1256551380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>You are confusing NeXT and Apple's approaches, I think.</p></div><p>You should think differently.  NeXTSTEP introduced a fat binary scheme to allow a single executable file to contain binaries for 68k NeXT boxes and x86 PCs running NeXTSTEP.  OS X picked up that scheme from NeXTSTEP, so NeXT's and Apple's approaches are the same.</p><p><div class="quote"><p>Your code is compiled twice, but it's only linked once.</p></div><p>No, it's compiled N times, and linked N times, once for each instruction set architecture.</p><p><div class="quote"><p>The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files.</p></div><p>If by "data" you mean data in files under, say, Resources in the app bundle, yes, it can be shared.  If by "data" you mean the data segments in the executable, no, it's not shared - a fat binary is just a bunch of Mach-O binaries with a special header wrapped around them, and each Mach-O binary has a full set of text/data/etc. segments.</p><p><div class="quote"><p>To support this on Linux would not require any changes to the kernel, only to the loader (which is a GNU project, and not actually part of Linux).</p></div><p>If by "this" you mean something that looks like OS X fat binaries, you would have to change the kernel to understand fat binary files, running the appropriate executable within that file.</p>
	</htmltext>
<tokentext>You are confusing NeXT and Apple 's approaches , I think .
You should think differently .
NeXTSTEP introduced a fat binary scheme to allow a single executable file to contain binaries for 68k NeXT boxes and x86 PCs running NeXTSTEP .
OS X picked up that scheme from NeXTSTEP , so NeXT 's and Apple 's approaches are the same .
Your code is compiled twice , but it 's only linked once .
No , it 's compiled N times , and linked N times , once for each instruction set architecture .
The PowerPC { 32,64 } and x86 { 32,64 } code all goes in different segments in the binary , but data is shared between all of them , so it takes less space than having 2-4 independent binary files .
If by " data " you mean data in files under , say , Resources in the app bundle , yes , it can be shared .
If by " data " you mean the data segments in the executable , no , it 's not shared - a fat binary is just a bunch of Mach-O binaries with a special header wrapped around them , and each Mach-O binary has a full set of text/data/etc. segments .
To support this on Linux would not require any changes to the kernel , only to the loader ( which is a GNU project , and not actually part of Linux ) .
If by " this " you mean something that looks like OS X fat binaries , you would have to change the kernel to understand fat binary files , running the appropriate executable within that file .</tokentext>
<sentencetext>You are confusing NeXT and Apple's approaches, I think.You should think differently.
NeXTStEP introduced a fat binary scheme to allow a single executable file to contain binaries for 68k NeXT boxes and x86 PC's running NeXTStEP.
OS X picked up that scheme from NeXTStEP, so NeXT's and Apple's approaches are the same.Your code is compiled twice, but it's only linked once.No, it's compiled N times, and linked N times, once for each instruction set architecture.The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files.If by "data" you mean data in files under, say, Resources in the app bundle, yes, it can be shared.
If by "data" you mean the data segments in the executable, no, it's not shared - a fat binary is just a bunch of Mach-O binaries with a special header wrapped around them, and each Mach-O binary has a full set of text/data/etc.
segments.To support this on Linux would not require any changes to the kernel, only to the loader (which is a GNU project, and not actually part of Linux).If by "this" you mean something that looks like OS X fat binaries, you would have to change the kernel to understand fat binary files, running the appropriate executable within that file.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825</parent>
</comment>
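<!--
The "special header wrapped around them" in the comment above is only a few words on disk. A minimal C sketch of the layout, modeled on Apple's <mach-o/fat.h> (FAT_MAGIC and the CPU type values are the real ones; the field types are simplified, and the selection loop illustrates what the kernel's exec path does rather than reproducing its actual code):

#include <stdio.h>
#include <stdint.h>

#define FAT_MAGIC 0xcafebabeU   /* stored big-endian on disk */

struct fat_header {
    uint32_t magic;       /* FAT_MAGIC */
    uint32_t nfat_arch;   /* number of fat_arch records that follow */
};

struct fat_arch {
    uint32_t cputype;     /* e.g. 18 = PowerPC, 7 = i386 */
    uint32_t cpusubtype;
    uint32_t offset;      /* file offset of a complete Mach-O image */
    uint32_t size;        /* size of that image in bytes */
    uint32_t align;       /* required alignment, as a power of two */
};

/* Scan the records and return the offset of the image matching the
   running CPU, or -1 if there is none. Each slice is a complete
   executable with its own text and data segments, which is why only
   the bundle's data files, not the segments, are shared. */
static long find_slice(const struct fat_arch *archs, uint32_t n,
                       uint32_t want_cputype)
{
    for (uint32_t i = 0; i < n; i++)
        if (archs[i].cputype == want_cputype)
            return (long)archs[i].offset;
    return -1;
}

int main(void)
{
    struct fat_arch table[2] = {
        { 18, 0, 4096,   100000, 12 },  /* ppc slice  */
        {  7, 0, 110592,  90000, 12 },  /* i386 slice */
    };
    printf("i386 slice at offset %ld\n", find_slice(table, 2, 7));
    return 0;
}
-->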
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867515</id>
	<title>Re:Unix (OSF) tried it with ANDF</title>
	<author>FrankieBaby1986</author>
	<datestamp>1256470200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Dead on. Somebody please tag this article "JavaJIT"</htmltext>
<tokentext>Dead on .
Somebody please tag this article " JavaJIT "</tokentext>
<sentencetext>Dead on.
Somebody please tag this article "JavaJIT"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863827</id>
	<title>Re:Apple Universal Binary is kinda of a joke.</title>
	<author>Anonymous</author>
	<datestamp>1256478480000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p>OS X's universal applications are ONE SINGLE application, but the executable file itself inside the app - and there is only ONE executable, not two or more - contains code for all architectures. Let me repeat: ONE executable file with all architectures, not several executables. So you're wrong. It's a quite smart solution.</p><p>That said, OS X's universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC. As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.</p></htmltext>
<tokentext>OS X 's universal applications are ONE SINGLE application , but the executable file itself inside the app - and there is only ONE executable , not two or more - contains code for all architectures .
Let me repeat : ONE executable file with all architectures , not several executables .
So you 're wrong .
It 's a quite smart solution .
That said , OS X 's universal files are pretty much on their way out , as Snow Leopard ( 10.6 ) does n't play ball with PPC .
As time goes on and people realize that x86 is a dead horse running , we might however see universal executables again , but then as ARM and x86 .</tokentext>
<sentencetext>OS X' universal applications are ONE SINGLE application, but the executable file itself inside the app - and there is only ONE executable, not two or more - contains code for all architectures.
Let me repeat: ONE executable file with all architectures, not several executables.
So you're wrong.
It's a quite smart solution.
That said, OS X's universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC.
As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867455</id>
	<title>Good intentions, but a waste of time.</title>
	<author>keatonguy</author>
	<datestamp>1256469240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I don't think this is really relevant in an OSS-based platform. Most apps you get for Linux that aren't distributed through your package manager are in source, allowing you to run them through the compiler for whatever architecture you happen to be using, which makes having multi-architecture binaries a moot point. Which is not to say that multi-architecture support is a BAD thing, of course, just that putting it all in one binary is the wrong approach to take.</p><p>I don't know if the average repository stores packages in a particularly wide variety of architectures, but it seems logical to me that <i>that's</i> the place where you put in universal support if it isn't that way already.</p></htmltext>
<tokentext>I do n't think this is really relevant in an OSS-based platform .
Most apps you get for Linux that are n't distributed through your package manager are in source , allowing you to run them through the compiler for whatever architecture you happen to be using , which makes having multi-architecture binaries a moot point .
Which is not to say that multi-architecture support is a BAD thing , of course , just that putting it all in one binary is the wrong approach to take .
I do n't know if the average repository stores packages in a particularly wide variety of architectures , but it seems logical to me that that 's the place where you put in universal support if it is n't that way already .</tokentext>
<sentencetext>I don't think this is really relevant in an OSS-based platform.
Most apps you get for Linux that aren't distributed through your package manager are in source, allowing you to run them through the compiler for whatever architecture you happen to be using, which makes having multi-architecture binaries a moot point.
Which is not to say that multi-architecture support is a BAD thing, of course, just that putting it all in one binary is the wrong approach to take.
I don't know if the average repository stores packages in a particularly wide variety of architectures, but it seems logical to me that that's the place where you put in universal support if it isn't that way already.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864243</id>
	<title>Less crazy than it looks</title>
	<author>Thad Zurich</author>
	<datestamp>1256482560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>My initial thought was that this was insane -- Linux software should only be distributed as source and compiled on the target. Then I actually read the material and realized we were talking about distribution binary Linux installers. Since those pretty much have to be compiled before loading on the target, a multi-platform binary seems to make perfect sense in this context.</htmltext>
<tokentext>My initial thought was that this was insane -- Linux software should only be distributed as source and compiled on the target .
Then I actually read the material and realized we were talking about distribution binary Linux installers .
Since those pretty much have to be compiled before loading on the target , a multi-platform binary seems to make perfect sense in this context .</tokentext>
<sentencetext>My initial thought was that this was insane -- Linux software should only be distributed as source and compiled on the target.
Then I actually read the material and realized we were talking about distribution binary Linux installers.
Since those pretty much have to be compiled before loading on the target, a multi-platform binary seems to make perfect sense in this context.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863901</id>
	<title>Re:Gee, just 14 years</title>
	<author>Hal_Porter</author>
	<datestamp>1256479380000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Nextstep isn't really gone, it just possessed MacOS and now it walks around in its body, a bit like VMS did to Windows.</p></htmltext>
<tokentext>Nextstep is n't really gone , it just possessed MacOS and now it walks around in its body , a bit like VMS did to Windows .</tokentext>
<sentencetext>Nextstep isn't really gone, it just possessed MacOS and now it walks around in its body, a bit like VMS did to Windows.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29871769</id>
	<title>Mac users are stupid</title>
	<author>Anonymous</author>
	<datestamp>1256568660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Proven, once again. One size doesn't fit everybody...</p><p>Fortunately a company doesn't decide <i>our</i> destiny.</p></htmltext>
<tokentext>Proven , once again .
One size does n't fit everybody...
Fortunately a company does n't decide our destiny .</tokentext>
<sentencetext>Proven, once again.
One size doesn't fit everybody...
Fortunately a company doesn't decide our destiny.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864415</id>
	<title>Less Fat, more Unreal</title>
	<author>iamspews</author>
	<datestamp>1256484240000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>This is cool and everything, but I'd rather Ryan spent the time on whatever it takes to get Linux Unreal 3 published.</htmltext>
<tokentext>This is cool and everything , but I 'd rather Ryan spent the time on whatever it takes to get Linux Unreal 3 published .</tokentext>
<sentencetext>This is cool and everything, but I'd rather Ryan spent the time on whatever it takes to get Linux Unreal 3 published.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864369</id>
	<title>He forgot the ARM, z10, m88k CPUs</title>
	<author>Anonymous</author>
	<datestamp>1256483760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There may be more which should be included...</p><p>So... x86-64, x86, PowerPC, SPARC, MIPS, ARM, z10, m88k</p><p>How big are *your* binaries?</p></htmltext>
<tokentext>There may be more which should be included...
So... x86-64 , x86 , PowerPC , SPARC , MIPS , ARM , z10 , m88k
How big are * your * binaries ?</tokentext>
<sentencetext>There may be more which should be included...So... x86-64, x86, PowerPC, SPARC, MIPS, ARM, z10, m88kHow big are *your* binaries?
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863719</id>
	<title>Re:Linking problems</title>
	<author>martin-boundary</author>
	<datestamp>1256477400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><blockquote><div><p>Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?</p></div></blockquote><p>
I think FatELF is too skinny for that. You want SantaELF, which links all those libraries statically in each binary...</p>
	</htmltext>
<tokentext>Could this technology also help binaries to link against multiple versions of standard libraries ( glibc , libstdc + + ) ?
I think FatELF is too skinny for that .
You want SantaELF , which links all those libraries statically in each binary.. .</tokentext>
<sentencetext>  Could this technology also help binaries to link against multiple versions of standard libraries
  (glibc, libstdc++)?
I think FatELF is too skinny for that.
You want SantaELF, which links all those libraries statically in each binary...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864959</id>
	<title>Somebody should tell this guy about ./configure</title>
	<author>dbc</author>
	<datestamp>1256489760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>*sheesh* Just what we need.  A way to distribute stale, broken, un-optimized binaries everywhere all at once.</p></htmltext>
<tokentext>* sheesh * Just what we need .
A way to distribute stale , broken , un-optimized binaries everywhere all at once .</tokentext>
<sentencetext>*sheesh* Just what we need.
A way to distribute stale, broken, un-optimized binaries everywhere all at once.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864615</id>
	<title>I don't get it</title>
	<author>sjames</author>
	<datestamp>1256486400000</datestamp>
	<modclass>None</modclass>
	<modscore>2</modscore>
	<htmltext><p>What's the big benefit again? Instead of the package manager making the decision once at install time and all of the un-needed parts for platforms I'm not using stay on the install disk, now the decision is made each time I run the app and I get to clog my HD (or worse, my SSD) with all of them?</p><p>Now I can have the world's LARGEST hello world program with support for alpha, arm, avr32, blackfin, cris, frv, h8300, ia64, m32r, m68k, m68knommu, mips, parisc, powerpc, ppc, s390, sh, sh64, sparc, sparc64, um, v850, x86, and xtensa?</p><p>I'm guessing if this catches on, the most commonly used program will be 'diet', the program that slims down fat binaries by removing the architectures you will never encounter in a zillion years. (Just what are the odds that I will one day replace my workstation with an s390?)</p><p>If they want to do this, they should do it right and implement something like <a href="http://en.wikipedia.org/wiki/AS400#Instruction_set" title="wikipedia.org">TIMI</a> [wikipedia.org]. Done well, it would mean that an app could run on a platform that didn't even exist when it was shipped (it worked for IBM).</p><p>Beyond the technical advantages of TIMI, it will provide us years of South Park references.</p></htmltext>
<tokentext>What 's the big benefit again ?
Instead of the package manager making the decision once at install time and all of the un-needed parts for platforms I 'm not using stay on the install disk , now the decision is made each time I run the app and I get to clog my HD ( or worse , my SSD ) with all of them ?
Now I can have the world 's LARGEST hello world program with support for alpha , arm , avr32 , blackfin , cris , frv , h8300 , ia64 , m32r , m68k , m68knommu , mips , parisc , powerpc , ppc , s390 , sh , sh64 , sparc , sparc64 , um , v850 , x86 , and xtensa ?
I 'm guessing if this catches on , the most commonly used program will be 'diet ' , the program that slims down fat binaries by removing the architectures you will never encounter in a zillion years .
( Just what are the odds that I will one day replace my workstation with an s390 ? )
If they want to do this , they should do it right and implement something like TIMI [ wikipedia.org ] .
Done well , it would mean that an app could run on a platform that did n't even exist when it was shipped ( it worked for IBM ) .
Beyond the technical advantages of TIMI , it will provide us years of South Park references .</tokentext>
<sentencetext>What's the big benefit again?
Instead of the package manager making the decision once at install time and all of the un-needed parts for platforms I'm not using stay on the install disk, now the decision is made each time I run the app and I get to clog my HD (or worse, my SSD) with all of them?
Now I can have the world's LARGEST hello world program with support for alpha, arm, avr32, blackfin, cris, frv, h8300, ia64, m32r, m68k, m68knommu, mips, parisc, powerpc, ppc, s390, sh, sh64, sparc, sparc64, um, v850, x86, and xtensa?
I'm guessing if this catches on, the most commonly used program will be 'diet', the program that slims down fat binaries by removing the architectures you will never encounter in a zillion years.
(Just what are the odds that I will one day replace my workstation with an s390?)
If they want to do this, they should do it right and implement something like TIMI [wikipedia.org].
Done well, it would mean that an app could run on a platform that didn't even exist when it was shipped (it worked for IBM).
Beyond the technical advantages of TIMI, it will provide us years of South Park references.</sentencetext>
</comment>
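<!--
The 'diet' program joked about above is not much code for a container of this shape. A minimal sketch, assuming a hypothetical fat format laid out as a count followed by (cputype, offset, size) records, each pointing at a complete per-architecture image; this illustrates the idea only, is not FatELF's actual on-disk format, and ignores endianness:

/* thin.c: copy one architecture's image out of a hypothetical fat file.
   Usage: thin <fat-file> <cputype> <output-file> */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct rec { uint32_t cputype, offset, size; };

int main(int argc, char **argv)
{
    if (argc != 4) { fprintf(stderr, "usage: thin fat cputype out\n"); return 1; }
    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror(argv[1]); return 1; }

    uint32_t n, want = (uint32_t)strtoul(argv[2], NULL, 0);
    if (fread(&n, sizeof n, 1, in) != 1) { fprintf(stderr, "bad header\n"); return 1; }

    struct rec r, hit = { 0, 0, 0 };
    int found = 0;
    for (uint32_t i = 0; i < n && fread(&r, sizeof r, 1, in) == 1; i++)
        if (r.cputype == want) { hit = r; found = 1; }
    if (!found) { fprintf(stderr, "no such architecture\n"); return 1; }

    /* Copy only the matching image; everything else is the dead weight
       being complained about. */
    if (fseek(in, (long)hit.offset, SEEK_SET) != 0) { perror("fseek"); return 1; }
    FILE *out = fopen(argv[3], "wb");
    if (!out) { perror(argv[3]); return 1; }

    char buf[4096];
    for (uint32_t left = hit.size; left > 0; ) {
        size_t chunk = left < sizeof buf ? left : sizeof buf;
        if (fread(buf, 1, chunk, in) != chunk) { fprintf(stderr, "short read\n"); return 1; }
        fwrite(buf, 1, chunk, out);
        left -= (uint32_t)chunk;
    }
    fclose(out);
    fclose(in);
    return 0;
}
-->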
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864419</id>
	<title>Re:Apple dropped it</title>
	<author>peragrin</author>
	<datestamp>1256484240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Apple does that.  When 10.3 came out Apple stopped installing OS 9 Classic by default as well.  Support backwards compatibility for 2-3 generations and then phase it out.  First phase is simply not installing it by default.  Second phase is not to supply it.   Snow Leopard is the 3rd generation of OS after Rosetta came out: installed by default in Tiger and Leopard, they stopped installing it by default for 10.6.</p><p>personally I wish MSFT would do the same thing.  I get really pissed when my "new application" requires the same installer that Win95 had, and in order to run it I have to reboot into safe mode as my antivirus won't let it run.  Seriously, why does an application built in 2009 still require the Win16 subsystem to run?  Why aren't the coders moving onto new toolkits?  Apple nudges and then pushes programmers forward.  MSFT lets them stay in the previous century and use bare metal knife switches to turn on the lights.</p></htmltext>
<tokenext>Apple does that .
When 10.3 came out , Apple stopped installing OS 9 Classic by default as well .
Support backwards compatibility for 2-3 generations and then phase it out .
First phase is simply not installing it by default .
Second phase is not to supply it .
Snow Leopard is the 3rd generation of OS after Rosetta came out .
Installed by default in Tiger and Leopard , they stopped installing it by default for 10.6 .
Personally I wish MSFT would do the same thing .
I get really pissed when my " new application " requires the same installer that Win95 had , and in order to run it I have to reboot into safe mode as my antivirus wo n't let it run .
Seriously , why does an application built in 2009 still require the Win16 subsystem to run ?
Why are n't the coders moving onto new toolkits ?
Apple nudges and then pushes programmers forward .
MSFT lets them stay in the previous century and use bare-metal knife switches to turn on the lights .</tokentext>
<sentencetext>Apple does that.
When 10.3 came out, Apple stopped installing OS 9 Classic by default as well.
Support backwards compatibility for 2-3 generations and then phase it out.
First phase is simply not installing it by default.
Second phase is not to supply it.
Snow Leopard is the 3rd generation of OS after Rosetta came out.
Installed by default in Tiger and Leopard, they stopped installing it by default for 10.6.
Personally I wish MSFT would do the same thing.
I get really pissed when my "new application" requires the same installer that Win95 had, and in order to run it I have to reboot into safe mode as my antivirus won't let it run.
Seriously, why does an application built in 2009 still require the Win16 subsystem to run?
Why aren't the coders moving onto new toolkits?
Apple nudges and then pushes programmers forward.
MSFT lets them stay in the previous century and use bare-metal knife switches to turn on the lights.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864245</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29870477</id>
	<title>Revolutionary suggestion</title>
	<author>petrus4</author>
	<datestamp>1256554680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In most cases, with Linux, we've got source code.</p><p>So if we want to run programs on a new architecture, maybe we could *gasp* compile from source?</p></htmltext>
<tokenext>In most cases , with Linux , we 've got source code .
So if we want to run programs on a new architecture , maybe we could * gasp * compile from source ?</tokentext>
<sentencetext>In most cases, with Linux, we've got source code.
So if we want to run programs on a new architecture, maybe we could *gasp* compile from source?</sentencetext>
</comment>
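For reference, the source route this comment alludes to is the stock autotools drill (tarball name hypothetical):

    $ tar xzf someapp-1.0.tar.gz && cd someapp-1.0
    $ ./configure && make && sudo make install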
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659</id>
	<title>Linking problems</title>
	<author>Anonymous</author>
	<datestamp>1256476740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext> Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?</htmltext>
<tokenext>Could this technology also help binaries to link against multiple versions of standard libraries ( glibc , libstdc + + ) ?</tokentext>
<sentencetext> Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?</sentencetext>
</comment>
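As a reply further down notes, the answer is apparently no; linking against multiple library versions is the territory of ELF symbol versioning, which glibc already uses to carry several ABI revisions inside one library, independent of fat binaries. A quick way to see it (the library path varies by distro):

    $ objdump -T /lib/libc.so.6 | grep -w memcpy
    # each matching line ends in a version tag such as GLIBC_2.0; the dynamic
    # linker binds a program to the specific version it was linked against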
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865599</id>
	<title>VM</title>
	<author>shyisc</author>
	<datestamp>1256495460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Isn't this kind of thing what VMs are for? The kernel could have a VM, and run all binary executables that are compiled for the VM inside the VM. That way you can get real compile-once-run-everywhere, and with no bloating of the size of executables. It should then also be possible to port the VM to other OSs.</htmltext>
<tokenext>Is n't this kind of thing what VMs are for ?
The kernel could have a VM , and run all binary executables that are compiled for the VM inside the VM .
That way you can get real compile-once-run-everywhere , and with no bloating of the size of executables .
It should then also be possible to port the VM to other OSs .</tokentext>
<sentencetext>Isn't this kind of thing what VMs are for?
The kernel could have a VM, and run all binary executables that are compiled for the VM inside the VM.
That way you can get real compile-once-run-everywhere, and with no bloating of the size of executables.
It should then also be possible to port the VM to other OSs.</sentencetext>
</comment>
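This is essentially the Java model, and the kernel half of the idea already exists in Linux as binfmt_misc, which can route a recognized file format to an interpreter or VM. The classic illustration, assuming a JDK is installed:

    $ javac Hello.java    # emits architecture-neutral bytecode in Hello.class
    $ java Hello          # the same .class file runs on x86, PowerPC, ARM, ...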
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865657</id>
	<title>Dynamic binary translation is a better method.</title>
	<author>non-e-moose</author>
	<datestamp>1256495940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>While fat binaries are one approach to run applications which are binary-only on Linux, a much better way is to use binary translation.  A fat-binary approach would require application vendors to qualify both versions, which means approximately twice the cost.  Translators can be developed by 3rd parties.  There are a lot of commercial-grade binary translators and binary optimizers that have shipped over the years.  Tandem, Digital, Transmeta, Transitive, etc.
The messy parts are getting the OS conversion semantics correct when the source and target OS's are not very similar.  Instruction decode can be a bit tricky, but it is not the development bottleneck.</htmltext>
<tokenext>While fat binaries are one approach to run applications which are binary-only on Linux , a much better way is to use binary translation .
A fat-binary approach would require application vendors to qualify both versions , which means approximately twice the cost .
Translators can be developed by 3rd parties .
There are a lot of commercial-grade binary translators and binary optimizers that have shipped over the years .
Tandem , Digital , Transmeta , Transitive , etc .
The messy parts are getting the OS conversion semantics correct when the source and target OS 's are not very similar .
Instruction decode can be a bit tricky , but it is not the development bottleneck .</tokentext>
<sentencetext>While fat binaries are one approach to run applications which are binary-only on Linux, a much better way is to use binary translation.
A fat-binary approach would require application vendors to qualify both versions, which means approximately twice the cost.
Translators can be developed by 3rd parties.
There are a lot of commercial-grade binary translators and binary optimizers that have shipped over the years.
Tandem, Digital, Transmeta, Transitive, etc.
The messy parts are getting the OS conversion semantics correct when the source and target OS's are not very similar.
Instruction decode can be a bit tricky, but it is not the development bottleneck.</sentencetext>
</comment>
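Off-the-shelf dynamic translation already exists for Linux in QEMU's user-mode emulation, which translates foreign instructions on the fly and passes syscalls through to the host kernel. Assuming qemu-user is installed and the binary is statically linked:

    $ qemu-arm ./hello-arm    # an ARM/Linux binary running on an x86 host

Registered with binfmt_misc, the translation even becomes transparent: ./hello-arm just runs.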
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866129</id>
	<title>Actually yes Linux needs a universal format</title>
	<author>Orion Blastar</author>
	<datestamp>1256499420000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Linux needs to become more like Mac OSX than Windows.</p><p>What I would like to see in Linux in the near future:</p><p>Universal file format for X86, X64, and PowerPC executables that replaces the ELF format (WIZARD format, ELF needs food badly!)</p><p>GNOME and KDE merged into one GUI that emulates both of them, GNIGHT or something.</p><p>Ability for Linux to use Windows based drivers when Linux based drivers do not exist, something better than that NDISwrapper but under a GPL license and built into Linux.</p><p>GNUStep being developed into something that resembles Aqua, Aero, and other GUIs and is backward compatible with the Mac OSX API calls to recompile OSX programs for Linux. Maybe even in the near future run OSX Universal binaries somewhat like WINE runs Windows programs.</p></htmltext>
<tokenext>Linux needs to become more like Mac OSX than Windows .
What I would like to see in Linux in the near future :
Universal file format for X86 , X64 , and PowerPC executables that replaces the ELF format ( WIZARD format , ELF needs food badly ! )
GNOME and KDE merged into one GUI that emulates both of them , GNIGHT or something .
Ability for Linux to use Windows based drivers when Linux based drivers do not exist , something better than that NDISwrapper but under a GPL license and built into Linux .
GNUStep being developed into something that resembles Aqua , Aero , and other GUIs and is backward compatible with the Mac OSX API calls to recompile OSX programs for Linux .
Maybe even in the near future run OSX Universal binaries somewhat like WINE runs Windows programs .</tokentext>
<sentencetext>Linux needs to become more like Mac OSX than Windows.
What I would like to see in Linux in the near future:
Universal file format for X86, X64, and PowerPC executables that replaces the ELF format (WIZARD format, ELF needs food badly!)
GNOME and KDE merged into one GUI that emulates both of them, GNIGHT or something.
Ability for Linux to use Windows based drivers when Linux based drivers do not exist, something better than that NDISwrapper but under a GPL license and built into Linux.
GNUStep being developed into something that resembles Aqua, Aero, and other GUIs and is backward compatible with the Mac OSX API calls to recompile OSX programs for Linux.
Maybe even in the near future run OSX Universal binaries somewhat like WINE runs Windows programs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864045</id>
	<title>Re:Not scalable</title>
	<author>evanbd</author>
	<datestamp>1256480940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.</p><p>This is something you might decide to ignore if you are only supporting two architectures.  Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to.  I really don't think this idea makes sense.</p><p>(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)</p></div><p>As mentioned by the other poster, data portions of the program are shared.  In some cases, that means that data files are shared directly; multiple binaries, one data file.  In other cases (libraries, etc) where the data is embedded in the binary, it simply means that the FatELF binary will compress to produce a combined file that's smaller than n * single architecture size.  (The same is true for packing multiple binaries into one tarball, of course.  Though in the case of several different files in one tarball, the FatELF version may have (slightly) better compression because it puts all the versions of one file right next to each other, rather than grouping all of one architecture together; that makes it more likely that the copies of the same data are within the same encoding block.)</p><p>The only thing wrong with shipping multiple binaries in one package or tarball is that, afaik, none of the major package managers support it.  Sure, you could add support, but he decided this was a better approach.</p><p>In the case of something like Debian, it obviously doesn't make sense to have the package repositories use FatELF binaries, nor to include all possible architectures on one install CD / DVD.  However, it might make sense to include a couple common architectures on a single iso that would work for most people, and have the obscure architectures get their own isos (or use jigdo) like they do now.</p></div>
	</htmltext>
<tokenext>To a first approximation , the size of the binary will increase in proportion to the number of architectures supported .
This is something you might decide to ignore if you are only supporting two architectures .
Debian Lenny supports twelve architectures , and I 've lost count of how many the Linux kernel itself has been ported to .
I really do n't think this idea makes sense .
( Besides , what 's wrong with simply shipping two or more binaries in the same package or tarball ? )
As mentioned by the other poster , data portions of the program are shared .
In some cases , that means that data files are shared directly ; multiple binaries , one data file .
In other cases ( libraries , etc ) where the data is embedded in the binary , it simply means that the FatELF binary will compress to produce a combined file that 's smaller than n * single architecture size .
( The same is true for packing multiple binaries into one tarball , of course .
Though in the case of several different files in one tarball , the FatELF version may have ( slightly ) better compression because it puts all the versions of one file right next to each other , rather than grouping all of one architecture together ; that makes it more likely that the copies of the same data are within the same encoding block . )
The only thing wrong with shipping multiple binaries in one package or tarball is that , afaik , none of the major package managers support it .
Sure , you could add support , but he decided this was a better approach .
In the case of something like Debian , it obviously does n't make sense to have the package repositories use FatELF binaries , nor to include all possible architectures on one install CD / DVD .
However , it might make sense to include a couple common architectures on a single iso that would work for most people , and have the obscure architectures get their own isos ( or use jigdo ) like they do now .</tokentext>
<sentencetext>To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.
This is something you might decide to ignore if you are only supporting two architectures.
Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to.
I really don't think this idea makes sense.
(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)
As mentioned by the other poster, data portions of the program are shared.
In some cases, that means that data files are shared directly; multiple binaries, one data file.
In other cases (libraries, etc) where the data is embedded in the binary, it simply means that the FatELF binary will compress to produce a combined file that's smaller than n * single architecture size.
(The same is true for packing multiple binaries into one tarball, of course.
Though in the case of several different files in one tarball, the FatELF version may have (slightly) better compression because it puts all the versions of one file right next to each other, rather than grouping all of one architecture together; that makes it more likely that the copies of the same data are within the same encoding block.)
The only thing wrong with shipping multiple binaries in one package or tarball is that, afaik, none of the major package managers support it.
Sure, you could add support, but he decided this was a better approach.
In the case of something like Debian, it obviously doesn't make sense to have the package repositories use FatELF binaries, nor to include all possible architectures on one install CD / DVD.
However, it might make sense to include a couple common architectures on a single iso that would work for most people, and have the obscure architectures get their own isos (or use jigdo) like they do now.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815</parent>
</comment>
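The encoding-block claim is easy to test against gzip's 32KB window; the per-architecture files here are hypothetical, and the size gap depends on how much the builds actually share:

    $ cat app.x86 app.ppc lib.x86 lib.ppc | gzip -9 | wc -c   # FatELF-like: versions of each file adjacent
    $ cat app.x86 lib.x86 app.ppc lib.ppc | gzip -9 | wc -c   # tarball-like: grouped by architecture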
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811</id>
	<title>oh boy, just pack all archs on a .deb</title>
	<author>C0vardeAn0nim0</author>
	<datestamp>1256478240000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>You know, just trick the good ol'<nobr> <wbr></nobr>.DEB package format to include several archs, then let dpkg decide which binaries to extract.</p><p>It's not as if in Linux the binaries are one big blob with binaries, libs, images, videos, helpfiles, etc. all distributed as a single "file" which is actually a directory with metadata that the Finder hides as being a "program file".</p><p>Being able to copy an ELF binary from one box to another doesn't guarantee it'll work, especially if it's a GUI app that may require other support files, so fat binaries in Linux would be simply a useless gimmick. Either distribute fat<nobr> <wbr></nobr>.DEBs, or just do the Right Thing(tm): distribute the source.</p></htmltext>
<tokenext>You know , just trick the good ol ' .DEB package format to include several archs , then let dpkg decide which binaries to extract .
It 's not as if in Linux the binaries are one big blob with binaries , libs , images , videos , helpfiles , etc . all distributed as a single " file " which is actually a directory with metadata that the Finder hides as being a " program file " .
Being able to copy an ELF binary from one box to another does n't guarantee it 'll work , especially if it 's a GUI app that may require other support files , so fat binaries in Linux would be simply a useless gimmick .
Either distribute fat .DEBs , or just do the Right Thing ( tm ) : distribute the source .</tokentext>
<sentencetext>You know, just trick the good ol' .DEB package format to include several archs, then let dpkg decide which binaries to extract.
It's not as if in Linux the binaries are one big blob with binaries, libs, images, videos, helpfiles, etc. all distributed as a single "file" which is actually a directory with metadata that the Finder hides as being a "program file".
Being able to copy an ELF binary from one box to another doesn't guarantee it'll work, especially if it's a GUI app that may require other support files, so fat binaries in Linux would be simply a useless gimmick.
Either distribute fat .DEBs, or just do the Right Thing(tm): distribute the source.</sentencetext>
</comment>
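A minimal sketch of the fat-.deb idea, assuming the package ships per-architecture trees and selects one in its maintainer script (every path here is hypothetical):

    #!/bin/sh
    # postinst sketch: link in the binary matching the installed architecture
    arch=$(dpkg --print-architecture)
    ln -sf "/usr/lib/myapp/$arch/myapp" /usr/bin/myapp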
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864229</id>
	<title>Happy users</title>
	<author>Anonymous</author>
	<datestamp>1256482440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Now the 5 Linux PPC users in the world can finally be happy.</p></htmltext>
<tokenext>Now the 5 Linux PPC users in the world can finally be happy .</tokentext>
<sentencetext>Now the 5 Linux PPC users in the world can finally be happy.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864835</id>
	<title>Distribute IR, compile dynamically</title>
	<author>Mike\_K</author>
	<datestamp>1256488620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Why not write in Java or one of the Mono-supported languages and distribute something that will be dynamically compiled on the destination machine. It is fast, convenient and you don't have to have 10 OSs to target them....</p><p>m</p></htmltext>
<tokenext>Why not write in Java or one of the Mono-supported languages and distribute something that will be dynamically compiled on the destination machine .
It is fast , convenient and you do n't have to have 10 OSs to target them ....
m</tokentext>
<sentencetext>Why not write in Java or one of the Mono-supported languages and distribute something that will be dynamically compiled on the destination machine.
It is fast, convenient and you don't have to have 10 OSs to target them....
m</sentencetext>
</comment>
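The Mono flavor of the same idea, for reference (source file hypothetical):

    $ mcs hello.cs        # compiles to CIL bytecode in hello.exe, not native code
    $ mono hello.exe      # JIT-compiled for whatever CPU this machine has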
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825</id>
	<title>Re:Apple Universal Binary is kind of a joke.</title>
	<author>TheRaven64</author>
	<datestamp>1256478420000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext>You are confusing NeXT and Apple's approaches, I think.  Apple puts all of the different architectures in the same file.  Your code is compiled twice, but it's only linked once.  The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files.  To support this on Linux would not require any changes to the kernel, only to the loader (which is a GNU project, and not actually part of Linux).</htmltext>
<tokenext>You are confusing NeXT and Apple 's approaches , I think .
Apple puts all of the different architectures in the same file .
Your code is compiled twice , but it 's only linked once .
The PowerPC { 32,64 } and x86 { 32,64 } code all goes in different segments in the binary , but data is shared between all of them , so it takes less space than having 2-4 independent binary files .
To support this on Linux would not require any changes to the kernel , only to the loader ( which is a GNU project , and not actually part of Linux ) .</tokentext>
<sentencetext>You are confusing NeXT and Apple's approaches, I think.
Apple puts all of the different architectures in the same file.
Your code is compiled twice, but it's only linked once.
The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files.
To support this on Linux would not require any changes to the kernel, only to the loader (which is a GNU project, and not actually part of Linux).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717</parent>
</comment>
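For reference, the glue step on OS X looks like this; Apple's gcc accepts -arch, and lipo merges the single-architecture builds (file names hypothetical):

    $ gcc -arch i386 -o hello.i386 hello.c
    $ gcc -arch ppc  -o hello.ppc  hello.c
    $ lipo -create hello.i386 hello.ppc -output hello
    $ file hello    # reports a Mach-O universal binary with 2 architectures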
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863923</id>
	<title>Linux is fine, but how about other platforms</title>
	<author>Anonymous</author>
	<datestamp>1256479620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I would very much like to know whether this will also support building fat binaries for different operating systems in the future, if the support is added. I would like to build libraries that work on Linux/*BSD/Solaris/OSX/Windows and distribute the binaries even though they are open source. I would also like to load them automatically from a C#/Mono application with P/Invoke.</p><p>Up to now having Windows and OSX libraries is easy, because the naming conventions are different and I can include a Windows binary with<nobr> <wbr></nobr>.dll extension and OSX binary with<nobr> <wbr></nobr>.dylib extension and Mono even handles everything automatically just fine. Problems come with other unices, because they all use ELF and all use<nobr> <wbr></nobr>.so extension. I have to resort to ugly hacks of having a version of the library with a different name for each platform and add a Mono<nobr> <wbr></nobr>.config file to load the correct version depending on the platform.</p><p>What I would like to have is to have a single FatELF<nobr> <wbr></nobr>.so library that would include 32-bit and 64-bit versions for all mentioned platforms, with the correct one loaded by the operating system automatically. The<nobr> <wbr></nobr>.dylib OSX version I use is already made this way and it's not a big deal to distribute 32-bit and 64-bit Windows versions as separate files if that ever is necessary; for now the 32-bit version should work well enough in Windows. However all the other platforms result in millions of versions, each in a separate file, and that just makes me feel dirty.</p><p>I know the model I'm suggesting would result in really big files, but it would be really easy to even automatically strip the useless platforms of it if necessary. It would still make the binary distribution a lot easier.</p></htmltext>
<tokenext>I would very much like to know whether this will also support building fat binaries for different operating systems in the future , if the support is added .
I would like to build libraries that work on Linux/ * BSD/Solaris/OSX/Windows and distribute the binaries even though they are open source .
I would also like to load them automatically from a C # /Mono application with P/Invoke .
Up to now having Windows and OSX libraries is easy , because the naming conventions are different and I can include a Windows binary with .dll extension and OSX binary with .dylib extension and Mono even handles everything automatically just fine .
Problems come with other unices , because they all use ELF and all use .so extension .
I have to resort to ugly hacks of having a version of the library with a different name for each platform and add a Mono .config file to load the correct version depending on the platform .
What I would like to have is to have a single FatELF .so library that would include 32-bit and 64-bit versions for all mentioned platforms , with the correct one loaded by the operating system automatically .
The .dylib OSX version I use is already made this way and it 's not a big deal to distribute 32-bit and 64-bit windows versions as separate files if that ever is necessary , for now 32-bit version should work well enough in windows .
However all the other platforms result in millions of versions , each in a separate file , and that just makes me feel dirty .
I know the model I 'm suggesting would result in really big files , but it would be really easy to even automatically strip the useless platforms of it if necessary .
It would still make the binary distribution a lot easier .</tokentext>
<sentencetext>I would very much like to know whether this will also support building fat binaries for different operating systems in the future, if the support is added.
I would like to build libraries that work on Linux/*BSD/Solaris/OSX/Windows and distribute the binaries even though they are open source.
I would also like to load them automatically from a C#/Mono application with P/Invoke.
Up to now having Windows and OSX libraries is easy, because the naming conventions are different and I can include a Windows binary with .dll extension and OSX binary with .dylib extension and Mono even handles everything automatically just fine.
Problems come with other unices, because they all use ELF and all use .so extension.
I have to resort to ugly hacks of having a version of the library with a different name for each platform and add a Mono .config file to load the correct version depending on the platform.
What I would like to have is to have a single FatELF .so library that would include 32-bit and 64-bit versions for all mentioned platforms, with the correct one loaded by the operating system automatically.
The .dylib OSX version I use is already made this way and it's not a big deal to distribute 32-bit and 64-bit windows versions as separate files if that ever is necessary, for now 32-bit version should work well enough in windows.
However all the other platforms result in millions of versions, each in a separate file, and that just makes me feel dirty.
I know the model I'm suggesting would result in really big files, but it would be really easy to even automatically strip the useless platforms of it if necessary.
It would still make the binary distribution a lot easier.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29871431</id>
	<title>No.</title>
	<author>dmsuperman</author>
	<datestamp>1256566560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Ryan Gordon needs to mind his own business. Keep your binaries and shit in OS X; we have source code to compile from, which means that we can run it on any platform without requiring larger binaries, thanks!</htmltext>
<tokenext>Ryan Gordon needs to mind his own business .
Keep your binaries and shit in OS X ; we have source code to compile from , which means that we can run it on any platform without requiring larger binaries , thanks !</tokentext>
<sentencetext>Ryan Gordon needs to mind his own business.
Keep your binaries and shit in OS X; we have source code to compile from, which means that we can run it on any platform without requiring larger binaries, thanks!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863737</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Anonymous</author>
	<datestamp>1256477640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext>While this is true, of course a lot of free software can run on OS X as well. Compiling this is nearly as easy as on Linux, but it's still quite useful just to download a universal binary of the full application if it's available. Smaller apps aren't a big problem, but for bigger ones it can become an unnecessary hassle. For example, I just had to compile Inkscape from scratch on Snow Leopard and I spent an afternoon tracking down and compiling all the dependencies because the universal binary doesn't currently run on 10.6. I really would have benefited from the universal binary if I wasn't so bleeding edge.</htmltext>
<tokenext>While this is true , of course a lot of free software can run on OS X as well .
Compiling this is nearly as easy as on Linux , but it 's still quite useful just to download a universal binary of the full application if it 's available .
Smaller apps are n't a big problem , but for bigger ones it can become an unnecessary hassle .
For example , I just had to compile Inkscape from scratch on Snow Leopard and I spent an afternoon tracking down and compiling all the dependencies because the universal binary does n't currently run on 10.6 .
I really would have benefited from the universal binary if I was n't so bleeding edge .</tokentext>
<sentencetext>While this is true, of course a lot of free software can run on OS X as well.
Compiling this is nearly as easy as on Linux, but it's still quite useful just to download a universal binary of the full application if it's available.
Smaller apps aren't a big problem, but for bigger ones it can become an unnecessary hassle.
For example, I just had to compile Inkscape from scratch on Snow Leopard and I spent an afternoon tracking down and compiling all the dependencies because the universal binary doesn't currently run on 10.6.
I really would have benefited from the universal binary if I wasn't so bleeding edge.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864791</id>
	<title>Re:Only useful for non-free applications</title>
	<author>99BottlesOfBeerInMyF</author>
	<datestamp>1256488200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>They clearly meant free, not gratis.</p> </div><p>Free means both "gratis" and "libre". So if you're going to quibble, you should say he used "free" as meaning "libre" instead of free as meaning "gratis". </p></div>
	</htmltext>
<tokenext>They clearly meant free , not gratis .
" Free means both " gratis " and " libre " .
So if you 're going to quibble , you should say he used " free " as meaning " libre " instead of free as meaning " gratis " .</tokentext>
<sentencetext>They clearly meant free, not gratis.
"Free means both "gratis" and "libre".
So if you're going to quibble, you should say he used "free" as meaning "libre" instead of free as meaning "gratis". 
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863703</id>
	<title>Re:Linking problems</title>
	<author>Anonymous</author>
	<datestamp>1256477280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I saw this discussed somewhere else and the answer is no. The author didn't write it for that purpose.</p><p>My guess here is that this is something a lot of people really want. A way to offer single binary packages that work on every/most Linux setups. But this will be rejected by the people in control of Linux, because they don't need it.</p></htmltext>
<tokenext>I saw this discussed somewhere else and the answer is no .
The author did n't write it for that purpose .
My guess here is that this is something a lot of people really want .
A way to offer single binary packages that work on every/most Linux setups .
But this will be rejected by the people in control of Linux , because they do n't need it .</tokentext>
<sentencetext>I saw this discussed somewhere else and the answer is no.
The author didn't write it for that purpose.
My guess here is that this is something a lot of people really want.
A way to offer single binary packages that work on every/most Linux setups.
But this will be rejected by the people in control of Linux, because they don't need it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865621</id>
	<title>ELF does this already, doesn't it?</title>
	<author>MostAwesomeDude</author>
	<datestamp>1256495640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Talking to a couple guys learning ELF and Linux' SO loader, I got the impression that ELF already supports these multiple code segments, and that adding in headers to denote the arch is really simple. I'll see if I can get more evidence for this.</p></htmltext>
<tokenext>Talking to a couple guys learning ELF and Linux ' SO loader , I got the impression that ELF already supports these multiple code segments , and that adding in headers to denote the arch is really simple .
I 'll see if I can get more evidence for this .</tokentext>
<sentencetext>Talking to a couple guys learning ELF and Linux' SO loader, I got the impression that ELF already supports these multiple code segments, and that adding in headers to denote the arch is really simple.
I'll see if I can get more evidence for this.</sentencetext>
</comment>
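Easy to check against a real binary: the stock ELF header records exactly one machine type, which is the field FatELF has to work around by prepending its own index of per-architecture records. On a 32-bit x86 box, for example:

    $ readelf -h /bin/ls | grep -E 'Class|Machine'
      Class:                             ELF32
      Machine:                           Intel 80386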
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866201</id>
	<title>Why?</title>
	<author>Hurricane78</author>
	<datestamp>1256499840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just because you can, doesn't mean you should.<br>I think for open software, that does not follow "traditional" models of software distribution, it's a pointless waste of resources.</p><p>Also, it won't affect me anyway, as I'm compiling everything from sources. (Gentoo)</p></htmltext>
<tokenext>Just because you can , does n't mean you should .
I think for open software , that does not follow " traditional " models of software distribution , it 's a pointless waste of resources .
Also , it wo n't affect me anyway , as I 'm compiling everything from sources .
( Gentoo )</tokentext>
<sentencetext>Just because you can, doesn't mean you should.
I think for open software, that does not follow "traditional" models of software distribution, it's a pointless waste of resources.
Also, it won't affect me anyway, as I'm compiling everything from sources.
(Gentoo)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865121</id>
	<title>Didn't we do this before?</title>
	<author>WheelDweller</author>
	<datestamp>1256491500000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>0</modscore>
	<htmltext><p>And wasn't it called Java?</p><p>Are any of you guys old enough to remember CP/M running on the 8086? It was a solid, no-glitch way of running binaries from one OS to another.  And Z80 code seemed as complex to most techs then, as protected-memory schemes do now.</p><p>But they *did* it, and did it well.</p><p>If we are *ever* going to actually have a 'universal binary' we need to make the hardware do the job: not software.</p><p>Case study: 1988, SCI Systems, Huntsville, Alabama.</p><p>Three software guys huddled over their 80186-based site-controller motherboard our company built. These guys were GODS, able to write compilers AND embedded control software, all interrupt-based, pre-emptive long before Linux. All three agreed it wasn't software, this *must* be something in the hardware. There'll be yelling and noise, but let's get a hardware tech here.</p><p>"Neil!" someone said.  And this odd-looking, couldn't-be-nicer guy snagged an oscilloscope cart on his way over. He seemed to already know the question.</p><p>He clipped on the grounds, checked about 4-5 pins and dropped the probe. 'Software problem.' And walked out.  I thought he had to be the most arrogant guy in the room, until I did the math:</p><p>He wrote the schematics. He laid out the parts, he did the prototype, he did the soldering, and all the other masks.  He wouldn't have sent the device to production if the Chip Enable lines weren't working. Because of this, if they're trying to talk to the memory chips (using Intel's STUPID, 16-ways-to-describe-every-location scheme) then it was a software problem.</p><p>The point: hardware guys have more they can cross-check; they can't move on until one level of production checks. Based on that, future levels don't *tend* to need a complete re-fit.  You just can't get that in software: people don't work that way. Software is just too elusive.  I've been saying this since 1978.</p><p>They were trying to work out a 'java' at that time, they're still trying today. But different 'runners' and different code, and we STILL AFTER ALL THESE YEARS HAVE NO REAL PROGRESS. If anything works at all in the customer's hands, it's a surprise.</p><p>Make all processors have unified code, or drop the project. There's SO much more we could be doing, please?</p></htmltext>
<tokenext>And was n't it called Java ?
Are any of you guys old enough to remember CP/M running on the 8086 ?
It was a solid , no-glitch way of running binaries from one OS to another .
And Z80 code seemed as complex to most techs then , as protected-memory schemes do now .
But they * did * it , and did it well .
If we are * ever * going to actually have a 'universal binary ' we need to make the hardware do the job : not software .
Case study : 1988 , SCI Systems , Huntsville , Alabama .
Three software guys huddled over their 80186-based site-controller motherboard our company built .
These guys were GODS , able to write compilers AND embedded control software , all interrupt-based , pre-emptive long before Linux .
All three agreed it was n't software , this * must * be something in the hardware .
There 'll be yelling and noise , but let 's get a hardware tech here .
" Neil ! " someone said .
And this odd-looking , could n't-be-nicer guy snagged an oscilloscope cart on his way over .
He seemed to already know the question .
He clipped on the grounds , checked about 4-5 pins and dropped the probe .
' Software problem . ' And walked out .
I thought he had to be the most arrogant guy in the room , until I did the math :
He wrote the schematics .
He laid out the parts , he did the prototype , he did the soldering , and all the other masks .
He would n't have sent the device to production if the Chip Enable lines were n't working .
Because of this , if they 're trying to talk to the memory chips ( using Intel 's STUPID , 16-ways-to-describe-every-location scheme ) then it was a software problem .
The point : hardware guys have more they can cross-check ; they ca n't move on until one level of production checks .
Based on that , future levels do n't * tend * to need a complete re-fit .
You just ca n't get that in software : people do n't work that way .
Software is just too elusive .
I 've been saying this since 1978 .
They were trying to work out a 'java ' at that time , they 're still trying today .
But different 'runners ' and different code , and we STILL AFTER ALL THESE YEARS HAVE NO REAL PROGRESS .
If anything works at all in the customer 's hands , it 's a surprise .
Make all processors have unified code , or drop the project .
There 's SO much more we could be doing , please ?</tokentext>
<sentencetext>And wasn't it called Java?
Are any of you guys old enough to remember CP/M running on the 8086?
It was a solid, no-glitch way of running binaries from one OS to another.
And Z80 code seemed as complex to most techs then, as protected-memory schemes do now.
But they *did* it, and did it well.
If we are *ever* going to actually have a 'universal binary' we need to make the hardware do the job: not software.
Case study: 1988, SCI Systems, Huntsville, Alabama.
Three software guys huddled over their 80186-based site-controller motherboard our company built.
These guys were GODS, able to write compilers AND embedded control software, all interrupt-based, pre-emptive long before Linux.
All three agreed it wasn't software, this *must* be something in the hardware.
There'll be yelling and noise, but let's get a hardware tech here.
"Neil!" someone said.
And this odd-looking, couldn't-be-nicer guy snagged an oscilloscope cart on his way over.
He seemed to already know the question.
He clipped on the grounds, checked about 4-5 pins and dropped the probe.
'Software problem.' And walked out.
I thought he had to be the most arrogant guy in the room, until I did the math:
He wrote the schematics.
He laid out the parts, he did the prototype, he did the soldering, and all the other masks.
He wouldn't have sent the device to production if the Chip Enable lines weren't working.
Because of this, if they're trying to talk to the memory chips (using Intel's STUPID, 16-ways-to-describe-every-location scheme) then it was a software problem.
The point: hardware guys have more they can cross-check; they can't move on until one level of production checks.
Based on that, future levels don't *tend* to need a complete re-fit.
You just can't get that in software: people don't work that way.
Software is just too elusive.
I've been saying this since 1978.
They were trying to work out a 'java' at that time, they're still trying today.
But different 'runners' and different code, and we STILL AFTER ALL THESE YEARS HAVE NO REAL PROGRESS.
If anything works at all in the customer's hands, it's a surprise.
Make all processors have unified code, or drop the project.
There's SO much more we could be doing, please?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717</id>
	<title>Apple Universal Binary is kind of a joke.</title>
	<author>Anonymous</author>
	<datestamp>1256477400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The binary is compiled twice.  The way OS X packages its applications is that the application icon that you click on isn't a file but a folder with a predefined structure.  So there is a PPC and an Intel port of the executable.</p><p>Linux doesn't handle applications that way. That means you will need to alter the kernel and create new files that will no longer be upward compatible with the old version, or just do something really simple; however, the simple solution is just as tricky as there are no standardized installers for Linux.</p><p>The file system<nobr> <wbr></nobr>/usr/bin<nobr> <wbr></nobr>/usr/local/bin<nobr> <wbr></nobr>/usr/lib<br>etc...<br>will have sub-directories for each platform there is a compiled binary for,<br>e.g.<nobr> <wbr></nobr>/usr/bin/x86<nobr> <wbr></nobr>/usr/bin/Amd64<nobr> <wbr></nobr>/usr/bin/Sparc<br>etc...<br>Now when the installer installs the software it puts the platform-particular binary there; a script installed in the root directory checks the platform and goes to its platform's version.</p></htmltext>
<tokenext>The binary is compiled twice .
The way OS X packages its applications is that the application icon that you click on is n't a file but a folder with a predefined structure .
So there is a PPC and an Intel port of the executable .
Linux does n't handle applications that way .
That means you will need to alter the kernel and create new files that will no longer be upward compatible with the old version , or just do something really simple ; however , the simple solution is just as tricky as there are no standardized installers for Linux .
The file system ( /usr/bin , /usr/local/bin , /usr/lib , etc . ) will have sub-directories for each platform there is a compiled binary for , e.g . /usr/bin/x86 , /usr/bin/Amd64 , /usr/bin/Sparc , etc .
Now when the installer installs the software it puts the platform-particular binary there ; a script installed in the root directory checks the platform and goes to its platform 's version .</tokentext>
<sentencetext>The binary is compiled twice.
The way OS X packages its applications is that the application icon that you click on isn't a file but a folder with a predefined structure.
So there is a PPC and an Intel port of the executable.
Linux doesn't handle applications that way.
That means you will need to alter the kernel and create new files that will no longer be upward compatible with the old version, or just do something really simple; however, the simple solution is just as tricky as there are no standardized installers for Linux.
The file system (/usr/bin, /usr/local/bin, /usr/lib, etc.) will have sub-directories for each platform there is a compiled binary for, e.g. /usr/bin/x86, /usr/bin/Amd64, /usr/bin/Sparc, etc.
Now when the installer installs the software it puts the platform-particular binary there; a script installed in the root directory checks the platform and goes to its platform's version.</sentencetext>
</comment>
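A minimal sketch of that dispatch script, reusing the comment's hypothetical directory names (a real version would also have to cope with libraries and support files):

    #!/bin/sh
    # /usr/bin/myapp: run the build matching this machine
    case "$(uname -m)" in
        x86_64) exec /usr/bin/Amd64/myapp "$@" ;;
        i?86)   exec /usr/bin/x86/myapp "$@" ;;
        sparc*) exec /usr/bin/Sparc/myapp "$@" ;;
        *)      echo "no build for $(uname -m)" >&2; exit 1 ;;
    esac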
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29869255</id>
	<title>It's called JAVA</title>
	<author>Anonymous</author>
	<datestamp>1256495340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>I thought this was the whole point of Java.  Why are we going back and trying to solve a problem that was solved FOURTEEN YEARS AGO?</p></htmltext>
<tokenext>I thought this was the whole point of Java .
Why are we going back and trying to solve a problem that was solved FOURTEEN YEARS AGO ?</tokentext>
<sentencetext>I thought this was the whole point of Java.
Why are we going back and trying to solve a problem that was solved FOURTEEN YEARS AGO?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679</id>
	<title>Only useful for non-free applications</title>
	<author>dingen</author>
	<datestamp>1256477160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>If you have access to the source, you can always compile a version for your platform. The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.</p><p>Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac. Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.</p></htmltext>
<tokenext>If you have access to the source , you can always compile a version for your platform .
The 'fat binary ' principle is only useful for non-free applications , where the end-user ca n't compile the application himself and has to use the binary provided by the vendor .
Since most apps for Linux are free and the source is available , this feature is n't as useful as it is on the Mac .
Not that it should n't be created , but it makes sense to me why it took a while before someone started developing this for Linux .</tokentext>
<sentencetext>If you have access to the source, you can always compile a version for your platform.
The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor.
Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac.
Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864155</id>
	<title>With open source code, no need for one binary.</title>
	<author>Cyberwasteland</author>
	<datestamp>1256481900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>People have been asking for a universal binary or some sort of Universal Package Manager (UPM, that could work) for linux for ages.
I think the main reason why there isn't one yet is the FLOSS nature of Linux.
When the source code is open, there is no need for one single binary; it's up to the distro (or user) to make it into a fitting binary package. For that reason I think it won't change much if there were one single binary.
Also I've always thought that more applications should just use<nobr> <wbr></nobr>.bin files, compatible for all.</htmltext>
<tokenext>People have been asking for a universal binary or some sort of Universal Package Manager ( UPM , that could work ) for linux for ages .
I think the main reason why there is n't one yet is the FLOSS nature of Linux .
When the source code is open , there is no need for one single binary ; it 's up to the distro ( or user ) to make it into a fitting binary package .
For that reason I think it wo n't change much if there were one single binary .
Also I 've always thought that more applications should just use .bin files , compatible for all .</tokentext>
<sentencetext>People have been asking for a universal binary or some sort of Universal Package Manager (UPM, that could work) for linux for ages.
I think the main reason why there isn't one yet is the FLOSS nature of Linux.
When the source code is open, there is no need for one single binary; it's up to the distro (or user) to make it into a fitting binary package.
For that reason I think it won't change much if there were one single binary.
Also I've always thought that more applications should just use .bin files, compatible for all.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867409</id>
	<title>Re:oh boy, just pack all archs on a .deb</title>
	<author>Rob Riggs</author>
	<datestamp>1256468700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That was my thought as well.  Well, as a RedHat/Fedora user, I thought you should be able to do this with RPM.  But same idea.  This is a problem best solved by a package manager and not a binary format.</p><p>However, after thinking about this further, it's a problem that doesn't need solving.  I get all my binaries from repos that support all the architectures I use.</p></htmltext>
<tokenext>That was my thought as well .
Well , as a RedHat/Fedora user , I thought you should be able to do this with RPM .
But same idea .
This is a problem best solved by a package manager and not a binary format .
However , after thinking about this further , it 's a problem that does n't need solving .
I get all my binaries from repos that support all the architectures I use .</tokentext>
<sentencetext>That was my thought as well.
Well, as a RedHat/Fedora user, I thought you should be able to do this with RPM.
But same idea.
This is a problem best solved by a package manager and not a binary format.
However, after thinking about this further, it's a problem that doesn't need solving.
I get all my binaries from repos that support all the architectures I use.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864421</id>
	<title>Re:Not scalable</title>
	<author>Anonymous</author>
	<datestamp>1256484300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>FatELF includes tools to strip the parts that are not for your architecture. So it is like shipping two or more binaries in the same package or tarball that you don't have to extract to run.</p></htmltext>
<tokenext>FatELF includes tools to strip the parts that are not for your architecture .
So it is like shipping two or more binaries in the same package or tarball that you do n't have to extract to run .</tokentext>
<sentencetext>FatELF includes tools to strip the parts that are not for your architecture.
So it is like shipping two or more binaries in the same package or tarball that you don't have to extract to run.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815</parent>
</comment>
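For the record, the stripping tools shipped with FatELF are described along these lines in the announcement; treat the exact names and argument order here as illustrative rather than gospel:

    $ fatelf-info ./myapp                      # list the architecture records in a FatELF file
    $ fatelf-extract ./myapp.x86 ./myapp x86   # keep a single slice, shrinking the file again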
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867629</id>
	<title>Interpreters?</title>
	<author>Anonymous</author>
	<datestamp>1256471880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If only the differences between the architectures could somehow be abstracted away... If only we could port one program to all these different architectures to accomplish this task...</p><p>Oh wait... We already have interpreters like Python and Ruby. If someone wants their program to work on many different architectures but does not want to compile many different types of binaries, why not just write something along the lines of Python or Ruby code?</p></htmltext>
<tokenext>If only the differences between the architectures could somehow be abstracted away... If only we could port one program to all these different architectures to accomplish this task... Oh wait... We already have interpreters like Python and Ruby .
If someone wants their program to work on many different architectures but does not want to compile many different types of binaries , why not just write something along the lines of Python or Ruby code ?</tokentext>
<sentencetext>If only the differences between the architectures could somehow be abstracted away... If only we could port one program to all these different architectures to accomplish this task... Oh wait... We already have interpreters like Python and Ruby.
If someone wants their program to work on many different architectures but does not want to compile many different types of binaries, why not just write something along the lines of Python or Ruby code?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864735</id>
	<title>Re:Apple Universal Binary is kind of a joke.</title>
	<author>RedK</author>
	<datestamp>1256487660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>The binary is compiled twice. The way OS X packages its applications is that the application icon that you click on isn't a file but a folder with a predefined structure. So there is a PPC and an Intel port of the executable.</p></div><p>This is wrong.  OS X has 1 binary file that contains both architectures in the App Bundle, not 2 executables : <br>
<br>
$ ls -al firefox-bin<br>
-rwxr-xr-x@ 1 user  admin  42596  9 Sep 19:19 firefox-bin<br>
$ file<nobr> <wbr></nobr>./firefox-bin<br><nobr> <wbr></nobr>./firefox-bin: Mach-O universal binary with 2 architectures<br><nobr> <wbr></nobr>./firefox-bin (for architecture ppc):	Mach-O executable ppc<br><nobr> <wbr></nobr>./firefox-bin (for architecture i386):	Mach-O executable i386<br>
$ pwd<br><nobr> <wbr></nobr>/Applications/Firefox.app/Contents/MacOS</p></div>
	</htmltext>
<tokenext>The binary is compiled twice .
The way OS X packages its applications is that the application icon that you click on is n't a file but a folder with a predefined structure .
So there is a PPC and an Intel port of the executable .
This is wrong .
OS X has 1 binary file that contains both architectures in the App Bundle , not 2 executables : $ ls -al firefox-bin -rwxr-xr-x @ 1 user admin 42596 9 Sep 19 : 19 firefox-bin $ file ./firefox-bin ./firefox-bin : Mach-O universal binary with 2 architectures ./firefox-bin ( for architecture ppc ) : Mach-O executable ppc ./firefox-bin ( for architecture i386 ) : Mach-O executable i386 $ pwd /Applications/Firefox.app/Contents/MacOS</tokentext>
<sentencetext>The binary is compiled twice.
The way OS X packages its applications is that the Application icon that you click on isn't a File but a Folder with a predefined structure.
So there is a PPC and an Intel port of the executable.
This is wrong.
OS X has 1 binary file that contains both architectures in the App Bundle, not 2 executables : 

$ ls -al firefox-bin
-rwxr-xr-x@ 1 user  admin  42596  9 Sep 19:19 firefox-bin
$ file ./firefox-bin ./firefox-bin: Mach-O universal binary with 2 architectures ./firefox-bin (for architecture ppc):	Mach-O executable ppc ./firefox-bin (for architecture i386):	Mach-O executable i386
$ pwd /Applications/Firefox.app/Contents/MacOS
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717</parent>
</comment>
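As an aside, the inspection above (and the per-architecture stripping discussed elsewhere in the thread) is normally done with Apple's lipo tool; a minimal sketch against the same firefox-bin:

$ lipo -info ./firefox-bin                                   # Architectures in the fat file: ppc i386
$ lipo -detailed_info ./firefox-bin                          # offset, size and alignment of each slice
$ lipo -thin i386 ./firefox-bin -output ./firefox-bin.i386   # drop the ppc slice, keep i386 only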
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863911</id>
	<title>Re:Only useful for non-free applications</title>
	<author>dingen</author>
	<datestamp>1256479500000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).</p></div><p>I'm sorry, I should have been more clear. I mean free as in freedom. MS Visual Studio Express isn't free, it just doesn't cost any money to purchase.</p></div>
	</htmltext>
<tokenext>Furthermore , I think you mean to say that it 's " only useful for non-open source applications " as there are tons of free software applications out there that are not open source but are free ( like Microsoft 's Express editions of Visual Studio ) .
I 'm sorry , I should have been more clear .
I mean free as in freedom .
MS Visual Studio Express is n't free , it just does n't cost any money to purchase .</tokentext>
<sentencetext>Furthermore, I think you mean to say that it's "only useful for non-open source applications" as there are tons of free software applications out there that are not open source but are free (like Microsoft's Express editions of Visual Studio).
I'm sorry, I should have been more clear.
I mean free as in freedom.
MS Visual Studio Express isn't free, it just doesn't cost any money to purchase.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865411</id>
	<title>I was curious</title>
	<author>ratboy666</author>
	<datestamp>1256494380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>So I decided to have a look at my netbook -- an Acer Aspire One, running the Acer Linpus Linux system. It's the 512MB memory with 120GB disk model, not the SSD one.</p><p>[user@ariel ~]$ find<nobr> <wbr></nobr>/usr<nobr> <wbr></nobr>/sbin<nobr> <wbr></nobr>/bin<nobr> <wbr></nobr>/lib -type f -print0 | xargs -0 file | grep ELF | sed -e "s/:.*$//" &gt;list<br>[user@ariel ~]$ wc list<br>
&nbsp; 12318  12318 455036 list<br>[user@ariel ~]$ ls -l `cat list` | awk -F " *" "{ sum += \$5 }; END { print sum }"<br>2090169634</p><p>For the "non-unix" types, this means -- traverse four directories (/usr<nobr> <wbr></nobr>/sbin<nobr> <wbr></nobr>/bin<nobr> <wbr></nobr>/lib) that contain all the binaries (well, I did forget<nobr> <wbr></nobr>/boot, but that's not very important for this discussion). We extract those files that mention ELF in the file-type, count them, and then sum their sizes.</p><p>Note that there are ~12,000 objects that would be expanded, taking up around 2GB of space. Which means that adding an alternate "fat" binary would roughly double this to 4GB. In other words, not very good for an SSD model, but reasonable for a hard disk model (my total data usage is 30GB on this netbook, so we would add 10\% to the utilization).</p><p>On the other hand, this is a storage resource that would never be reclaimed. There are other ways to achieve this result. For example -- simply provide multiple ELF objects in an archive, including a "bintester". Iterate each folder in the archive, test-running the bintester. If the bintester succeeds, use that. If none succeed, fail if it's "closed object/source", or attempt to download and build the source. An example of this type of function is the VMware driver loader.</p><p>Or, use a neutral distribution -- JAVA, or Squeak, or the Microsoft CLR.</p><p>All of these provide similar results, without the need to modify the object loader. Which is something that I would really frown upon -- after all, the loader is a very heavily used, very security sensitive component.</p><p>As to "fatness"; it turns out not to be as big an issue as I had originally feared (I did the measurement on the "worst-case" system I use, and started with the mind-set that it would matter a whole lot more than it turned out... my preconception was wrong).</p><p>Your mileage may, of course, vary.</p><p>PS. For full disclosure, you may note that I excluded<nobr> <wbr></nobr>/opt -- I have 3.9G of stuff there: OpenOffice.org3, Intel C 11.0, Adobe Reader, Microsoft MSVC 7, and some smaller stuff (tcl/tk dev, Huawei 220 support, and a virtual tape library). Since this may or may not impact a "standard" distribution, I decided to not include it (and, MSVC 7 is COFF format, used under WINE anyway).</p></htmltext>
<tokenext>So I decided to have a look at my netbook -- an Acer Aspire One , running the Acer Linpus Linux system .
It 's the 512MB memory with 120GB disk model , not the SSD one .
[ user @ ariel ~ ] $ find /usr /sbin /bin /lib -type f -print0 | xargs -0 file | grep ELF | sed -e " s/:.*$// " &gt; list
[ user @ ariel ~ ] $ wc list
12318 12318 455036 list
[ user @ ariel ~ ] $ ls -l ` cat list ` | awk -F " * " " { sum + = \ $ 5 } ; END { print sum } "
2090169634
For the " non-unix " types , this means -- traverse four directories ( /usr /sbin /bin /lib ) that contain all the binaries ( well , I did forget /boot , but that 's not very important for this discussion ) .
We extract those files that mention ELF in the file-type , count them , and then sum their sizes .
Note that there are ~ 12,000 objects that would be expanded , taking up around 2GB of space .
Which means that adding an alternate " fat " binary would roughly double this to 4GB .
In other words , not very good for an SSD model , but reasonable for a hard disk model ( my total data usage is 30GB on this netbook , so we would add 10 \ % to the utilization ) .
On the other hand , this is a storage resource that would never be reclaimed .
There are other ways to achieve this result .
For example -- simply provide multiple ELF objects in an archive , including a " bintester " .
Iterate each folder in the archive , test-running the bintester .
If the bintester succeeds , use that .
If none succeed , fail if it 's " closed object/source " , or attempt to download and build the source .
An example of this type of function is the VMware driver loader .
Or , use a neutral distribution -- JAVA , or Squeak , or the Microsoft CLR .
All of these provide similar results , without the need to modify the object loader .
Which is something that I would really frown upon -- after all , the loader is a very heavily used , very security sensitive component .
As to " fatness " ; it turns out not to be as big an issue as I had originally feared ( I did the measurement on the " worst-case " system I use , and started with the mind-set that it would matter a whole lot more than it turned out... my preconception was wrong ) .
Your mileage may , of course , vary .
PS .
For full disclosure , you may note that I excluded /opt -- I have 3.9G of stuff there : OpenOffice.org3 , Intel C 11.0 , Adobe Reader , Microsoft MSVC 7 , and some smaller stuff ( tcl/tk dev , Huawei 220 support , and a virtual tape library ) .
Since this may or may not impact a " standard " distribution , I decided to not include it ( and , MSVC 7 is COFF format , used under WINE anyway ) .</tokentext>
<sentencetext>So I decided to have a look at my netbook -- an Acer Aspire One, running the Acer Linpus Linux system.
It's the 512MB memory with 120GB disk model, not the SSD one.
[user@ariel ~]$ find /usr /sbin /bin /lib -type f -print0 | xargs -0 file | grep ELF | sed -e "s/:.*$//" &gt;list
[user@ariel ~]$ wc list
  12318  12318 455036 list
[user@ariel ~]$ ls -l `cat list` | awk -F " *" "{ sum += \$5 }; END { print sum }"
2090169634
For the "non-unix" types, this means -- traverse four directories (/usr /sbin /bin /lib) that contain all the binaries (well, I did forget /boot, but that's not very important for this discussion).
We extract those files that mention ELF in the file-type, count them, and then sum their sizes.
Note that there are ~12,000 objects that would be expanded, taking up around 2GB of space.
Which means that adding an alternate "fat" binary would roughly double this to 4GB.
In other words, not very good for an SSD model, but reasonable for a hard disk model (my total data usage is 30GB on this netbook, so we would add 10\% to the utilization).
On the other hand, this is a storage resource that would never be reclaimed.
There are other ways to achieve this result.
For example -- simply provide multiple ELF objects in an archive, including a "bintester".
Iterate each folder in the archive, test-running the bintester.
If the bintester succeeds, use that.
If none succeed, fail if it's "closed object/source", or attempt to download and build the source.
An example of this type of function is the VMware driver loader.
Or, use a neutral distribution -- JAVA, or Squeak, or the Microsoft CLR.
All of these provide similar results, without the need to modify the object loader.
Which is something that I would really frown upon -- after all, the loader is a very heavily used, very security sensitive component.
As to "fatness"; it turns out not to be as big an issue as I had originally feared (I did the measurement on the "worst-case" system I use, and started with the mind-set that it would matter a whole lot more than it turned out... my preconception was wrong).
Your mileage may, of course, vary.
PS. For full disclosure, you may note that I excluded /opt -- I have 3.9G of stuff there: OpenOffice.org3, Intel C 11.0, Adobe Reader, Microsoft MSVC 7, and some smaller stuff (tcl/tk dev, Huawei 220 support, and a virtual tape library).
Since this may or may not impact a "standard" distribution, I decided to not include it (and, MSVC 7 is COFF format, used under WINE anyway).</sentencetext>
</comment>
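For anyone repeating ratboy666's measurement: the ls/awk step above is fragile, since it parses ls output and leans on shell escaping. A sketch of the same count-and-sum using GNU stat instead; it assumes filenames without colons or newlines, which holds for these system directories:

$ find /usr /sbin /bin /lib -type f -print0 | xargs -0 file | grep ELF | sed 's/:.*$//' > list
$ wc -l < list                                                          # how many ELF objects
$ xargs -d '\n' stat -c %s < list | awk '{ s += $1 } END { print s }'   # total bytes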
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863701</id>
	<title>Re:Linking problems</title>
	<author>dkf</author>
	<datestamp>1256477280000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p><div class="quote"><p> Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?</p></div><p> Probably not. Or not without getting headaches like you get with assemblies on Vista. Keying off the system architecture (32-bit x86 vs. 64-bit ia64) is much simpler than keying off library versions.</p><p>The fix with standard libraries is for the makers of them to stop screwing around and stick with ABI compatibility for a good number of years. OK, this does tend to codify some poor decisions but is enormously more supportive of application programmers. Note that I differentiate from API compat.; rebuilding against a later version of the API can result in a different - later - part of the ABI being used, and it's definitely possible to extend the ABI if structure and offset versioning is done right. But overall, it takes a lot of discipline (i.e., commitment to being a foundational library) from the part of the authors of the standard libs, and some languages make that hard (it's easier in C than in C++, for example).</p></div>
	</htmltext>
<tokenext>Could this technology also help binaries to link against multiple versions of standard libraries ( glibc , libstdc + + ) ?
Probably not .
Or not without getting headaches like you get with assemblies on Vista .
Keying off the system architecture ( 32-bit x86 vs. 64-bit ia64 ) is much simpler than keying off library versions .
The fix with standard libraries is for the makers of them to stop screwing around and stick with ABI compatibility for a good number of years .
OK , this does tend to codify some poor decisions but is enormously more supportive of application programmers .
Note that I differentiate this from API compat. ; rebuilding against a later version of the API can result in a different - later - part of the ABI being used , and it 's definitely possible to extend the ABI if structure and offset versioning is done right .
But overall , it takes a lot of discipline ( i.e. , commitment to being a foundational library ) on the part of the authors of the standard libs , and some languages make that hard ( it 's easier in C than in C + + , for example ) .</tokentext>
<sentencetext> Could this technology also help binaries to link against multiple versions of standard libraries (glibc, libstdc++)?
Probably not.
Or not without getting headaches like you get with assemblies on Vista.
Keying off the system architecture (32-bit x86 vs. 64-bit ia64) is much simpler than keying off library versions.
The fix with standard libraries is for the makers of them to stop screwing around and stick with ABI compatibility for a good number of years.
OK, this does tend to codify some poor decisions but is enormously more supportive of application programmers.
Note that I differentiate this from API compat.; rebuilding against a later version of the API can result in a different - later - part of the ABI being used, and it's definitely possible to extend the ABI if structure and offset versioning is done right.
But overall, it takes a lot of discipline (i.e., commitment to being a foundational library) on the part of the authors of the standard libs, and some languages make that hard (it's easier in C than in C++, for example).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659</parent>
</comment>
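The "structure and offset versioning" dkf mentions is what ELF symbol versioning provides; a minimal sketch of a GNU ld version script (the library and symbol names here are made up for illustration):

$ cat > libfoo.map <<'EOF'
FOO_1.0 { global: foo_init; foo_run; local: *; };
FOO_1.1 { global: foo_run_ex; } FOO_1.0;
EOF
$ gcc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so.1 foo.c
$ # binaries linked against FOO_1.0 keep binding to the old symbols; new code can use FOO_1.1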
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864861</id>
	<title>Re:Only useful for non-free applications</title>
	<author>BitZtream</author>
	<datestamp>1256488860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Running 10.6 is hardly 'bleeding edge'.  It would have been bleeding edge a year or so ago when the first developer seeds of it went out.</p><p>Any app that hasn't been patched for 10.6 at this point is a neglected app.</p><p>Do yourself a favor, use Sketsa instead.  It actually produces standard SVG files.  Using Inkscape to produce SVGs is like using Word to make HTML: you end up with a half-assed, proprietary-tag-filled mess of an SVG that won't render right in just about any standard SVG display system if it's more complex than a smiley face.</p><p>Yes, Sketsa is commercial, but the price is well worth the difference.  Inkscape is crap.</p></htmltext>
<tokenext>Running 10.6 is hardly 'bleeding edge' .
It would have been bleeding edge a year or so ago when the first developer seeds of it went out .
Any app that has n't been patched for 10.6 at this point is a neglected app .
Do yourself a favor , use Sketsa instead .
It actually produces standard SVG files .
Using Inkscape to produce SVGs is like using Word to make HTML : you end up with a half-assed , proprietary-tag-filled mess of an SVG that wo n't render right in just about any standard SVG display system if it 's more complex than a smiley face .
Yes , Sketsa is commercial , but the price is well worth the difference .
Inkscape is crap .</tokentext>
<sentencetext>Running 10.6 is hardly 'bleeding edge'.
It would have been bleeding edge a year or so ago when the first developer seeds of it went out.
Any app that hasn't been patched for 10.6 at this point is a neglected app.
Do yourself a favor, use Sketsa instead.
It actually produces standard SVG files.
Using Inkscape to produce SVGs is like using Word to make HTML: you end up with a half-assed, proprietary-tag-filled mess of an SVG that won't render right in just about any standard SVG display system if it's more complex than a smiley face.
Yes, Sketsa is commercial, but the price is well worth the difference.
Inkscape is crap.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863737</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29883225</id>
	<title>Re:He forgot the ARM, z10, m88k CPUs</title>
	<author>badkarmadayaccount</author>
	<datestamp>1256655300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Between 1.3x and 2x smaller than x86\_32 binaries. LLVA FTW!</htmltext>
<tokenext>Between 1.3x and 2x smaller than x86 \ _32 binaries .
LLVA FTW !</tokentext>
<sentencetext>Between 1.3x and 2x smaller than x86\_32 binaries.
LLVA FTW!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864369</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864029</id>
	<title>Better Solutions For This Problem Exist</title>
	<author>Anonymous</author>
	<datestamp>1256480640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It seems to me this problem of "a single binary which can run on multiple architectures" could be extended to "a single binary which can run on multiple platforms."  For BOTH of these goals, rolling all the possible binaries into one larger executable seems to be a bit of a messy, sloppy approach.</p><p>On the other hand, what if we compiled programs to some kind of intermediate language, and ran it on a code interpreter or virtual machine?  The virtual machine could have a version for every platform and architecture.  We could call it something random, like... Java.  Or<nobr> <wbr></nobr>.NET.</p><p>Oh wait.</p></htmltext>
<tokenext>It seems to me this problem of " a single binary which can run on multiple architectures " could be extended to " a single binary which can run on multiple platforms . "
For BOTH of these goals , rolling all the possible binaries into one larger executable seems to be a bit of a messy , sloppy approach .
On the other hand , what if we compiled programs to some kind of intermediate language , and ran it on a code interpreter or virtual machine ?
The virtual machine could have a version for every platform and architecture .
We could call it something random , like... Java. Or .NET .
Oh wait .</tokentext>
<sentencetext>It seems to me this problem of "a single binary which can run on multiple architectures" could be extended to "a single binary which can run on multiple platforms."
For BOTH of these goals, rolling all the possible binaries into one larger executable seems to be a bit of a messy, sloppy approach.
On the other hand, what if we compiled programs to some kind of intermediate language, and ran it on a code interpreter or virtual machine?
The virtual machine could have a version for every platform and architecture.
We could call it something random, like... Java.  Or .NET.
Oh wait.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865283</id>
	<title>Fat packages or FatELF, Qemu and x86 on ARM netboo</title>
	<author>caseih</author>
	<datestamp>1256493120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>One use case I can see for fat packages or FatELF (either one works) would be if we also had a smart loader that could take a single-architecture binary and run it through QEMU in conjunction with FatELF libraries on the system.  If my ARM netbook ran Fedora, for example, and installed FatELF libraries supporting both ARM and x86 (supposing cheap SSDs!) then if I really needed Adobe Reader for some reason, which is only on x86, then I could download and install their Fedora x86 RPM and run it seamlessly.</p><p>This could also be done with plain fat packages too.  A fat gtk-libs package, for example, could dump files native to your architecture in<nobr> <wbr></nobr>/usr/lib or<nobr> <wbr></nobr>/usr/lib64 or both, while putting other architecture libraries in the QEMU system directory for that architecture.</p><p>Note that QEMU already allows this without fat libraries.  But I wouldn't expect an average netbook user to know how to download the rpm or tarball, extract everything and stick it somewhere, then populate the QEMU system directory with Fedora x86 libraries.  The benefit of FatELF or fat packages here would be that the RPM would just install because it could see that the requisite FatELF libraries were installed or that the architecture-specific libs were installed in the QEMU system.  This is all provided that RPM or DEB or whatever either knows about FatELF or implements fat packages.</p></htmltext>
<tokenext>One use case I can see for fat packages or FatELF ( either one works ) would be if we also had a smart loader that could take a single-architecture binary and run it through QEMU in conjunction with FatELF libraries on the system .
If my ARM netbook ran Fedora , for example , and installed FatELF libraries supporting both ARM and x86 ( supposing cheap SSDs ! ) then if I really needed Adobe Reader for some reason , which is only on x86 , then I could download and install their Fedora x86 RPM and run it seamlessly .
This could also be done with plain fat packages too .
A fat gtk-libs package , for example , could dump files native to your architecture in /usr/lib or /usr/lib64 or both , while putting other architecture libraries in the QEMU system directory for that architecture .
Note that QEMU already allows this without fat libraries .
But I would n't expect an average netbook user to know how to download the rpm or tarball , extract everything and stick it somewhere , then populate the QEMU system directory with Fedora x86 libraries .
The benefit of FatELF or fat packages here would be that the RPM would just install because it could see that the requisite FatELF libraries were installed or that the architecture-specific libs were installed in the QEMU system .
This is all provided that RPM or DEB or whatever either knows about FatELF or implements fat packages .</tokentext>
<sentencetext>One use case I can see for fat packages or FatELF (either one works) would be if we also had a smart loader that could take a single-architecture binary and run it through QEMU in conjunction with FatELF libraries on the system.
If my ARM netbook ran Fedora, for example, and installed FatELF libraries supporting both ARM and x86 (supposing cheap SSDs!) then if I really needed Adobe Reader for some reason, which is only on x86, then I could download and install their Fedora x86 RPM and run it seamlessly.
This could also be done with plain fat packages too.
A fat gtk-libs package, for example, could dump files native to your architecture in /usr/lib or /usr/lib64 or both, while putting other architecture libraries in the QEMU system directory for that architecture.
Note that QEMU already allows this without fat libraries.
But I wouldn't expect an average netbook user to know how to download the rpm or tarball, extract everything and stick it somewhere, then populate the QEMU system directory with Fedora x86 libraries.
The benefit of FatELF or fat packages here would be that the RPM would just install because it could see that the requisite FatELF libraries were installed or that the architecture-specific libs were installed in the QEMU system.
This is all provided that RPM or DEB or whatever either knows about FatELF or implements fat packages.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811</parent>
</comment>
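Much of what caseih describes exists today as QEMU user-mode emulation plus the kernel's binfmt_misc hook; a minimal sketch of running a foreign-architecture binary on Linux (the -L path and the binary name are illustrative, and the qemu-user and binfmt-support packages must be installed):

$ qemu-i386 -L /usr/i386-linux-gnu ./acroread    # run one x86 binary by hand on a non-x86 host
$ update-binfmts --display qemu-i386             # Debian-style check that the kernel will hand
$                                                # x86 ELF executables to qemu transparently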
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864079</id>
	<title>Re:Only useful for non-free applications</title>
	<author>mrmeval</author>
	<datestamp>1256481240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>We already have all of this without the bloat blob that has irrelevant crap in it. Adobe set up a binary server for its product for various flavors of Linux and if they do the code right it works. All I had to do was add them into my repository list.</p><p>If they need some means of collecting money or locking it to only my PC that may be difficult but free is easy. Non-free would probably need a physical or internet dongle if they needed that level of paranoia.</p></htmltext>
<tokenext>We already have all of this without the bloat blob that has irrelevant crap in it .
Adobe set up a binary server for its product for various flavors of Linux and if they do the code right it works .
All I had to do was add them into my repository list .
If they need some means of collecting money or locking it to only my PC that may be difficult but free is easy .
Non-free would probably need a physical or internet dongle if they needed that level of paranoia .</tokentext>
<sentencetext>We already have all of this without the bloat blob that has irrelevant crap in it.
Adobe set up a binary server for its product for various flavors of Linux and if they do the code right it works.
All I had to do was add them into my repository list.
If they need some means of collecting money or locking it to only my PC that may be difficult but free is easy.
Non-free would probably need a physical or internet dongle if they needed that level of paranoia.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
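For readers who haven't done it, "add them into my repository list" on a yum-based distro is a one-file affair; a sketch with an illustrative (not Adobe's real) URL and package name:

$ cat > /etc/yum.repos.d/vendor.repo <<'EOF'
[vendor]
name=Vendor binary packages
baseurl=http://example.com/linux/$basearch/
enabled=1
gpgcheck=1
EOF
$ yum install vendor-app    # the package manager now picks the right architecture itself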
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645</id>
	<title>Gee, just 14 years</title>
	<author>Anonymous</author>
	<datestamp>1256476500000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
<htmltext><p>after the demise of NeXTStep!</p><p>(c)Innovation!!(tm)(R)</p></htmltext>
<tokenext>after the demise of NeXTStep ! ( c ) Innovation ! !
( tm ) ( R )</tokentext>
<sentencetext>after the demise of NeXTStep! (c)Innovation!!
(tm)(R)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121</id>
	<title>Unix (OSF) tried it with ANDF</title>
	<author>Alain Williams</author>
	<datestamp>1256481600000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>4</modscore>
<htmltext><a href="http://en.wikipedia.org/wiki/ANDF" title="wikipedia.org">Architecture Neutral Distribution Format</a> [wikipedia.org] was tried some 20 years ago. The idea was to have a binary that could be installed on any machine. From what I can remember, it involved compiling to some intermediate form; when installed, compilation to the target machine code was done.<p>
It never really flew.</p><p>
If someone wants to do this then something like Java would be good enough for many types of software. There will always be some things for which a binary tied to the specific target is all that would work; I think that it would be better to adopt something that works for most software rather than trying to achieve 100\%.</p></htmltext>
<tokenext>Architecture Neutral Distribution Format [ wikipedia.org ] was tried some 20 years ago .
The idea was to have a binary that could be installed on any machine .
From what I can remember , it involved compiling to some intermediate form ; when installed , compilation to the target machine code was done .
It never really flew .
If someone wants to do this then something like Java would be good enough for many types of software .
There will always be some things for which a binary tied to the specific target is all that would work ; I think that it would be better to adopt something that works for most software rather than trying to achieve 100 \ % .</tokentext>
<sentencetext>Architecture Neutral Distribution Format [wikipedia.org] was tried some 20 years ago.
The idea was to have a binary that could be installed on any machine.
From what I can remember, it involved compiling to some intermediate form; when installed, compilation to the target machine code was done.
It never really flew.
If someone wants to do this then something like Java would be good enough for many types of software.
There will always be some things for which a binary tied to the specific target is all that would work; I think that it would be better to adopt something that works for most software rather than trying to achieve 100\%.</sentencetext>
</comment>
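The modern analogue of the ANDF two-step is shipping compiler IR and finishing code generation on the target machine; a minimal sketch with LLVM (only an illustration of the split, not what ANDF itself used):

$ clang -O2 -emit-llvm -c app.c -o app.bc    # developer side: C to architecture-neutral bitcode
$ llc app.bc -o app.s && cc app.s -o app     # install side: bitcode to native code, then link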
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864225</id>
	<title>Re:Not scalable</title>
	<author>Hal\_Porter</author>
	<datestamp>1256482440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Yup. I think the best way to do this would be to have a compiler that works in two passes - the first one would turn C into some intermediate format and would be run on the developer's machine, the second would turn that intermediate code into native code and then link it.</p><p>Of course for this to work there would need to be an NT 4.0-like similarity between the processor platforms. In the world of NT 4.0 all supported processors were 32 bit, all were little endian and all had the same alignment rules for structures. The same API existed across all platforms and all the types had the same sizes. Of course it is possible to force unaligned data, and in that case the Risc chips would fault, but NT 4.0 installed an alignment fixup handler to let the code run - albeit much more slowly - if they did this.</p><p>So Win16 code could be ported with a bit of work to Win32 and once ported it was trivial to cross compile. Still in practice no one bothered because 99.9\% of the users had x86.</p><p>Still if the Risc chips had taken over, the NT 4.0 approach would probably have worked quite well. In fact, with x64 taking over, this approach works quite well now. The current Windows API is cunningly defined to be portable between x64 and x86: follow the rules and you produce code which will build for x86 and x64. Your code will build for Itanium too - the only things that change size are pointer sized ones.</p><p>Now in the world of free software it's not like this - alignment, endianness, structure packing rules and so on all vary. It's not standard on Unix to fix up alignment faults transparently. There are multiple library versions around on different distributions. So if you had an executable in some intermediate format there would be no way to know it would work once it was turned into native code on your system unless the developer had tested it. If he had tested it, he might as well give you the native binary for your system. If he hadn't, you need to get the source code and port it.</p></htmltext>
<tokenext>Yup .
I think the best way to do this would be to have a compiler that works in two passes - the first one would turn C into some intermediate format and would be run on the developer 's machine , the second would turn that intermediate code into native code and then link it .
Of course for this to work there would need to be an NT 4.0-like similarity between the processor platforms .
In the world of NT 4.0 all supported processors were 32 bit , all were little endian and all had the same alignment rules for structures .
The same API existed across all platforms and all the types had the same sizes .
Of course it is possible to force unaligned data , and in that case the Risc chips would fault , but NT 4.0 installed an alignment fixup handler to let the code run - albeit much more slowly - if they did this .
So Win16 code could be ported with a bit of work to Win32 and once ported it was trivial to cross compile .
Still in practice no one bothered because 99.9 \ % of the users had x86 .
Still if the Risc chips had taken over , the NT 4.0 approach would probably have worked quite well .
In fact , with x64 taking over , this approach works quite well now .
The current Windows API is cunningly defined to be portable between x64 and x86 : follow the rules and you produce code which will build for x86 and x64 .
Your code will build for Itanium too - the only things that change size are pointer sized ones .
Now in the world of free software it 's not like this - alignment , endianness , structure packing rules and so on all vary .
It 's not standard on Unix to fix up alignment faults transparently .
There are multiple library versions around on different distributions .
So if you had an executable in some intermediate format there would be no way to know it would work once it was turned into native code on your system unless the developer had tested it .
If he had tested it , he might as well give you the native binary for your system .
If he had n't , you need to get the source code and port it .</tokentext>
<sentencetext>Yup.
I think the best way to do this would be to have a compiler that works in two passes - the first one would turn C into some intermediate format and would be run on the developer's machine, the second would turn that intermediate code into native code and then link it.
Of course for this to work there would need to be an NT 4.0-like similarity between the processor platforms.
In the world of NT 4.0 all supported processors were 32 bit, all were little endian and all had the same alignment rules for structures.
The same API existed across all platforms and all the types had the same sizes.
Of course it is possible to force unaligned data, and in that case the Risc chips would fault, but NT 4.0 installed an alignment fixup handler to let the code run - albeit much more slowly - if they did this.
So Win16 code could be ported with a bit of work to Win32 and once ported it was trivial to cross compile.
Still in practice no one bothered because 99.9\% of the users had x86.
Still if the Risc chips had taken over, the NT 4.0 approach would probably have worked quite well.
In fact, with x64 taking over, this approach works quite well now.
The current Windows API is cunningly defined to be portable between x64 and x86: follow the rules and you produce code which will build for x86 and x64.
Your code will build for Itanium too - the only things that change size are pointer sized ones.
Now in the world of free software it's not like this - alignment, endianness, structure packing rules and so on all vary.
It's not standard on Unix to fix up alignment faults transparently.
There are multiple library versions around on different distributions.
So if you had an executable in some intermediate format there would be no way to know it would work once it was turned into native code on your system unless the developer had tested it.
If he had tested it, he might as well give you the native binary for your system.
If he hadn't, you need to get the source code and port it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815</parent>
</comment>
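Hal_Porter's "only things that change size are pointer sized ones" rule is easy to probe; a tiny sketch of the classic assumption that breaks between these ABIs (Win64 is LLP64, 64-bit Unix is LP64):

$ cat > sizes.c <<'EOF'
#include <stdio.h>
int main(void) {
    /* long is 4 bytes under LLP64 (Win64) but 8 under LP64 (64-bit Unix);
       code that stores a pointer in a long works on one and not the other */
    printf("long=%zu ptr=%zu\n", sizeof(long), sizeof(void *));
    return 0;
}
EOF
$ cc sizes.c -o sizes && ./sizes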
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863893</id>
	<title>Not a particularly terrible idea...</title>
	<author>cfriedt</author>
	<datestamp>1256479320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>This isn't a particularly bad idea and community-driven distros (or maybe the community of community-driven distros) like Ubuntu would probably benefit from it quite significantly. You can even strip unnecessary binary portions out of most programs (at least with the mach-o binary format), although it would really only affect disk usage. With the disk capacities today, it's really negligible disk-usage savings anyway.</p><p>In terms of the Linux kernel, this would mean a major overhaul for a large portion of the kernel and I can't see it being adopted very widely outside of the Desktop market.</p></htmltext>
<tokenext>This is n't a particularly bad idea and community-driven distros ( or maybe the community of community-driven distros ) like Ubuntu would probably benefit from it quite significantly .
You can even strip unnecessary binary portions out of most programs ( at least with the mach-o binary format ) , although it would really only affect disk usage .
With the disk capacities today , it 's really negligible disk-usage savings anyway .
In terms of the Linux kernel , this would mean a major overhaul for a large portion of the kernel and I ca n't see it being adopted very widely outside of the Desktop market .</tokentext>
<sentencetext>This isn't a particularly bad idea and community-driven distros (or maybe the community of community-driven distros) like Ubuntu would probably benefit from it quite significantly.
You can even strip unnecessary binary portions out of most programs (at least with the mach-o binary format), although it would really only affect disk usage.
With the disk capacities today, it's really negligible disk-usage savings anyway.
In terms of the Linux kernel, this would mean a major overhaul for a large portion of the kernel and I can't see it being adopted very widely outside of the Desktop market.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29870337</id>
	<title>Re:</title>
	<author>clint999</author>
	<datestamp>1256553000000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><blockquote><div><p>The down side of this approach is that it consumes a bit more disk space because you have a copy of all of the data (not just the code) in every binary.</p></div></blockquote></div>
	</htmltext>
<tokenext>The down side of this approach is that it consumes a bit more disk space because you have a copy of all of the data ( not just the code ) in every binary .</tokentext>
<sentencetext>The down side of this approach is that it consumes a bit more disk space because you have a copy of all of the data (not just the code) in every binary.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863673</id>
	<title>Apple dropped it</title>
	<author>Anonymous</author>
	<datestamp>1256477040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ask PPC owners that want to get the latest version of OS X.</p></htmltext>
<tokenext>Ask PPC owners that want to get the latest version of OS X .</tokentext>
<sentencetext>Ask PPC owners that want to get the latest version of OS X.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864251</id>
	<title>Reversing decades of package management advances</title>
	<author>Anonymous</author>
	<datestamp>1256482680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The entire reason dynamic linking was invented is to make this sort of rubbish unnecessary. It's thanks to this that, on sane operating systems lacking DLL hell, the same library can be installed once and then used by hundreds of applications that themselves weigh under 300k --- we only have to download the application, because we already installed its dependencies*, and we certainly don't have to download the application and its dependencies <em>compiled for eight architectures at once</em>. Does this person not realise how much space/bandwidth/memory that would take? My Debian installation's root partition is only 6GB, which includes every application I use. Why would anyone want to make the situation more like Windows/OSX? Ah, yes, to make it slightly easier for lazy proprietary developers. Sorry, but we don't actually need lazy proprietary developers.<br>
&nbsp; <br>*Dependencies are automatically resolved and installed in the event they've not previously been installed.</p></htmltext>
<tokenext>The entire reason dynamic linking was invented is to make this sort of rubbish unnecessary .
It 's thanks to this that , on sane operating systems lacking DLL hell , the same library can be installed once and then used by hundreds of applications that themselves weigh under 300k --- we only have to download the application , because we already installed its dependencies * , and we certainly do n't have to download the application and its dependencies compiled for eight architectures at once .
Does this person not realise how much space/bandwidth/memory that would take ?
My Debian installation 's root partition is only 6GB , which includes every application I use .
Why would anyone want to make the situation more like Windows/OSX ?
Ah , yes , to make it slightly easier for lazy proprietary developers .
Sorry , but we do n't actually need lazy proprietary developers .
  * Dependencies are automatically resolved and installed in the event they 've not previously been installed .</tokentext>
<sentencetext>The entire reason dynamic linking was invented is to make this sort of rubbish unnecessary.
It's thanks to this that, on sane operating systems lacking DLL hell, the same library can be installed once and then used by hundreds of applications that themselves weigh under 300k --- we only have to download the application, because we already installed its dependencies*, and we certainly don't have to download the application and its dependencies compiled for eight architectures at once.
Does this person not realise how much space/bandwidth/memory that would take?
My Debian installation's root partition is only 6GB, which includes every application I use.
Why would anyone want to make the situation more like Windows/OSX?
Ah, yes, to make it slightly easier for lazy proprietary developers.
Sorry, but we don't actually need lazy proprietary developers.
  *Dependencies are automatically resolved and installed in the event they've not previously been installed.</sentencetext>
</comment>
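The install-once, link-everywhere behaviour the parent describes is visible on any ELF system; a quick sketch (the binary path is illustrative):

$ ldd /usr/bin/xterm | head                # libraries resolved at load time, shared on disk and in memory
$ readelf -d /usr/bin/xterm | grep NEEDED  # the same dependency list, straight from the .dynamic section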
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29879399</id>
	<title>Mono?</title>
	<author>GWBasic</author>
	<datestamp>1256562900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Uhm, doesn't mono already give Linux this?  All they need to do is finish gcc's support for CLR binaries and the problem is solved.  (CLR / mono can call into the underlying platform's C API.)</htmltext>
<tokenext>Uhm , does n't mono already give Linux this ?
All they need to do is finish gcc 's support for CLR binaries and the problem is solved .
( CLR / mono can call into the underlying platform 's C API . )</tokentext>
<sentencetext>Uhm, doesn't mono already give Linux this?
All they need to do is finish gcc's support for CLR binaries and the problem is solved.
(CLR / mono can call into the underlying platform's C API.)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864313</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Stevecrox</author>
	<datestamp>1256483220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>I have to disagree; this is useful for open source and is probably needed sooner rather than later. With Windows I can take an install file from 1997 (Myst Masterpiece and QuickTime 6.5.2 in this example) and install it on my Win 7 machine; I don't have to care if it's CentOS, Debian, etc... nor do I have to care if it's version 7.4, 8.10, etc...<br> <br>
While the various package managers do provide pretty much everything, forcing key libraries to be backwards compatible and providing a file that will work out all the tech stuff for you would bring this aspect of Linux up to Windows/Mac OSX standard.<br> <br>
I program at work and I don't want to be recompiling binaries at home.</htmltext>
<tokenext>I have to disagree , this is useful for open source and is probably needed sooner rather than later .
With Windows I can take an install file from 1997 ( Myst Masterpiece and QuickTime 6.5.2 in this example ) and install it on my Win 7 machine ; I do n't have to care if it 's CentOS , Debian , etc... nor do I have to care if it 's version 7.4 , 8.10 , etc.. . While the various package managers do provide pretty much everything , forcing key libraries to be backwards compatible and providing a file that will work out all the tech stuff for you would bring this aspect of Linux up to Windows/Mac OSX standard .
I program at work and I do n't want to be recompiling binaries at home .</tokentext>
<sentencetext>I have to disagree, this is useful for open source and is probably needed sooner rather than later.
With Windows I can take an install file from 1997 (Myst Masterpiece and QuickTime 6.5.2 in this example) and install it on my Win 7 machine; I don't have to care if it's CentOS, Debian, etc... nor do I have to care if it's version 7.4, 8.10, etc...
While the various package managers do provide pretty much everything, forcing key libraries to be backwards compatible and providing a file that will work out all the tech stuff for you would bring this aspect of Linux up to Windows/Mac OSX standard.
I program at work and I don't want to be recompiling binaries at home.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864327</id>
	<title>Just what we need...</title>
	<author>Bloody Peasant</author>
	<datestamp>1256483340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>While this comes at a cost of a larger binary file</p></div></blockquote><p>

Not to mention the greater ease for potential malware to work.  Right now Linux is an extremely unfriendly and hostile environment for such malware.  Why do we need to change that?</p></div>
	</htmltext>
<tokenext>While this comes at a cost of a larger binary file ...
Not to mention the greater ease for potential malware to work .
Right now Linux is an extremely unfriendly and hostile environment for such malware .
Why do we need to change that ?</tokentext>
<sentencetext>While this comes at a cost of a larger binary file

Not to mention the greater ease for potential malware to work.
Right now Linux is an extremely unfriendly and hostile environment for such malware.
Why do we need to change that?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Digana</author>
	<datestamp>1256479260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>They clearly meant free, not gratis.

Gratis is such a weak feature of software that I don't think it deserves to share a meaning with free.</htmltext>
<tokenext>They clearly meant free , not gratis .
Gratis is such a weak feature of software that I do n't think it deserves to share a meaning with free .</tokentext>
<sentencetext>They clearly meant free, not gratis.
Gratis is such a weak feature of software that I don't think it deserves to share a meaning with free.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864873</id>
	<title>Why not make a universal archive and be done?</title>
	<author>Khyber</author>
	<datestamp>1256489040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>How about we have a single chunk of data and executable code/libraries needed to run across any OS or any architecture? If you're going to call it universal you'd best damn-well make it universal! Why not take the idea of what OSX does for their apps and apply it to every OS possible? Even if the stuff has to be compressed to fit onto a single DVD you can have that decompress itself but still stay a single file (at least on the user end; the OS would still be able to read each file individually inside the archive). For example, most games only need a different executable, the data (textures, maps, etc) stays the same.</p><p>You could even make it able to run on different distributions of the same kernel! No more stupid dependencies, no more RPM hell, and best of all there'd be no reason for it to just wipe out other installed software, such as beta versions of libraries and whatnot.</p><p>And then you could lock it down by making that file read-only. If there's an exploit, it won't destroy your software as it can't modify it, unless that exploit somehow allows write access.</p></htmltext>
<tokenext>How about we have a single chunk of data and executable code/libraries needed to run across any OS or any architecture ?
If you 're going to call it universal you 'd best damn-well make it universal !
Why not take the idea of what OSX does for their apps and apply it to every OS possible ?
Even if the stuff has to be compressed to fit onto a single DVD you can have that decompress itself but still stay a single file ( at least on the user end ; the OS would still be able to read each file individually inside the archive ) .
For example , most games only need a different executable , the data ( textures , maps , etc ) stays the same .
You could even make it able to run on different distributions of the same kernel !
No more stupid dependencies , no more RPM hell , and best of all there 'd be no reason for it to just wipe out other installed software , such as beta versions of libraries and whatnot .
And then you could lock it down by making that file read-only .
If there 's an exploit , it wo n't destroy your software as it ca n't modify it , unless that exploit somehow allows write access .</tokentext>
<sentencetext>How about we have a single chunk of data and executable code/libraries needed to run across any OS or any architecture?
If you're going to call it universal you'd best damn-well make it universal!
Why not take the idea of what OSX does for their apps and apply it to every OS possible?
Even if the stuff has to be compressed to fit onto a single DVD you can have that decompress itself but still stay a single file (at least on the user end; the OS would still be able to read each file individually inside the archive).
For example, most games only need a different executable, the data (textures, maps, etc) stays the same.
You could even make it able to run on different distributions of the same kernel!
No more stupid dependencies, no more RPM hell, and best of all there'd be no reason for it to just wipe out other installed software, such as beta versions of libraries and whatnot.
And then you could lock it down by making that file read-only.
If there's an exploit, it won't destroy your software as it can't modify it, unless that exploit somehow allows write access.</sentencetext>
</comment>
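The "decompress itself but still stay a single file" part is an old shell trick, the same one makeself-style installers use: append a tarball after a marker line in a script. A minimal sketch (paths and the app name are illustrative):

$ cat > run.sh <<'EOF'
#!/bin/sh
# everything after the __ARCHIVE__ line is a gzipped tarball: unpack it and run the bundled app
mkdir -p /tmp/bundle
LINE=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$LINE" "$0" | tar xzf - -C /tmp/bundle
exec /tmp/bundle/app "$@"
__ARCHIVE__
EOF
$ tar czf payload.tgz app data/ && cat run.sh payload.tgz > installer.run && chmod +x installer.run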
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864827</id>
	<title>Re:Apple Universal Binary is kinda of a joke.</title>
	<author>RedK</author>
	<datestamp>1256488560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>That said, OS X' universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC. As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.</p></div><p>This is wrong also.  Snow Leopard still makes good use of Universal binaries even without PPC being there.  They can ship 32 bit and 64 bit in the same binary (i386 and x86\_64).  Universal binaries aren't going away.<br>
<br>
$ ls -al iChat<br>
-rwxr-xr-x  1 root  wheel  5844848 29 Jul 01:28 iChat<br>
$ file<nobr> <wbr></nobr>./iChat<br><nobr> <wbr></nobr>./iChat: Mach-O universal binary with 2 architectures<br><nobr> <wbr></nobr>./iChat (for architecture x86\_64):	Mach-O 64-bit executable x86\_64<br><nobr> <wbr></nobr>./iChat (for architecture i386):	Mach-O executable i386<br>
$ pwd<br><nobr> <wbr></nobr>/Applications/iChat.app/Contents/MacOS</p></div>
	</htmltext>
<tokenext>That said , OS X ' universal files are pretty much on their way out , as Snow Leopard ( 10.6 ) does n't play ball with PPC .
As time goes on and people realize that x86 is a dead horse running , we might however see universal executables again , but then as ARM and x86 .
This is wrong also .
Snow Leopard still makes good use of Universal binaries even without PPC being there .
They can ship 32 bit and 64 bit in the same binary ( i386 and x86 \ _64 ) .
Universal binaries are n't going away .
$ ls -al iChat -rwxr-xr-x 1 root wheel 5844848 29 Jul 01 : 28 iChat $ file ./iChat ./iChat : Mach-O universal binary with 2 architectures ./iChat ( for architecture x86 \ _64 ) : Mach-O 64-bit executable x86 \ _64 ./iChat ( for architecture i386 ) : Mach-O executable i386 $ pwd /Applications/iChat.app/Contents/MacOS</tokentext>
<sentencetext>That said, OS X' universal files are pretty much on their way out, as Snow Leopard (10.6) doesn't play ball with PPC.
As time goes on and people realize that x86 is a dead horse running, we might however see universal executables again, but then as ARM and x86.
This is wrong also.
Snow Leopard still makes good use of Universal binaries even without PPC being there.
They can ship 32 bit and 64 bit in the same binary (i386 and x86\_64).
Universal binaries aren't going away.
$ ls -al iChat
-rwxr-xr-x  1 root  wheel  5844848 29 Jul 01:28 iChat
$ file ./iChat ./iChat: Mach-O universal binary with 2 architectures ./iChat (for architecture x86\_64):	Mach-O 64-bit executable x86\_64 ./iChat (for architecture i386):	Mach-O executable i386
$ pwd /Applications/iChat.app/Contents/MacOS
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863827</parent>
</comment>
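For completeness, this is how such two-slice binaries are produced in the first place; a minimal sketch (Apple's compiler driver accepts several -arch flags, or separate builds can be stitched with lipo):

$ gcc -arch i386 -arch x86_64 -o app main.c    # one invocation, fat output
$ gcc -arch i386 -o app.i386 main.c            # or: build each slice separately...
$ gcc -arch x86_64 -o app.x86_64 main.c
$ lipo -create app.i386 app.x86_64 -output app # ...and stitch them together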
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29869115</id>
	<title>Linux, plural.</title>
	<author>Anonymous</author>
	<datestamp>1256492820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>
The vast majority of all desktop computers out there
are x86.  The vast majority of Linux desktops, ditto.
</p><p>
That said, currently we can't even easily produce binaries that
run on multiple distributions <b> <i>for just the x86</i> </b>.
</p><p>
Generally speaking, unlike Windows/OSX, you can not just get "the Linux Desktop Version"
of a program.
<b>There is no Linux desktop.</b>  There is a plethora of them.
For better or worse,
the issues and complexity that arise from this plurality are
due to the lack of central management and control.</p></htmltext>
<tokenext>The vast majority of all desktop computers out there are x86 .
The vast majority of Linux desktops , ditto .
That said , currently we ca n't even easily produce binaries that run on multiple distributions for just the x86 .
Generally speaking , unlike Windows/OSX , you can not just get " the Linux Desktop Version " of a program .
There is no Linux desktop .
There is a plethora of them .
For better or worse , the issues and complexity that arise from this plurality are due to the lack of central management and control .</tokentext>
<sentencetext>The vast majority of all desktop computers out there are x86.
The vast majority of Linux desktops, ditto.
That said, currently we can't even easily produce binaries that run on multiple distributions for just the x86.
Generally speaking, unlike Windows/OSX, you can not just get "the Linux Desktop Version" of a program.
There is no Linux desktop.
There is a plethora of them.
For better or worse, the issues and complexity that arise from this plurality are due to the lack of central management and control.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867823</id>
	<title>Re:Apple Universal Binary is kinda of a joke.</title>
	<author>maccodemonkey</author>
	<datestamp>1256474040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"You are confusing NeXT and Apple's approaches, I think. Apple puts both all of the different architectures in the same file. Your code is compiled twice, but it's only linked once. The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files."

Actually, this isn't true. The code is compiled four times (for a 32/64 bit unibin), linked four times, and then all four executables are stitched together into one executable file.

With regards to disk space, Apple thought ahead, and the format supports stripping out versions of the binary you don't want to keep around. For example, some Intel owners run tools on their machines that strip out the PPC versions of binaries to preserve disk space. Some PowerPC owners strip out the Intel versions of the binaries, but then usually run into trouble when they try to migrate their disk to an Intel machine. :) Stripping out a version of the binary will even keep a signed binary valid.

FYI: Apple has a very similar sort of setup for language dependent resources.</htmltext>
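The stripping described above is typically done with Apple's lipo tool. A minimal sketch, reusing the iChat fat binary from the transcript earlier in the thread (output illustrative):

$ lipo -info ./iChat
Architectures in the fat file: ./iChat are: x86_64 i386
$ lipo ./iChat -thin x86_64 -output ./iChat.x86_64     # keep only the 64-bit slice
$ lipo ./iChat -remove i386 -output ./iChat.stripped   # or drop just the i386 slice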
<tokenext>" You are confusing NeXT and Apple 's approaches , I think .
Apple puts both all of the different architectures in the same file .
Your code is compiled twice , but it 's only linked once .
The PowerPC { 32,64 } and x86 { 32,64 } code all goes in different segments in the binary , but data is shared between all of them , so it takes less space than having 2-4 independent binary files .
" Actually , this is n't true .
The code is compiled four times ( for a 32/64 bit unibin ) , linked four times , and then all four executables are stitched together into one executable file .
With regards to disk space , Apple thought ahead , and the format supports stripping out versions of the binary you do n't want to keep around .
For example , some Intel owners run tools on their machines that strip out the PPC versions of binaries to preserve disk space .
Some PowerPC owners strip out the Intel versions of the binaries , but then usually run into trouble when they try to migrate their disk to an Intel machine .
: ) Stripping out a version of the binary will even keep a signed binary valid .
FYI : Apple has a very similar sort of setup for language dependent resources .</tokentext>
<sentencetext>"You are confusing NeXT and Apple's approaches, I think.
Apple puts both all of the different architectures in the same file.
Your code is compiled twice, but it's only linked once.
The PowerPC {32,64} and x86 {32,64} code all goes in different segments in the binary, but data is shared between all of them, so it takes less space than having 2-4 independent binary files.
"

Actually, this isn't true.
The code is compiled four times (for a 32/64 bit unibin), linked four times, and then all four executables are stitched together into one executable file.
With regards to disk space, Apple thought ahead, and the format supports stripping out versions of the binary you don't want to keep around.
For example, some Intel owners run tools on their machines that strip out the PPC versions of binaries to preserve disk space.
Some PowerPC owners strip out the Intel versions of the binaries, but then usually run into trouble when they try to migrate their disk to an Intel machine.
:) Stripping out a version of the binary will even keep a signed binary valid.
FYI: Apple has a very similar sort of setup for language dependent resources.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863729</id>
	<title>Is this really necessary? Or even advantageous?</title>
	<author>Tanuki64</author>
	<datestamp>1256477520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>Most package managers can automatically create a binary package out of a source package. In many cases this even resolves problems with otherwise incompatible libraries. So for whom is such a fat binary advantageous? I'd assume mostly for closed source vendors. I have nothing against closed source in general, but if I pay for software I expect at least a minimum of support. Such a fat binary does not look too user-friendly to me, even if I can strip it down to my architecture. I suppose it does not solve the problems of incompatible libraries. I will follow the responses to this article, maybe I'm overlooking something and will be convinced otherwise, but at the moment I would say: Superfluous.</htmltext>
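For what it's worth, the source-to-binary step the parent mentions really is a one-liner on the big package managers; a minimal sketch (package names hypothetical):

$ apt-get source --compile somepackage           # Debian/Ubuntu: fetch the source package and build a .deb for this machine
$ rpmbuild --rebuild somepackage-1.0-1.src.rpm   # Fedora/RHEL: rebuild a source RPM into a binary RPM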
<tokenext>Most package managers can automatically create a binary package out of a source package .
In many cases this even resolves problems with otherwise incompatible libraries .
So for whom is such a fat binary advantageous ?
I 'd assume mostly for closed source vendors .
I have nothing against closed source in general , but if I pay for software I expect at least a minimum of support .
Such a fat binary does not look too user-friendly to me .
Even if I can strip it down to my architecture .
I suppose it does not solve the problems of incompatible libraries .
I will follow the responses to this article , maybe I 'm overlooking something and will be convinced otherwise , but at the moment I would say : Superfluous .</tokentext>
<sentencetext>Most package managers can automatically create a binary package out of a source package.
In many cases this even resolves problems with otherwise incompatible libraries.
So for whom is such a fat binary advantageous?
I'd assume mostly for closed source vendors.
I have nothing against closed source in general, but if I pay for software I expect at least a minimum of support.
Such a fat binary does not look too user-friendly to me.
Even if I can strip it down to my architecture.
I suppose it does not solve the problems of incompatible libraries.
I will follow the responses to this article, maybe I'm overlooking something and will be convinced otherwise, but at the moment I would say: Superfluous.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865665</id>
	<title>Re:oh boy, just pack all archs on a .deb</title>
	<author>jipn4</author>
	<datestamp>1256495940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><i>you know, just trick the good ol' .DEB package format to include several archs, then let dpkg decide which binaries to extract.</i></p><p>That's unnecessary, since the repository manager worries about architectures: it downloads the correct .deb file for your architecture. It's better than a "fat .deb" format.</p></htmltext>
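A minimal sketch of how that resolution already works on a Debian-style system (package name hypothetical, output illustrative):

$ dpkg --print-architecture
amd64
$ apt-get install somepackage    # apt fetches somepackage_1.0_amd64.deb from the repository; no fat package needed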
<tokenext>you know , just trick the good ol ' .DEB package format to include several archs , then let dpkg decide which binaries to extract . That 's unnecessary , since the repository manager worries about architectures : it downloads the correct .deb file for your architecture .
It 's better than a " fat .deb " format .</tokentext>
<sentencetext>you know, just trick the good ol' .DEB package format to include several archs, then let dpkg decide which binaries to extract. That's unnecessary, since the repository manager worries about architectures: it downloads the correct .deb file for your architecture.
It's better than a "fat .deb" format.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863643</id>
	<title>Yippy!!</title>
	<author>Anonymous</author>
	<datestamp>1256476500000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext>Yahoo!!</htmltext>
<tokenext>Yahoo !
!</tokentext>
<sentencetext>Yahoo!
!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Anonymous</author>
	<datestamp>1256477640000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>Well, that's an important point but the author of this defends himself:  <p><div class="quote"><ul>
<li> Distributions no longer need to have separate downloads for various
     platforms. Given enough disc space, there's no reason you couldn't
     have one DVD .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS
     system, doing the right thing at boot time. You can remove all the
     confusing text from your website about "which installer is right for me?"</li>
<li> You no longer need to have separate /lib, /lib32, and /lib64 trees.</li>
<li> Third party packagers no longer have to publish multiple .deb/.rpm/etc
     for different architectures. Installers like
     <a href="http://icculus.org/mojosetup/" title="icculus.org">MojoSetup</a> [icculus.org] benefit, too.</li>
<li> A download that is largely data and not executable code, such as a
     <a href="http://icculus.org/prey/" title="icculus.org">large video game</a> [icculus.org], doesn't need
     to use disproportionate amounts of disk space and bandwidth to supply
     builds for multiple architectures. Just supply one, with a slightly
     larger binary with the otherwise unchanged hundreds of megabytes of data.</li>
<li> You no longer need to use shell scripts and flakey logic to pick the right
     binary and libraries to load. Just run it, the system chooses the best
     one to run.</li>
<li> The ELF OSABI for your system changes someday? You can still support your
     legacy users.</li>
<li> Ship a single shared library that provides bindings for a scripting
     language and not have to worry about whether the scripting language
     itself is built for the same architecture as your bindings.</li>
<li> Ship web browser plugins that work out of the box with multiple platforms.</li>
<li> Ship kernel drivers for multiple processors in one file.</li>
<li> Transition to a new architecture in incremental steps.</li>
<li> Support 64-bit and 32-bit compatibility binaries in one file.</li>
<li> No more ia32 compatibility libraries! Even if your distro doesn't make
     a complete set of FatELF binaries available, they can still provide it
     for the handful of packages you need for 99\% of 32-bit apps you want to
     run on a 64-bit system.</li>
<li> Have a CPU that can handle different byte orders? Ship one binary that
     satisfies all configurations!</li>
<li> Ship one file that works across Linux and FreeBSD (without a platform
     compatibility layer on either of them).</li>
<li> One hard drive partition can be booted on different machines with
     different CPU architectures, for development and experimentation.
     Same root file system, different kernel and CPU architecture.</li>
<li> Prepare your app on a USB stick for sneakernet, know it'll work on
     whatever Linux box you are likely to plug it into.</li>
</ul></div><p>While you may be able to claim none of those points are overly compelling and target a very small part of the population, you <i>have</i> to recognize there's more than just satisfying non-free applications. Furthermore, I think you mean to say that it's "only useful for non-open source applications", as there are tons of applications out there that are not open source but are still free of charge (like Microsoft's Express editions of Visual Studio).</p></div>
	</htmltext>
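For context, the FatELF proposal ships proof-of-concept tools for gluing per-architecture ELF files into one; a minimal sketch, assuming the fatelf-glue utility from Gordon's FatELF tree behaves as described and using hypothetical input binaries:

$ gcc -m32 -o myapp.i686 myapp.c
$ gcc -m64 -o myapp.x86_64 myapp.c
$ fatelf-glue myapp myapp.x86_64 myapp.i686    # first argument is the output file (assumed usage)
$ ./myapp                                      # a FatELF-aware kernel loads the record matching the machine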
<tokenext>Well , that 's an important point but the author of this defends himself : Distributions no longer need to have separate downloads for various platforms .
Given enough disc space , there 's no reason you could n't have one DVD .iso that installs an x86-64 , x86 , PowerPC , SPARC , and MIPS system , doing the right thing at boot time .
You can remove all the confusing text from your website about " which installer is right for me ?
" You no longer need to have separate /lib , /lib32 , and /lib64 trees .
Third party packagers no longer have to publish multiple .deb/.rpm/etc for different architectures .
Installers like MojoSetup [ icculus.org ] benefit , too .
A download that is largely data and not executable code , such as a large video game [ icculus.org ] , does n't need to use disproportionate amounts of disk space and bandwidth to supply builds for multiple architectures .
Just supply one , with a slightly larger binary with the otherwise unchanged hundreds of megabytes of data .
You no longer need to use shell scripts and flakey logic to pick the right binary and libraries to load .
Just run it , the system chooses the best one to run .
The ELF OSABI for your system changes someday ?
You can still support your legacy users .
Ship a single shared library that provides bindings for a scripting language and not have to worry about whether the scripting language itself is built for the same architecture as your bindings .
Ship web browser plugins that work out of the box with multiple platforms .
Ship kernel drivers for multiple processors in one file .
Transition to a new architecture in incremental steps .
Support 64-bit and 32-bit compatibility binaries in one file .
No more ia32 compatibility libraries !
Even if your distro does n't make a complete set of FatELF binaries available , they can still provide it for the handful of packages you need for 99 \ % of 32-bit apps you want to run on a 64-bit system .
Have a CPU that can handle different byte orders ?
Ship one binary that satisfies all configurations !
Ship one file that works across Linux and FreeBSD ( without a platform compatibility layer on either of them ) .
One hard drive partition can be booted on different machines with different CPU architectures , for development and experimentation .
Same root file system , different kernel and CPU architecture .
Prepare your app on a USB stick for sneakernet , know it 'll work on whatever Linux box you are likely to plug it into .
While you may be able to claim none of those points are overly compelling and target a very small part of the population , you have to recognize there 's more than just satisfying non-free applications .
Furthermore , I think you mean to say that it 's " only useful for non-open source applications " , as there are tons of applications out there that are not open source but are still free of charge ( like Microsoft 's Express editions of Visual Studio ) .</tokentext>
<sentencetext>Well, that's an important point but the author of this defends himself:  
 Distributions no longer need to have separate downloads for various
     platforms.
Given enough disc space, there's no reason you couldn't
     have one DVD .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS
     system, doing the right thing at boot time.
You can remove all the
     confusing text from your website about "which installer is right for me?
"
 You no longer need to have separate /lib, /lib32, and /lib64 trees.
Third party packagers no longer have to publish multiple .deb/.rpm/etc
     for different architectures.
Installers like
     MojoSetup [icculus.org] benefit, too.
A download that is largely data and not executable code, such as a
     large video game [icculus.org], doesn't need
     to use disproportionate amounts of disk space and bandwidth to supply
     builds for multiple architectures.
Just supply one, with a slightly
     larger binary with the otherwise unchanged hundreds of megabytes of data.
You no longer need to use shell scripts and flakey logic to pick the right
     binary and libraries to load.
Just run it, the system chooses the best
     one to run.
The ELF OSABI for your system changes someday?
You can still support your
     legacy users.
Ship a single shared library that provides bindings for a scripting
     language and not have to worry about whether the scripting language
     itself is built for the same architecture as your bindings.
Ship web browser plugins that work out of the box with multiple platforms.
Ship kernel drivers for multiple processors in one file.
Transition to a new architecture in incremental steps.
Support 64-bit and 32-bit compatibility binaries in one file.
No more ia32 compatibility libraries!
Even if your distro doesn't make
     a complete set of FatELF binaries available, they can still provide it
     for the handful of packages you need for 99\% of 32-bit apps you want to
     run on a 64-bit system.
Have a CPU that can handle different byte orders?
Ship one binary that
     satisfies all configurations!
Ship one file that works across Linux and FreeBSD (without a platform
     compatibility layer on either of them).
One hard drive partition can be booted on different machines with
     different CPU architectures, for development and experimentation.
Same root file system, different kernel and CPU architecture.
Prepare your app on a USB stick for sneakernet, know it'll work on
     whatever Linux box you are likely to plug it into.
While you may be able to claim none of those points are overly compelling and target a very small part of the population, you have to recognize there's more than just satisfying non-free applications.
Furthermore, I think you mean to say that it's "only useful for non-open source applications", as there are tons of applications out there that are not open source but are still free of charge (like Microsoft's Express editions of Visual Studio).
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809</id>
	<title>Re:Apple dropped it</title>
	<author>PenguSven</author>
	<datestamp>1256478240000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><blockquote><div><p> Ask PPC owners that want to get the latest version of OS X.</p></div></blockquote><p>

No, Apple didn't drop support for Universal Binaries. Most apps available for Mac today are universal binaries and work on PPC or Intel macs, and in some cases support PPC 32, PPC 64, Intel 32 and Intel 64.

Just because a new OS doesn't support an older CPU architecture doesn't mean the functionality for Universal or "Fat" binaries is not supported.</p></div>
	</htmltext>
<tokenext>Ask PPC owners that want to get the latest version of OS X . No , Apple did n't drop support for Universal Binaries .
Most apps available for Mac today are universal binaries and work on PPC or Intel macs , and in some cases support PPC 32 , PPC 64 , Intel 32 and Intel 64 .
Just because a new OS does n't support an older CPU architecture does n't mean the functionality for Universal or " Fat " binaries is not supported .</tokentext>
<sentencetext> Ask PPC owners that want to get the latest version of OS X.

No, Apple didn't drop support for Universal Binaries.
Most apps available for Mac today are universal binaries and work on PPC or Intel macs, and in some cases support PPC 32, PPC 64, Intel 32 and Intel 64.
Just because a new OS doesn't support an older CPU architecture doesn't mean the functionality for Universal or "Fat" binaries is not supported.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863673</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867935</id>
	<title>SOHO User</title>
	<author>Vertana</author>
	<datestamp>1256475840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>How many non-technical people know the difference between 64-bit and 32-bit CPUs? Not a whole lot, if any. For those people, it's a bit daunting when their Linux friend/son/granddaughter/whatever tells them it'd be a great choice and the first thing they see is "Which download? 32 or 64?". Just ship the .ISO with both 32-bit and 64-bit versions and a script that tests whether the CPU is 64-bit capable at boot time. Or how about those people who don't know whether their Mac has an Intel or PPC processor? Same thing: a script at boot time to determine it. If it's just two architectures, the overhead could be well worth it.</p></htmltext>
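The boot-time test suggested here is about one line of shell on x86; a minimal sketch, assuming a Linux environment where /proc/cpuinfo is available:

#!/bin/sh
# the 'lm' (long mode) CPU flag means the processor can run 64-bit code
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit capable: use the x86_64 tree"
else
    echo "32-bit only: use the i386 tree"
fi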
<tokenext>How many non-technical people know the difference between 64-bit and 32-bit CPUs ?
Not a whole lot , if any .
For those people , it 's a bit daunting when their Linux friend/son/granddaughter/whatever tells them it 'd be a great choice and the first thing they see is " Which download ?
32 or 64 ? " .
Just ship the .ISO with both 32-bit and 64-bit versions and a script that tests whether the CPU is 64-bit capable at boot time .
Or how about those people who do n't know whether their Mac has an Intel or PPC processor ?
Same thing : a script at boot time to determine it .
If it 's just two architectures , the overhead could be well worth it .</tokentext>
<sentencetext>How many non-technical people know the difference between 64-bit and 32-bit CPUs?
Not a whole lot, if any.
For those people, it's a bit daunting when their Linux friend/son/granddaughter/whatever tells them it'd be a great choice and the first thing they see is "Which download?
32 or 64?".
Just ship the .ISO with both 32-bit and 64-bit versions and a script that tests whether the CPU is 64-bit capable at boot time.
Or how about those people who don't know whether their Mac has an Intel or PPC processor?
Same thing: a script at boot time to determine it.
If it's just two architectures, the overhead could be well worth it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863859</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Anonymous</author>
	<datestamp>1256479020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And he even left another useful one off the list:</p><p>You can have one application binary sitting on a network drive somewhere and start it up on any client machine, regardless of architecture.</p></htmltext>
<tokenext>And he even left another useful one off the list : You can have one application binary sitting on a network drive somewhere and start it up on any client machine , regardless of architecture .</tokentext>
<sentencetext>And he even left another useful one off the list: You can have one application binary sitting on a network drive somewhere and start it up on any client machine, regardless of architecture.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867583</id>
	<title>Re:Apple Universal Binary is kinda of a joke.</title>
	<author>Anonymous</author>
	<datestamp>1256471400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Apple's fat binaries are just the Mach-O binaries for different architectures concatenated together with a header containing the offsets where the code for each architecture begins in the file. If you know what you're doing, you can actually extract the Mach-O binaries from the fat binary and distribute them separately (I've done it with a hex editor, but I'm sure there are plenty of other ways to do it).</p></htmltext>
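The header in question is tiny and easy to spot; a minimal sketch, reusing the iChat binary from earlier in the discussion (output illustrative):

$ xxd -l 8 ./iChat
00000000: cafe babe 0000 0002                      ........
# 0xcafebabe is the big-endian FAT_MAGIC, followed by the number of
# architectures; per-arch records (cputype, file offset, size) come next,
# which is exactly what lets a hex editor carve out each embedded Mach-O.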
<tokenext>Apple 's fat binaries are just the Mach-O binaries for different architectures concatenated together with a header containing the offsets where the code for each architecture begins in the file .
If you know what you 're doing , you can actually extract the Mach-O binaries from the fat binary and distribute them separately ( I 've done it with a hex editor , but I 'm sure there are plenty of other ways to do it ) .</tokentext>
<sentencetext>Apple's fat binaries are just the Mach-O binaries for different architectures concatenated together with a header containing the offsets where the code for each architecture begins in the file.
If you know what you're doing, you can actually extract the Mach-O binaries from the fat binary and distribute them separately (I've done it with a hex editor, but I'm sure there are plenty of other ways to do it).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865373</id>
	<title>Re:please stop trying to turn Linux into OS X</title>
	<author>99BottlesOfBeerInMyF</author>
	<datestamp>1256494080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine.</p></div><p>I wish. I installed a MediaWiki server a few weeks ago. Gee it's OSS but not in the package manager. Well, let's spend half an hour futzing with the CLI again. Oh, and commercial, closed source software needed to do my job, not in any repositories even if it is freeware because there's no easy way to host those repositories and download from a Web site across package managers. Again, I'm running binary installers and trying to pick from eight of them for different architectures. Gee it sure would be nice to install an application on a network drive and let the whole lab get to it from their laptops when they're in. Oh, but then I need to install multiple copies and hope users know what architecture they're using. Installation on a flash drive that I can stick into multiple machines, no luck there either. Want to upgrade my laptop to a new one with a new architecture, oh joy, I get to re-download and dig up registration codes for all my applications that aren't in the repository because I can't transfer them because they don't support my new machine. </p><p><div class="quote"><p>OS X needs fat binaries because it doesn't have package management.</p></div><p>Linux needs fat binaries both because its package management is insufficient and because it needs them to support other use cases unrelated to package management.</p><p><div class="quote"><p> I wish people would stop trying to bring OS X (mis)features over to Linux. If I wanted to use OS X, I'd already be using it.</p></div><p>I wish Linux devs would get over their Not Invented Here syndrome and start to copy the really cool stuff from OS X that I miss when I'm using my Linux machine. I wish Apple would copy some more of the stuff from Linux too, but at least they're steadily pulling in more and more and every release those deficiencies get smaller. A whole decade after OS X added system services, Linux still has nothing to fill that hole. That's just sad.</p></div>
	</htmltext>
<tokenext>Linux does n't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine . I wish .
I installed a MediaWiki server a few weeks ago .
Gee it 's OSS but not in the package manager .
Well , let 's spend half an hour futzing with the CLI again .
Oh , and commercial , closed source software needed to do my job , not in any repositories even if it is freeware because there 's no easy way to host those repositories and download from a Web site across package managers .
Again , I 'm running binary installers and trying to pick from eight of them for different architectures .
Gee it sure would be nice to install an application on a network drive and let the whole lab get to it from their laptops when they 're in .
Oh , but then I need to install multiple copies and hope users know what architecture they 're using .
Installation on a flash drive that I can stick into multiple machines , no luck there either .
Want to upgrade my laptop to a new one with a new architecture , oh joy I get to re-download and dig up registration codes for all my applications that are n't in the repository because I ca n't transfer them because they do n't support my new machine .
OS X needs fat binaries because it does n't have package management . Linux needs fat binaries both because its package management is insufficient and because it needs them to support other use cases unrelated to package management .
I wish people would stop trying to bring OS X ( mis ) features over to Linux .
If I wanted to use OS X , I 'd already be using it . I wish Linux devs would get over their Not Invented Here syndrome and start to copy the really cool stuff from OS X that I miss when I 'm using my Linux machine .
I wish Apple would copy some more of the stuff from Linux too , but at least they 're steadily pulling in more and more and every release those deficiencies get smaller .
A whole decade after OS X added system services , Linux still has nothing to fill that hole .
That 's just sad .</tokentext>
<sentencetext>Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine. I wish.
I installed a MediaWiki server a few weeks ago.
Gee it's OSS but not in the package manager.
Well, let's spend half an hour futzing with the CLI again.
Oh, and commercial, closed source software needed to do my job, not in any repositories even if it is freeware because there's no easy way to host those repositories and download from a Web site across package managers.
Again, I'm running binary installers and trying to pick from eight of them for different architectures.
Gee it sure would be nice to install an application on a network drive and let the whole lab get to it from their laptops when they're in.
Oh, but then I need to install multiple copies and hope users know what architecture they're using.
Installation on a flash drive that I can stick into multiple machines, no luck there either.
Want to upgrade my laptop to a new one with a new architecture, oh joy I get to re-download and dig up registration codes for all my applications that aren't in the repository because I can't transfer them because they don't support my new machine.
OS X needs fat binaries because it doesn't have package management. Linux needs fat binaries both because its package management is insufficient and because it needs them to support other use cases unrelated to package management.
I wish people would stop trying to bring OS X (mis)features over to Linux.
If I wanted to use OS X, I'd already be using it. I wish Linux devs would get over their Not Invented Here syndrome and start to copy the really cool stuff from OS X that I miss when I'm using my Linux machine.
I wish Apple would copy some more of the stuff from Linux too, but at least they're steadily pulling in more and more and every release those deficiencies get smaller.
A whole decade after OS X added system services, Linux still has nothing to fill that hole.
That's just sad.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864939</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867783</id>
	<title>Re:Only useful for non-free applications</title>
	<author>ToasterMonkey</author>
	<datestamp>1256473680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p><div class="quote"><p>They clearly meant free, not gratis. Gratis is such a weak feature of software that I don't think it deserves to share a meaning with free.</p></div><p>Gratis is THE reason "free software" is as popular as it is today. I'm deadly serious. It's the best marketing free software has because John Q Public does not give a rat's ass about the other meaning of that word unless it's used in the context of people. The marketing implications alone are more than enough.</p></div>
	</htmltext>
<tokenext>They clearly meant free , not gratis .
Gratis is such a weak feature of software that I do n't think it deserves to share a meaning with free . Gratis is THE reason " free software " is as popular as it is today .
I 'm deadly serious .
It 's the best marketing free software has because John Q Public does not give a rat 's ass about the other meaning of that word unless it 's used in the context of people .
The marketing implications alone are more than enough .</tokentext>
<sentencetext>They clearly meant free, not gratis.
Gratis is such a weak feature of software that I don't think it deserves to share a meaning with free. Gratis is THE reason "free software" is as popular as it is today.
I'm deadly serious.
It's the best marketing free software has because John Q Public does not give a rat's ass about the other meaning of that word unless it's used in the context of people.
The marketing implications alone are more than enough.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867797</id>
	<title>Re:</title>
	<author>clint999</author>
	<datestamp>1256473800000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
<htmltext><i>If you have access to the source, you can always compile a version for your platform. The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor. Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac. Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.</i></htmltext>
<tokenext>If you have access to the source , you can always compile a version for your platform .
The 'fat binary ' principle is only useful for non-free applications , where the end-user ca n't compile the application himself and has to use the binary provided by the vendor . Since most apps for Linux are free and the source is available , this feature is n't as useful as it is on the Mac .
Not that it should n't be created , but it makes sense to me why it took a while before someone started developing this for Linux .</tokentext>
<sentencetext>If you have access to the source, you can always compile a version for your platform.
The 'fat binary' principle is only useful for non-free applications, where the end-user can't compile the application himself and has to use the binary provided by the vendor. Since most apps for Linux are free and the source is available, this feature isn't as useful as it is on the Mac.
Not that it shouldn't be created, but it makes sense to me why it took a while before someone started developing this for Linux.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863671</id>
	<title>We really care</title>
	<author>Anonymous</author>
	<datestamp>1256476980000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>Seriously, we care. I mean, really. This would so make my life easier, because I run the same binaries on everything, like, uh, well, shit. Why do I care again? If it could take care of the library problem instead, this would be a "good thing"</p></htmltext>
<tokenext>Seriously , we care .
I mean , really .
This would so make my life easier , because I run the same binaries on everything , like , uh , well , shit .
Why do I care again ?
If it could take care of the library problem instead , this would be a " good thing "</tokentext>
<sentencetext>Seriously, we care.
I mean, really.
This would so make my life easier, because I run the same binaries on everything, like, uh, well, shit.
Why do I care again?
If it could take care of the library problem instead, this would be a "good thing"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864939</id>
	<title>please stop trying to turn Linux into OS X</title>
	<author>jipn4</author>
	<datestamp>1256489640000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine.</p><p>OS X needs fat binaries because it doesn't have package management.  I wish people would stop trying to bring OS X (mis)features over to Linux.  If I wanted to use OS X, I'd already be using it.</p></htmltext>
<tokenext>Linux does n't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine . OS X needs fat binaries because it does n't have package management .
I wish people would stop trying to bring OS X ( mis ) features over to Linux .
If I wanted to use OS X , I 'd already be using it .</tokentext>
<sentencetext>Linux doesn't need fat binaries because the package manager automatically installs the binaries that are appropriate for the machine. OS X needs fat binaries because it doesn't have package management.
I wish people would stop trying to bring OS X (mis)features over to Linux.
If I wanted to use OS X, I'd already be using it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864325</id>
	<title>Re:Not scalable</title>
	<author>Jesus\_666</author>
	<datestamp>1256483340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Have the net-installer be fat. That way people can just download one ISO without having to worry about whether it's compatible with their system. The installer determines which architecture it runs on (optionally fine-tunable so you can install IA32 on an AMD64 system) and automagically fetches the correct packages from the server.<br>
<br>
For regular installation CDs/DVDs this is a bit trickier but one could either include multiple package trees (space-intensive) or have all packages on the disc be fat and strip them down to the appropriate architecture(s) upon installation (less space-intensive but slower).<br>
<br>
By all means, still offer architecture-specific installation media, but do offer universal ISOs so people can just toss the OS on without having to worry about what architecture they have. Remember, desktop Linux is considered something worthwhile, and asking nontechnical users to decide whether they want installation media for IA32, AMD64 or maybe PPC64 is not beginner-friendly. Assuming that IA32 is correct might backfire, too, as ARM netbooks are beginning to appear on the market.<br>
<br>
If Linux distros want to impress the casual user, "much easier to install than Windows 7" would be a good way to start.</htmltext>
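A minimal sketch of the fat net-installer's selection step under those assumptions (mirror URL and tree names hypothetical):

#!/bin/sh
# map the running machine onto a package tree; let the user override via $ARCH
case "${ARCH:-$(uname -m)}" in
    x86_64)    tree=amd64 ;;
    i?86)      tree=ia32 ;;
    ppc64)     tree=ppc64 ;;
    arm*)      tree=armel ;;
    *)         echo "unknown architecture; please pick a tree manually" >&2; exit 1 ;;
esac
echo "fetching packages from http://mirror.example.org/dists/$tree/"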
<tokenext>Have the net-installer be fat .
That way people can just download one ISO without having to worry about whether it 's compatible with their system .
The installer determines which architecture it runs on ( optionally fine-tunable so you can install IA32 on an AMD64 system ) and automagically fetches the correct packages from the server .
For regular installation CDs/DVDs this is a bit trickier but one could either include multiple package trees ( space-intensive ) or have all packages on the disc be fat and strip them down to the appropriate architecture ( s ) upon installation ( less space-intensive but slower ) .
By all means , still offer architecture-specific installation media but do offer universal ISOs so people can just toss the OS on without having to worry about what architecture they have .
Remember , desktop Linux is considered something worthwhile , and asking nontechnical users to decide whether they want installation media for IA32 , AMD64 or maybe PPC64 is not beginner-friendly .
Assuming that IA32 is correct might backfire , too , as ARM netbooks are beginning to appear on the market .
If Linux distros want to impress the casual user , " much easier to install than Windows 7 " would be a good way to start .</tokentext>
<sentencetext>Have the net-installer be fat.
That way people can just download one ISO without having to worry about whether it's compatible with their system.
The installer determines which architecture it runs on (optionally fine-tunable so you can install IA32 on an AMD64 system) and automagically fetches the correct packages from the server.
For regular installation CDs/DVDs this is a bit trickier but one could either include multiple package trees (space-intensive) or have all packages on the disc be fat and strip them down to the appropriate architecture(s) upon installation (less space-intensive but slower).
By all means, still offer architecture-specific installation media but do offer universal ISOs so people can just toss the OS on without having to worry about what architecture they have.
Remember, desktop Linux is considered something worthwhile, and asking nontechnical users to decide whether they want installation media for IA32, AMD64 or maybe PPC64 is not beginner-friendly.
Assuming that IA32 is correct might backfire, too, as ARM netbooks are beginning to appear on the market.
If Linux distros want to impress the casual user, "much easier to install than Windows 7" would be a good way to start.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29873169</id>
	<title>Why?</title>
	<author>BumpyCarrot</author>
	<datestamp>1256575620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Binaries are only ever useful when compiled for the user's distro. Given that this usually happens at package level (at least for distros that even bother with package management), and that those packages are often platform-specific, I don't see what problem this would solve.  Perhaps if there was a distro that was definitively "Linux" that had the userbase to support it.  But then any candidate for such a role already has a package management system.</htmltext>
<tokenext>Binaries are only ever useful when compiled for the user 's distro .
Given that this usually happens at package level ( at least for distros that even bother with package management ) , and that those packages are often platform-specific , I do n't see what problem this would solve .
Perhaps if there was a distro that was definitively " Linux " that had the userbase to support it .
But then any candidate for such a role already has a package management system .</tokentext>
<sentencetext>Binaries are only ever useful when compiled for the user's distro.
Given that this usually happens at package level (at least for distros that even bother with package management), and that those packages are often platform-specific, I don't see what problem this would solve.
Perhaps if there was a distro that was definitively "Linux" that had the userbase to support it.
But then any candidate for such a role already has a package management system.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863783</id>
	<title>It does not</title>
	<author>Anonymous</author>
	<datestamp>1256478000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>It does not allow a "single binary file to run natively" on several platforms. All a Universal Binary is, is a bunch of precompiled binaries, each of which runs on its particular platform, in a folder with the .app extension. Very convenient for an end user, but it takes a lot of room on the hard disk.</p></htmltext>
<tokenext>It does not allow a " single binary file to run natively " on several platforms .
All a Universal Binary is , is a bunch of precompiled binaries , each of which runs on its particular platform , in a folder with the .app extension .
Very convenient for an end user , but it takes a lot of room on the hard disk .</tokentext>
<sentencetext>It does not allow a "single binary file to run natively" on several platforms.
All a Universal Binary is, is a bunch of precompiled binaries, each of which runs on its particular platform, in a folder with the .app extension.
Very convenient for an end user, but it takes a lot of room on the hard disk.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864049</id>
	<title>Yet another unnecessary archive format</title>
	<author>bit01</author>
	<datestamp>1256480940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Hmmm, yet another unnecessary, incompatible, redundant archive format requiring yet more tools and libraries to deal with.</p><p>What's wrong with putting whatever flavor of binary you want in a tar.gz archive, zip archive or folder and having the system smart enough to pick the right one (using the existing, standard file IDs and tools) when you execute the archive or folder? Yes, I realize the system needs to map pages while executing, but for archives that is trivially dealt with by extracting before executing.</p><p>The amount of fuzzy, shallow, magical thinking that happens with software is just amazing. Please, if you insist on reinventing the wheel, at least have the good sense to <em>think</em> about what you are doing and stop assuming that giving something a new name necessitates creating an entire new software infrastructure that will unnecessarily create complexity and problems for large numbers of people.</p><p>---</p><p> <em>For the copyright bargain to be valid all DRM'ed works should lose copyright.</em> </p></htmltext>
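The folder variant sketched here needs only a trivial launcher; a minimal sketch, assuming per-architecture subdirectories inside the unpacked archive (all names hypothetical):

#!/bin/sh
# run.sh at the top of the unpacked folder: hand off to the binary built for this machine
exec "$(dirname "$0")/bin-$(uname -m)/myapp" "$@"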
<tokenext>Hmmm , yet another unnecessary , incompatible , redundant archive format requiring yet more tools and libraries to deal with . What 's wrong with putting whatever flavor of binary you want in a tar.gz archive , zip archive or folder and having the system smart enough to pick the right one ( using the existing , standard file IDs and tools ) when you execute the archive or folder ?
Yes , I realize the system needs to map pages while executing , but for archives that is trivially dealt with by extracting before executing . The amount of fuzzy , shallow , magical thinking that happens with software is just amazing .
Please , if you insist on reinventing the wheel , at least have the good sense to think about what you are doing and stop assuming that giving something a new name necessitates creating an entire new software infrastructure that will unnecessarily create complexity and problems for large numbers of people . --- For the copyright bargain to be valid all DRM'ed works should lose copyright .</tokentext>
<sentencetext>Hmmm, yet another unnecessary, incompatible, redundant archive format requiring yet more tools and libraries to deal with. What's wrong with putting whatever flavor of binary you want in a tar.gz archive, zip archive or folder and having the system smart enough to pick the right one (using the existing, standard file IDs and tools) when you execute the archive or folder?
Yes, I realize the system needs to map pages while executing, but for archives that is trivially dealt with by extracting before executing. The amount of fuzzy, shallow, magical thinking that happens with software is just amazing.
Please, if you insist on reinventing the wheel, at least have the good sense to think about what you are doing and stop assuming that giving something a new name necessitates creating an entire new software infrastructure that will unnecessarily create complexity and problems for large numbers of people. --- For the copyright bargain to be valid all DRM'ed works should lose copyright. </sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815</id>
	<title>Not scalable</title>
	<author>gdshaw</author>
	<datestamp>1256478300000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>To a first approximation, the size of the binary will increase in proportion to the number of architectures supported.</p><p>This is something you might decide to ignore if you are only supporting two architectures.  Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to.  I really don't think this idea makes sense.</p><p>(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?)</p></htmltext>
<tokenext>To a first approximation , the size of the binary will increase in proportion to the number of architectures supported . This is something you might decide to ignore if you are only supporting two architectures .
Debian Lenny supports twelve architectures , and I 've lost count of how many the Linux kernel itself has been ported to .
I really do n't think this idea makes sense .
( Besides , what 's wrong with simply shipping two or more binaries in the same package or tarball ?
)</tokentext>
<sentencetext>To a first approximation, the size of the binary will increase in proportion to the number of architectures supported. This is something you might decide to ignore if you are only supporting two architectures.
Debian Lenny supports twelve architectures, and I've lost count of how many the Linux kernel itself has been ported to.
I really don't think this idea makes sense.
(Besides, what's wrong with simply shipping two or more binaries in the same package or tarball?
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864929</id>
	<title>Re:Only useful for non-free applications</title>
	<author>Bazer</author>
	<datestamp>1256489520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>In my opinion, it's a solution in search of a problem. He's proposing a system where on each and every update every client has to download a binary version for all supported platforms. Let's calculate how that would affect the binary size of my /usr/bin and /usr/lib. For the sake of argument, let's say that the binary size for all 32-bit architectures is half the size of their 64-bit version and every distribution ships the same binaries:</p><ul><li>Fedora 11 - x86\_64, 900MB - 100\%</li><li>Fedora 11 - x86\_64, i386, ppc - 2.2 GB - 250\%</li><li>Debian - alpha, amd64, arm, armel, hppa, i386, ia64, mips, mipsel, powerpc, sparc, s390 - 7.2 GB - 800\%</li></ul><p>That's <em>without</em> debugging symbols for each arch. You do the math for other distributions. Think of the bandwidth cost that adds to every update.</p><p>This problem has already been solved by package managers, and those are far from the weak link he makes them out to be. Moving the architecture detection from the installation phase to the run phase will only add to the problem. Instead of relying on my system vendor's package manager, I have to rely on every application vendor to do the right thing.</p></htmltext>
<tokenext>In my opinion , it 's a solution in search of a problem .
He 's proposing a system where on each and every update every client has to download a binary version for all supported platforms .
Let 's calculate how that would affect the binary size of my /usr/bin and /usr/lib .
For the sake of argument , let 's say that the binary size for all 32-bit architectures is half the size of their 64-bit version and every distribution ships the same binaries : Fedora 11 - x86 \ _64 , 900MB - 100 \ % Fedora 11 - x86 \ _64 , i386 , ppc - 2.2 GB - 250 \ % Debian - alpha , amd64 , arm , armel , hppa , i386 , ia64 , mips , mipsel , powerpc , sparc , s390 - 7.2 GB - 800 \ % That 's without debugging symbols for each arch .
You do the math for other distributions .
Think of the bandwidth cost that adds to every update . This problem has already been solved by package managers , and those are far from the weak link he makes them out to be .
Moving the architecture detection from the installation phase to the run phase will only add to the problem .
Instead of relying on my system vendor 's package manager , I have to rely on every application vendor to do the right thing .</tokentext>
<sentencetext>In my opinion, it's a solution in search of a problem.
He's proposing a system where on each and every update every client has to download a binary version for all supported platforms.
Let's calculate how that would affect the binary size of my /usr/bin and /usr/lib.
For the sake of argument, let's say that the binary size for all 32-bit architectures is half the size of their 64-bit version and every distribution ships the same binaries: Fedora 11 - x86\_64, 900MB - 100\%; Fedora 11 - x86\_64, i386, ppc - 2.2 GB - 250\%; Debian - alpha, amd64, arm, armel, hppa, i386, ia64, mips, mipsel, powerpc, sparc, s390 - 7.2 GB - 800\%. That's without debugging symbols for each arch.
You do the math for other distributions.
Think of the bandwidth cost that adds to every update. This problem has already been solved by package managers, and those are far from the weak link he makes them out to be.
Moving the architecture detection from the installation phase to the run phase will only add to the problem.
Instead of relying on my system vendor's package manager, I have to rely on every application vendor to do the right thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864313
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864421
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867457
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863673
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863911
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864791
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864047
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864827
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863827
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866761
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864419
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864245
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863673
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863901
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866387
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867583
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864225
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863703
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866667
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863701
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865373
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864939
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29870211
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863719
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864861
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863737
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867085
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867823
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864801
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29880133
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863777
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864929
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864045
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864735
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867515
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864325
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29883225
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864369
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865665
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863859
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867409
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865283
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867783
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_25_0450232_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864079
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863679
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863741
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864047
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864369
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29883225
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863887
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864791
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867783
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864929
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863859
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866387
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864801
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863911
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864313
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864079
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863737
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864861
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863923
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864121
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867515
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867085
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29880133
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863815
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864225
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864325
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864045
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864421
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863645
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863901
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863777
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863673
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863809
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867457
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864245
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864419
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866761
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29869115
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863783
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864835
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29868163
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863671
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863893
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863659
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863719
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863701
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866667
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863703
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864939
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865373
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867935
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864959
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863717
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864735
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863825
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29870211
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867823
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867583
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863827
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864827
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29866129
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29864251
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_25_0450232.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29863811
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865665
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29867409
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_25_0450232.29865283
</commentlist>
</conversation>
