<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_16_1712250</id>
	<title>New Binary Diffing Algorithm Announced By Google</title>
	<author>timothy</author>
	<datestamp>1247764800000</datestamp>
	<htmltext><a href="http://rbheerampgmailcom/" rel="nofollow">bheer</a> writes <i>"Today Google's Open-Source Chromium project <a href="http://blog.chromium.org/2009/07/smaller-is-faster-and-safer-too.html">announced</a> a <a href="http://dev.chromium.org/developers/design-documents/software-updates-courgette">new compression technique called Courgette</a> geared towards distributing really small updates. Courgette achieves smaller diffs (about 9x in one example) than standard binary-diffing algorithms like bsdiff by disassembling the code and sending the assembler diffs over the wire. This, the Chromium devs say, will allow them to send smaller, more frequent updates, making users more secure. Since this will be released as open source, it should make distributing updates a lot easier for the open-source community."</i></htmltext>
<tokentext>bheer writes " Today Google 's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates .
Courgette achieves smaller diffs ( about 9x in one example ) than standard binary-diffing algorithms like bsdiff by disassembling the code and sending the assembler diffs over the wire .
This , the Chromium devs say , will allow them to send smaller , more frequent updates , making users more secure .
Since this will be released as open source , it should make distributing updates a lot easier for the open-source community .
"</tokentext>
<sentencetext>bheer writes "Today Google's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates.
Courgette achieves smaller diffs (about 9x in one example) than standard binary-diffing algorithms like bsdiff by disassembling the code and sending the assembler diffs over the wire.
This, the Chromium devs say, will allow them to send smaller, more frequent updates, making users more secure.
Since this will be released as open source, it should make distributing updates a lot easier for the open-source community.
"</sentencetext>
</article>
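The summary's transform-then-diff idea can be sketched outside Courgette itself. This is a hypothetical illustration (made-up mnemonics, not Courgette's actual disassembler): once addresses are lifted to symbolic labels, a one-instruction insertion stays a one-line diff, while in the raw form the same insertion also shifts every later address.

```python
import difflib

# Hypothetical mnemonics for illustration -- not Courgette's real
# disassembler, just the shape of the idea: diff in a transformed domain
# where addresses are symbolic, so one insertion stays one insertion.

old_asm = ["push ebp", "call L1", "jmp L2", "ret"]
new_asm = ["push ebp", "nop", "call L1", "jmp L2", "ret"]  # one insertion
sym_diff = [d for d in difflib.ndiff(old_asm, new_asm) if d[0] in "+-"]

# With concrete addresses, the same insertion shifts every later target,
# so the raw diff also touches lines that are semantically unchanged.
old_raw = ["push ebp", "call 0x1004", "jmp 0x1008", "ret"]
new_raw = ["push ebp", "nop", "call 0x1005", "jmp 0x1009", "ret"]
raw_diff = [d for d in difflib.ndiff(old_raw, new_raw) if d[0] in "+-"]

print(len(sym_diff), len(raw_diff))  # the symbolic diff is smaller
```

The real system then reassembles on the client, so only the compact symbolic diff crosses the wire.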
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720635</id>
	<title>Re:Bad explanation</title>
	<author>CarpetShark</author>
	<datestamp>1247773380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><blockquote><div><p>return = get\_Magna\_Carta\_text()<br>return unzip(compressed\_data[1:]</p></div></blockquote><p>Most of your point is good, but I suspect that, no matter what language you're using, ONE of these will give you a syntax error<nobr> <wbr></nobr>;)</p>
	</htmltext>
<tokentext>return = get \ _Magna \ _Carta \ _text ( ) return unzip ( compressed \ _data [ 1 : ] Most of your point is good , but I suspect that , no matter what language you 're using , ONE of these will give you a syntax error ; )</tokentext>
<sentencetext>return = get\_Magna\_Carta\_text()
return unzip(compressed\_data[1:]
Most of your point is good, but I suspect that, no matter what language you're using, ONE of these will give you a syntax error ;)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719867</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>Anonymous</author>
	<datestamp>1247770500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>It's an interesting approach to a problem that should never have arisen. Why does everything have to be thrown into a 10MB dll? People are browsing on computers, not wristwatches, and there's no reason to abandon modular design to tangle things together for trivial performance gains. Google loves throwing its massive revenues at building and maintaining bad code that's obsessively optimized. Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop, fancy transitions, live editing... all of this is standard fare in a desktop app but Google insists on coding it (inevitably ugly) to run in the browser. I should have expected the same thing from Chrome.</htmltext>
<tokentext>It 's an interesting approach to a problem that should never have arisen .
Why does everything have to be thrown into a 10MB dll ?
People are browsing on computers , not wristwatches , and there 's no reason to abandon modular design to tangle things together for trivial performance gains .
Google loves throwing its massive revenues at building and maintaining bad code that 's obsessively optimized .
Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop , fancy transitions , live editing... all of this is standard fare in a desktop app but Google insists on coding it ( inevitably ugly ) to run in the browser .
I should have expected the same thing from Chrome .</tokentext>
<sentencetext>It's an interesting approach to a problem that should never have arisen.
Why does everything have to be thrown into a 10MB dll?
People are browsing on computers, not wristwatches, and there's no reason to abandon modular design to tangle things together for trivial performance gains.
Google loves throwing its massive revenues at building and maintaining bad code that's obsessively optimized.
Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop, fancy transitions, live editing... all of this is standard fare in a desktop app but Google insists on coding it (inevitably ugly) to run in the browser.
I should have expected the same thing from Chrome.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720709</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>xenocide2</author>
	<datestamp>1247773620000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>I've been reviewing various proposals like this, and basically, it's a tradeoff mirrors don't want. This sorta stuff has been proposed for ages. I listened to a recording of the author of rsync give an introduction to the algorithm and program, and among the questions was "is this suitable for<nobr> <wbr></nobr>.deb?". The answer was "Not unless the archive is completely recompressed with gzip patched to be rsync compatible".</p><p>Eventually that patch landed, and you could conceivably do this. Except it expands the entire archive by 1-2 percent. And there's a lot of CPU overhead where there was none before.</p><p>Then someone cooked up zsync. It's the same thing as rsync except with a precalculated set of results for the server side. This looked like a winner. But someone else really wants LZMA compressed packages to fit more onto pressed CDs. So now we're at a fundamental impasse: optimize for the distribution of install media to new users, or optimize for the distribution of updates to existing users.</p><p>The best resolution I've seen is to use LZMA compression on the entire CD volume, but that requires the kernel to get their ass in gear and allow yet another compression in the kernel. That may have finally happened, I haven't checked recently. But generally LZMA requires more RAM to operate, so that could raise the minimum requirements on installs.</p><p>In short, it's a balancing act of effort, bandwidth, CPU and RAM. What works for some may not work for all.</p></htmltext>
<tokentext>I 've been reviewing various proposals like this , and basically , it 's a tradeoff mirrors do n't want .
This sorta stuff has been proposed for ages .
I listened to a recording of the author of rsync give an introduction to the algorithm and program , and among the questions was " is this suitable for .deb ? " .
The answer was " Not unless the archive is completely recompressed with gzip patched to be rsync compatible " .
Eventually that patch landed , and you could conceivably do this .
Except it expands the entire archive by 1-2 percent .
And there 's a lot of CPU overhead where there was none before .
Then someone cooked up zsync .
It 's the same thing as rsync except with a precalculated set of results for the server side .
This looked like a winner .
But someone else really wants LZMA compressed packages to fit more onto pressed CDs .
So now we 're at a fundamental impasse : optimize for the distribution of install media to new users , or optimize for the distribution of updates to existing users .
The best resolution I 've seen is to use LZMA compression on the entire CD volume , but that requires the kernel to get their ass in gear and allow yet another compression in the kernel .
That may have finally happened , I have n't checked recently .
But generally LZMA requires more RAM to operate , so that could raise the minimum requirements on installs .
In short , it 's a balancing act of effort , bandwidth , CPU and RAM .
What works for some may not work for all .</tokentext>
<sentencetext>I've been reviewing various proposals like this, and basically, it's a tradeoff mirrors don't want.
This sorta stuff has been proposed for ages.
I listened to a recording of the author of rsync give an introduction to the algorithm and program, and among the questions was "is this suitable for .deb?".
The answer was "Not unless the archive is completely recompressed with gzip patched to be rsync compatible".
Eventually that patch landed, and you could conceivably do this.
Except it expands the entire archive by 1-2 percent.
And there's a lot of CPU overhead where there was none before.
Then someone cooked up zsync.
It's the same thing as rsync except with a precalculated set of results for the server side.
This looked like a winner.
But someone else really wants LZMA compressed packages to fit more onto pressed CDs.
So now we're at a fundamental impasse: optimize for the distribution of install media to new users, or optimize for the distribution of updates to existing users.
The best resolution I've seen is to use LZMA compression on the entire CD volume, but that requires the kernel to get their ass in gear and allow yet another compression in the kernel.
That may have finally happened, I haven't checked recently.
But generally LZMA requires more RAM to operate, so that could raise the minimum requirements on installs.
In short, it's a balancing act of effort, bandwidth, CPU and RAM.
What works for some may not work for all.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
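The zsync approach described above — precomputing checksums server-side so each client can decide locally which blocks it needs — can be sketched roughly as follows. This is a simplification to fixed, aligned blocks; real zsync also uses rsync-style rolling checksums so matches can occur at arbitrary byte offsets.

```python
import hashlib

# Rough sketch of the zsync idea: the server publishes per-block
# checksums once, and clients compare against what they already have.
# Simplified: fixed aligned blocks only (real zsync rolls the window).

BLOCK = 4

def block_sums(data):
    # Checksum each fixed-size block of the server's file.
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def blocks_to_fetch(server_sums, local_data):
    # Any server block whose checksum the client can't reproduce
    # locally must be downloaded.
    have = set(block_sums(local_data))
    return [i for i, s in enumerate(server_sums) if s not in have]

old = b"AAAABBBBCCCCDDDD"   # what the client already has
new = b"AAAAXXXXCCCCDDDD"   # what the server is offering
print(blocks_to_fetch(block_sums(new), old))  # only block 1 differs
```

The point of the tradeoff in the comment: the checksum file is computed once per release, so the server side stays a dumb static mirror.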
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722873</id>
	<title>The dynamic deltup server network</title>
	<author>xororand</author>
	<datestamp>1247739420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A similar approach for distributing updates to source packages has been around for years: The <i>dynamic deltup server network</i>. You can tell their servers which source archives you already have and which new version you want. The server then unpacks both archives and sends you a deltup diff that can be used to create a bit-by-bit copy of the desired source archive, using the deltup program.<br>An example use case for this is source-based operating system distributions, like Gentoo GNU/Linux. The saved bandwidth is usually significant, often more than 90%.</p><p><a href="http://linux01.gwdg.de/~nlissne/dynamic.html" title="linux01.gwdg.de">http://linux01.gwdg.de/~nlissne/dynamic.html</a> [linux01.gwdg.de]</p></htmltext>
<tokentext>A similar approach for distributing updates to source packages has been around for years : The dynamic deltup server network .
You can tell their servers which source archives you already have and which new version you want .
The server then unpacks both archives and sends you a deltup diff that can be used to create a bit-by-bit copy of the desired source archive , using the deltup program .
An example use case for this is source-based operating system distributions , like Gentoo GNU/Linux .
The saved bandwidth is usually significant , often more than 90 % .
http://linux01.gwdg.de/~nlissne/dynamic.html [ linux01.gwdg.de ]</tokentext>
<sentencetext>A similar approach for distributing updates to source packages has been around for years: The dynamic deltup server network.
You can tell their servers which source archives you already have and which new version you want.
The server then unpacks both archives and sends you a deltup diff that can be used to create a bit-by-bit copy of the desired source archive, using the deltup program.
An example use case for this is source-based operating system distributions, like Gentoo GNU/Linux.
The saved bandwidth is usually significant, often more than 90%.
http://linux01.gwdg.de/~nlissne/dynamic.html [linux01.gwdg.de]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</id>
	<title>Like many brilliant ideas...</title>
	<author>istartedi</author>
	<datestamp>1247769480000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>...it makes you smack yourself on the head and go "why
hasn't everybody been doing this for years?".</p><p>The idea is simple, and reminds me of something I learned
in school regarding signals.  Some operations are easy to
perform in the frequency domain, so you do the Fourier transform,
perform the operation, and then transform back.</p><p>This is really just the same idea applied to the problem
of patches.  They're small in source; but big in binary.  It
seems so obvious that you could apply a transform,patch,reverse
process... but only when pointed out and demonstrated.</p><p>It's almost like my favorite invention:  the phonograph.</p><p>The instructions for making an Edison phonograph could have
been understood and executed by any craftsman going back thousands
of years.  Yet, it wasn't done until the late 19th century.</p><p>Are the inventors that brilliant, or are we just that stupid?</p></htmltext>
<tokentext>...it makes you smack yourself on the head and go " why has n't everybody been doing this for years ?
" .
The idea is simple , and reminds me of something I learned in school regarding signals .
Some operations are easy to perform in the frequency domain , so you do the Fourier transform , perform the operation , and then transform back .
This is really just the same idea applied to the problem of patches .
They 're small in source ; but big in binary .
It seems so obvious that you could apply a transform , patch , reverse process... but only when pointed out and demonstrated .
It 's almost like my favorite invention : the phonograph .
The instructions for making an Edison phonograph could have been understood and executed by any craftsman going back thousands of years .
Yet , it was n't done until the late 19th century .
Are the inventors that brilliant , or are we just that stupid ?</tokentext>
<sentencetext>...it makes you smack yourself on the head and go "why hasn't everybody been doing this for years?".
The idea is simple, and reminds me of something I learned in school regarding signals.
Some operations are easy to perform in the frequency domain, so you do the Fourier transform, perform the operation, and then transform back.
This is really just the same idea applied to the problem of patches.
They're small in source; but big in binary.
It seems so obvious that you could apply a transform, patch, reverse process... but only when pointed out and demonstrated.
It's almost like my favorite invention: the phonograph.
The instructions for making an Edison phonograph could have been understood and executed by any craftsman going back thousands of years.
Yet, it wasn't done until the late 19th century.
Are the inventors that brilliant, or are we just that stupid?</sentencetext>
</comment>
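The Fourier analogy in the comment above can be made concrete: circular convolution, awkward to compute directly, becomes pointwise multiplication after a transform, so you transform, operate, and transform back. A small pure-Python sketch using a naive O(n²) DFT (for illustration only; real code would use an FFT):

```python
import cmath

# "Transform, operate, transform back": by the convolution theorem,
# circular convolution in the signal domain is pointwise multiplication
# in the frequency domain.

def dft(x, sign=-1):
    # Naive discrete Fourier transform; sign=+1 gives the unscaled inverse.
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def circular_convolve(a, b):
    fa, fb = dft(a), dft(b)                      # transform both inputs
    prod = [p * q for p, q in zip(fa, fb)]       # operate in-domain
    n = len(a)
    # Transform back and scale; round away tiny float error.
    return [round((v / n).real, 6) for v in dft(prod, sign=+1)]

print(circular_convolve([1, 2, 3, 4], [1, 0, 0, 0]))  # identity kernel
```

Same shape as the patching trick in the comment: do the hard step in the domain where it is easy, then invert the transform.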
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719975</id>
	<title>Re:Solving the wrong problem</title>
	<author>BuR4N</author>
	<datestamp>1247770920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It has of course nothing to do with what you're implying, it's all about saving money (yea, stuff like bandwidth etc do cost money)</p></htmltext>
<tokentext>It has of course nothing to do with what you 're implying , it 's all about saving money ( yea , stuff like bandwidth etc do cost money )</tokentext>
<sentencetext>It has of course nothing to do with what you're implying, it's all about saving money (yea, stuff like bandwidth etc do cost money)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719829</id>
	<title>Microsoft patch API</title>
	<author>Anonymous</author>
	<datestamp>1247770320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A bit related, Microsoft have provided a binary patch API for years that can take advantage of the internal structure of<nobr> <wbr></nobr>.EXE files to achieve better compression. It also does multi-way patching so you can specify n source files and one target file... the patch will contain enough information to patch any of those n source files up to the target file.</p></htmltext>
<tokentext>A bit related , Microsoft have provided a binary patch API for years that can take advantage of the internal structure of .EXE files to achieve better compression .
It also does multi-way patching so you can specify n source files and one target file... the patch will contain enough information to patch any of those n source files up to the target file .</tokentext>
<sentencetext>A bit related, Microsoft have provided a binary patch API for years that can take advantage of the internal structure of .EXE files to achieve better compression.
It also does multi-way patching so you can specify n source files and one target file... the patch will contain enough information to patch any of those n source files up to the target file.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723723</id>
	<title>Compare2Crack</title>
	<author>anton_kg</author>
	<datestamp>1247743200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Hey, my favorite Compare2Crack.v.0.10.(S)1995 just got opensourced! I wonder how long it will take for crackers to adopt distribution of "bugfixes" for commercial software as well.</htmltext>
<tokentext>Hey , my favorite Compare2Crack.v.0.10. ( S ) 1995 just got opensourced !
I wonder how long it will take for crackers to adopt distribution of " bugfixes " for commercial software as well .</tokentext>
<sentencetext>Hey, my favorite Compare2Crack.v.0.10.(S)1995 just got opensourced!
I wonder how long it will take for crackers to adopt distribution of "bugfixes" for commercial software as well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719699</id>
	<title>binary diff</title>
	<author>visible.frylock</author>
	<datestamp>1247769840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>If you're not familiar with the process of binary diff (I wasn't) there's a paper linked from the article that explains some about bsdiff:</p><p><a href="http://www.daemonology.net/papers/bsdiff.pdf" title="daemonology.net" rel="nofollow">http://www.daemonology.net/papers/bsdiff.pdf</a> [daemonology.net]</p><p>Wayback from 2007/07/09:<br><a href="http://web.archive.org/web/20070709234208/http://www.daemonology.net/papers/bsdiff.pdf" title="archive.org" rel="nofollow">http://web.archive.org/web/20070709234208/http://www.daemonology.net/papers/bsdiff.pdf</a> [archive.org]</p></htmltext>
<tokentext>If you 're not familiar with the process of binary diff ( I was n't ) there 's a paper linked from the article that explains some about bsdiff : http://www.daemonology.net/papers/bsdiff.pdf [ daemonology.net ] Wayback from 2007/07/09 : http://web.archive.org/web/20070709234208/http://www.daemonology.net/papers/bsdiff.pdf [ archive.org ]</tokentext>
<sentencetext>If you're not familiar with the process of binary diff (I wasn't) there's a paper linked from the article that explains some about bsdiff: http://www.daemonology.net/papers/bsdiff.pdf [daemonology.net]
Wayback from 2007/07/09: http://web.archive.org/web/20070709234208/http://www.daemonology.net/papers/bsdiff.pdf [archive.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719965</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247770920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It's definitely not its brilliance... Maybe it's its frame of reference?</p></htmltext>
<tokentext>It 's definitely not its brilliance... Maybe it 's its frame of reference ?</tokentext>
<sentencetext>It's definitely not its brilliance... Maybe it's its frame of reference?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719779</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723059</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>Grishnakh</author>
	<datestamp>1247740200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop, fancy transitions, live editing... all of this is standard fare in a desktop app but Google insists on coding it (inevitably ugly) to run in the browser.</i></p><p>I think this is unfortunately completely unavoidable if you're going to run apps (like GMail) remotely, and within a web browser.  Yes, it's a lot simpler to do this stuff in a desktop app, but Google doesn't have the luxury of making things like GMail a desktop app.  A big part of the reason for using webmail in the first place is that you can run it from just about any computer, without needing to install special software.  If you needed special software to run GMail, why even bother with it?  You could just download Thunderbird or Mutt or whatever email client you like the best and download your email with POP or IMAP.  People like webmail services like GMail because it frees them from being tied to using one computer for their email.</p><p>However, running apps remotely doesn't require a web browser either.  Unix users have been able to run apps remotely for almost 25 years now using the X Window System.  However, that never caught on outside of Unix OSes and Linux, plus there's security issues.  You can't exactly hop on your friend's XP box and run an X application from a remote server, unless he happens to have Exceed installed (for $$$).</p><p>I do agree that modern web development is a nightmare, with PHP or other scripting engines, HTML, Javascript, etc. all mashed together to create a very unclean programming environment.  
It'd be a lot better if web development just went away altogether, replaced by something basically just like X, running apps remotely.  Unfortunately, because of history, we're stuck with things the way they are.</p></htmltext>
<tokentext>Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop , fancy transitions , live editing... all of this is standard fare in a desktop app but Google insists on coding it ( inevitably ugly ) to run in the browser .
I think this is unfortunately completely unavoidable if you 're going to run apps ( like GMail ) remotely , and within a web browser .
Yes , it 's a lot simpler to do this stuff in a desktop app , but Google does n't have the luxury of making things like GMail a desktop app .
A big part of the reason for using webmail in the first place is that you can run it from just about any computer , without needing to install special software .
If you needed special software to run GMail , why even bother with it ?
You could just download Thunderbird or Mutt or whatever email client you like the best and download your email with POP or IMAP .
People like webmail services like GMail because it frees them from being tied to using one computer for their email .
However , running apps remotely does n't require a web browser either .
Unix users have been able to run apps remotely for almost 25 years now using the X Window System .
However , that never caught on outside of Unix OSes and Linux , plus there 's security issues .
You ca n't exactly hop on your friend 's XP box and run an X application from a remote server , unless he happens to have Exceed installed ( for $ $ $ ) .
I do agree that modern web development is a nightmare , with PHP or other scripting engines , HTML , Javascript , etc. all mashed together to create a very unclean programming environment .
It 'd be a lot better if web development just went away altogether , replaced by something basically just like X , running apps remotely .
Unfortunately , because of history , we 're stuck with things the way they are .</tokentext>
<sentencetext>Look at the unholy amounts of research and tricky implementation they put into Apps and Gmail and Wave.. inexplicably designing all their own widgets and developing the mind-bogglingly expensive Google App Engine into a Python- and Java- linkable library and stretching and extending the capabilities of Javascript and the DOM far beyond their intended purpose to get drag-and-drop, fancy transitions, live editing... all of this is standard fare in a desktop app but Google insists on coding it (inevitably ugly) to run in the browser.
I think this is unfortunately completely unavoidable if you're going to run apps (like GMail) remotely, and within a web browser.
Yes, it's a lot simpler to do this stuff in a desktop app, but Google doesn't have the luxury of making things like GMail a desktop app.
A big part of the reason for using webmail in the first place is that you can run it from just about any computer, without needing to install special software.
If you needed special software to run GMail, why even bother with it?
You could just download Thunderbird or Mutt or whatever email client you like the best and download your email with POP or IMAP.
People like webmail services like GMail because it frees them from being tied to using one computer for their email.
However, running apps remotely doesn't require a web browser either.
Unix users have been able to run apps remotely for almost 25 years now using the X Window System.
However, that never caught on outside of Unix OSes and Linux, plus there's security issues.
You can't exactly hop on your friend's XP box and run an X application from a remote server, unless he happens to have Exceed installed (for $$$).
I do agree that modern web development is a nightmare, with PHP or other scripting engines, HTML, Javascript, etc. all mashed together to create a very unclean programming environment.
It'd be a lot better if web development just went away altogether, replaced by something basically just like X, running apps remotely.
Unfortunately, because of history, we're stuck with things the way they are.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719867</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720141</id>
	<title>Re:Rsync</title>
	<author>NoCowardsHere</author>
	<datestamp>1247771520000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>Not really. Where Google's algorithm really shines is in exactly the field they designed it for: efficiently trying to update a large number of identical binary files of known content (particularly those representing code) on many remote computers, by sending a compact list of the differences.</p><p>Rsync actually has to solve a different problem: figuring out where differences exist between two files separated over a slow link when you DON'T precisely know the content of the remote file, but know it's likely similar to a local one. Its rolling-checksum algorithm is very good at doing this pretty efficiently for many types of files.</p></htmltext>
<tokentext>Not really .
Where Google 's algorithm really shines is in exactly the field they designed it for : efficiently trying to update a large number of identical binary files of known content ( particularly those representing code ) on many remote computers , by sending a compact list of the differences .
Rsync actually has to solve a different problem : figuring out where differences exist between two files separated over a slow link when you DO N'T precisely know the content of the remote file , but know it 's likely similar to a local one .
Its rolling-checksum algorithm is very good at doing this pretty efficiently for many types of files .</tokentext>
<sentencetext>Not really.
Where Google's algorithm really shines is in exactly the field they designed it for: efficiently trying to update a large number of identical binary files of known content (particularly those representing code) on many remote computers, by sending a compact list of the differences.
Rsync actually has to solve a different problem: figuring out where differences exist between two files separated over a slow link when you DON'T precisely know the content of the remote file, but know it's likely similar to a local one.
Its rolling-checksum algorithm is very good at doing this pretty efficiently for many types of files.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431</parent>
</comment>
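The rolling-checksum trick the comment above credits rsync with can be sketched as an Adler-32-style pair of running sums. This is a simplified stand-in for rsync's actual weak hash (which additionally pairs with a strong hash to confirm candidate matches): the key property is that sliding the window one byte updates the checksum in O(1) instead of rescanning the block.

```python
# Sketch of an rsync-style weak rolling checksum: two running sums over
# a fixed-size window, packed into one integer.  Simplified; real rsync
# confirms weak-hash matches with a strong hash before trusting them.

def weak_sum(block):
    # a: plain byte sum; b: position-weighted sum.
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) & 0xFFFF
    return (b << 16) | a

def roll(checksum, old_byte, new_byte, block_len):
    # Slide the window one byte without rescanning the whole block.
    a = ((checksum & 0xFFFF) - old_byte + new_byte) & 0xFFFF
    b = ((checksum >> 16) - block_len * old_byte + a) & 0xFFFF
    return (b << 16) | a

data = b"the quick brown fox jumps over the lazy dog"
n = 8
c = weak_sum(data[:n])
for i in range(1, len(data) - n + 1):
    c = roll(c, data[i - 1], data[i + n - 1], n)
    assert c == weak_sum(data[i:i + n])  # rolling == full recompute
print("rolled checksum matches a full recompute at every offset")
```

This O(1) slide is what lets rsync test every byte offset in a large file cheaply, which a regular hash cannot do.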
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431</id>
	<title>Rsync</title>
	<author>Anonymous</author>
	<datestamp>1247768940000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext><p>Would this algorithm be useful to the rsync team?</p></htmltext>
<tokenext>Would this algorithm be useful to the rsync team ?</tokentext>
<sentencetext>Would this algorithm be useful to the rsync team?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720469</id>
	<title>Re:Rsync</title>
	<author>ChrisMounce</author>
	<datestamp>1247772720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Maybe. But I can think of a few issues:<br><br>1. This is geared toward a specific type of file (x86 executable), not generic data files.<br>2. Adding an educated-guess-where-all-the-pointers-are system might just mess with the rsync protocol.<br>3. Google has the advantage of knowing, with a quick version number check, exactly what changes need to be made: most data flows from server to client. The rsync destination would have to send back two sets of rolling checksums: first for the disassembly, then for the guess made using the patched disassembly. I don't know how big an effect this would have on efficiency, but it would be at least slightly slower than what Google can achieve.<br><br>I really like the idea, though, and I'm a big fan of rsync. It would be interesting if there were a general-purpose system for guessing changes in files, given that X changed.</htmltext>
<tokenext>Maybe .
But I can think of a few issues :
1 . This is geared toward a specific type of file ( x86 executable ) , not generic data files .
2 . Adding an educated-guess-where-all-the-pointers-are system might just mess with the rsync protocol .
3 . Google has the advantage of knowing , with a quick version number check , exactly what changes need to be made : most data flows from server to client .
The rsync destination would have to send back two sets of rolling checksums : first for the disassembly , then for the guess made using the patched disassembly .
Do n't know how big of an effect this would be on efficiency , but it would be at least slightly slower than what Google can achieve .
I really like the idea , though , and I 'm a big fan of rsync .
It would be interesting if there was a general purpose system for guessing changes in files , given that X changed .</tokentext>
<sentencetext>Maybe.
But I can think of a few issues:
1. This is geared toward a specific type of file (x86 executable), not generic data files.
2. Adding an educated-guess-where-all-the-pointers-are system might just mess with the rsync protocol.
3. Google has the advantage of knowing, with a quick version number check, exactly what changes need to be made: most data flows from server to client.
The rsync destination would have to send back two sets of rolling checksums: first for the disassembly, then for the guess made using the patched disassembly.
Don't know how big of an effect this would be on efficiency, but it would be at least slightly slower than what Google can achieve.
I really like the idea, though, and I'm a big fan of rsync.
It would be interesting if there was a general purpose system for guessing changes in files, given that X changed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720125</id>
	<title>BCJ filter</title>
	<author>hpa</author>
	<datestamp>1247771460000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>The concept of a Branch-Call-Jump (BCJ) filter is well-known in the data compression community, and is a standard part of quite a few deployed compression products.  Used as a front end to a conventional compression algorithm -- or, in this case, a binary compression algorithm -- does indeed give significant improvements.  The application to binary diff is particularly interesting, since it means you can deal with branches and other references *over* the compressed region, so this is really rather clever.</p></htmltext>
<tokenext>The concept of a Branch-Call-Jump ( BCJ ) filter is well-known in the data compression community , and is a standard part of quite a few deployed compression products .
Used as a front end to a conventional compression algorithm -- or , in this case , a binary compression algorithm -- does indeed give significant improvements .
The application to binary diff is particularly interesting , since it means you can deal with branches and other references * over * the compressed region , so this is really rather clever .</tokentext>
<sentencetext>The concept of a Branch-Call-Jump (BCJ) filter is well-known in the data compression community, and is a standard part of quite a few deployed compression products.
Used as a front end to a conventional compression algorithm -- or, in this case, a binary compression algorithm -- does indeed give significant improvements.
The application to binary diff is particularly interesting, since it means you can deal with branches and other references *over* the compressed region, so this is really rather clever.</sentencetext>
</comment>
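A minimal illustration of the filter idea hpa describes above, assuming plain x86 CALL (opcode 0xE8 followed by a signed rel32) for concreteness. This is toy code: production BCJ filters (such as those shipped with xz/7-Zip) guard much more carefully against false opcode matches, but the core transform is just relative-to-absolute rewriting.

```python
# Rewrite the relative rel32 operand of x86 CALL (0xE8) into an absolute
# target address.  Two calls to the same function from different sites then
# produce identical operand bytes, which compresses (and diffs) far better.
import struct

CALL = 0xE8  # x86 near call, followed by a signed 32-bit relative offset

def _transform(code, encode):
    out = bytearray(code)
    i = 0
    while i + 5 <= len(out):
        if out[i] == CALL:
            if encode:
                rel = struct.unpack_from("<i", out, i + 1)[0]
                value = (i + 5 + rel) & 0xFFFFFFFF   # absolute = next insn + rel
            else:
                absolute = struct.unpack_from("<I", out, i + 1)[0]
                value = (absolute - (i + 5)) & 0xFFFFFFFF
            struct.pack_into("<I", out, i + 1, value)
            i += 5  # skip the operand we just rewrote
        else:
            i += 1
    return bytes(out)

def bcj_encode(code):
    return _transform(code, True)

def bcj_decode(code):
    return _transform(code, False)

# Two calls from different offsets to the same target (0x100):
code = (bytes([CALL]) + struct.pack("<i", 0x100 - 5)      # call at offset 0
        + b"\x90" * 5                                     # filler nops
        + bytes([CALL]) + struct.pack("<i", 0x100 - 15))  # call at offset 10
enc = bcj_encode(code)
assert enc[1:5] == enc[11:15]   # identical operands after filtering
assert bcj_decode(enc) == code  # and the filter is reversible
```

False positives (a stray 0xE8 byte in data) still round-trip here because encode and decode scan the stream identically; they merely waste a little compressibility.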
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720093</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247771340000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The phonograph could have been executed thousands of years ago using what materials? I don't believe a practical phonograph could be made without modern materials.</p></htmltext>
<tokenext>The phonograph could have been executed thousands of years ago using what materials ?
I do n't believe a practical phonograph could be made without modern materials .</tokentext>
<sentencetext>The phonograph could have been executed thousands of years ago using what materials?
I don't believe a practical phonograph could be made without modern materials.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726509</id>
	<title>source code patch</title>
	<author>shird</author>
	<datestamp>1247771040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If Google's OS is to become open source - why the need for binary diffs at all?</p><p>Isn't it feasible that the client stores all the source and the patch process just involves a diff to the original source code that gets recompiled?</p><p>Also, this disassemble, edit, reassemble scheme isn't new. There are a few viruses out there that perform similar actions in order to create space in the executable for themselves. Pretty ingenious, actually.</p></htmltext>
<tokenext>If Google 's OS is to become open source - why the need for binary diffs at all ?
Is n't it feasible that the client stores all the source and the patch process just involves a diff to the original source code that gets recompiled ?
Also , this disassemble , edit , reassemble scheme is n't new .
There are a few viruses out there that perform similar actions in order to create space in the executable for themselves .
Pretty ingenious actually .</tokentext>
<sentencetext>If Google's OS is to become open source - why the need for binary diffs at all?
Isn't it feasible that the client stores all the source and the patch process just involves a diff to the original source code that gets recompiled?
Also, this disassemble, edit, reassemble scheme isn't new.
There are a few viruses out there that perform similar actions in order to create space in the executable for themselves.
Pretty ingenious, actually.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719683</id>
	<title>Re:Bad explanation</title>
	<author>Anonymous</author>
	<datestamp>1247769780000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>I can compress <i>any</i> document, down to a single <b>but</b>,</p></div><p>Oh crap.  There goes any chance of this being a <i>technical</i> discussion.</p>
	</htmltext>
<tokenext>I can compress any document , down to a single but ,
Oh crap .
There goes any chance of this being a technical discussion .</tokentext>
<sentencetext>I can compress any document, down to a single but,
Oh crap.
There goes any chance of this being a technical discussion.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721151</id>
	<title>Semantic Content FTW</title>
	<author>Tiger</author>
	<datestamp>1247775480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Of course, if you just generate a patch file with the source changes, and zip that up, it will be even tinier.</p><p>Except for that upgrade of the underlying FooLib from 2.1 to 2.2 that's part of your hotfix.  Well, all you need then is to just get a patch file for that too, and include that.</p><p>And then compile the whole sucker on the other end.</p><p>Everyone's got plenty of CPU, right?  And we're just about all using trivially decompilable bytecode anyway, so if you make your patchfile based on the source changes after compilation and decompilation, you've got all the right transforms in place to come up with a pretty effectively minimal changeset.</p><p>Of course, we have plenty of bandwidth anyway, and programs are small.  Media files are not, but I'm not going to get any space savings trying to disassemble that picture of your mom.</p></htmltext>
<tokenext>Of course , if you just generate a patch file with the source changes , and zip that up , it will be even tinier .
Except for that upgrade of the underlying FooLib from 2.1 to 2.2 that 's part of your hotfix .
Well , all you need then is to just get a patch file for that too , and include that .
And then compile the whole sucker on the other end .
Everyone 's got plenty of CPU , right ?
And we 're just about all using trivially decompilable bytecode anyway , so if you make your patchfile based on the source changes after compilation and decompilation , you 've got all the right transforms in place to come up with a pretty effectively minimal changeset .
Of course , we have plenty of bandwidth anyway , and programs are small .
Media files are not , but I 'm not going to get any space savings trying to disassemble that picture of your mom .</tokentext>
<sentencetext>Of course, if you just generate a patch file with the source changes, and zip that up, it will be even tinier.
Except for that upgrade of the underlying FooLib from 2.1 to 2.2 that's part of your hotfix.
Well, all you need then is to just get a patch file for that too, and include that.
And then compile the whole sucker on the other end.
Everyone's got plenty of CPU, right?
And we're just about all using trivially decompilable bytecode anyway, so if you make your patchfile based on the source changes after compilation and decompilation, you've got all the right transforms in place to come up with a pretty effectively minimal changeset.
Of course, we have plenty of bandwidth anyway, and programs are small.
Media files are not, but I'm not going to get any space savings trying to disassemble that picture of your mom.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720173</id>
	<title>Re:That's just a dissembler. How about bittorrent?</title>
	<author>tvjunky</author>
	<datestamp>1247771640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>This diffs the disassembled version of the original against the update on the server, then does the opposite on the client. I couldn't help but think of this as similar to Gentoo's model<nobr> <wbr></nobr>... download a compressed diff of the source and then recompile. Both have the same problem: too much client-side CPU usage (though Gentoo's is an extreme of this). Isn't Google Chrome OS primarily targeting netbooks? Can such things handle that level of extra client-side computation without leaving users frustrated?</p></div><p>I don't think this is really a problem in this case. In the time even a slow computer, by today's standards, has downloaded a kilobyte over a WAN link, it has easily performed millions of CPU operations on it. The same would be true for any kind of compression, really. Since bandwidth through your pipe is orders of magnitude slower than anything that happens within your machine, this added level of complexity is clearly more beneficial than a direct approach. That's why it makes sense to compress files on the server (or on the client uploading them to the server), transfer them, and decompress them on the client, even if the client is quite slow.</p><div class="quote"><p>I'd rather improve the distribution model. Since packages are all signed, SSL and friends aren't needed for the transfer, nor does it need to come from a trusted authority. Bittorrent comes to mind. I'm quite disappointed that the apt-torrent project never went anywhere. It's clearly the solution.</p></div><p>With patches between minor versions at about 80kB (as stated in TFA), I don't think that a distribution using bittorrent would really be the way to go here. Add to this the fact that Google has quite a lot of bandwidth at their disposal, and I don't see this happening anytime soon.<br>I agree, however, that it may be a good idea to transfer large amounts of Linux packages that way. But with a lot of smaller packages, the protocol overhead of bittorrent might become a limiting factor regarding its usefulness.</p>
	</htmltext>
<tokenext>This diffs the disassembled version of the original against the update on the server , then does the opposite on the client .
I could n't help but think of this as similar to Gentoo 's model ... download a compressed diff of the source and then recompile .
Both have the same problem : too much client-side CPU usage ( though Gentoo 's is an extreme of this ) .
Is n't Google Chrome OS primarily targeting netbooks ?
Can such things handle that level of extra client-side computation without leaving users frustrated ?
I do n't think this is really a problem in this case .
In the time even a slow computer , by today 's standards , has downloaded a kilobyte over a WAN link it has easily performed millions of CPU operations on it .
The same would be true for any kind of compression really .
Since Bandwidth through your pipe is just orders of magnitudes slower than anything that happens within your machine , this added level of complexity is clearly more beneficial than a direct approach .
That 's why it makes sense to compress files on the server ( or the client uploading it to the server ) , transfer them and decompress them on the client , even if the client is quite slow .
I 'd rather improve the distribution model .
Since packages are all signed , SSL and friends are n't needed for the transfer , nor does it need to come from a trusted authority .
Bittorrent comes to mind .
I 'm quite disappointed that the apt-torrent project never went anywhere .
It 's clearly the solution .
With patches between minor versions at about 80kB ( as stated in TFA ) , I do n't think that a distribution using bittorrent would really be the way to go here .
Add to this the fact that Google has quite a lot of bandwidth at their disposal and I do n't see this happening anytime soon .
I agree however that it may be a good idea to transfer large amounts of Linux packages that way .
But with a lot of smaller packages the protocol overhead of bittorrent might become a limiting factor regarding its usefulness .</tokentext>
<sentencetext>This diffs the disassembled version of the original against the update on the server, then does the opposite on the client.
I couldn't help but think of this as similar to Gentoo's model ... download a compressed diff of the source and then recompile.
Both have the same problem: too much client-side CPU usage (though Gentoo's is an extreme of this).
Isn't Google Chrome OS primarily targeting netbooks?
Can such things handle that level of extra client-side computation without leaving users frustrated?
I don't think this is really a problem in this case.
In the time even a slow computer, by today's standards, has downloaded a kilobyte over a WAN link it has easily performed millions of CPU operations on it.
The same would be true for any kind of compression really.
Since Bandwidth through your pipe is just orders of magnitudes slower than anything that happens within your machine, this added level of complexity is clearly more beneficial than a direct approach.
That's why it makes sense to compress files on the server (or the client uploading it to the server), transfer them and decompress them on the client, even if the client is quite slow.
I'd rather improve the distribution model.
Since packages are all signed, SSL and friends aren't needed for the transfer, nor does it need to come from a trusted authority.
Bittorrent comes to mind.
I'm quite disappointed that the apt-torrent project never went anywhere.
It's clearly the solution.
With patches between minor versions at about 80kB (as stated in TFA), I don't think that a distribution using bittorrent would really be the way to go here.
Add to this the fact that Google has quite a lot of bandwidth at their disposal and I don't see this happening anytime soon.
I agree however that it may be a good idea to transfer large amounts of Linux packages that way.
But with a lot of smaller packages the protocol overhead of bittorrent might become a limiting factor regarding its usefulness.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720969</id>
	<title>Re:Also less overhead for Google</title>
	<author>Anonymous</author>
	<datestamp>1247774700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Less overhead for Google but more overhead for everyone else. How much extra power is being consumed by making thousands, maybe millions of machines recompile code rather than doing it a handful of times on Google's servers?</p></htmltext>
<tokenext>Less overhead for Google but more overhead for everyone else .
How much extra power is being consumed by making thousands , maybe millions of machines recompile code rather than doing it a handful of times on Google 's servers ?</tokentext>
<sentencetext>Less overhead for Google but more overhead for everyone else.
How much extra power is being consumed by making thousands, maybe millions of machines recompile code rather than doing it a handful of times on Google's servers?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719487</id>
	<title>Evil bastages</title>
	<author>Anonymous</author>
	<datestamp>1247769180000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext>I wonder what the tin foil hat crowd will have to say about this?  Somebodies gonna hafta rant about how this is evil because it allows for more efficient tracking of everything we do.  Or how they hate the google updater always running using up their precious bandwidths/slots on the task manager list.  Somebody will troll and it'll get marked insightful or something stupid.  And that last sentence guaranteed I won't get nothing but a -1 troll either, but oh well.</htmltext>
<tokenext>I wonder what the tin foil hat crowd will have to say about this ?
Somebodies gon na hafta rant about how this is evil because it allows for more efficient tracking of everything we do .
Or how they hate the google updater always running using up their precious bandwidths/slots on the task manager list .
Somebody will troll and it 'll get marked insightful or something stupid .
And that last sentence guaranteed I wo n't get nothing but a -1 troll either , but oh well .</tokentext>
<sentencetext>I wonder what the tin foil hat crowd will have to say about this?
Somebodies gonna hafta rant about how this is evil because it allows for more efficient tracking of everything we do.
Or how they hate the google updater always running using up their precious bandwidths/slots on the task manager list.
Somebody will troll and it'll get marked insightful or something stupid.
And that last sentence guaranteed I won't get nothing but a -1 troll either, but oh well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247771160000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><blockquote><div><p>Like many brilliant ideas... it makes you smack yourself on the head and go "why hasn't everybody been doing this for years?".</p></div></blockquote><p>Probably because the old solution was:</p><p>A) Simple<br>B) Good enough for most purposes.</p><p>Sure, you can shave 80% off your patch filesize... but unless you're as big as Google, patch bandwidth probably isn't a major priority -- you've likely got much more important things to dedicate engineering resources to.</p><p>You know how they say "Necessity is the mother of invention"?  Well, when an invention isn't really necessary for most folks, it tends to show up a little later than it might otherwise have.</p>
	</htmltext>
<tokenext>Like many brilliant ideas ... it makes you smack yourself on the head and go " why has n't everybody been doing this for years ? "
Probably because the old solution was :
A ) Simple
B ) Good enough for most purposes .
Sure , you can shave 80 % off your patch filesize ... but unless you 're as big as Google , patch bandwidth probably is n't a major priority -- you 've likely got much more important things to dedicate engineering resources to .
You know how they say " Necessity is the mother of invention " ?
Well , when an invention is n't really necessary for most folks , it tends to show up a little later than it might otherwise have .</tokentext>
<sentencetext>Like many brilliant ideas... it makes you smack yourself on the head and go "why hasn't everybody been doing this for years?"
Probably because the old solution was:
A) Simple
B) Good enough for most purposes.
Sure, you can shave 80% off your patch filesize... but unless you're as big as Google, patch bandwidth probably isn't a major priority -- you've likely got much more important things to dedicate engineering resources to.
You know how they say "Necessity is the mother of invention"?
Well, when an invention isn't really necessary for most folks, it tends to show up a little later than it might otherwise have.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724181</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Pollardito</author>
	<datestamp>1247745360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Sure, you can shave 80% off your patch filesize... but unless you're as big as Google, patch bandwidth probably isn't a major priority</p></div><p>Its priority tends to scale, though.  Once you're as big as Google, you have so many people downloading it that if you can shave just a little off each download, it adds up to something that matters even to someone as big as you.</p>
	</htmltext>
<tokenext>Sure , you can shave 80 \ % off your patch filesize... but unless you 're as big as google , patch bandwidth probably is n't a major priorityIt 's priority tends to scale though .
Once you 're as big as Google you have so many people downloading it that if you can shave just a little off each download it adds up to something that matters even to someone as big as you .</tokentext>
<sentencetext>Sure, you can shave 80\% off your patch filesize... but unless you're as big as google, patch bandwidth probably isn't a major priorityIt's priority tends to scale though.
Once you're as big as Google you have so many people downloading it that if you can shave just a little off each download it adds up to something that matters even to someone as big as you.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719915</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>six</author>
	<datestamp>1247770680000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext><p>Binary executable files contain a lot of addresses (variables, jump locations,<nobr> <wbr></nobr>...) that are generated by the assembler at compile time.</p><p>Now consider you just add one 1-byte instruction somewhere in the middle of your program (let's say "nop"). When you compile it again, all the code that references addresses beyond the insert point will have changed, because each address has been incremented. So these 4 bytes added to your source code could mean addresses that get incremented in the compiled file in thousands of places.</p><p>What they do basically is take the binary file, disassemble it back to pseudo source code (not real asm, I guess), and diff that against the old version. The patch engine on the client end does the same disassembling, applies the patch, and reassembles the patched source code into an executable file.</p><p>This means diffs get much smaller (4 bytes vs. 1000s in my extreme example), but it also makes the diff/patch process much more complex, slower, and not portable.</p></htmltext>
<tokenext>Binary executable files contain a lot of addresses ( variables , jump locations , ... ) that are generated by the assembler at compile time .
Now consider you just add one 1-byte instruction somewhere in the middle of your program ( let 's say " nop " ) .
When you compile it again , all the code that references addresses beyond the insert point will have changed because the address has been incremented .
So these 4 bytes added to your source code could mean addresses that get incremented in the compiled file in thousands of places .
What they do basically is take the binary file , disassemble it back to pseudo source code ( not real asm I guess ) , and diff that against the old version .
The patch engine on the client end does the same disassembling , applies the patch , and reassembles the patched source code into an executable file .
This means diffs get much smaller ( 4 bytes vs. 1000s in my extreme example ) , but also makes the diff/patch process much more complex , slower , and not portable .</tokentext>
<sentencetext>Binary executable files contain a lot of addresses (variables, jump locations, ...) that are generated by the assembler at compile time.
Now consider you just add one 1-byte instruction somewhere in the middle of your program (let's say "nop").
When you compile it again, all the code that references addresses beyond the insert point will have changed because the address has been incremented.
So these 4 bytes added to your source code could mean addresses that get incremented in the compiled file in thousands of places.
What they do basically is take the binary file, disassemble it back to pseudo source code (not real asm I guess), and diff that against the old version.
The patch engine on the client end does the same disassembling, applies the patch, and reassembles the patched source code into an executable file.
This means diffs get much smaller (4 bytes vs. 1000s in my extreme example), but also makes the diff/patch process much more complex, slower, and not portable.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
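The effect six describes above is easy to reproduce with a toy model. Everything here is made up (a fake table of absolute jump targets); the point is only that a one-byte insertion dirties stored addresses all over the file, while the disassembled, label-based form barely changes.

```python
# One small insertion shifts every absolute address behind it, so a naive
# byte-level diff touches far more bytes than the source change itself.
import struct

def address_table(targets, insert_at=None):
    """Pack a table of absolute addresses; inserting one byte at offset
    insert_at shifts every target at or beyond that point by one."""
    def shift(a):
        return a + 1 if insert_at is not None and a >= insert_at else a
    return [struct.pack("<I", shift(a)) for a in targets]

targets = list(range(0, 1000, 8))           # 125 made-up jump targets
old = address_table(targets)
new = address_table(targets, insert_at=32)  # one 1-byte instruction inserted
changed = sum(1 for o, n in zip(old, new) if o != n)
print(changed, "of", len(targets), "stored addresses changed")  # 121 of 125
```

Diffing "jump to label_k" instead of the raw 4-byte addresses is exactly what lets Courgette's patches stay close to the size of the source change.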
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724601</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>VoltageX</author>
	<datestamp>1247748360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd ask Blizzard what their patch bandwidth is too. Even though they use BT, I'm sure the users would appreciate the smaller download/upload.</htmltext>
<tokenext>I 'd ask Blizzard what their patch bandwidth is too .
Even though they use BT , I 'm sure the users would appreciate the smaller download/upload .</tokentext>
<sentencetext>I'd ask Blizzard what their patch bandwidth is too.
Even though they use BT, I'm sure the users would appreciate the smaller download/upload.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</id>
	<title>Bad explanation</title>
	<author>DoofusOfDeath</author>
	<datestamp>1247769420000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><blockquote><div><p> Courgette achieves smaller diffs (about 9x in one example)</p></div></blockquote><p>That's potentially very misleading.  I can compress <i>any</i> document, down to a single but, if my compression algorithm is sufficiently tailored to that document.  For example:<br><tt><br>if (compressed_data[0] == 0):<br>
&nbsp; &nbsp; &nbsp; return get_Magna_Carta_text()<br>else:<br>
&nbsp; &nbsp; &nbsp; return unzip(compressed_data[1:])<br></tt><br>What we need to know is the overall distribution of compression ratios, or at least the <i>average</i> compression ratio, over a large population of documents.</p>
	</htmltext>
<tokenext>Courgette achieves smaller diffs ( about 9x in one example ) .
That 's potentially very misleading .
I can compress any document , down to a single but , if my compression algorithm is sufficiently tailored to that document .
For example :
if ( compressed_data [ 0 ] == 0 ) : return get_Magna_Carta_text ( )
else : return unzip ( compressed_data [ 1 : ] )
What we need to know is the overall distribution of compression ratios , or at least the average compression ratio , over a large population of documents .</tokentext>
<sentencetext> Courgette achieves smaller diffs (about 9x in one example)That's potentially very misleading.
I can compress any document, down to a single bit, if my compression algorithm is sufficiently tailored to that document.
For example:if (compressed\_data[0] == 0):
      return get\_Magna\_Carta\_text()else:
      return unzip(compressed\_data[1:])What we need to know is the overall distribution of compression ratios, or at least the average compression ratio, over a large population of documents.
	</sentencetext>
</comment>
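The parent's pseudocode, made runnable as a toy sketch (`get_Magna_Carta_text` is replaced by a hard-coded stand-in, and a flag byte plays the role of the "single bit"): it shows why a ratio measured on one favoured input says nothing about average behaviour.

```python
import zlib

# The one document this "compressor" is tailored to (a stand-in text).
MAGNA_CARTA = b"John, by the grace of God, King of England, Lord of Ireland..."

def tailored_compress(data: bytes) -> bytes:
    # Flag byte 0 means "it was the Magna Carta"; anything else falls
    # back to ordinary zlib compression of the data.
    if data == MAGNA_CARTA:
        return b"\x00"
    return b"\x01" + zlib.compress(data)

def tailored_decompress(blob: bytes) -> bytes:
    if blob[0] == 0:
        return MAGNA_CARTA
    return zlib.decompress(blob[1:])

# An absurd "ratio" on the favoured input; everything else compresses normally.
print(len(MAGNA_CARTA), "->", len(tailored_compress(MAGNA_CARTA)))
```

Any fair benchmark therefore has to report the distribution (or at least the mean) of ratios over a realistic corpus, which is exactly the parent's point.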
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720309</id>
	<title>Re:Solving the wrong problem</title>
	<author>Anonymous</author>
	<datestamp>1247772060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>Out of curiosity, could you please point us to some of your code so we can do a comparison?</p><p>Thanks.</p></htmltext>
<tokenext>Out of curiosity , could you please point us to some of your code so we can do a comparison ? Thanks .</tokentext>
<sentencetext>Out of curiosity, could you please point us to some of your code so we can do a comparison?Thanks.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719501</id>
	<title>Microsoft version?</title>
	<author>Anonymous</author>
	<datestamp>1247769180000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Any takers that Microsoft will release their own version of this, but compile the assembly before sending it over the wire?</p><p>Maybe they can call it "Compiled Assembly from Disassembly" (CAD).</p></htmltext>
<tokenext>Any takers that Microsoft will release their own version of this , but compile the assembly before sending it over the wire ? Maybe they can call it " Compiled Assembly from Disassembly " ( CAD ) .</tokentext>
<sentencetext>Any takers that Microsoft will release their own version of this, but compile the assembly before sending it over the wire?Maybe they can call it "Compiled Assembly from Disassembly" (CAD).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719777</id>
	<title>Re:dictionary</title>
	<author>Anonymous</author>
	<datestamp>1247770140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Who wouldn't want to get backdoored by a zucchini?</p></htmltext>
<tokenext>Who would n't want to get backdoored by a zucchini ?</tokentext>
<sentencetext>Who wouldn't want to get backdoored by a zucchini?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719335</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719559</id>
	<title>Not a 'binary diffing' program. . .</title>
	<author>Anonymous</author>
	<datestamp>1247769420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>It seems to me that the moniker 'binary diff' doesn't really apply here, does it? My understanding of a binary diff algorithm/program has always been one that can take any arbitrary binary file, not knowing anything about what *type* of file it was (e.g. the file could be an image file, video file, executable, word document, zip file, CAD file, or whatever) and create a diff for that binary 'data'.</p><p>It sounds like this is very specific to x86 executables?</p><p>Still, whatever you call it, it's good to see progress being made. I just wonder why you can't create small, efficient diffs for any kind of binary file?</p></htmltext>
<tokenext>It seems to me that the moniker 'binary diff ' does n't really apply here , does it ?
My understanding of a binary diff algorithm/program has always been one that can take any arbitrary binary file , not knowing anything about what * type * of file it was ( e.g .
the file could be an image file , video file , executable , word document , zip file , CAD file , or whatever ) and create a diff for that binary 'data'.It sounds like this is very specific to x86 executables ? Still , whatever you call it , it 's good to see progress being made .
I just wonder why you ca n't create small , efficient diffs for any kind of binary file ?</tokentext>
<sentencetext>It seems to me that the moniker 'binary diff' doesn't really apply here, does it?
My understanding of a binary diff algorithm/program has always been one that can take any arbitrary binary file, not knowing anything about what *type* of file it was (e.g.
the file could be an image file, video file, executable, word document, zip file, CAD file, or whatever) and create a diff for that binary 'data'.It sounds like this is very specific to x86 executables?Still, whatever you call it, it's good to see progress being made.
I just wonder why you can't create small, efficient diffs for any kind of binary file?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719681</id>
	<title>Cougarette?</title>
	<author>Anonymous</author>
	<datestamp>1247769780000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Am I the only one who misread "Courgette"?<nobr> <wbr></nobr>:)</p></htmltext>
<tokenext>Am I the only one who misread " Courgette " ?
: )</tokentext>
<sentencetext>Am I the only one who misread "Courgette"?
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719313</id>
	<title>Google</title>
	<author>Anonymous</author>
	<datestamp>1247768580000</datestamp>
	<modclass>Funny</modclass>
	<modscore>0</modscore>
	<htmltext><p>can suck my diff!</p></htmltext>
<tokenext>can suck my diff !</tokentext>
<sentencetext>can suck my diff!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720749</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>Anonymous</author>
	<datestamp>1247773800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>ok, now this time in olde english.</p></htmltext>
<tokenext>ok , now this time in olde english .</tokentext>
<sentencetext>ok, now this time in olde english.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719915</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720597</id>
	<title>That's odd.</title>
	<author>haelduksf</author>
	<datestamp>1247773200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>Google's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today.</i>
<br> <br>
That's a pretty small market- I can't imagine that there are many teams that will release really small updates today.</htmltext>
<tokenext>Google 's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today .
That 's a pretty small market- I ca n't imagine that there are many teams that will release really small updates today .</tokentext>
<sentencetext>Google's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today.
That's a pretty small market- I can't imagine that there are many teams that will release really small updates today.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719865</id>
	<title>Re:Also less overhead for Google</title>
	<author>pembo13</author>
	<datestamp>1247770500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>&gt; I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this.</p><p>Fedora already has <a href="https://fedoraproject.org/wiki/Features/Presto" title="fedoraproject.org">Presto using DeltaRPMs</a> [fedoraproject.org]</p></htmltext>
<tokenext>&gt; I 'd be honestly pretty disappointed in any major distro that does n't start implementing a binary diff solution around this.Fedora already has Presto using DeltaRPMs [ fedoraproject.org ]</tokentext>
<sentencetext>&gt; I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this.Fedora already has Presto using DeltaRPMs [fedoraproject.org]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719961</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247770920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>"The instructions for making an Edison phonograph could have been understood and executed by any craftsman going back thousands of years. Yet, it wasn't done until the late 19th century.</p><p>Are the inventors that brilliant, or are we just that stupid."</p><p>Yet Leonardo Da Vinci had the idea for a helicopter and it wasn't for several centuries that man took to the skies, let alone in an actual helicopter.  Sometimes the ideas are there but not the desire or the means to move the process along, and sometimes the idea comes at exactly the right time.  It's certainly possible that earlier craftsmen could have been able to reproduce a phonograph if shown how, but I have to wonder if society would have been ready for such a device.</p></htmltext>
<tokenext>" The instructions for making an Edison phonograph could have been understood and executed by any craftsman going back thousands of years .
Yet , it was n't done until the late 19th century.Are the inventors that brilliant , or are we just that stupid .
" Yet Leonardo Da Vinci had the idea for a helicopter and it was n't for several centuries that man took to the skies , let alone in an actual helicopter .
Sometimes the ideas are there but not the desire or the means to move the process along , and sometimes the idea comes at exactly the right time .
It 's certainly possible that earlier craftsmen could have been able to reproduce a phonograph if shown how , but I have to wonder if society would have been ready for such a device .</tokentext>
<sentencetext>"The instructions for making an Edison phonograph could have been understood and executed by any craftsman going back thousands of years.
Yet, it wasn't done until the late 19th century.Are the inventors that brilliant, or are we just that stupid.
"Yet Leonardo Da Vinci had the idea for a helicopter and it wasn't for several centuries that man took to the skies, let alone in an actual helicopter.
Sometimes the ideas are there but not the desire or the means to move the process along, and sometimes the idea comes at exactly the right time.
It's certainly possible that earlier craftsmen could have been able to reproduce a phonograph if shown how, but I have to wonder if society would have been ready for such a device.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719779</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247770140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Are the inventors that brilliant, or are we just that stupid.</p></div></blockquote><p>
Isn't this simply "another level of indirection"...?<nobr> <wbr></nobr>:)
</p><p>
I think to answer your question, it's not brilliance but the ability to step away from the norm and take a fresh look at established things that brings about such innovation. (IMHO)
</p>
	</htmltext>
<tokenext>Are the inventors that brilliant , or are we just that stupid .
Is n't this simply " another level of indirection " ... ?
: ) I think to answer your question , its not brilliance but the ability to step away from the norm and take a fresh look at established things that brings about such innovation .
( IMHO )</tokentext>
<sentencetext>Are the inventors that brilliant, or are we just that stupid.
Isn't this simply "another level of indirection"...?
:)

I think to answer your question, it's not brilliance but the ability to step away from the norm and take a fresh look at established things that brings about such innovation.
(IMHO)

	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720201</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>unfunk</author>
	<datestamp>1247771760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I'd like to see a car analogy here, but it needs to involve the car's transmission diff</htmltext>
<tokenext>I 'd like to see a car analogy here , but it needs to involve the car 's transmission diff</tokentext>
<sentencetext>I'd like to see a car analogy here, but it needs to involve the car's transmission diff</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720663</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>Cyberax</author>
	<datestamp>1247773500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>It would work fine if you include Java/.NET-specific disassemblers.</p><p>In fact, you can already compress Java JARs about ten times by using the pack200 algorithm (it works essentially the same way).</p></htmltext>
<tokenext>It would work fine , if you include Java/.NET specific disassemblers.In fact , you can already compress Java JARs about ten times by using pack200 algorithm ( it works essentially the same way ) .</tokentext>
<sentencetext>It would work fine, if you include Java/.NET specific disassemblers.In fact, you can already compress Java JARs about ten times by using pack200 algorithm (it works essentially the same way).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720387</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>Anonymous</author>
	<datestamp>1247772360000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><p>Everything except "capiche", yeah.</p></htmltext>
<tokenext>Everything except " capiche " , yeah .</tokentext>
<sentencetext>Everything except "capiche", yeah.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726909</id>
	<title>Overly Complex..</title>
	<author>thesupraman</author>
	<datestamp>1247863920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>From my first pass over it this actually looks like a very strange way to describe a rather simple approach.</p><p>More generically they could achieve the same, if not more, by using a differential PPM or LZ method, which is very simple to design/implement, needs zero knowledge of what data it is handling, and is in no way a new idea.</p><p>I suspect whoever designed this had a good idea but not enough compression experience to know solutions were already on hand.</p><p>In effect the new binary should be compressed with a dictionary compressor pre-initialised with the contents of the old binary as an initial seed, and everything needed is achieved - the initial compression time can be large for a large binary, however as this is a single-compress, millions-of-decompresses situation, it would not matter at all (and by large I mean minutes, not days).</p><p>Still, their more complex description/implementation sounds much more whizzy and gets more attention, no doubt.</p></htmltext>
<tokenext>From my first pass over it this actually looks like a very strange way to describe a rather simple approach.More generically they could achieve the same , if not more , by using a differential PPM or LZ method , which is very simple to design/implement , needs zeto knowledge of what data it is handling , and is in no way a new idea.I suspect whomever designed this had a good idea but not that enough compression experience to know solutions were already on hand.In effect the new binary should be compressed with a dictionary compresser pre-initialised with the contents of the old binary as an initial seed and everything needed is achieved - the initial compression time can be large for a large binary , however as this is a single compress , millions of decompress situation , it would not matter at all ( and by large I mean minutes , not days ) .Still , their more complex description/implementation sounds much more whizzy and gets more attention , no doubt .</tokentext>
<sentencetext>From my first pass over it this actually looks like a very strange way to describe a rather simple approach.More generically they could achieve the same, if not more, by using a differential PPM or LZ method, which is very simple to design/implement, needs zeto knowledge of what data it is handling, and  is in no way a new idea.I suspect whomever designed this had a good idea but not that enough compression experience to know solutions were already on hand.In effect the new binary should be compressed with a dictionary compresser pre-initialised with the contents of the old binary as an initial seed and everything needed is achieved - the initial compression time can be large for a large binary, however as this is a single compress, millions of decompress situation, it would not matter at all (and by large I mean minutes, not days).Still, their more complex description/implementation sounds much more whizzy and gets more attention, no doubt.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</parent>
</comment>
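The dictionary-seeded approach the parent describes can be sketched with Python's zlib, whose `zdict` parameter pre-loads the compressor's sliding window with the old binary. This is a toy stand-in, not Courgette: DEFLATE's window is only 32 KiB, so real delta tools (bsdiff, xdelta) use far larger match windows.

```python
import random
import zlib

def delta_compress(old: bytes, new: bytes) -> bytes:
    # Seed DEFLATE's window with (the tail of) the old binary, so any run of
    # bytes that also appears in the old version costs only a short
    # back-reference instead of being stored again.
    comp = zlib.compressobj(level=9, zdict=old[-32768:])  # window is 32 KiB
    return comp.compress(new) + comp.flush()

def delta_decompress(old: bytes, patch: bytes) -> bytes:
    # The decompressor must be seeded with exactly the same dictionary.
    decomp = zlib.decompressobj(zdict=old[-32768:])
    return decomp.decompress(patch) + decomp.flush()

# A fake "old binary" of incompressible bytes, and a "new binary" that is the
# old one with a small insertion - roughly what a point release looks like.
random.seed(0)
old = bytes(random.randrange(256) for _ in range(20000))
new = old[:5000] + b"PATCHED" + old[5000:]

patch = delta_compress(old, new)
assert delta_decompress(old, patch) == new
print(len(new), "->", len(patch))  # the patch is far smaller than the binary
```

Because the payload is random, plain compression of `new` gains nothing, while the dictionary-seeded stream reduces it to a handful of back-references plus the inserted bytes - the single-compress, many-decompress trade-off the parent mentions.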
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719891</id>
	<title>Re:Bad explanation</title>
	<author>Anonymous</author>
	<datestamp>1247770620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>You could reduce that to zero bytes. Return the result of get_Magna_Carta_text() every time, and require that the sourcetext be the Magna Carta. The output is undefined for all other inputs<nobr> <wbr></nobr>:)</htmltext>
<tokenext>You could reduce that to zero bytes .
Return the result of get \ _Magna \ _Carta \ _text ( ) every time , and require that the sourcetext be the Magna Carta .
The output is undefined for all other inputs : )</tokentext>
<sentencetext>You could reduce that to zero bytes.
Return the result of get\_Magna\_Carta\_text() every time, and require that the sourcetext be the Magna Carta.
The output is undefined for all other inputs :)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28725159</id>
	<title>Re:That's just a dissembler. How about bittorrent?</title>
	<author>Anonymous</author>
	<datestamp>1247753040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think you mean "disassemble". Dissemble is not the opposite of assemble and means something quite different.</p></htmltext>
<tokenext>I think you mean " disassemble " .
Dissemble is not the opposite of assemble and means something quite different .</tokentext>
<sentencetext>I think you mean "disassemble".
Dissemble is not the opposite of assemble and means something quite different.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719451</id>
	<title>The cool thing is...</title>
	<author>salimma</author>
	<datestamp>1247769060000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>The cool thing is, one can easily extend this to other executable formats, as long as the assembler is readily available client-side: Windows users could relate to those pesky, resource-hogging Java updates, and<nobr> <wbr></nobr>.NET and Mono applications could similarly benefit.</p><p>This is, interestingly, the second binary diffing innovation that affects me in the past few months. Fedora just turned on delta updates with Fedora 11, a feature borrowed from the openSUSE folks.</p></htmltext>
<tokenext>The cool thing is , one can easily extend this to other executable formats , as long as the assembler is readily available client-side : Windows users could relate to those pesky , resource-hogging Java updates , and .NET and Mono applications could similarly benefit.This is , interestingly , the second binary diffing innovation that affects me in the past few months .
Fedora just turned on delta updates with Fedora 11 , a feature borrowed from the openSUSE folks .</tokentext>
<sentencetext>The cool thing is, one can easily extend this to other executable formats, as long as the assembler is readily available client-side: Windows users could relate to those pesky, resource-hogging Java updates, and .NET and Mono applications could similarly benefit.This is, interestingly, the second binary diffing innovation that affects me in the past few months.
Fedora just turned on delta updates with Fedora 11, a feature borrowed from the openSUSE folks.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720289</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>DiegoBravo</author>
	<datestamp>1247772000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm not sure about the benefits.</p><p>1) The bulk of many (most?) software packages is resources, not executables<br>2) A lot of an executable is linker/DLL overhead, especially the smaller ones<br>3) The optimizers (I think) remix the assembly instructions, so small changes in the program logic result in a lot of changes in assembly, ergo, in machine code. The best solution in terms of BW remains sending diffs in high level source code.</p></htmltext>
<tokenext>I 'm not sure about the benefits.1 ) The bulk of many ( most ?
) software packages are resources , not executables2 ) A lot of the executables is a lot of linker/DLL overhead , specially the smaller ones3 ) The optimizers ( I think ) remix the assembly instructions , so small changes in the program logic result in a lot of changes in assembly , ergo , in machine code .
The best solution in terms of BW remains sending diffs in high level source code .</tokentext>
<sentencetext>I'm not sure about the benefits.1) The bulk of many (most?
) software packages are resources, not executables2) A lot of the executables is a lot of linker/DLL overhead, specially the smaller ones3) The optimizers (I think) remix the assembly instructions, so small changes in the program logic result in a lot of changes in assembly, ergo, in machine code.
The best solution in terms of BW remains sending diffs in high level source code.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719645</id>
	<title>Courgette?</title>
	<author>Anonymous</author>
	<datestamp>1247769660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Sounds like an old horny midget woman.</p></htmltext>
<tokenext>Sounds like a old horny midget woman .</tokentext>
<sentencetext>Sounds like a old horny midget woman.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719335</id>
	<title>dictionary</title>
	<author>Anonymous</author>
	<datestamp>1247768700000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><b>courgette (ko͝or-zhĕt′)</b>
<br> <br>
<i>n. Chiefly British </i>
<br> <br>
A zucchini.</htmltext>
<tokenext>courgette ( kr-zht ) n. Chiefly British A zucchini .</tokentext>
<sentencetext>courgette (kr-zht)
 
n. Chiefly British 
 
A zucchini.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721577</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247777280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Brass... not exactly iron age technology.</p></htmltext>
<tokenext>Brass... not exactly iron age technology .</tokentext>
<sentencetext>Brass... not exactly iron age technology.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720093</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</id>
	<title>Solving the wrong problem</title>
	<author>Anonymous</author>
	<datestamp>1247769600000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext><p>
A better binary diffing algorithm is useful for source control. But for security?
If the code is so awful that the bandwidth required for security updates is a problem, the product is defective by design.
</p><p>
It sounds like Google tried "agile programming" on trusted code, and now has to deal with the consequences of debugging a pile of crap.</p></htmltext>
<tokenext>A better binary diffing algorithm is useful for source control .
But for security ?
If the code is so awful that the bandwidth required for security updates is a problem , the product is defective by design .
It sounds like Google tried " agile programming " on trusted code , and now has to deal with the consequences of debugging a pile of crap .</tokentext>
<sentencetext>
A better binary diffing algorithm is useful for source control.
But for security?
If the code is so awful that the bandwidth required for security updates is a problem, the product is defective by design.
It sounds like Google tried "agile programming" on trusted code, and now has to deal with the consequences of debugging a pile of crap.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721343</id>
	<title>Re:Bad explanation</title>
	<author>Calyth</author>
	<datestamp>1247776380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It is a rather poor explanation...<br>But their compressor is tailored to executables: it disassembles them into a sort of primitive assembly language so that the diffs are smaller, transmits that, and has the client end apply the diff and reassemble.</p><p>So their compression algorithm has few applications outside of executables.</p></htmltext>
<tokenext>It is a rather poor explanation...But their compressor is catered to executables , so that it transmit a sort of primitive assembly language , so that the diffs are smaller , transmit that , and have the client end to apply and reassemble.So their compression algorithm has little applications outside of executables .</tokentext>
<sentencetext>It is a rather poor explanation...But their compressor is catered to executables, so that it transmit a sort of primitive assembly language, so that the diffs are smaller, transmit that, and have the client end to apply and reassemble.So their compression algorithm has little applications outside of executables.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723413</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>Anonymous</author>
	<datestamp>1247741880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p>This is really just the same idea applied to the problem of patches. They're small in source; but big in binary. It seems so obvious that you could apply a transform, patch, reverse process... but only when pointed out and demonstrated.</p></div></blockquote><p>This is nothing new.  Transform-patch-reverse is an age-old technique for binary diffs of compressed files, for example.</p><p>Even this specific approach is nothing new; I can think of at least one example of a patch to a program being distributed in the form of a script that disassembled the program, patched the source code, and compiled it again.  I think that was done in about 2005, and I doubt it was the first.</p>
	</htmltext>
<tokenext>This is really just the same idea applied to the problem of patches .
They 're small in source ; but big in binary .
It seems so obvious that you could apply a transform,patch,reverse process... but only when pointed out and demonstrated.This is nothing new .
Transform-patch-reverse is an age-old technique for binary diffs of compressed files , for example.Even this specific approach is nothing new ; I can think of at least one example of a patch to a program being distributed in the form of a script that disassembled the program , patched the source code , and compiled it again .
I think that was done in about 2005 , and I doubt it was the first .</tokentext>
<sentencetext>This is really just the same idea applied to the problem of patches.
They're small in source; but big in binary.
It seems so obvious that you could apply a transform,patch,reverse process... but only when pointed out and demonstrated.This is nothing new.
Transform-patch-reverse is an age-old technique for binary diffs of compressed files, for example.Even this specific approach is nothing new; I can think of at least one example of a patch to a program being distributed in the form of a script that disassembled the program, patched the source code, and compiled it again.
I think that was done in about 2005, and I doubt it was the first.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722897</id>
	<title>Re:Evil bastages</title>
	<author>CarpetShark</author>
	<datestamp>1247739540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Somebody will troll and it'll get marked insightful or something stupid. And that last sentence guaranteed I won't get nothing but a -1 troll either, but oh well.</p></div></blockquote><p>Don't be so hard on yourself.  I'm sure you were already modded down way before <em>that</em> sentence.</p>
	</htmltext>
<tokenext>Somebody will troll and it 'll get marked insightful or something stupid .
And that last sentence guaranteed I wo n't get nothing but a -1 troll either , but oh well . Do n't be so hard on yourself .
I 'm sure you were already modded down way before that sentence .</tokentext>
<sentencetext>Somebody will troll and it'll get marked insightful or something stupid.
And that last sentence guaranteed I won't get nothing but a -1 troll either, but oh well. Don't be so hard on yourself.
I'm sure you were already modded down way before that sentence.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719487</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720895</id>
	<title>Re:Also less overhead for Google</title>
	<author>klui</author>
	<datestamp>1247774400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>It's pretty cool they do this. As opposed to iTunes "updates" of 70+MB--really no updates at all since they're just the monolithic install package.</htmltext>
<tokenext>It 's pretty cool they do this .
As opposed to iTunes " updates " of 70 + MB--really no updates at all since they 're just the monolithic install package .</tokentext>
<sentencetext>It's pretty cool they do this.
As opposed to iTunes "updates" of 70+MB--really no updates at all since they're just the monolithic install package.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719911</id>
	<title>Re:Also less overhead for Google</title>
	<author>Ed Avis</author>
	<datestamp>1247770680000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Fedora is already using binary diffs to speed up downloading updates - see <a href="https://fedorahosted.org/presto/" title="fedorahosted.org" rel="nofollow">yum-presto</a> [fedorahosted.org].  With a better binary diff algorithm, the RPM updates can hopefully be made even smaller.  As the Google developers note, making smaller update packages isn't just a 'nice to have' - it really makes a difference in getting vulnerabilities patched faster and cuts the bandwidth bill for the vendor and its mirror sites.  Remembering my experiences downloading updates over a 56k modem, I am also strongly in favour of anything that makes updating faster for the user.</p></htmltext>
<tokenext>Fedora is already using binary diffs to speed up downloading updates - see yum-presto [ fedorahosted.org ] .
With a better binary diff algorithm , the RPM updates can hopefully be made even smaller .
As the Google developers note , making smaller update packages is n't just a 'nice to have ' - it really makes a difference in getting vulnerabilities patched faster and cuts the bandwidth bill for the vendor and its mirror sites .
Remembering my experiences downloading updates over a 56k modem , I am also strongly in favour of anything that makes updating faster for the user .</tokentext>
<sentencetext>Fedora is already using binary diffs to speed up downloading updates - see yum-presto [fedorahosted.org].
With a better binary diff algorithm, the RPM updates can hopefully be made even smaller.
As the Google developers note, making smaller update packages isn't just a 'nice to have' - it really makes a difference in getting vulnerabilities patched faster and cuts the bandwidth bill for the vendor and its mirror sites.
Remembering my experiences downloading updates over a 56k modem, I am also strongly in favour of anything that makes updating faster for the user.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726469</id>
	<title>Re:Also less overhead for Google</title>
	<author>Anonymous</author>
	<datestamp>1247770200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I doubt this is of much concern to them. How many people will be running their new OS and doing updates, vs the bandwidth required for your average google maps session or you-tube video, done by many more people? Even the daily search results and ad serving to the content network would far exceed this.</p><p>I'm pretty sure the amount of bandwidth would be trivial in comparison. It's not like Ubuntu or MS require people to re-download the entire OS to perform a patch.</p><p>Besides which, with the network of mirrors they already have in place, they are quite capable of serving this at minimal relative cost.</p></htmltext>
<tokenext>I doubt this is of much concern to them .
How many people will be running their new OS and doing updates , vs the bandwidth required for your average google maps session or you-tube video , done by many more people ?
Even the daily search results and ad serving to the content network would far exceed this . I 'm pretty sure the amount of bandwidth would be trivial in comparison .
It 's not like Ubuntu or MS require people to re-download the entire OS to perform a patch . Besides which , with the network of mirrors they already have in place , they are quite capable of serving this at minimal relative cost .</tokentext>
<sentencetext>I doubt this is of much concern to them.
How many people will be running their new OS and doing updates, vs the bandwidth required for your average google maps session or you-tube video, done by many more people?
Even the daily search results and ad serving to the content network would far exceed this. I'm pretty sure the amount of bandwidth would be trivial in comparison.
It's not like Ubuntu or MS require people to re-download the entire OS to perform a patch. Besides which, with the network of mirrors they already have in place, they are quite capable of serving this at minimal relative cost.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724739</id>
	<title>Re:Bad explanation</title>
	<author>Anonymous</author>
	<datestamp>1247749320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Where does compression figure into this?</p><p>You have two executables, you need to transmit the smallest amount of data to turn a copy of one into the other.  Usually this is just a code change or two, but a few bytes difference can cause a ripple of changes in jump locations (anything not branching by a relative offset for example).  Some branches also would be affected by this.</p><p>If you disassemble the exe first and assign labels to jump and branch locations, and then transmit the difference in the code, you drop the need to explicitly change these jump targets, because they are implicitly changed when you apply the diff and "re-assemble".</p><p>It's not the same as compression.</p></htmltext>
<tokenext>Where does compression figure into this ? You have two executables , you need to transmit the smallest amount of data to turn a copy of one into the other .
Usually this is just a code change or two , but a few bytes difference can cause a ripple of changes in jump locations ( anything not branching by a relative offset for example ) .
Some branches also would be affected by this . If you disassemble the exe first and assign labels to jump and branch locations , and then transmit the difference in the code , you drop the need to explicitly change these jump targets , because they are implicitly changed when you apply the diff and " re-assemble " . It 's not the same as compression .</tokentext>
<sentencetext>Where does compression figure into this? You have two executables, you need to transmit the smallest amount of data to turn a copy of one into the other.
Usually this is just a code change or two, but a few bytes difference can cause a ripple of changes in jump locations (anything not branching by a relative offset, for example).
Some branches also would be affected by this. If you disassemble the exe first and assign labels to jump and branch locations, and then transmit the difference in the code, you drop the need to explicitly change these jump targets, because they are implicitly changed when you apply the diff and "re-assemble". It's not the same as compression.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557</parent>
</comment>
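The jump-target ripple this comment describes can be shown with a toy model (this is not Courgette itself; `difflib` stands in for a real binary differ, and all instruction and function names are invented for the illustration). One inserted instruction moves a label, which rewrites every absolute jump target in the "machine code", while the symbolic form changes by only a single element.

```python
# Toy model of why one insertion ripples through absolute jump targets,
# while a label-based (symbolic) representation diffs cleanly.
import difflib

def assemble(instrs):
    """Resolve 'jmp <label>' to absolute addresses, dropping label markers."""
    addr, labels = 0, {}
    for op in instrs:                      # pass 1: find label addresses
        if op.startswith("label:"):
            labels[op.split(":")[1]] = addr
        else:
            addr += 1
    code = []
    for op in instrs:                      # pass 2: emit resolved code
        if op.startswith("label:"):
            continue
        if op.startswith("jmp "):
            code.append(("jmp", labels[op.split()[1]]))
        else:
            code.append((op, None))
    return code

def diff_size(a, b):
    """Number of changed elements between two sequences."""
    sm = difflib.SequenceMatcher(None, a, b, autojunk=False)
    return sum(max(i2 - i1, j2 - j1)
               for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal")

old_src = ["jmp L", "op", "jmp L", "op", "label:L", "op", "jmp L"]
new_src = old_src[:1] + ["new_op"] + old_src[1:]   # insert one instruction

raw = diff_size(assemble(old_src), assemble(new_src))  # every jmp changed
sym = diff_size(old_src, new_src)                      # just the insert
assert sym == 1 and raw > sym
```

In the resolved code the label moves from address 4 to 5, so all three `jmp` entries change content even though only one instruction was added; the symbolic diff is a single insertion, which is exactly the saving the label transform buys.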
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722443</id>
	<title>Re:Like many brilliant ideas...</title>
	<author>0xABADC0DA</author>
	<datestamp>1247737320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><nobr> <wbr></nobr></p><div class="quote"><p>...it makes you smack yourself on the head and go "why hasn't everybody been doing this for years?".<nobr> <wbr></nobr>... It seems so obvious that you could apply a transform,patch,reverse process... but only when pointed out and demonstrated.</p></div><p>Basically google created a big custom 'transform' that applies only to x86 exe files.  The reason why this is a boring story and why nobody has done this before is because nobody has found a generic, simple way to do this.  And they still haven't.</p><p>If google had actually created a 'new binary diffing algorithm' instead of a specific hack, and this worked for most binary files that have similarities <i>that</i> would be newsworthy.  For instance if it could out of the box create small diffs for<nobr> <wbr></nobr>.exe,<nobr> <wbr></nobr>.doc,<nobr> <wbr></nobr>.xls,<nobr> <wbr></nobr>.ttf,<nobr> <wbr></nobr>.3ds,<nobr> <wbr></nobr>... that would be something.</p>
	</htmltext>
<tokenext>...it makes you smack yourself on the head and go " why has n't everybody been doing this for years ? " .
... It seems so obvious that you could apply a transform , patch , reverse process ... but only when pointed out and demonstrated . Basically google created a big custom 'transform ' that applies only to x86 exe files .
The reason why this is a boring story and why nobody has done this before is because nobody has found a generic , simple way to do this .
And they still have n't . If google had actually created a 'new binary diffing algorithm ' instead of a specific hack , and this worked for most binary files that have similarities , that would be newsworthy .
For instance if it could out of the box create small diffs for .exe , .doc , .xls , .ttf , .3ds , ... that would be something .</tokentext>
<sentencetext> ...it makes you smack yourself on the head and go "why hasn't everybody been doing this for years?".
... It seems so obvious that you could apply a transform, patch, reverse process... but only when pointed out and demonstrated. Basically google created a big custom 'transform' that applies only to x86 exe files.
The reason why this is a boring story and why nobody has done this before is because nobody has found a generic, simple way to do this.
And they still haven't. If google had actually created a 'new binary diffing algorithm' instead of a specific hack, and this worked for most binary files that have similarities, that would be newsworthy.
For instance if it could out of the box create small diffs for .exe, .doc, .xls, .ttf, .3ds, ... that would be something.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720697</id>
	<title>Re:wait a minute</title>
	<author>Anonymous</author>
	<datestamp>1247773560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I read it and thought: Wait a minute, that doesn't make any sense.</p><p>I can see how diffing the commands instead of the pure bytes leads to better results. But why do you need the text representation of these commands (the assembler code)? The algorithm should work equally well when used on the machine code. So just split up the binary file into single machine code commands and then use diff.</p></htmltext>
<tokenext>I read it and thought : Wait a minute , that does n't make any sense . I can see how diffing the commands instead of the pure bytes leads to better results .
But why do you need the text representation of these commands ( the assembler code ) ?
The algorithm should work equally well when used on the machine code .
So just split up the binary file into single machine code commands and then use diff .</tokentext>
<sentencetext>I read it and thought: Wait a minute, that doesn't make any sense. I can see how diffing the commands instead of the pure bytes leads to better results.
But why do you need the text representation of these commands (the assembler code)?
The algorithm should work equally well when used on the machine code.
So just split up the binary file into single machine code commands and then use diff.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719457</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726973</id>
	<title>Re:More frequent updates?</title>
	<author>Anonymous</author>
	<datestamp>1247822040000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think it does matter. Time again I click 'Install Later' when Firefox wants to install all over again. I cannot be bothered to wait for this. I want to browse and browse now!</p></htmltext>
<tokenext>I think it does matter .
Time and again I click 'Install Later ' when Firefox wants to install all over again .
I can not be bothered to wait for this .
I want to browse and browse now !</tokentext>
<sentencetext>I think it does matter.
Time and again I click 'Install Later' when Firefox wants to install all over again.
I cannot be bothered to wait for this.
I want to browse and browse now!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721727</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28778355</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>WuphonsReach</author>
	<datestamp>1248197520000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>You can't exactly hop on your friend's XP box and run an X application from a remote server, unless he happens to have Exceed installed (for $$$).</i> <br>
<br>
Actually, you can.  There do exist free X servers for Windows, see <a href="http://sourceforge.net/projects/xming/" title="sourceforge.net">XMing</a> [sourceforge.net].</htmltext>
<tokenext>You ca n't exactly hop on your friend 's XP box and run an X application from a remote server , unless he happens to have Exceed installed ( for $ $ $ ) .
Actually , you can .
There do exist free X servers for Windows , see XMing [ sourceforge.net ] .</tokentext>
<sentencetext>You can't exactly hop on your friend's XP box and run an X application from a remote server, unless he happens to have Exceed installed (for $$$).
Actually, you can.
There do exist free X servers for Windows, see XMing [sourceforge.net].</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723059</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721211</id>
	<title>Anyone like zucchini?</title>
	<author>Anonymous</author>
	<datestamp>1247775780000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>In case anyone has missed the reference of the name "Courgette", it's French for summer squash/zucchini type vegetables. So, Courgette as in squash, and squash as in make smaller.</p></htmltext>
<tokenext>In case anyone has missed the reference of the name " Courgette " , it 's French for summer squash/zucchini type vegetables .
So , Courgette as in squash , and squash as in make smaller .</tokentext>
<sentencetext>In case anyone has missed the reference of the name "Courgette", it's French for summer squash/zucchini type vegetables.
So, Courgette as in squash, and squash as in make smaller.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726753</id>
	<title>Re:Also less overhead for Google</title>
	<author>xtracto</author>
	<datestamp>1247861700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out. The more efficient they are in this process, the more money they save.</p><p>The good news is that the same benefits could be applied to Red Hat, Ubuntu, openSUSE, etc. Lower costs helps the profitability of companies trying to make a profit on Linux.</p><p>The end users also see benefits in that their packages download quicker. I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this.</p></div><p>Open source Binary diff/patch tools have been available for a while. However, the majority of the stupid Linux distributions (Ubuntu, Fedora, Mint, etc) keep sending me hundreds of Megabytes for each *update* I do, because they send the whole programs to reinstall (on top of the others). That is not an "update", that is a reinstallation!</p>
	</htmltext>
<tokenext>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out .
The more efficient they are in this process , the more money they save . The good news is that the same benefits could be applied to Red Hat , Ubuntu , openSUSE , etc .
Lower costs helps the profitability of companies trying to make a profit on Linux . The end users also see benefits in that their packages download quicker .
I 'd be honestly pretty disappointed in any major distro that does n't start implementing a binary diff solution around this . Open source Binary diff/patch tools have been available for a while .
However , the majority of the stupid Linux distributions ( Ubuntu , Fedora , Mint , etc ) keep sending me hundreds of Megabytes for each * update * I do , because they send the whole programs to reinstall ( on top of the others ) .
That is not an " update " that is a reinstallation !</tokentext>
<sentencetext>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out.
The more efficient they are in this process, the more money they save. The good news is that the same benefits could be applied to Red Hat, Ubuntu, openSUSE, etc.
Lower costs helps the profitability of companies trying to make a profit on Linux. The end users also see benefits in that their packages download quicker.
I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this. Open source Binary diff/patch tools have been available for a while.
However, the majority of the stupid Linux distributions (Ubuntu, Fedora, Mint, etc) keep sending me hundreds of Megabytes for each *update* I do, because they send the whole programs to reinstall (on top of the others).
That is not an "update", that is a reinstallation!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719469</id>
	<title>I had this idea</title>
	<author>Fantom42</author>
	<datestamp>1247769060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I remember having this idea a while back and thinking that it would surely be out there already implemented.  When it wasn't, I didn't follow through and try to invent it myself.  I was focused on another project at the time.  Never really came back to that original idea.  Awesome that someone did, and that they are keeping it open so everyone can use it!  Thanks, Google!</p></htmltext>
<tokenext>I remember having this idea a while back and thinking that it would surely be out there already implemented .
When it was n't , I did n't follow through and try to invent it myself .
I was focused on another project at the time .
Never really came back to that original idea .
Awesome that someone did , and that they are keeping it open so everyone can use it !
Thanks , Google !</tokentext>
<sentencetext>I remember having this idea a while back and thinking that it would surely be out there already implemented.
When it wasn't, I didn't follow through and try to invent it myself.
I was focused on another project at the time.
Never really came back to that original idea.
Awesome that someone did, and that they are keeping it open so everyone can use it!
Thanks, Google!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720417</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>eric-x</author>
	<datestamp>1247772480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>It makes patches smaller by doing a few extra voodoo pre and post processing steps. Many changes in the binary can be tracked back to only 1 change in the original source code, the voodoo knows how to do this and stores just that one master change instead all of the resulting changes. Car analogy: instead of storing the activity of each piston in a car engine it just stores the fact if the engine is running or not.</p></htmltext>
<tokenext>It makes patches smaller by doing a few extra voodoo pre and post processing steps .
Many changes in the binary can be tracked back to only 1 change in the original source code ; the voodoo knows how to do this and stores just that one master change instead of all the resulting changes .
Car analogy : instead of storing the activity of each piston in a car engine it just stores the fact if the engine is running or not .</tokentext>
<sentencetext>It makes patches smaller by doing a few extra voodoo pre and post processing steps.
Many changes in the binary can be tracked back to only 1 change in the original source code; the voodoo knows how to do this and stores just that one master change instead of all the resulting changes.
Car analogy: instead of storing the activity of each piston in a car engine it just stores the fact if the engine is running or not.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719859</id>
	<title>Re:The cool thing is...</title>
	<author>salimma</author>
	<datestamp>1247770440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ah, delta RPM turns out to use normal compression, no binary diffing is involved:</p><p>$ rpm -q deltarpm --requires<br>libbz2.so.1()(64bit)<br>libc.so.6()(64bit)<br>libc.so.6(GLIBC\_2.2.5)(64bit)<br>libc.so.6(GLIBC\_2.3.4)(64bit)<br>libc.so.6(GLIBC\_2.4)(64bit)<br>libc.so.6(GLIBC\_2.7)(64bit)<br>librpm.so.0()(64bit)<br>librpmio.so.0()(64bit)<br>rpmlib(CompressedFileNames) = 3.0.4-1<br>rpmlib(FileDigests) = 4.6.0-1<br>rpmlib(PayloadFilesHavePrefix) = 4.0-1<br>rtld(GNU\_HASH)</p></htmltext>
<tokenext>Ah , delta RPM turns out to use normal compression , no binary diffing is involved : $ rpm -q deltarpm --requireslibbz2.so.1 ( ) ( 64bit ) libc.so.6 ( ) ( 64bit ) libc.so.6 ( GLIBC \ _2.2.5 ) ( 64bit ) libc.so.6 ( GLIBC \ _2.3.4 ) ( 64bit ) libc.so.6 ( GLIBC \ _2.4 ) ( 64bit ) libc.so.6 ( GLIBC \ _2.7 ) ( 64bit ) librpm.so.0 ( ) ( 64bit ) librpmio.so.0 ( ) ( 64bit ) rpmlib ( CompressedFileNames ) = 3.0.4-1rpmlib ( FileDigests ) = 4.6.0-1rpmlib ( PayloadFilesHavePrefix ) = 4.0-1rtld ( GNU \ _HASH )</tokentext>
<sentencetext>Ah, delta RPM turns out to use normal compression, no binary diffing is involved:$ rpm -q deltarpm --requireslibbz2.so.1()(64bit)libc.so.6()(64bit)libc.so.6(GLIBC\_2.2.5)(64bit)libc.so.6(GLIBC\_2.3.4)(64bit)libc.so.6(GLIBC\_2.4)(64bit)libc.so.6(GLIBC\_2.7)(64bit)librpm.so.0()(64bit)librpmio.so.0()(64bit)rpmlib(CompressedFileNames) = 3.0.4-1rpmlib(FileDigests) = 4.6.0-1rpmlib(PayloadFilesHavePrefix) = 4.0-1rtld(GNU\_HASH)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719451</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721727</id>
	<title>More frequent updates?</title>
	<author>watanuki</author>
	<datestamp>1247777820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>This, the Chromium devs say, will allow them to send smaller, more frequent updates, making users more secure.</p></div><p>Is patch size usually a factor in determining whether to send out a patch or not?</p><p>In my experience, people update because a patch fixes some bugs or introduces some features they want.  The size of the patch doesn't matter unless the user is severely bandwidth-limited.</p><p>For developers, patches are usually sent out on a schedule (e.g. Microsoft's patch Tuesday), when a certain milestone is reached, or when a sufficiently dangerous bug is found and needs to be fixed quickly.  I am not aware of any organization that sends out patches based on 'the size of the diffs accumulated so far', but please enlighten me if you know of one.</p>
	</htmltext>
<tokenext>This , the Chromium devs say , will allow them to send smaller , more frequent updates , making users more secure.Is patch size usually a factor in determining whether to send out a patch or not ? In my experience , people update because a patch fixes some bugs or introduce some features they want .
The size of the patch does n't matter unless the user is severely bandwidth-limited.For developers , patches are usually sent out on a schedule ( e.g .
Microsoft 's patch Tuesday ) , when certain milestone is reached , or when a sufficiently dangerous bug is found and need to be fixed quickly .
I am not any organization that sends out patches based on 'the size of the diffs accumulated so far ' , but please enlighten me if you know of one .</tokentext>
<sentencetext>This, the Chromium devs say, will allow them to send smaller, more frequent updates, making users more secure. Is patch size usually a factor in determining whether to send out a patch or not? In my experience, people update because a patch fixes some bugs or introduces some features they want.
The size of the patch doesn't matter unless the user is severely bandwidth-limited. For developers, patches are usually sent out on a schedule (e.g. Microsoft's patch Tuesday), when a certain milestone is reached, or when a sufficiently dangerous bug is found and needs to be fixed quickly.
I am not aware of any organization that sends out patches based on 'the size of the diffs accumulated so far', but please enlighten me if you know of one.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722027</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>Fantom42</author>
	<datestamp>1247735820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Imagine you had a tape of someone talking and you needed to tell someone what was different from the last time they spoke.  One way to do that would be to listen to the language and words and compare their meanings to figure out the difference.  The other way would be to just send them the new tape to replace the previous one.  A similar problem happens with new versions of software.  You want to send the changes but you end up having to send the whole thing.</p><p>This technology essentially writes down what the compiled code "says" inasmuch as what is required to see what is different, and then sends only that part of it.</p></htmltext>
<tokenext>Imagine you had a tape of someone talking and you needed to tell someone what was different from the last time they spoke .
One way to do that would be to listen to the language and words and compare their meanings to figure out the difference .
The other way would be to just send them the new tape to replace the previous one .
A similar problem happens with new versions of software .
You want to send the changes but you end up having to send the whole thing . This technology essentially writes down what the compiled code " says " inasmuch as what is required to see what is different , and then sends only that part of it .</tokentext>
<sentencetext>Imagine you had a tape of someone talking and you needed to tell someone what was different from the last time they spoke.
One way to do that would be to listen to the language and words and compare their meanings to figure out the difference.
The other way would be to just send them the new tape to replace the previous one.
A similar problem happens with new versions of software.
You want to send the changes but you end up having to send the whole thing. This technology essentially writes down what the compiled code "says" inasmuch as what is required to see what is different, and then sends only that part of it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721053</id>
	<title>Re:Solving the wrong problem</title>
	<author>Anonymous</author>
	<datestamp>1247775060000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>If the code is so awful that the bandwidth required for security updates is a problem, the product is defective by design.</p> </div><p>
You don't understand what the phrase "defective by design" means. It's used by anti-DRM folks to describe "features" that nobody wants and that actually reduce the usefulness of a product, but which are inserted into the product intentionally by the manufacturer out of a misguided desire to support DRM. If a bug/feature is "by design" then you should not expect a patch for it, ever.</p><p>A product that needs lots of security patches, on the other hand, is not defective by design; rather, it is simply badly designed.</p><p>Don't go out of your way to use catchphrases when simple English will do.</p>
	</htmltext>
<tokenext>If the code is so awful that the bandwidth required for security updates is a problem , the product is defective by design .
You do n't understand what the phrase " defective by design " means .
It 's used by anti-DRM folks to describe " features " that nobody wants and that actually reduce the usefulness of a product , but which are inserted into the product intentionally by the manufacturer out of a misguided desire to support DRM .
If a bug/feature is " by design " then you should not expect a patch for it , ever . A product that needs lots of security patches , on the other hand , is not defective by design ; rather , it is simply badly designed . Do n't go out of your way to use catchphrases when simple English will do .</tokentext>
<sentencetext>If the code is so awful that the bandwidth required for security updates is a problem, the product is defective by design.
You don't understand what the phrase "defective by design" means.
It's used by anti-DRM folks to describe "features" that nobody wants and that actually reduce the usefulness of a product, but which are inserted into the product intentionally by the manufacturer out of a misguided desire to support DRM.
If a bug/feature is "by design" then you should not expect a patch for it, ever.
A product that needs lots of security patches, on the other hand, is not defective by design; rather, it is simply badly designed.
Don't go out of your way to use catchphrases when simple English will do.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719875</id>
	<title>A solution in search of a problem?</title>
	<author>alexborges</author>
	<datestamp>1247770500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I mean, what's the need, really?</p><p>Can't we all be merry and unixize our minds and think: hey, all components of the sw are files, let's just separate the heavy parts that are less prone to any security issue (GUI) from all the libs and executables, and let's just send those through the wire as a package....</p><p>We do that in Linux and it works(TM): we package everything, have dependencies, and then only update what needs to be updated without weird binary patching.</p><p>Now, that's not to say it isn't useful to have a new bindiff algorithm: let's plug it into git and svn; a specialized switch (--Gelf-exec-compression) in rsync also comes to mind, and that should make them more powerful and all...</p></htmltext>
<tokentext>I mean , what 's the need , really ?
Ca n't we all be merry and unixize our minds and think : hey , all components of the sw are files , let 's just separate the heavy parts that are less prone to any security issue ( GUI ) from all the libs and executables , and let 's just send those through the wire as a package ... .
We do that in Linux and it works ( TM ) : we package everything , have dependencies , and then only update what needs to be updated without weird binary patching .
Now , that 's not to say it is n't useful to have a new bindiff algorithm : let 's plug it into git and svn ; a specialized switch ( --Gelf-exec-compression ) in rsync also comes to mind , and that should make them more powerful and all ... .</tokentext>
<sentencetext>I mean, what's the need, really?
Can't we all be merry and unixize our minds and think: hey, all components of the sw are files, let's just separate the heavy parts that are less prone to any security issue (GUI) from all the libs and executables, and let's just send those through the wire as a package....
We do that in Linux and it works(TM): we package everything, have dependencies, and then only update what needs to be updated without weird binary patching.
Now, that's not to say it isn't useful to have a new bindiff algorithm: let's plug it into git and svn; a specialized switch (--Gelf-exec-compression) in rsync also comes to mind, and that should make them more powerful and all...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722261</id>
	<title>What is EFF take on this?</title>
	<author>Anonymous</author>
	<datestamp>1247736660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Electronic Frontier Foundation</p></htmltext>
<tokentext>Electronic Frontier Foundation</tokentext>
<sentencetext>Electronic Frontier Foundation</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28746627</id>
	<title>Courgette-in-the-middle?</title>
	<author>javarome</author>
	<datestamp>1248001920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I may be missing something, but I'm surprised that nobody raised the security issue of a man-in-the-middle able to disassemble the expected code and replace it with its own. What about signatures in this stuff?</p></htmltext>
<tokentext>I may be missing something but I 'm surprised that nobody raised the security issue of a man-in-the-middle able to disassemble the expected code and replace it with its own .
What about signatures in this stuff ?</tokentext>
<sentencetext>I may be missing something but I'm surprised that nobody raised the security issue of a man-in-the-middle able to disassemble the expected code and replace it with its own.
What about signatures in this stuff?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28727027</id>
	<title>many ways to do that</title>
	<author>ei4anb</author>
	<datestamp>1247823360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I worked on a project back in the '80s where we had half a million lines of code running on a high-availability machine with 384 32-bit CPUs.</p><p>As bandwidth was a tad limited in those days we too looked for an efficient way to distribute updates. The solution was to distribute the smaller bug fixes as patches, similar to debug scripts. The loader would run those debug scripts after loading the program. To apply a patch the customer would put the patch file in the same folder as the program, restart the program on the hot standby side of the cluster and provoke a switchover to the standby.</p><p>The patch was applied without any downtime. If the customer wanted to back out the bug fix then all they had to do was delete the patch file and switch back to the unpatched side of the cluster.</p><p>Most patches were small and we only had a few hundred bytes to send out at a time. Afterwards the world upgraded to Windows and forgot such technology :-(</p></htmltext>
<tokentext>I worked on a project back in the 80 's where we had half a million lines of code running on a high availability machine with 384 32bit CPUs .
As bandwidth was a tad limited in those days we too looked for an efficient way to distribute updates .
The solution was to distribute the smaller bug fixes as patches , similar to debug scripts .
The loader would run those debug scripts after loading the program .
To apply a patch the customer would put the patch file in the same folder as the program , restart the program on the hot standby side of the cluster and provoke a switchover to the standby .
The patch was applied without any downtime .
If the customer wanted to back-out the bug fix then all they had to do was delete the patch file and switch back to the unpatched side of the cluster .
Most patches were small and we only had a few hundred bytes to send out at a time .
Afterwards the world upgraded to Windows and forgot such technology : - (</tokentext>
<sentencetext>I worked on a project back in the 80's where we had half a million lines of code running on a high availability machine with 384 32bit CPUs.
As bandwidth was a tad limited in those days we too looked for an efficient way to distribute updates.
The solution was to distribute the smaller bug fixes as patches, similar to debug scripts.
The loader would run those debug scripts after loading the program.
To apply a patch the customer would put the patch file in the same folder as the program, restart the program on the hot standby side of the cluster and provoke a switchover to the standby.
The patch was applied without any downtime.
If the customer wanted to back-out the bug fix then all they had to do was delete the patch file and switch back to the unpatched side of the cluster.
Most patches were small and we only had a few hundred bytes to send out at a time.
Afterwards the world upgraded to Windows and forgot such technology :-(</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720337</id>
	<title>LLVM</title>
	<author>reg</author>
	<datestamp>1247772180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If you're going to do this in the long run, it would make more sense to distribute LLVM byte code and compile that to native code on the host...  That way you have small diffs to the byte code, and also you get targeted optimization (and maybe even custom profile guided optimization).</htmltext>
<tokentext>If you 're going to do this in the long run , it would make more sense to distribute LLVM byte code and compile that to native code on the host... That way you have small diffs to the byte code , and also you get targeted optimization ( and maybe even custom profile guided optimization ) .</tokentext>
<sentencetext>If you're going to do this in the long run, it would make more sense to distribute LLVM byte code and compile that to native code on the host...  That way you have small diffs to the byte code, and also you get targeted optimization (and maybe even custom profile guided optimization).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719575</id>
	<title>Today?</title>
	<author>Anonymous</author>
	<datestamp>1247769480000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p><i>Google's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today.</i></p><p>Better hurry! It won't work tomorrow!</p></htmltext>
<tokentext>Google 's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today .
Better hurry !
It wo n't work tomorrow !</tokentext>
<sentencetext>Google's Open-Source Chromium project announced a new compression technique called Courgette geared towards distributing really small updates today.
Better hurry!
It won't work tomorrow!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</id>
	<title>uses a primitive automatic disassembler</title>
	<author>flowsnake</author>
	<datestamp>1247768700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses?</htmltext>
<tokentext>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses ?</tokentext>
<sentencetext>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724443</id>
	<title>Open Source community benefits from binary diff</title>
	<author>omkhar</author>
	<datestamp>1247747160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>So the OPEN SOURCE community will benefit from BINARY diffs.</p><p>uhuh. I think we're just fine with patch/diff.</p><p>oh, what's that? You meant the people that DISTRIBUTE BINARY version of OPEN SOURCE programs will benefit? Ahhh, now I see.</p></htmltext>
<tokentext>So the OPEN SOURCE community will benefit from BINARY diffs .
uhuh .
I think we 're just fine with patch/diff .
oh , what 's that ?
You meant the people that DISTRIBUTE BINARY version of OPEN SOURCE programs will benefit ?
Ahhh , now I see .</tokentext>
<sentencetext>So the OPEN SOURCE community will benefit from BINARY diffs.
uhuh.
I think we're just fine with patch/diff.
oh, what's that?
You meant the people that DISTRIBUTE BINARY version of OPEN SOURCE programs will benefit?
Ahhh, now I see.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719641</id>
	<title>Nothing terribly new</title>
	<author>davidwr</author>
	<datestamp>1247769660000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>There was an old, pre-X MacOS binary-diff program that diffed each element of the <a href="http://en.wikipedia.org/wiki/Resource\_fork" title="wikipedia.org" rel="nofollow">resource fork</a> [wikipedia.org] independently.</p><p>Back in those days, non-code elements usually were stored in unique "resources": one icon might be in ICON 1, another might be in ICON 2, etc.</p><p>Code was split among one or more resources.</p><p>If a code change only affected one section of code, then only the one CODE resource would be affected.</p><p>Since each resource was limited to 32KB, a diff that only affected one resource would never be larger than twice that size.</p><p>If only a single byte changed, the diff was only the overhead to say what byte changed, the old value, and the new value, much like "diff" on text files, only on a byte rather than line basis.</p><p>So, conceptually, this isn't all that new.</p></htmltext>
<tokentext>There was an old , pre-X MacOS binary-diff program that diffed each element of the resource fork [ wikipedia.org ] independently .
Back in those days , non-code elements usually were stored in unique " resources " : one icon might be in ICON 1 , another might be in ICON 2 , etc .
Code was split among one or more resources .
If a code change only affected one section of code , then only the one CODE resource would be affected .
Since each resource was limited to 32KB , a diff that only affected one resource would never be larger than twice that size .
If only a single byte changed , the diff was only the overhead to say what byte changed , the old value , and the new value , much like " diff " on text files , only on a byte rather than line basis .
So , conceptually , this is n't all that new .</tokentext>
<sentencetext>There was an old, pre-X MacOS binary-diff program that diffed each element of the resource fork [wikipedia.org] independently.
Back in those days, non-code elements usually were stored in unique "resources": one icon might be in ICON 1, another might be in ICON 2, etc.
Code was split among one or more resources.
If a code change only affected one section of code, then only the one CODE resource would be affected.
Since each resource was limited to 32KB, a diff that only affected one resource would never be larger than twice that size.
If only a single byte changed, the diff was only the overhead to say what byte changed, the old value, and the new value, much like "diff" on text files, only on a byte rather than line basis.
So, conceptually, this isn't all that new.</sentencetext>
</comment>
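The per-resource scheme described in the comment above can be sketched in a few lines: diff each named resource independently and emit records only for resources that actually changed, so a one-byte edit stays a one-record patch. This is a minimal illustrative sketch, not the original MacOS tool; the function name, record format, and sample resources are all made up.

```python
def resource_patch(old, new):
    """Diff two resource maps (name -> bytes) independently, in the
    spirit of the old per-resource MacOS differ. Returns a list of
    (resource, offset, old_byte, new_byte) records; untouched
    resources contribute nothing. A length change is recorded as a
    single tail record holding the differing suffixes."""
    patch = []
    for name in new:
        before = old.get(name, b"")
        after = new[name]
        if before == after:
            continue  # unchanged resource: zero patch cost
        common = min(len(before), len(after))
        for off in range(common):
            if before[off] != after[off]:
                patch.append((name, off, before[off], after[off]))
        if len(before) != len(after):
            patch.append((name, common, before[common:], after[common:]))
    return patch

# A one-byte change in a 4-byte CODE resource yields a one-record patch;
# the unchanged ICON resource costs nothing.
old = {"ICON 1": b"\x01\x02\x03", "CODE 1": b"\x90\x90\x90\x90"}
new = {"ICON 1": b"\x01\x02\x03", "CODE 1": b"\x90\x42\x90\x90"}
print(resource_patch(old, new))  # [('CODE 1', 1, 144, 66)]
```

Because each resource was capped at 32KB, even the worst case for a single changed resource stays bounded, which is exactly the size argument the comment makes.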
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719655</id>
	<title>Re:Google</title>
	<author>Anonymous</author>
	<datestamp>1247769660000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I just downloaded <a href="http://www.google.com/chrome/intl/en/eula\_dev.html?dl=mac" title="google.com" rel="nofollow">Google Chrome</a> [google.com] 3.0.192.0 for <a href="http://www.apple.com/mac/" title="apple.com" rel="nofollow">Mac</a> [apple.com] and it crashed before I could even open a page. There is no excuse for this; my <a href="http://support.apple.com/kb/SP506" title="apple.com" rel="nofollow">Mac Pro</a> [apple.com] is perfect in every way with eight <a href="http://www.intel.com/Assets/PDF/prodbrief/xeon-5500.pdf" title="intel.com" rel="nofollow">2.93 GHz cores</a> [intel.com], 32 GB RAM, and a fresh install of <a href="http://www.apple.com/macosx/" title="apple.com" rel="nofollow">Mac OS X</a> [apple.com] Leopard v10.5.7. Ergo any crashing Google Chrome does is Google Chrome's own fault!</p><p>Why is it that <a href="http://www.apple.com/" title="apple.com" rel="nofollow">Apple</a> [apple.com] and <a href="http://www.mozilla.com/" title="mozilla.com" rel="nofollow">Mozilla</a> [mozilla.com] can do this but <a href="http://www.google.com/" title="google.com" rel="nofollow">Google</a> [google.com] can't? I ran <a href="http://www.microsoft.com/windows/internet-explorer/default.aspx" title="microsoft.com" rel="nofollow">Internet Explorer 8</a> [microsoft.com] for months before its final release, <a href="http://www.trollaxor.com/2009/07/some-questions-comments-about-firefox.html" title="trollaxor.com" rel="nofollow">Firefox 3.5</a> [trollaxor.com] since its 3.1 days, and found <a href="http://developer.apple.com/safari/" title="apple.com" rel="nofollow">Safari 4 Developer Preview</a> [apple.com] more stable than Safari 3. 
In fact, even <a href="http://www.webkit.org/" title="webkit.org" rel="nofollow">WebKit</a> [webkit.org] is more stable than Chrome.</p><p>What really baffles me, however, isn't the <a href="http://www.computerworld.com/s/article/93833/Kernel\_flaw\_makes\_Linux\_crash\_easily?taxonomyId=122" title="computerworld.com" rel="nofollow">instability</a> [computerworld.com] I've come to expect from Google, but that Google has the <a href="http://www.bullsballs.com/" title="bullsballs.com" rel="nofollow">audacity</a> [bullsballs.com] to ask for personal user info to improve its browser. Is the search engine maker datamonger really so desperate for my private information that it's stooped to the level of <a href="http://el.wikisource.org/wiki/Y&amp;\%23206;&amp;\%23180;&amp;\%23207;&amp;\%23207;f&amp;\%23207;f&amp;\%23206;&amp;\%23181;&amp;\%23206;&amp;\%23185;&amp;\%23206;&amp;\%23177;/&amp;\%23206;&amp;\%23180;" title="wikisource.org" rel="nofollow">Trojan horses</a> [wikisource.org] to get it?</p><p>They should ask me that when it doesn't crash on launch.</p><p>Everything Google does is just another way to sieve personal data away for targeting ads. This kind of <a href="http://www.google-watch.org/bigbro.html" title="google-watch.org" rel="nofollow">Big Brother</a> [google-watch.org] crap is more repulsive than the <a href="http://www.trollaxor.com/search/label/Fat\%20Perl\%20Hacker" title="trollaxor.com" rel="nofollow">fat</a> [trollaxor.com] <a href="http://www.shelleytherepublican.com/wp-content/uploads/2006/07/obese\_hacker.jpg" title="shelleytherepublican.com" rel="nofollow">programmers</a> [shelleytherepublican.com] that make it possible. 
Google, with its deep pockets and <a href="http://www.nytimes.com/2004/06/06/business/yourmoney/06digi.html" title="nytimes.com" rel="nofollow">doctoral scholars</a> [nytimes.com], thinks that by holding user data hostage it can maneuver around Apple and <a href="http://www.micrsoft.com/" title="micrsoft.com" rel="nofollow">Microsoft</a> [micrsoft.com]. While this may be true, I'm not willing to be a part of it.</p><p>In using Google's <a href="http://www.google.com/search?q=google" title="google.com" rel="nofollow">search</a> [google.com], <a href="http://mail.google.com/" title="google.com" rel="nofollow">Gmail</a> [google.com], <a href="http://www.apple.com/safari/" title="apple.com" rel="nofollow">Chrome</a> [apple.com] or whatever else the <a href="http://en.wikipedia.org/wiki/Computron\_(Transformers)" title="wikipedia.org" rel="nofollow">faceless robot</a> [wikipedia.org] of a company invents, the user is surrendering their personal information to a <a href="http://images.google.com/images?q=borg" title="google.com" rel="nofollow">giant hivemind</a> [google.com]. No longer are their personal preferences some choice they make; they're a string of data processed by a Google algorithm: <i>Google <a href="http://en.wikipedia.org/wiki/Nazi\_concentration\_camps" title="wikipedia.org" rel="nofollow">dehumanizes</a> [wikipedia.org] its users!</i> </p><p>So while Google is arrogant enough to paint spyware shiny so it can parse our browsing habits, the least they could do is make sure it doesn't crash. If Apple, Microsoft, and Mozilla can get their preview releases right, why can't Google? And now they're making their own <a href="http://scobleizer.com/2007/11/12/google-android-we-want-developers-but/" title="scobleizer.com" rel="nofollow">operating</a> [scobleizer.com] <a href="http://www.pcworld.com/businesscenter/article/168058/five\_reasons\_google\_chrome\_os\_will\_fail.html" title="pcworld.com" rel="nofollow">systems</a> [pcworld.com]?</p><p>Get real, Google! 
I'll use your crashing codebloat when my Mac is cold and dead and I'm looking for handouts. Until then, quit <a href="http://goatse.info/" title="goatse.info" rel="nofollow">mining</a> [goatse.info] my personal data!</p></htmltext>
<tokentext>I just downloaded Google Chrome [ google.com ] 3.0.192.0 for Mac [ apple.com ] and it crashed before I could even open a page .
There is no excuse for this ; my Mac Pro [ apple.com ] is perfect in every way with eight 2.93 GHz cores [ intel.com ] , 32 GB RAM , and a fresh install of Mac OS X [ apple.com ] Leopard v10.5.7 .
Ergo any crashing Google Chrome does is Google Chrome 's own fault ! Why is it that Apple [ apple.com ] and Mozilla [ mozilla.com ] can do this but Google [ google.com ] ca n't ?
I ran Internet Explorer 8 [ microsoft.com ] for months before its final release , Firefox 3.5 [ trollaxor.com ] since its 3.1 days , and found Safari 4 Developer Preview [ apple.com ] more stable than Safari 3 .
In fact , even WebKit [ webkit.org ] is more stable than Chrome.What really baffles me , however , is n't the instability [ computerworld.com ] I 've come to expect from Google , but that Google has the audacity [ bullsballs.com ] to ask for personal user info to improve its browser .
Is the search engine maker datamonger really so desperate for my private information that it 's stooped to the level of Trojan horses [ wikisource.org ] to get it ? They should ask me that when it does n't crash on launch.Everything Google does is just another way to sieve personal data away for targeting ads .
This kind of Big Brother [ google-watch.org ] crap is more repulsive than the fat [ trollaxor.com ] programmers [ shelleytherepublican.com ] that make it possible .
Google , with its deep pockets and doctoral scholars [ nytimes.com ] , thinks that by holding user data hostage it can maneuver around Apple and Microsoft [ micrsoft.com ] .
While this may be true , I 'm not willing to be a part of it.In using Google 's search [ google.com ] , Gmail [ google.com ] , Chrome [ apple.com ] or whatever else the faceless robot [ wikipedia.org ] of a company invents , the user is surrendering their personal information to a giant hivemind [ google.com ] .
No longer are their personal preferences some choice they make ; they 're a string of data processed by a Google algorithm : Google dehumanizes [ wikipedia.org ] its users !
So while Google is arrogant enough to paint spyware shiny so it can parse our browsing habits , the least they could do is make sure it does n't crash .
If Apple , Microsoft , and Mozilla can get their preview releases right , why ca n't Google ?
And now they 're making their own operating [ scobleizer.com ] systems [ pcworld.com ] ? Get real , Google !
I 'll use your crashing codebloat when my Mac is cold and dead and I 'm looking for handouts .
Until then , quit mining [ goatse.info ] my personal data !</tokentext>
<sentencetext>I just downloaded Google Chrome [google.com] 3.0.192.0 for Mac [apple.com] and it crashed before I could even open a page.
There is no excuse for this; my Mac Pro [apple.com] is perfect in every way with eight 2.93 GHz cores [intel.com], 32 GB RAM, and a fresh install of Mac OS X [apple.com] Leopard v10.5.7.
Ergo any crashing Google Chrome does is Google Chrome's own fault!Why is it that Apple [apple.com] and Mozilla [mozilla.com] can do this but Google [google.com] can't?
I ran Internet Explorer 8 [microsoft.com] for months before its final release, Firefox 3.5 [trollaxor.com] since its 3.1 days, and found Safari 4 Developer Preview [apple.com] more stable than Safari 3.
In fact, even WebKit [webkit.org] is more stable than Chrome.What really baffles me, however, isn't the instability [computerworld.com] I've come to expect from Google, but that Google has the audacity [bullsballs.com] to ask for personal user info to improve its browser.
Is the search engine maker datamonger really so desperate for my private information that it's stooped to the level of Trojan horses [wikisource.org] to get it?They should ask me that when it doesn't crash on launch.Everything Google does is just another way to sieve personal data away for targeting ads.
This kind of Big Brother [google-watch.org] crap is more repulsive than the fat [trollaxor.com] programmers [shelleytherepublican.com] that make it possible.
Google, with its deep pockets and doctoral scholars [nytimes.com], thinks that by holding user data hostage it can maneuver around Apple and Microsoft [micrsoft.com].
While this may be true, I'm not willing to be a part of it.In using Google's search [google.com], Gmail [google.com], Chrome [apple.com] or whatever else the faceless robot [wikipedia.org] of a company invents, the user is surrendering their personal information to a giant hivemind [google.com].
No longer are their personal preferences some choice they make; they're a string of data processed by a Google algorithm: Google dehumanizes [wikipedia.org] its users!
So while Google is arrogant enough to paint spyware shiny so it can parse our browsing habits, the least they could do is make sure it doesn't crash.
If Apple, Microsoft, and Mozilla can get their preview releases right, why can't Google?
And now they're making their own operating [scobleizer.com] systems [pcworld.com]?Get real, Google!
I'll use your crashing codebloat when my Mac is cold and dead and I'm looking for handouts.
Until then, quit mining [goatse.info] my personal data!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719313</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>chris\_eineke</author>
	<datestamp>1247770860000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>A compiler takes source code and turns it into assembler code. That's lines of human-readable machine instruction mnemonics (for example, "Copy from here to here." "Is that bigger than zero?"). The assembler takes those lines and turns them into machine instructions, a sequence of binary numbers.</p><p>Finding the difference between two huge gobs of binary numbers is difficult. Instead, they turn the binary numbers back into lines of mnemonics and use an algorithm that finds the difference between two huge listings of mnemonics.</p><p>That method is easier because the listings of a program that has been changed slightly can be very similar to the listing of an unmodified program. That has to do with how compilers work.</p><p>Capiche? ;)</p></htmltext>
<tokentext>A compiler takes source code and turns it into assembler code .
That 's lines of human-readable machine instruction mnemonics ( for example , " Copy from here to here . " " Is that bigger than zero ? " ) .
The assembler takes those lines and turns them into machine instructions , a sequence of binary numbers .
Finding the difference between two huge gobs of binary numbers is difficult .
Instead , they turn the binary numbers back into lines of mnemonics and use an algorithm that finds the difference between two huge listings of mnemonics .
That method is easier because the listings of a program that has been changed slightly can be very similar to the listing of an unmodified program .
That has to do with how compilers work .
Capiche ? ; )</tokentext>
<sentencetext>A compiler takes source code and turns it into assembler code.
That's lines of human-readable machine instruction mnemonics (for example, "Copy from here to here." "Is that bigger than zero?").
The assembler takes those lines and turns them into machine instructions, a sequence of binary numbers.
Finding the difference between two huge gobs of binary numbers is difficult.
Instead, they turn the binary numbers back into lines of mnemonics and use an algorithm that finds the difference between two huge listings of mnemonics.
That method is easier because the listings of a program that has been changed slightly can be very similar to the listing of an unmodified program.
That has to do with how compilers work.
Capiche? ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
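The disassemble-then-diff idea explained in the comment above can be demonstrated with a toy experiment: one inserted instruction shifts every later absolute address, so a diff over the raw encoded instructions balloons, while a diff over the address-free mnemonic stream stays tiny. This is an illustrative sketch only; the toy program and helper names are made up, and real Courgette normalizes pointers rather than diffing mnemonic names.

```python
import difflib

# Toy "machine code": each instruction is (opcode, absolute_target).
# Inserting one nop at the front shifts every later call target, so the
# raw serialized form shares no lines between versions.
old_prog = [("call", 100 + 2 * i) for i in range(50)]
new_prog = [("nop", 0)] + [("call", 101 + 2 * i) for i in range(50)]

def raw_form(prog):
    # Serialize with absolute targets, like raw machine code bytes.
    return ["%s %d" % ins for ins in prog]

def mnemonic_form(prog):
    # Drop concrete addresses, keeping only the instruction stream --
    # a crude stand-in for what disassembly-based normalization buys.
    return [op for op, _ in prog]

def changed_lines(a, b):
    diff = difflib.unified_diff(a, b, lineterm="")
    return sum(1 for l in diff
               if l[:1] in "+-" and not l.startswith(("+++", "---")))

raw = changed_lines(raw_form(old_prog), raw_form(new_prog))
norm = changed_lines(mnemonic_form(old_prog), mnemonic_form(new_prog))
print(raw, norm)  # the raw diff touches every line; the normalized diff is one line
```

The same asymmetry is why a one-line source change can produce a huge bsdiff but a small Courgette patch: most of the byte-level churn is just shifted addresses, not new logic.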
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720971</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>PCM2</author>
	<datestamp>1247774760000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses?</p></div><p>I suspect that it would. I once heard Miguel de Icaza give a talk in the early days of the Mono project. He said the "bytecode" that came out of the C# compiler was analogous to the output that comes out of the front end of a C compiler. The JIT is the equivalent of the C compiler's back end, only it runs right before execution. I suspect that what Google's decompiler is doing is reverting the binary to something similar to the C compiler's internal representation -- and if so, this method would work pretty much the same for bytecode. But that's just a guess.</p>
	</htmltext>
<tokentext>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses ?
I suspect that it would .
I once heard Miguel de Icaza give a talk in the early days of the Mono project .
He said the " bytecode " that came out of the C # compiler was analogous to the output that comes out of the front end of a C compiler .
The JIT is the equivalent of the C compiler 's back end , only it runs right before execution .
I suspect that what Google 's decompiler is doing is reverting the binary to something similar to the C compiler 's internal representation -- and if so , this method would work pretty much the same for bytecode .
But that 's just a guess .</tokentext>
<sentencetext>An interesting approach - I wonder if this would also work as well on compiled bytecode like .NET or Java uses?
I suspect that it would.
I once heard Miguel de Icaza give a talk in the early days of the Mono project.
He said the "bytecode" that came out of the C# compiler was analogous to the output that comes out of the front end of a C compiler.
The JIT is the equivalent of the C compiler's back end, only it runs right before execution.
I suspect that what Google's decompiler is doing is reverting the binary to something similar to the C compiler's internal representation -- and if so, this method would work pretty much the same for bytecode.
But that's just a guess.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720099</id>
	<title>Re:Can a layman get an explanation in English?</title>
	<author>Anonymous</author>
	<datestamp>1247771400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'm waiting until they announce a project "geared towards distributing really small updates yesterday."  Now THAT will be fast.</p><p>(Yes I'm looking at you, editors.)</p></htmltext>
<tokentext>I 'm waiting until they announce a project " geared towards distributing really small updates yesterday . "
Now THAT will be fast .
( Yes I 'm looking at you , editors . )</tokentext>
<sentencetext>I'm waiting until they announce a project "geared towards distributing really small updates yesterday."
Now THAT will be fast.
(Yes I'm looking at you, editors.)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722453</id>
	<title>What?</title>
	<author>microbee</author>
	<datestamp>1247737380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Since this will be released as open source, it should make distributing updates a lot easier for the open-source community.</p></div><p>We open source community don't have no F**KING business in binary distribution!</p></htmltext>
<tokenext>Since this will be released as open source , it should make distributing updates a lot easier for the open-source community.We open source community do n't have no F * * KING business in binary distribution !</tokentext>
<sentencetext>Since this will be released as open source, it should make distributing updates a lot easier for the open-source community. We open source community don't have no F**KING business in binary distribution!
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720579</id>
	<title>wrong title, of course</title>
	<author>TheGratefulNet</author>
	<datestamp>1247773140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>it's not 'binary diffing'.</p><p>does it work with gif files?  jpg?  png?  dng?  mov? mpg?</p><p>it's meant for PROGRAM binary files.  of a specific type of processor, at that.</p><p>that's fine.</p><p>but please say so, and don't imply it's binary.  it's really specific and not helpful for image (pic, sound) formats.</p></htmltext>
<tokenext>its not 'binary diffing'.does it work with gif files ?
jpg ? png ?
dng ? mov ?
mpg ? its meant for PROGRAM binary files .
of a specific type of processor , at that.that 's fine.but please say so , and do n't imply its binary .
its really specific and not helpful for image ( pic , sound ) formats .</tokentext>
<sentencetext>it's not 'binary diffing'. does it work with gif files?
jpg?  png?
dng?  mov?
mpg? it's meant for PROGRAM binary files.
of a specific type of processor, at that. that's fine. but please say so, and don't imply it's binary.
it's really specific and not helpful for image (pic, sound) formats.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726887</id>
	<title>Re:uses a primitive automatic disassembler</title>
	<author>disambiguated</author>
	<datestamp>1247863560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I don't know about Java, but .net already has symbolic offsets, and so would not benefit from (or need) this. I'd guess that Java is the same.</htmltext>
<tokenext>I do n't know about Java , but .net already has symbolic offsets , and so would not benefit from ( or need ) this .
I 'd guess that Java is the same .</tokentext>
<sentencetext>I don't know about Java, but .net already has symbolic offsets, and so would not benefit from (or need) this.
I'd guess that Java is the same.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719457</id>
	<title>wait a minute</title>
	<author>six</author>
	<datestamp>1247769060000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p><i>announced a new compression technique called Courgette geared towards distributing really small updates</i></p><p>I just RTFA; this has nothing to do with a compression technique.</p><p>What they developed is a technique to make small diffs from *executable binary files* and it doesn't look like it's portable to anything other than x86 because the patch engine has to embed an architecture-specific assembler + disassembler.</p></htmltext>
<tokenext>announced a new compression technique called Courgette geared towards distributing really small updatesI just RTFA , this has nothing to do with a compression technique.What they developed is a technique to make small diffs from * executable binary files * and it does n't look like it 's portable to anything other than x86 because the patch engine has to embed an architecture specific assembler + disasembler .</tokentext>
<sentencetext>announced a new compression technique called Courgette geared towards distributing really small updates. I just RTFA; this has nothing to do with a compression technique. What they developed is a technique to make small diffs from *executable binary files* and it doesn't look like it's portable to anything other than x86 because the patch engine has to embed an architecture-specific assembler + disassembler.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720555</id>
	<title>Re:Solving the wrong problem</title>
	<author>Ed Avis</author>
	<datestamp>1247773020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Right because we all know that sensible coders get it right first time, and have every feature you'll ever want already implemented in version 1.0, and so never need to push out a new version of their program, and if for some unthinkable reason they do want to make a new release, the users positively enjoy watching the download creep through as slowly as possible...</p><p>BTW, have you seen the size of the package updates for your Linux distribution recently?</p></htmltext>
<tokenext>Right because we all know that sensible coders get it right first time , and have every feature you 'll ever want already implemented in version 1.0 , and so never need to push out a new version of their program , and if for some unthinkable reason they do want to make a new release , the users positively enjoy watching the download creep through as slowly as possible...BTW , have you seen the size of the package updates for your Linux distribution recently ?</tokentext>
<sentencetext>Right because we all know that sensible coders get it right first time, and have every feature you'll ever want already implemented in version 1.0, and so never need to push out a new version of their program, and if for some unthinkable reason they do want to make a new release, the users positively enjoy watching the download creep through as slowly as possible...BTW, have you seen the size of the package updates for your Linux distribution recently?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737</id>
	<title>Can a layman get an explanation in English?</title>
	<author>AP31R0N</author>
	<datestamp>1247769960000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Please?</p></htmltext>
<tokenext>Please ?</tokentext>
<sentencetext>Please?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897</id>
	<title>That's just a disassembler.  How about bittorrent?</title>
	<author>Khopesh</author>
	<datestamp>1247770620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>This diffs the disassembled version of the original against the update on the server, then does the opposite on the client.  I couldn't help but think of this as similar to Gentoo's model ... download a compressed diff of the source and then recompile.  Both have the same problem:  too much client-side CPU usage (though Gentoo's is an extreme of this).  Isn't Google Chrome OS primarily targeting netbooks?  Can such things handle that level of extra client-side computation without leaving users frustrated?</p><p>I'd rather improve the distribution model.  Since packages are all signed, SSL and friends aren't needed for the transfer, nor does it need to come from a trusted authority.  Bittorrent comes to mind.  I'm quite disappointed that the <a href="http://sianka.free.fr/" title="sianka.free.fr">apt-torrent project</a> [sianka.free.fr] never <a href="http://brainstorm.ubuntu.com/idea/7792/" title="ubuntu.com">went anywhere</a> [ubuntu.com].  It's clearly the solution.</p></htmltext>
<tokenext>This is diffs the dissembled version of the original against the update on the server , then does the opposite on the client .
I could n't help but think of this as similar to Gentoo 's model ... download a compressed diff of the source and then recompile .
Both have the same problem : too much client-side CPU usage ( though Gentoo 's is an extreme of this ) .
Is n't Google Chrome OS primarily targeting netbooks ?
Can such things handle that level of extra client-side computation without leaving users frustrated ?
I 'd rather improve the distribution model .
Since packages are all signed , SSL and friends are n't needed for the transfer , nor does it need to come from a trusted authority .
Bittorrent comes to mind .
I 'm quite disappointed that the apt-torrent project [ sianka.free.fr ] never went anywhere [ ubuntu.com ] .
It 's clearly the solution .</tokentext>
<sentencetext>This diffs the disassembled version of the original against the update on the server, then does the opposite on the client.
I couldn't help but think of this as similar to Gentoo's model ... download a compressed diff of the source and then recompile.
Both have the same problem:  too much client-side CPU usage (though Gentoo's is an extreme of this).
Isn't Google Chrome OS primarily targeting netbooks?
Can such things handle that level of extra client-side computation without leaving users frustrated?
I'd rather improve the distribution model.
Since packages are all signed, SSL and friends aren't needed for the transfer, nor does it need to come from a trusted authority.
Bittorrent comes to mind.
I'm quite disappointed that the apt-torrent project [sianka.free.fr] never went anywhere [ubuntu.com].
It's clearly the solution.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720523</id>
	<title>Not terribly novel</title>
	<author>Anonymous</author>
	<datestamp>1247772900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>This is a classic trade-off; you can compress better by sacrificing platform-independence. It has been done before, e.g., the ExeDiff tool for the DEC Alpha. The only news here is that it's being done by Google. Interested parties should refer to: G. Motta, J. Gustafson, and S. Chen. "Differential Compression of Executable Code," Proceedings of the Data Compression Conference (DCC 07), Mar. 2007, pp. 103-112 for an overview of the available tools and how they perform.</p><p>Regards,<br>Jacob Gorm Hansen, author of the EDelta linear-time executable differ.</p></htmltext>
<tokenext>This is a classic trade-off ; you can compress better by sacrifizing platfom-indepence .
It has been done before , e.g , the ExeDiff tool for the DEC Alpha .
The only news here is that its being done by Google .
Interested parties should refer to : G. Motta , J. Gustafson , and S. Chen. " Differential Compression of Executable Code , " Proceedings of the Data Compression Conference ( DCC 07 ) , Mar .
2007 , pp .
103-112 for an overview of the available tools and how they perform.Regards,Jacob Gorm Hansen , author of the EDelta linear-time executable differ .</tokentext>
<sentencetext>This is a classic trade-off; you can compress better by sacrifizing platfom-indepence.
It has been done before, e.g, the ExeDiff tool for the DEC Alpha.
The only news here is that its being done by Google.
Interested parties should refer to: G. Motta, J. Gustafson, and S. Chen. "Differential Compression of Executable Code," Proceedings of the Data Compression Conference (DCC 07), Mar.
2007, pp.
103-112 for an overview of the available tools and how they perform.Regards,Jacob Gorm Hansen, author of the EDelta linear-time executable differ.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411</id>
	<title>Also less overhead for Google</title>
	<author>Anonymous</author>
	<datestamp>1247768880000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out. The more efficient they are in this process, the more money they save.</p><p>The good news is that the same benefits could be applied to Red Hat, Ubuntu, openSUSE, etc. Lower costs help the profitability of companies trying to make a profit on Linux.</p><p>The end users also see benefits in that their packages download quicker. I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this.</p></htmltext>
<tokenext>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out .
The more efficient they are in this process , the more money the save.The good news is that the same benefits could be applied to Red Hat , Ubuntu , openSUSE , etc .
Lower costs helps the profitability of companies trying to make a profit on Linux.The end users also see benefits in that their packages download quicker .
I 'd be honestly pretty disappointed in any major distro that does n't start implementing a binary diff solution around this .</tokentext>
<sentencetext>Google has to pay the cost for maintaining servers and handling bandwidth for all the OS updates they push out.
The more efficient they are in this process, the more money they save. The good news is that the same benefits could be applied to Red Hat, Ubuntu, openSUSE, etc.
Lower costs help the profitability of companies trying to make a profit on Linux. The end users also see benefits in that their packages download quicker.
I'd be honestly pretty disappointed in any major distro that doesn't start implementing a binary diff solution around this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28728179</id>
	<title>Re:Bad explanation</title>
	<author>Anonymous</author>
	<datestamp>1247837880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My pseudocode compiler (aka. "brain") parsed both of them just fine.</p></htmltext>
<tokenext>My pseudocode compiler ( aka .
" brain " ) parsed both of them just fine .</tokentext>
<sentencetext>My pseudocode compiler (aka.
"brain") parsed both of them just fine.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720635</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719873</id>
	<title>Re:Solving the wrong problem</title>
	<author>salimma</author>
	<datestamp>1247770500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Binaries should, with rare exceptions, not be under source control anyway.</p></htmltext>
<tokentext>Binaries should , with rare exceptions , not be under source control anyway .</tokentext>
<sentencetext>Binaries should, with rare exceptions, not be under source control anyway.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722443
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724601
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721343
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720173
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724739
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726753
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719965
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719779
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722027
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720555
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720141
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720969
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721577
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720093
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720469
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719961
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720309
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720201
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28725159
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719655
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719313
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28778355
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723059
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719867
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723413
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720289
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720709
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720663
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719859
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719451
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722897
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719487
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28728179
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720635
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726909
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721053
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720749
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719915
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719873
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719777
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719335
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720099
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719911
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720417
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720971
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719975
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719891
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720895
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720387
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719683
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719865
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724181
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726469
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720697
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719457
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_16_1712250_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726973
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721727
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722453
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719897
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28725159
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720173
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719451
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719859
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719335
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719777
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719313
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719655
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726509
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719411
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720969
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719865
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726469
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726753
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720895
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719911
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719575
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719431
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720469
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720141
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719587
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722443
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719779
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719965
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720093
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721577
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719961
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723413
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720027
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724601
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724181
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719557
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721343
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720635
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28728179
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724739
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719891
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719683
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719351
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726909
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719867
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28723059
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28778355
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720971
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720663
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726887
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719737
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720201
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719953
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720387
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720289
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719915
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720749
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720099
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720417
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722027
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719629
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721053
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720555
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720309
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719975
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719873
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719487
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28722897
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719469
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28721727
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28726973
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719699
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28724443
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719457
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28720697
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719559
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_16_1712250.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_16_1712250.28719641
</commentlist>
</conversation>
