<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_13_2129229</id>
	<title>Sequencing a Human Genome In a Week</title>
	<author>kdawson</author>
	<datestamp>1247484480000</datestamp>
	<htmltext>blackbearnh writes <i>"The Human Genome Project took 13 years to sequence a single human's genetic information in full. At Washington University's Genome Center, <a href="http://radar.oreilly.com/2009/07/sequencing-a-genome-a-week.html">they can now do one in a week</a>. But when you're generating that much data, just keeping track of it can become a major challenge. David Dooling is in charge of managing the massive output of the Center's herd of gene sequencing machines, and making it available to researchers inside the Center and around the world. He'll be talking about his work at OSCON, and gave O'Reilly Radar a sense of where the state of the art in genome sequencing is heading. 'Now we can run these instruments. We can generate a lot of data. We can align it to the human reference. We can detect the variance. We can determine which variance exists in one genome versus another genome. Those variances that are cancerous, specific to the cancer genome, we can annotate those and say these are in genes. ... Now the difficulty is following up on all of those and figuring out what they mean for the cancer. ... We know that they exist in the cancer genome, but which ones are drivers and which ones are passengers? ... [F]inding which ones are actually causative is becoming more and more the challenge now.'"</i></htmltext>
<tokentext>blackbearnh writes " The Human Genome Project took 13 years to sequence a single human 's genetic information in full .
At Washington University 's Genome Center , they can now do one in a week .
But when you 're generating that much data , just keeping track of it can become a major challenge .
David Dooling is in charge of managing the massive output of the Center 's herd of gene sequencing machines , and making it available to researchers inside the Center and around the world .
He 'll be talking about his work at OSCON , and gave O'Reilly Radar a sense of where the state of the art in genome sequencing is heading .
'Now we can run these instruments .
We can generate a lot of data .
We can align it to the human reference .
We can detect the variance .
We can determine which variance exists in one genome versus another genome .
Those variances that are cancerous , specific to the cancer genome , we can annotate those and say these are in genes .
... Now the difficulty is following up on all of those and figuring out what they mean for the cancer .
... We know that they exist in the cancer genome , but which ones are drivers and which ones are passengers ?
... [ F ] inding which ones are actually causative is becoming more and more the challenge now .
' "</tokentext>
<sentencetext>blackbearnh writes "The Human Genome Project took 13 years to sequence a single human's genetic information in full.
At Washington University's Genome Center, they can now do one in a week.
But when you're generating that much data, just keeping track of it can become a major challenge.
David Dooling is in charge of managing the massive output of the Center's herd of gene sequencing machines, and making it available to researchers inside the Center and around the world.
He'll be talking about his work at OSCON, and gave O'Reilly Radar a sense of where the state of the art in genome sequencing is heading.
'Now we can run these instruments.
We can generate a lot of data.
We can align it to the human reference.
We can detect the variance.
We can determine which variance exists in one genome versus another genome.
Those variances that are cancerous, specific to the cancer genome, we can annotate those and say these are in genes.
... Now the difficulty is following up on all of those and figuring out what they mean for the cancer.
... We know that they exist in the cancer genome, but which ones are drivers and which ones are passengers?
... [F]inding which ones are actually causative is becoming more and more the challenge now.
'"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685311</id>
	<title>Re:Here's what I want to know...</title>
	<author>interkin3tic</author>
	<datestamp>1247496420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?</p></div><p>You're talking about different individuals?  There will be differences, yes, but most of that difference should be in non-coding regions.  The actual protein-coding regions should be nearly identical.  I only work with a few DNA sequences that code for proteins, so that's all I'd be interested in, but there are other medical applications for which the variation in non-coding regions would be important.</p>
	</htmltext>
<tokentext>Suppose they sequence a specific human 's genome .
Now they do it again .
Will the two sequences be the same ? You 're talking about for different individuals ?
There will be differences , yes , but most of that difference should be in non-coding regions .
The actual regions making proteins should be nearly identical .
I only work with a few DNA sequences that code for proteins , so that 's all I 'd be interested in , but there are other applications for medicine that the variation in non-coding regions would be important .</tokentext>
<sentencetext>Suppose they sequence a specific human's genome.
Now they do it again.
Will the two sequences be the same? You're talking about different individuals?
There will be differences, yes, but most of that difference should be in non-coding regions.
The actual regions making proteins should be nearly identical.
I only work with a few DNA sequences that code for proteins, so that's all I'd be interested in, but there are other applications for medicine that the variation in non-coding regions would be important.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688061</id>
	<title>Re:How do you know it's NOT comments?</title>
	<author>SlashWombat</author>
	<datestamp>1247566440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>How do you know it's NOT comments?</p></div><p>Come on, how many programmers do you know who write comments, meaningful or not? I personally have a massive descriptive dialogue running down the side. "Real" programmers have told me that is excessive. Looking at their code I find one comment every 20 to 50 lines, and descriptive identifiers like i, x or y. The genome will be just like that. (Also, any big project ends up with lots of dead code; yes, I know the compiler identifies that, but ...)</p>
	</htmltext>
<tokentext>How do you know it 's NOT comments ? Come on , how many programmers do you know that write comments , meaningful or not ?
I personally have a massive descriptive dialogue running down the side .
" Real " programmers have told me that is excessive .
Looking at their code I find one comment every 20 to fifty lines , and descriptive identifiers , like i , x or y. The genome will be just like that .
( Also , given that any big project ends up with lots of dead code .
( yes , I know the compiler identifies that , but ... )</tokentext>
<sentencetext>How do you know it's NOT comments?Come on, how many programmers do you know that write comments, meaningful or not?
I personally have a massive descriptive dialogue running down the side.
"Real" programmers have told me that is excessive.
Looking at their code I find one comment every 20 to fifty lines, and descriptive identifiers, like i, x or y. The genome will be just like that.
(Also, given that any big project ends up with lots of dead code.
(yes, I know the compiler identifies that, but ...)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685271</id>
	<title>Re:DNA GATC</title>
	<author>interkin3tic</author>
	<datestamp>1247496000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At least it's backed up well.  3 backups of almost everything ain't bad.</p><p>Two strands on each chromosome... I'm probably in the wrong crowd of nerds...</p></htmltext>
<tokentext>At least it 's backed up well .
3 backups of almost everything ai n't bad . Two strands on each chromosome ... I 'm probably in the wrong crowd of nerds ...</tokentext>
<sentencetext>At least it's backed up well.
3 backups of almost everything ain't bad. Two strands on each chromosome... I'm probably in the wrong crowd of nerds...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28687447</id>
	<title>Re:Humans have ~810.6 MiB of DNA</title>
	<author>johannesg</author>
	<datestamp>1247602740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>So, what's going on here?  Are the file formats used to store this data *that* bloated?</p></div><p>&lt;genome species="human"&gt;... ;-)</p>
	</htmltext>
<tokentext>So , what 's going on here ?
Are the file formats used to store this data * that * bloated ? ...
;-)</tokentext>
<sentencetext>So, what's going on here?
Are the file formats used to store this data *that* bloated?...
;-)
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686047</id>
	<title>I also manage a Next-gen Sequencing Machine</title>
	<author>Anonymous</author>
	<datestamp>1247502120000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext><p>Next-gen sequencing eats up huge amounts of space. Every run on our Illumina Genome Analyzer II machine takes up 4 terabytes of intermediate data, most of which comes from something like 100,000+ 20 MB bitmap picture files taken from the flowcells. That much data is an ass load of work to process. Just today I got a little lazy with my Perl programming and let the program go unsupervised...and it ate up 32 GB of RAM and froze up the server. It took Red Hat 3 full hours to decide it had had enough of the swapping and kill the process.</p><p>For people not familiar with current-generation sequencing machines: they produce reads of between 30 and 80 bp and use alignment programs to match the reads up against species databases. The reaction/imaging takes 2 days, prep takes about a week, processing images takes another 2 days, and alignment takes about 4. The Illumina machine achieves higher throughput than the ABI ones but gives shorter reads; we get about 4 billion nt per run if we do everything right. Keep in mind, though, that the 4 billion they mention in the summary is misleading: the read coverage distribution is not uniform (i.e., you do not cover every nucleotide of the human's 3 billion nt genome). To ensure 95%+ coverage, you'd have to use 20-40 runs on the Illumina machine...in other words, about 6-10 months of non-stop work to get a reasonable degree of coverage over the entire human genome (at which point you can use programs to "assemble" the reads into a contiguous genome). WashU is very wealthy, so they have quite a few of these machines available to work at any given time.</p><p>The main problem these days is that processing that much data requires a huge amount of computer know-how (writing software, implementing algorithms, installing software, using other people's poorly documented programs) and a good understanding of statistics and algorithms, especially when it comes to efficiency.
Another problem they never mention is artifacts from the chemical protocol; just the other day we found a very unusual anomaly indicating that the first 1/3 of all our reads was absolute crap (usually only the last few bases are unreliable); it turned out our slight modification of the Illumina protocol, to tailor it to studying epigenomic effects, had quite large effects on the sequencing reactions later on. Even for good reads, a lot of the bases can be suspect, so you have to do a huge amount of averaging, filtering, and statistical analysis to make sure your results/graphs are accurate.</p></htmltext>
<tokentext>Next gen sequencing eats up huge amounts of space .
Every run on our Illumina Genome Analyzer II machine takes up 4 terabytes of intermediate data , most of which comes from the something like 100,000 + 20 Mb bitmap picture files taken from the flowcells .
All that much data is an ass load of work to process .
Just today I got a little lazy with my Perl programming and let the program go unsupervised...and it ate up 32 gb of ram and froze up the server .
Took redhat 3 full hours to decide it had enough of the swapping and kill the process . For people not familiar with current generation sequencing machines , they can scan between 30-80 bp reads and use alignment programs to match up the reads to species databases .
The reaction/imaging takes 2 days , prep takes about a week , processing images takes another 2 days , alignment takes about 4 .
The Illumina machine achieves higher throughput than the ABI ones but gives shorter reads ; we get about 4 billion nt per run if we do everything right .
Keep in mind though , that 4 billion that they mention in the summary is misleading : the read cover distribution is not uniform ( ie you do not cover every nucleotide of the human 's 3 billion nt genome ) .
To ensure 95 % + coverage , you 'd have to use 20-40 runs on the Illumina machine...in other words , about 6-10 months of non-stop work to get a reasonable degree of coverage over the entire human genome ( at which point you can use programs to " assemble " the reads in a contiguous genome ) .
WashU is very wealthy so they have quite a few of these machines available to work at any given time . The main problem these days is that processing all that much data requires a huge amount of computer knowhow ( writing software , algorithms , installing software , using other people 's poorly documented programs ) , and a good understanding of statistics and algorithms , especially when it comes to efficiency .
Another problem they never mention are artifacts from the chemical protocol ; just the other day we found a very unusual anomaly that indicated the first 1/3 of all our reads was absolutely crap ( usually only the last few bases are unreliable ) ; turned out our slight modification of the Illumina protocol to tailor it to studying epigenomic effects had quite large effects of the sequencing reactions later on .
Even for good reads , a lot of the bases can be suspect so you have to do a huge amount of averaging , filtering , and statistical analysis to make sure your results/graphs are accurate .</tokentext>
<sentencetext>Next gen sequencing eats up huge amounts of space.
Every run on our Illumina Genome Analyzer II machine takes up 4 terabytes of intermediate data, most of which comes from the something like 100,000+ 20 Mb bitmap picture files taken from the flowcells.
All that much data is an ass load of work to process.
Just today I got a little lazy with my Perl programming and let the program go unsupervised...and it ate up 32 gb of ram and froze up the server.
Took redhat 3 full hours to decide it had enough of the swapping and kill the process. For people not familiar with current generation sequencing machines, they can scan between 30-80 bp reads and use alignment programs to match up the reads to species databases.
The reaction/imaging takes 2 days, prep takes about a week, processing images takes another 2 days, alignment takes about 4.
The Illumina machine achieves higher throughput than the ABI ones but gives shorter reads; we get about 4 billion nt per run if we do everything right.
Keep in mind though, that 4 billion that they mention in the summary is misleading: the read cover distribution is not uniform (ie you do not cover every nucleotide of the human's 3 billion nt genome).
To ensure 95%+ coverage, you'd have to use 20-40 runs on the Illumina machine...in other words, about 6-10 months of non-stop work to get a reasonable degree of coverage over the entire human genome (at which point you can use programs to "assemble" the reads in a contiguous genome).
WashU is very wealthy so they have quite a few of these machines available to work at any given time. The main problem these days is that processing all that much data requires a huge amount of computer knowhow (writing software, algorithms, installing software, using other people's poorly documented programs), and a good understanding of statistics and algorithms, especially when it comes to efficiency.
Another problem they never mention are artifacts from the chemical protocol; just the other day we found a very unusual anomaly that indicated the first 1/3 of all our reads was absolutely crap (usually only the last few bases are unreliable); turned out our slight modification of the Illumina protocol to tailor it to studying epigenomic effects had quite large effects of the sequencing reactions later on.
Even for good reads, a lot of the bases can be suspect so you have to do a huge amount of averaging, filtering, and statistical analysis to make sure your results/graphs are accurate.</sentencetext>
</comment>
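The coverage figures in the comment above can be sanity-checked against the idealized Lander-Waterman model, which assumes reads land uniformly at random on the genome. This is a rough sketch using the poster's own numbers (~3 billion nt genome, ~4 billion nt per run); the model's assumption of uniform coverage is exactly what the poster says does not hold in practice, which is why the real-world answer (20-40 runs) is much larger than the naive one.

```python
import math

def fraction_covered(total_bases_sequenced, genome_size):
    # Idealized Lander-Waterman: at mean depth c, the probability that a
    # given base is covered at least once is 1 - e^(-c). Real flowcell
    # coverage is far less uniform than this model assumes.
    c = total_bases_sequenced / genome_size
    return 1.0 - math.exp(-c)

GENOME = 3e9    # ~3 billion nt human genome (figure from the comment)
PER_RUN = 4e9   # ~4 billion nt per Illumina GA II run (figure from the comment)

for runs in (1, 3, 10):
    frac = fraction_covered(runs * PER_RUN, GENOME)
    print(f"{runs:2d} run(s): ~{frac:.1%} of bases covered at least once")
```

Under the uniform model, about 3 runs would already clear 95% coverage; the gap between that and the poster's 20-40 runs is a measure of how non-uniform (and how heavily quality-filtered) real sequencing data is.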
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28687817</id>
	<title>Such naivety</title>
	<author>Anonymous</author>
	<datestamp>1247563680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>Finding which ones are actually causative is becoming more and more the challenge now</i></p><p>Ah, the simple magic bullet solution. Better tell Mother Nature that there will be bullets with her name on them.</p></htmltext>
<tokentext>Finding which ones are actually causative is becoming more and more the challenge now . Ah , the simple magic bullet solution .
Better tell Mother Nature that there will be bullets with her name on them .</tokentext>
<sentencetext>Finding which ones are actually causative is becoming more and more the challenge nowAh, the simple magic bullet solution.
Better tell Mother Nature that there will be bullets with her name on them.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211</id>
	<title>Humans have ~810.6 MiB of DNA</title>
	<author>izomiac</author>
	<datestamp>1247495220000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>The human genome is approximately 3.4 billion base pairs long.  There are four bases, so this would correspond to 2 bits of information per base.  2 * 3,400,000,000 / 8 / 1024 / 1024 = 810.6 MiB of data per sequence.  That doesn't seem like it'd be too difficult.  With a little compression it'd fit on a CD.  Now, I suppose each section is sequenced multiple times and you'd want some parity, but it still seems like something that'd easily fit on a DVD (especially if alternate sequences are all diff'd from the first).  Perhaps throw in another disc for pre-computed analysis results and that ought to be it.<br> <br>
So, what's going on here?  Are the file formats used to store this data *that* bloated?  Or are they trying to include structural information beyond sequence?  What am I missing that makes this an unwieldy amount of data?<br> <br>
(I have to laugh at how Vista is apparently 20 times more complex than the people that use it...)</htmltext>
<tokentext>The human genome is approximately 3.4 billion base pairs long .
There are four bases , so this would correspond to 2 bits of information per base .
2 * 3,400,000,000 /8 /1024 /1024 = 810.6 MiB of data per sequence .
That does n't seem like it 'd be too difficult .
With a little compression it 'd fit on a CD .
Now , I suppose each section is sequenced multiple times and you 'd want some parity , but it still seems like something that 'd easily fit on a DVD ( especially if alternate sequences are all diff 'd from the first ) .
Perhaps throw in another disc for pre-computed analysis results and that ought to be it .
So , what 's going on here ?
Are the file formats used to store this data * that * bloated ?
Or are they trying to include structural information beyond sequence ?
What am I missing that makes this an unwieldy amount of data ?
( I have to laugh at how Vista is apparently 20 times more complex than the people that use it... )</tokentext>
<sentencetext>The human genome is approximately 3.4 billion base pairs long.
There are four bases, so this would correspond to 2 bits of information per base.
2 * 3,400,000,000 /8 /1024 /1024 = 810.6 MiB of data per sequence.
That doesn't seem like it'd be too difficult.
With a little compression it'd fit on a CD.
Now, I suppose each section is sequenced multiple times and you'd want some parity, but it still seems like something that'd easily fit on a DVD (especially if alternate sequences are all diff'd from the first).
Perhaps throw in another disc for pre-computed analysis results and that ought to be it.
So, what's going on here?
Are the file formats used to store this data *that* bloated?
Or are they trying to include structural information beyond sequence?
What am I missing that makes this an unwieldy amount of data?
(I have to laugh at how Vista is apparently 20 times more complex than the people that use it...)</sentencetext>
</comment>
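The back-of-the-envelope arithmetic in the comment above checks out. A minimal sketch, using the poster's own figures (3.4 billion base pairs, 2 bits per base):

```python
BASES = 3_400_000_000   # ~3.4 billion base pairs (the poster's figure)
BITS_PER_BASE = 2       # four bases (A, C, G, T) -> 2 bits each

raw_bytes = BASES * BITS_PER_BASE / 8
mib = raw_bytes / 1024 / 1024
print(f"{mib:.1f} MiB")  # ~810.6 MiB, matching the comment
```

As the replies point out, this is the size of one finished, error-free sequence; the raw instrument output (traces, images, per-base quality scores, redundant reads) is orders of magnitude larger.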
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685447</id>
	<title>Re:Passing this data back to the scientist</title>
	<author>goombah99</author>
	<datestamp>1247497920000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>A whole human genome will fit on a CD.</p><p>If you just transmit the diffs from the generic human, you could put it in an e-mail.</p></htmltext>
<tokentext>A whole human genome will fit on a CD . If you just transmit the diffs from the generic human you could put it in an e-mail .</tokentext>
<sentencetext>a whole human genome will fit on a CD.if you just transmit the diffs from the generic human you could put it in an e-mail</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684339</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685293</id>
	<title>Re:Here's what I want to know...</title>
	<author>maxume</author>
	<datestamp>1247496240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Except cells do undergo the occasional survivable mutation, and then there are the people that integrated what would have been a twin, and so on.</p></htmltext>
<tokentext>Except cells do undergo the occasional survivable mutation , and then there are the people that integrated what would have been a twin , and so on .</tokentext>
<sentencetext>Except cells do undergo the occasional survivable mutation, and then there are the people that integrated what would have been a twin, and so on.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685195</id>
	<title>DNA is digital</title>
	<author>EndoplasmicRidiculus</author>
	<datestamp>1247495100000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>Four bases and not much in between.</htmltext>
<tokentext>Four bases and not much in between .</tokentext>
<sentencetext>Four bases and not much in between.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685391</id>
	<title>Re:Humans have ~810.6 MiB of DNA</title>
	<author>rnaiguy</author>
	<datestamp>1247497320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You have to take into account that sequencing machines do not just spit out a pretty string of A, C, T, G. For the older sequencing method, the raw data from the sequencing machine consists of 4 intensity traces (one for each base), so you have to record 4 waves, which are then interpreted (sometimes imperfectly) by software to give you the sequence. The raw data does have to be stored and moved around for some period of time, and often needs to be kept for other analyses. This data is around 200 kilobytes for less than 1 kilobase of sequence. The newer methods collect data as a series of very high-resolution images (something like a black image with ~10 million colored spots), which take up TONS of space and take substantial processing power to interpret and turn into nucleotide sequence. I don't have exact numbers though, since I haven't worked with them directly, only the preprocessed data (which is still several gigabytes for a gigabase of sequence, since it contains data on the quality/certainty of each base read and such).</htmltext>
<tokentext>You have to take into account that sequencing machines do not just spit out a pretty string of A , C , T , G. For the older sequencing method , the raw data from the sequencing machine consists of 4 intensity traces ( one for each base ) , so you have to record 4 waves , which are then interpreted ( sometimes imperfectly ) by software to give you the sequence .
The raw data does have to be stored and moved around for some period of time , and often needs to be stored for other analyses .
This data is around 200 kilobytes for less than 1 kilobase of sequence .
The newer methods collect data as a series of very high-resolution images ( something like a black image with ~ 10 million colored spots ) , which take up TONS of space , and take substantial processing power to interpret and turn into nucleotide sequence .
I do n't have exact numbers though , since I have n't worked with them directly , only the preprocessed data ( which is still several gigabytes for a gigabase of sequence , since it contains data on the quality/certainty of each base read and such )</tokentext>
<sentencetext>You have to take into account that sequencing machines do not just spit out a pretty string of A, C, T, G. For the older sequencing method, the raw data from the sequencing machine consists of 4 intensity traces (one for each base), so you have to record 4 waves, which are then interpreted (sometimes imperfectly) by software to give you the sequence.
The raw data does have to be stored and moved around for some period of time, and often needs to be stored for other analyses.
This data is around 200 kilobytes for less than 1 kilobase of sequence.
The newer methods collect data as a series of very high-resolution images (something like a black image with ~10 million colored spots), which take up TONS of space, and take substantial processing power to interpret and turn into nucleotide sequence.
I don't have exact numbers though, since I haven't worked with them directly, only the preprocessed data (which is still several gigabytes for a gigabase of sequence, since it contains data on the quality/certainty of each base read and such)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684613</id>
	<title>Buttload of data</title>
	<author>virgil Lante</author>
	<datestamp>1247490060000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext>Illumina's Solexa sequencing produces around 7 TB of data per genome sequenced.  It's a feat just to move the data around, let alone analyze it.  It's amazing how far sequencing technology has come, but how little our knowledge of biology as a whole has advanced.

'The Cancer Genome' does not exist.  No two tumors are the same, and in cancer, especially solid tumors, no two cells are the same.  Sequencing a gemisch of cells from a tumor only gives you the average, which may or may not give any pertinent information about the tumor.  Vogelstein's group has shown this quite convincingly, but hardly anyone truly looks at what the data really says.</htmltext>
<tokentext>Illumina 's Solexa sequencing produces around 7 TB of data per genome sequenced .
It 's a feat just to move the data around , let alone analyze it .
It 's amazing how far sequencing technology has come , but how little our knowledge of biology as a whole has advanced .
'The Cancer Genome ' does not exist .
No tumor is the same and in cancer , especially solid tumors , no two cells are the same .
Sequencing a gamish of cells from a tumor only gives you the average which may or may not give any pertinent information about the tumor .
Vogelstein 's group has shown this quite convincingly but hardly anyone truly looks at what the data really says .</tokentext>
<sentencetext>Illumina's Solexa sequencing produces around 7 TB of data per genome sequencing.
It's a feat just to move the data around, let alone analyze it.
It's amazing how far sequencing technology has come, but how little our knowledge of biology as a whole has advanced.
'The Cancer Genome' does not exist.
No tumor is the same and in cancer, especially solid tumors, no two cells are the same.
Sequencing a gamish of cells from a tumor only gives you the average which may or may not give any pertinent information about the tumor.
Vogelstein's group has shown this quite convincingly but hardly anyone truly looks at what the data really says.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694457</id>
	<title>Re:DNA GATC ... G-GNO-ME</title>
	<author>davidsyes</author>
	<datestamp>1247600760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Splice too much of that bad, useless, convoluted code into a "new" human and we might end up with a G-Gnome or GNOME (Gratuitous, Nascent, Ogreous, Mechanised Entity). Call it... "G-UNIT", and give it a uniform and a mission. Or, give it a script and a part and call it Smeegul/Smigel...</p></htmltext>
<tokentext>Splice too much of that bad , useless , convoluted code into a " new " human and we might end up with a G-Gnome or GNOME ( Gratuitous , Nacent , Ogreous , Mechanised Entity ) .
Call it... " G-UNIT " , and give it a uniform and a mission .
Or , give it a script and a part and call it Smeegul/Smigel... )</tokentext>
<sentencetext>Splice too much of that bad, useless, convoluted code into a "new" human and we might end up with a G-Gnome or GNOME (Gratuitous, Nacent, Ogreous, Mechanised Entity).
Call it... "G-UNIT", and give it a uniform and a mission.
Or, give it a script and a part and call it Smeegul/Smigel...)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694625</id>
	<title>Diff?</title>
	<author>bogado</author>
	<datestamp>1247601540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>While a single human genome is a lot of information, storing thousands shouldn't add much to the requirements; one can simply store a diff from the first.</p></htmltext>
<tokenext>While a single human genome is a lot of information , storing thousands should n't add much to the requirements ; one can simply store a diff from the first .</tokentext>
<sentencetext>While a single human genome is a lot of information, storing thousands shouldn't add much to the requirements; one can simply store a diff from the first.</sentencetext>
</comment>
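The "store a diff" idea in the comment above can be sketched in a few lines. This is a toy illustration only: the `diff_genome`/`apply_diff` helpers are invented for this sketch, and real pipelines record variants in formats like VCF rather than raw positional diffs.

```python
# Toy sketch of delta-storing genomes: keep one reference sequence and,
# for each additional genome, record only the positions that differ.
# Assumes equal-length sequences (real variation also has indels).

def diff_genome(reference: str, genome: str) -> dict:
    """Map position -> differing base."""
    return {i: b for i, (a, b) in enumerate(zip(reference, genome)) if a != b}

def apply_diff(reference: str, diff: dict) -> str:
    """Rebuild a genome from the reference plus its diff."""
    seq = list(reference)
    for pos, base in diff.items():
        seq[pos] = base
    return "".join(seq)

reference = "GATTACAGATTACA"
sample    = "GATTACAGATCACA"   # differs at one position

diff = diff_genome(reference, sample)
print(diff)                    # {10: 'C'}
assert apply_diff(reference, diff) == sample
```

Since two human genomes agree at roughly 99.9% of positions, such a diff is orders of magnitude smaller than the full sequence, though this position-wise sketch ignores insertions and deletions.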
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684327</id>
	<title>How about some numbers?</title>
	<author>QuantumG</author>
	<datestamp>1247488440000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>I can download many gigs in a week, too.  I don't need millions of dollars worth of supercomputer to process it.</p><p>Here's a suggestion: stop using python to do data crunching.  Shocking I know.</p></htmltext>
<tokenext>I can download many gigs in a week , too .
I do n't need millions of dollars worth of supercomputer to process it .
Here 's a suggestion : stop using python to do data crunching .
Shocking I know .</tokentext>
<sentencetext>I can download many gigs in a week, too.
I don't need millions of dollars worth of supercomputer to process it.
Here's a suggestion: stop using python to do data crunching.
Shocking I know.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28687997</id>
	<title>A way to avoid GATTACA</title>
	<author>Anonymous</author>
	<datestamp>1247565600000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Y'know what I think would be interesting?  If, when this becomes possible, the gov't offered it so anyone can go and drop off a cheek-swab, be handed an unrecorded number on a slip of paper, and can check back or log in and get their results.  No ID required, no idea who you are, just a swab and a number.  That would, it would seem to me, avoid all that pesky gov't knowing your genome thing.</p><p>Thoughts?</p><p>I suppose it would be useful for studies to have some data on the person, age, sex, race perhaps, so broad studies could be done, but yea, why can't they just do a swab-number-off-you-go kind of thing?</p></htmltext>
<tokenext>Y'know what I think would be interesting ?
If , when this becomes possible , the gov't offered it so anyone can go and drop off a cheek-swab , be handed an unrecorded number on a slip of paper , and can check back or log in and get their results .
No ID required , no idea who you are , just a swab and a number .
That would , it would seem to me , avoid all that pesky gov't knowing your genome thing .
Thoughts ?
I suppose it would be useful for studies to have some data on the person , age , sex , race perhaps , so broad studies could be done , but yea , why ca n't they just do a swab-number-off-you-go kind of thing ?</tokentext>
<sentencetext>Y'know what I think would be interesting?
If, when this becomes possible, the gov't offered it so anyone can go and drop off a cheek-swab, be handed an unrecorded number on a slip of paper, and can check back or log in and get their results.
No ID required, no idea who you are, just a swab and a number.
That would, it would seem to me, avoid all that pesky gov't knowing your genome thing.
Thoughts?
I suppose it would be useful for studies to have some data on the person, age, sex, race perhaps, so broad studies could be done, but yea, why can't they just do a swab-number-off-you-go kind of thing?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603</id>
	<title>Data analysis a rapidly growing problem in Biology</title>
	<author>SlashBugs</author>
	<datestamp>1247490000000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>5</modscore>
	<htmltext>Data handling and analysis is becoming a big problem for biologists generally. Techniques like microarray (or exon array) analysis can tell you how strongly a set of genes (tens of thousands, with hundreds of thousands of splice variants) are being expressed under given conditions. But actually handling this data is a nightmare, especially as a lot of biologists ended up there because they love science but aren't great at maths. Given a list of thousands of genes, teasing out the statistically significantly different genes from the noise is only the first step. Then you have to decide what's biologically important (e.g. what's the prime mover and what's just a side-effect), and then you have a list of genes which might have known functions but more likely have just a name or even a tag like "hypothetical ORF #3261", for genes that are predicted by analysis of the genome but have never been proved to actually be expressed. After this, there's the further complication that these techniques only tell you what's going on at the DNA or RNA level. The vast majority of genes only have effects when translated into protein and, perhaps, further modified, meaning that you can't be sure that the levels you're detecting by the sequencing (DNA level) or expression analysis chips (RNA level) actually reflect what's going on in the cell.<br> <br>

One of the big problems studying expression patterns in cancer specifically is the paucity of samples. The genetic differences between individuals (and tissues within individuals) mean there's a lot of noise underlying the "signal" of the putative cancer signatures. This is especially true because there are usually several genetic pathways that a given tissue can take to becoming cancerous: you might only need mutations in a small subset of a long list of genes, which is difficult to spot by sheer data mining. While cancer is very common, each type of cancer is much less so; therefore the paucity of available samples of a given cancer type in a given stage makes reaching statistical significance very difficult. There are some huge projects underway at the moment to collate all cancer labs' samples for meta-analysis, dramatically increasing the statistical power of the studies. A good example of this is the <a href="http://www.pancreasexpression.org/" title="pancreasexpression.org">Pancreas Expression Database</a> [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.</htmltext>
<tokenext>Data handling and analysis is becoming a big problem for biologists generally .
Techniques like microarray ( or exon array ) analysis can tell you how strongly a set of genes ( tens of thousands , with hundreds of thousands of splice variants ) are being expressed under given conditions .
But actually handling this data is a nightmare , especially as a lot of biologists ended up there because they love science but are n't great at maths .
Given a list of thousands of genes , teasing out the statistically significantly different genes from the noise is only the first step .
Then you have to decide what 's biologically important ( e.g. what 's the prime mover and what 's just a side-effect ) , and then you have a list of genes which might have known functions but more likely have just a name or even a tag like " hypothetical ORF # 3261 " , for genes that are predicted by analysis of the genome but have never been proved to actually be expressed .
After this , there 's the further complication that these techniques only tell you what 's going on at the DNA or RNA level .
The vast majority of genes only have effects when translated into protein and , perhaps , further modified , meaning that you ca n't be sure that the levels you 're detecting by the sequencing ( DNA level ) or expression analysis chips ( RNA level ) actually reflect what 's going on in the cell .
One of the big problems studying expression patterns in cancer specifically is the paucity of samples .
The genetic differences between individuals ( and tissues within individuals ) means there 's a lot of noise underlying the " signal " of the putative cancer signatures .
This is especially true because there are usually several genetic pathways that a given tissue can take to becoming cancerous : you might only need mutations in a small subset of a long list of genes , which is difficult to spot by sheer data mining .
While cancer is very common , each type of cancer is much less so ; therefore the paucity of available samples of a given cancer type in a given stage makes reaching statistical significance very difficult .
There are some huge projects underway at the moment to collate all cancer labs ' samples for meta-analysis , dramatically increasing the statistical power of the studies .
A good example of this is the Pancreas Expression Database [ pancreasexpression.org ] , which some pancreatic cancer researchers are getting very excited about .</tokentext>
<sentencetext>Data handling and analysis is becoming a big problem for biologists generally.
Techniques like microarray (or exon array) analysis can tell you how strongly a set of genes (tens of thousands, with hundreds of thousands of splice variants) are being expressed under given conditions.
But actually handling this data is a nightmare, especially as a lot of biologists ended up there because they love science but aren't great at maths.
Given a list of thousands of genes, teasing out the statistically significantly different genes from the noise is only the first step.
Then you have to decide what's biologically important (e.g. what's the prime mover and what's just a side-effect), and then you have a list of genes which might have known functions but more likely have just a name or even a tag like "hypothetical ORF #3261", for genes that are predicted by analysis of the genome but have never been proved to actually be expressed.
After this, there's the further complication that these techniques only tell you what's going on at the DNA or RNA level.
The vast majority of genes only have effects when translated into protein and, perhaps, further modified, meaning that you can't be sure that the levels you're detecting by the sequencing (DNA level) or expression analysis chips (RNA level) actually reflect what's going on in the cell.
One of the big problems studying expression patterns in cancer specifically is the paucity of samples.
The genetic differences between individuals (and tissues within individuals) means there's a lot of noise underlying the "signal" of the putative cancer signatures.
This is especially true because there are usually several genetic pathways that a given tissue can take to becoming cancerous: you might only need mutations in a small subset of a long list of genes, which is difficult to spot by sheer data mining.
While cancer is very common, each type of cancer is much less so; therefore the paucity of available samples of a given cancer type in a given stage makes reaching statistical significance very difficult.
There are some huge projects underway at the moment to collate all cancer labs' samples for meta-analysis, dramatically increasing the statistical power of the studies.
A good example of this is the Pancreas Expression Database [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.</sentencetext>
</comment>
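The first step the comment describes, teasing statistically significant genes out of thousands of noisy tests, is commonly handled with a multiple-testing correction. A minimal sketch of the Benjamini-Hochberg false-discovery-rate procedure on made-up p-values (the function and the data are illustrative, not from any analysis package):

```python
import random

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at false-discovery rate alpha."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    m = len(pvalues)
    k = 0  # largest rank whose sorted p-value clears the BH line rank/m * alpha
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

# 10,000 "genes" of pure noise plus five genuinely changed ones.
random.seed(0)
pvals = [random.random() for _ in range(10_000)] + [1e-8] * 5
hits = benjamini_hochberg(pvals)
print(f"{len(hits)} of {len(pvals)} genes pass FDR control")
```

The point of the procedure is exactly the one the comment makes: with tens of thousands of tests, a plain p &lt; 0.05 cutoff would flag hundreds of noise genes, while FDR control keeps the list down to (mostly) the real signals.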
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694905</id>
	<title>Re:How do you know it's NOT comments?</title>
	<author>ioshhdflwuegfh</author>
	<datestamp>1247602980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>There was some SF book I read, where it was explained that the comments were "made by a demo version of creature editor" and that was the reason for humans to die after 100 years. Some hacker then found a way to reset the demo counter and thus make people live forever.</p></div><p>Hey, what if that's how DNA works?  Would that not be like awesome and stuff?</p>
	</htmltext>
<tokenext>There was some SF book I read , where it was explained that the comments were " made by a demo version of creature editor " and that was the reason for humans to die after 100 years .
Some hacker then found a way to reset the demo counter and thus make people live forever .
Hey , what if that 's how DNA works ?
Would that not be like awesome and stuff ?</tokentext>
<sentencetext>There was some SF book I read, where it was explained that the comments were "made by a demo version of creature editor" and that was the reason for humans to die after 100 years.
Some hacker then found a way to reset the demo counter and thus make people live forever.
Hey, what if that's how DNA works?
Would that not be like awesome and stuff?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686803</id>
	<title>Re:How about storing it in analog format?</title>
	<author>hamisht</author>
	<datestamp>1247508180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I bet the whole thing could be squeezed into something smaller than the head of a pin!</p></div><p>or, indeed, smaller than the point of a pin</p>
	</htmltext>
<tokenext>I bet the whole thing could be squeezed into something smaller than the head of a pin !
or , indeed , smaller than the point of a pin</tokentext>
<sentencetext>I bet the whole thing could be squeezed into something smaller than the head of a pin!
or, indeed, smaller than the point of a pin
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684551</id>
	<title>Moore's law at work?</title>
	<author>Anonymous</author>
	<datestamp>1247489760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>13 years of doubling computer speeds every 18 months would bring this to about 18 days by the time they were done with the first sequence.</p><p>So while it's a staggering improvement, it's not surprising from a CPU-processing standpoint.  There could be many other factors involved though.</p></htmltext>
<tokenext>13 years of doubling computer speeds every 18 months would bring this to about 18 days by the time they were done with the first sequence .
So while it 's a staggering improvement , it 's not surprising from a CPU-processing standpoint .
There could be many other factors involved though .</tokentext>
<sentencetext>13 years of doubling computer speeds every 18 months would bring this to about 18 days by the time they were done with the first sequence.
So while it's a staggering improvement, it's not surprising from a CPU-processing standpoint.
There could be many other factors involved though.</sentencetext>
</comment>
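The back-of-the-envelope projection above can be checked directly. This assumes pure Moore's-law scaling (throughput doubling every 18 months and nothing else changing), which is a big simplification; in practice much of the real gain came from new sequencing chemistry rather than CPUs:

```python
# Back-of-the-envelope check of the Moore's-law projection in the comment.
# Assumption: sequencing throughput doubles every 18 months for 13 years.

ORIGINAL_YEARS = 13
DOUBLING_PERIOD_YEARS = 1.5

speedup = 2 ** (ORIGINAL_YEARS / DOUBLING_PERIOD_YEARS)
projected_days = ORIGINAL_YEARS * 365 / speedup

print(f"speedup ~{speedup:.0f}x, projected time ~{projected_days:.0f} days")
```

This gives a roughly 400x speedup and on the order of 12 days, the same ballpark as the comment's 18-day figure; the exact number is sensitive to the assumed doubling period and start date.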
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685233</id>
	<title>Re:We pissed away $3 billion dollars</title>
	<author>cupantae</author>
	<datestamp>1247495460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, EENterestingly, that's pretty much what people are saying when they complain that early paleontologists ruined priceless artifacts.</p><p>You learn as you go, like when you're learning to play Ghosts 'n' Goblins and you keep getting killed by the red gargoyle, but then you eventually learn that you have to jump away from him as he swoops and fire frantically towards him. I know other people have made similar responses, but I only understand things in terms of analogies. Particularly ones related to throwing lances at gargoyles.</p></htmltext>
<tokenext>Well , EENterestingly , that 's pretty much what people are saying when they complain that early paleontologists ruined priceless artifacts .
You learn as you go , like when you 're learning to play Ghosts 'n ' Goblins and you keep getting killed by the red gargoyle , but then you eventually learn that you have to jump away from him as he swoops and fire frantically towards him .
I know other people have made similar responses , but I only understand things in terms of analogies .
Particularly ones related to throwing lances at gargoyles .</tokentext>
<sentencetext>Well, EENterestingly, that's pretty much what people are saying when they complain that early paleontologists ruined priceless artifacts.
You learn as you go, like when you're learning to play Ghosts 'n' Goblins and you keep getting killed by the red gargoyle, but then you eventually learn that you have to jump away from him as he swoops and fire frantically towards him.
I know other people have made similar responses, but I only understand things in terms of analogies.
Particularly ones related to throwing lances at gargoyles.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513</id>
	<title>We pissed away $3 billion dollars</title>
	<author>Anonymous</author>
	<datestamp>1247489520000</datestamp>
	<modclass>Funny</modclass>
	<modscore>0</modscore>
	<htmltext>and 13 years of time, when we could have waited a few more years and got it done in a week, and much, much cheaper.  What a waste of time and money that was....</htmltext>
<tokenext>and 13 years of time , when we could have waited a few more years and got it done in a week , and much , much cheaper .
What a waste of time and money that was... .</tokentext>
<sentencetext>and 13 years of time, when we could have waited a few more years and got it done in a week, and much, much cheaper.
What a waste of time and money that was....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28692629</id>
	<title>Re:Data analysis a rapidly growing problem in Biol</title>
	<author>Anonymous</author>
	<datestamp>1247593140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>See, I think we need articles like this on Slashdot (or at least the comment turned into an article). Sometimes just random 'not in the spotlight' news articles.</p><p>Just my two cents.</p></htmltext>
<tokenext>See , I think we need articles like this on Slashdot ( or at least the comment turned into an article ) .
Sometimes just random 'not in the spotlight ' news articles .
Just my two cents .</tokentext>
<sentencetext>See, I think we need articles like this on Slashdot (or at least the comment turned into an article).
Sometimes just random 'not in the spotlight' news articles.
Just my two cents.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685833</id>
	<title>Re:Passing this data back to the scientist</title>
	<author>Neil Blender</author>
	<datestamp>1247500620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A single run on a Solexa next gen sequencer can generate over 200GB of data and half a million files.  And that is for 8 samples only.  You get into the terabyte range very quickly.</p><p>That's why data is delivered on hard drives.</p></htmltext>
<tokenext>A single run on a Solexa next gen sequencer can generate over 200GB of data and half a million files .
And that is for 8 samples only .
You get into the terabyte range very quickly .
That 's why data is delivered on hard drives .</tokentext>
<sentencetext>A single run on a Solexa next gen sequencer can generate over 200GB of data and half a million files.
And that is for 8 samples only.
You get into the terabyte range very quickly.
That's why data is delivered on hard drives.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685447</parent>
</comment>
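The hard-drive-delivery point above can be quantified with a quick sneakernet calculation. The 1 TB payload and 100 Mbit/s sustained link are illustrative assumptions, not figures from the comment:

```python
# Why terabyte-scale sequencing runs get shipped on physical drives:
# even an optimistic sustained link takes most of a day per terabyte.

TERABYTE_BYTES = 1e12
LINK_MEGABITS_PER_S = 100   # assumed sustained rate

bytes_per_second = LINK_MEGABITS_PER_S * 1e6 / 8
transfer_hours = TERABYTE_BYTES / bytes_per_second / 3600

print(f"1 TB at {LINK_MEGABITS_PER_S} Mbit/s: ~{transfer_hours:.0f} hours")
```

At a sustained 100 Mbit/s this works out to roughly 22 hours per terabyte, and real sustained rates in 2009 were usually far lower, so an overnight-shipped drive wins easily.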
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684339</id>
	<title>Passing this data back to the scientist</title>
	<author>Anonymous</author>
	<datestamp>1247488500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>has reverted to sneakernet.  They literally bring you an external hard drive.</p><p>Also, Illumina will sequence your genome for $48,000.</p></htmltext>
<tokenext>has reverted to sneakernet .
They literally bring you an external hard drive .
Also , Illumina will sequence your genome for $ 48,000 .</tokentext>
<sentencetext>has reverted to sneakernet.
They literally bring you an external hard drive.
Also, Illumina will sequence your genome for $48,000.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686605</id>
	<title>Re:DNA GATC</title>
	<author>K. S. Kyosuke</author>
	<datestamp>1247506620000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>You thought God can't spell "job security"? Mind you, he's omnipotent!</htmltext>
<tokenext>You thought God ca n't spell " job security " ?
Mind you , he 's omnipotent !</tokentext>
<sentencetext>You thought God can't spell "job security"?
Mind you, he's omnipotent!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684729</id>
	<title>Genome is so 1990...</title>
	<author>Anonymous</author>
	<datestamp>1247491200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...we've now learned that the epigenome controls which parts of the genome manifest themselves.</p><p><a href="http://en.wikipedia.org/wiki/Epigenetics" title="wikipedia.org" rel="nofollow">http://en.wikipedia.org/wiki/Epigenetics</a> [wikipedia.org]</p></htmltext>
<tokenext>...we 've now learned that the epigenome controls which parts of the genome manifest themselves .
http://en.wikipedia.org/wiki/Epigenetics [ wikipedia.org ]</tokentext>
<sentencetext>...we've now learned that the epigenome controls which parts of the genome manifest themselves.
http://en.wikipedia.org/wiki/Epigenetics [wikipedia.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567</id>
	<title>Re:How do you know it's NOT comments?</title>
	<author>dunkelfalke</author>
	<datestamp>1247506200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There was some SF book I read, where it was explained that the comments were "made by a demo version of creature editor" and that was the reason for humans to die after 100 years. Some hacker then found a way to reset the demo counter and thus make people live forever.</p></htmltext>
<tokenext>There was some SF book I read , where it was explained that the comments were " made by a demo version of creature editor " and that was the reason for humans to die after 100 years .
Some hacker then found a way to reset the demo counter and thus make people live forever .</tokentext>
<sentencetext>There was some SF book I read, where it was explained that the comments were "made by a demo version of creature editor" and that was the reason for humans to die after 100 years.
Some hacker then found a way to reset the demo counter and thus make people live forever.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684923</id>
	<title>Re:DNA GATC</title>
	<author>ocularDeathRay</author>
	<datestamp>1247492580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I would like to announce publicly that my genome is released under the GPL</htmltext>
<tokenext>I would like to announce publicly that my genome is released under the GPL</tokentext>
<sentencetext>I would like to announce publicly that my genome is released under the GPL</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684631</id>
	<title>Testicles.</title>
	<author>Anonymous</author>
	<datestamp>1247490180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That is <b>not</b> all.</p></htmltext>
<tokenext>That is not all .</tokentext>
<sentencetext>That is not all.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</id>
	<title>DNA GATC</title>
	<author>sakdoctor</author>
	<datestamp>1247488860000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>Functions that don't do anything, no comments, worst piece of code ever!</p><p>I say we fork and refactor the entire project.</p></htmltext>
<tokenext>Functions that do n't do anything , no comments , worst piece of code ever !
I say we fork and refactor the entire project .</tokentext>
<sentencetext>Functions that don't do anything, no comments, worst piece of code ever!
I say we fork and refactor the entire project.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28689185</id>
	<title>I want to copyright my dna.  Then, it can't be....</title>
	<author>CFD339</author>
	<datestamp>1247578140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>...used against me for anything without violating the DMCA.  The act of decoding it by some forensics lab paternity test or future insurance company medical cost profile would become unlawful and I'm sure the RIAA would help me with the cost of prosecuting the lawsuit.</p></htmltext>
<tokenext>...used against me for anything without violating the DMCA .
The act of decoding it by some forensics lab paternity test or future insurance company medical cost profile would become unlawful and I 'm sure the RIAA would help me with the cost of prosecuting the lawsuit .</tokentext>
<sentencetext>...used against me for anything without violating the DMCA.
The act of decoding it by some forensics lab paternity test or future insurance company medical cost profile would become unlawful and I'm sure the RIAA would help me with the cost of prosecuting the lawsuit.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686613</id>
	<title>Re:Here's what I want to know...</title>
	<author>timeOday</author>
	<datestamp>1247506680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>In other words, you don't have "a" genome.  What you are is a big bunch of cells, closely enough related that their genomes are very similar.</htmltext>
<tokenext>In other words , you do n't have " a " genome .
What you are is a big bunch of cells , closely enough related that their genomes are very similar .</tokentext>
<sentencetext>In other words, you don't have "a" genome.
What you are is a big bunch of cells, closely enough related that their genomes are very similar.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684443</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28691373</id>
	<title>Not sequencing a genome- resequencing</title>
	<author>Anonymous</author>
	<datestamp>1247587860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The human genome project cost a lot of money because the technology was being developed at the same time and the genome was unknown.  What WASHU is doing (as clearly stated in the article) is resequencing human genomes.  They are sequencing a genome, collecting data from a given genome, but the genome is produced by aligning short reads against the human genome reference, not creating another reference and comparing it.  As of today, a single new human genome sequenced to the accuracy and completeness of the original would still cost 60-80 million dollars. You could potentially cut the cost by 30% using 454, but also sacrifice some serious accuracy. Oh, and the cost is so low because we already know how to do it and the investment in the technological infrastructure:)  Also, if you started from scratch it would also cost about 30M in equipment costs to accomplish this task in 1 year (okay so it's probably not actually doable in a year, so say 15M for equipment to complete the sequencing in 2 years), which brings the cost to around 100M for an equivalent de novo human genome sequence.  The original project doesn't seem like such a bad deal, does it?</p></htmltext>
<tokenext>The human genome project cost a lot of money because the technology was being developed at the same time and the genome was unknown .
What WASHU is doing ( as clearly stated in the article ) is resequencing human genomes .
They are sequencing a genome , collecting data from a given genome , but the genome is produced by aligning short reads against the human genome reference , not creating another reference and comparing it .
As of today , a single new human genome sequenced to the accuracy and completeness of the original would still cost 60-80 million dollars .
You could potentially cut the cost by 30 % using 454 , but also sacrifice some serious accuracy .
Oh , and the cost is so low because we already know how to do it and the investment in the technological infrastructure : ) Also , if you started from scratch it would also cost about 30M in equipment costs to accomplish this task in 1 year ( okay so it 's probably not actually doable in a year , so say 15M for equipment to complete the sequencing in 2 years ) , which brings the cost to around 100M for an equivalent de novo human genome sequence .
The original project does n't seem like such a bad deal , does it ?</tokentext>
<sentencetext>The human genome project cost a lot of money because the technology was being developed at the same time and the genome was unknown.
What WASHU is doing (as clearly stated in the article) is resequencing human genomes.
They are sequencing a genome, collecting data from a given genome, but the genome is produced by aligning short reads against the human genome reference, not creating another reference and comparing it.
As of today, a single new human genome sequenced to the accuracy and completeness of the original would still cost 60-80 million dollars.
You could potentially cut the cost by 30% using 454, but also sacrifice some serious accuracy.
Oh, and the cost is so low because we already know how to do it and the investment in the technological infrastructure:)  Also, if you started from scratch it would also cost about 30M in equipment costs to accomplish this task in 1 year (okay so it's probably not actually doable in a year, so say 15M for equipment to complete the sequencing in 2 years), which brings the cost to around 100M for an equivalent de novo human genome sequence.
The original project doesn't seem like such a bad deal, does it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684591</id>
	<title>Re:We pissed away $3 billion dollars</title>
	<author>QuantumG</author>
	<datestamp>1247489940000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>What's funny is that there are actually people who think like that.  Apparently if we just sit around and wait, things will get better.  I call this the dark side of the "invisible hand" of the market... because it is invisible, people forget how it comes about.  In order to get improvement in technology you need a <i>market</i> for that technology.  And, typically, you need some loss-leader to create the market in the first place.  Government funding serves this purpose well.</p></htmltext>
<tokenext>What 's funny is that there is actually people who think like that .
Apparently if we just sit around and wait , things will get better .
I call this the dark side of the " invisible hand " of the market.. because it is invisible , people forget how it comes about .
In order to get improvement in technology you need a market for that technology .
And , typically , you need some loss-leader to create the market in the first place .
Government funding serves this purpose well .</tokentext>
<sentencetext>What's funny is that there is actually people who think like that.
Apparently if we just sit around and wait, things will get better.
I call this the dark side of the "invisible hand" of the market.. because it is invisible, people forget how it comes about.
In order to get improvement in technology you need a market for that technology.
And, typically, you need some loss-leader to create the market in the first place.
Government funding serves this purpose well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684669</id>
	<title>Re:Here's what I want to know...</title>
	<author>damn_registrars</author>
	<datestamp>1247490540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?</p></div><p>
They should be.  An individual's <b>genome</b> does not change over time.  Gene expression can change, which can itself lead to significant problems such as cancer.</p>
	</htmltext>
<tokenext>Suppose they sequence a specific human 's genome .
Now they do it again .
Will the two sequences be the same ?
They should be .
An individual 's genome does not change over time .
Gene expression can change , which can itself lead to significant problems such as cancer .</tokentext>
<sentencetext>Suppose they sequence a specific human's genome.
Now they do it again.
Will the two sequences be the same?
They should be.
An individual's genome does not change over time.
Gene expression can change, which can itself lead to significant problems such as cancer.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28689965</id>
	<title>Re:Genome as a cause?</title>
	<author>SlashBugs</author>
	<datestamp>1247581500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Cancer has been with us throughout recorded history. Ancient Egyptian, Greek, Roman and Chinese doctors described and drew tumours growing on their patients over a span from about 2000 to 4000 years ago. There's also archeological evidence of cancers much older than that, e.g. in <a href="http://wiki.answers.com/Q/When_was_cancer_first_discovered" title="answers.com">Bronze age fossils</a> [answers.com].<br> <br>

Cancer has become more common over the last hundred years or so. A huge part of that is simply the fact that we're living much longer, meaning that the odds of a given person developing cancer are much higher. <br> <br>

Of course you're right that environmental factors are important. Smoking and increased alcohol consumption are probably the biggest contributors, probably followed by poorly tested or controlled industrial synthetics like asbestos. I've no idea what makes you think that no-one is researching this stuff. It's not exactly hard to find: cancer.org and cancerresearch.org.uk are great places to start reading about the known risk factors in modern life. Or, you know, there's Google. <br> <br>

Probably the best source about risk factors is <a href="http://www.dietandcancerreport.org/?p=ER" title="dietandcancerreport.org">this huge meta-analysis of cancer papers</a> [dietandcancerreport.org].
A <a href="http://www.scienceprogress.org/2007/11/meta-study-says-the-best-medicine-for-cancer-is-prevention/" title="scienceprogress.org">science journalist's</a> [scienceprogress.org] summary:
<i>In addition to the cancer risk associated with excess body fat, the WCRF-AICR study offered 10 lifestyle recommendations to help ward off cancer, including limiting red meat consumption and excessive drinking, exercising daily, avoiding processed meats such as bacon and ham, and eating a diet rich in fruits, vegetables and whole grains. The research synthesizes many individual reports that have found similar lifestyle-cancer connections for specific cancers.</i>

<br> <br>
But even with cancers caused by environmental factors, there's still good reason to sequence genomes. Cancer develops as a result of a cell's DNA becoming damaged in ways that constitutively activate its replication programmes and suppress its checkpoint and suicide programmes. So sequencing the genome of cancer cells gives a lot of information about exactly how those cells became cancerous (although we're not sure what we're looking for yet), which in turn suggests ways to treat that specific cancer. Alternatively, sequencing healthy cells from people can give us information about why some populations are at higher risk of developing cancer. For example, carriers of specific forms of the <a href="http://info.cancerresearchuk.org/news/archive/pressreleases/2006/october/230040" title="cancerresearchuk.org">BRCA1, BRCA2 or BRIP1</a> [cancerresearchuk.org] gene are at higher risk of developing breast cancer than the rest of the population. These discoveries gave us insight into how this cancer develops, which hints at possible treatments. Also, if someone has their genome sequenced and discovers these faulty genes they can take steps to avoid other risk factors (alcohol, etc) to control their risk, and attend more regular screening than the general population.</htmltext>
<tokenext>Cancer has been with us throughout recorded history .
Ancient Egyptian , Greek , Roman and Chinese doctors described and drew tumours growing on their patients covering a span of about 2000-4000 years ago .
There 's also archeological evidence of cancers much older than that , e.g .
in Bronze age fossils [ answers.com ] .
Cancer has become more common over the last hundred years or so .
A huge part of that is simply the fact that we 're living much longer , meaning that the odds of a given person developing cancer are much higher .
Of course you 're right that environmental factors are important .
Smoking and increased alcohol consumption are probably the biggest contributors , probably followed by poorly tested or controlled industrial synthetics like Asbestos .
I 've no idea what makes you think that no-one is researching this stuff .
It 's not exactly hard to find : cancer.org and cancerresearch.org.uk are great places to start reading about the known risk factors in modern life .
Or , you know , there 's google .
Probably the best source about risk factors is this huge meta-analysis of cancer papers [ dietandcancerreport.org ] .
A science journalist 's [ scienceprogress.org ] summary : In addition to the cancer risk associated with excess body fat , the WCRF-AICR study offered 10 lifestyle recommendations to help ward off cancer , including limiting red meat consumption and excessive drinking , exercising daily , avoiding processed meats such as bacon and ham , and eating a diet rich in fruits , vegetables and whole grains .
The research synthesizes many individual reports that have found similar lifestyle-cancer connections for specific cancers .
But even with cancers caused by environmental factors , there 's still good reason to sequence genomes .
Cancer develops as a result of a cell 's DNA becoming damaged in ways that constitutively activate its replication programmes and suppress its checkpoint and suicide programmes .
So sequencing the genome of cancer cells gives a lot of information about exactly how those cells became cancerous ( although we 're not sure what we 're looking for yet ) , which in turn suggests ways to treat that specific cancer .
Alternatively , sequencing healthy cells from people can give us information about why some populations are at higher risk of developing cancer .
For example , carriers of specific forms of the BRCA1 , BRCA2 or BRIP1 [ cancerresearchuk.org ] gene are at higher risk of developing breast cancer than the rest of the population .
These discoveries gave us insight into how this cancer develops , which hints at possible treatments .
Also , if someone has their genome sequenced and discovers these faulty genes they can take steps to avoid other risk factors ( alcohol , etc ) to control their risk , and attend more regular screening than the general population .</tokentext>
<sentencetext>Cancer has been with us throughout recorded history.
Ancient Egyptian, Greek, Roman and Chinese doctors described and drew tumours growing on their patients over a span from about 2000 to 4000 years ago.
There's also archeological evidence of cancers much older than that, e.g. in Bronze age fossils [answers.com].
Cancer has become more common over the last hundred years or so.
A huge part of that is simply the fact that we're living much longer, meaning that the odds of a given person developing cancer are much higher.
Of course you're right that environmental factors are important.
Smoking and increased alcohol consumption are probably the biggest contributors, probably followed by poorly tested or controlled industrial synthetics like asbestos.
I've no idea what makes you think that no-one is researching this stuff.
It's not exactly hard to find: cancer.org and cancerresearch.org.uk are great places to start reading about the known risk factors in modern life.
Or, you know, there's Google.
Probably the best source about risk factors is this huge meta-analysis of cancer papers [dietandcancerreport.org].
A science journalist's [scienceprogress.org] summary:
In addition to the cancer risk associated with excess body fat, the WCRF-AICR study offered 10 lifestyle recommendations to help ward off cancer, including limiting red meat consumption and excessive drinking, exercising daily, avoiding processed meats such as bacon and ham, and eating a diet rich in fruits, vegetables and whole grains.
The research synthesizes many individual reports that have found similar lifestyle-cancer connections for specific cancers.
But even with cancers caused by environmental factors, there's still good reason to sequence genomes.
Cancer develops as a result of a cell's DNA becoming damaged in ways that constitutively activate its replication programmes and suppress its checkpoint and suicide programmes.
So sequencing the genome of cancer cells gives a lot of information about exactly how those cells became cancerous (although we're not sure what we're looking for yet), which in turn suggests ways to treat that specific cancer.
Alternatively, sequencing healthy cells from people can give us information about why some populations are at higher risk of developing cancer.
For example, carriers of specific forms of the BRCA1, BRCA2 or BRIP1 [cancerresearchuk.org] gene are at higher risk of developing breast cancer than the rest of the population.
These discoveries gave us insight into how this cancer develops, which hints at possible treatments.
Also, if someone has their genome sequenced and discovers these faulty genes they can take steps to avoid other risk factors (alcohol, etc) to control their risk, and attend more regular screening than the general population.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688511</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</id>
	<title>Here's what I want to know...</title>
	<author>HotNeedleOfInquiry</author>
	<datestamp>1247488980000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Suppose they sequence a specific human's genome.  Now they do it again.  Will the two sequences be the same?</htmltext>
<tokenext>Suppose they sequence a specific human 's genome .
Now they do it again .
Will the two sequences be the same ?</tokentext>
<sentencetext>Suppose they sequence a specific human's genome.
Now they do it again.
Will the two sequences be the same?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727</id>
	<title>How do you know it's NOT comments?</title>
	<author>Anonymous</author>
	<datestamp>1247491140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Functions that don't do anything, no comments, worst piece of code ever!</i></p><p>Most of it doesn't code proteins or any of the other things that have been reverse-engineered so far.  How do you know it's NOT comments?</p><p>(And if terrestrial life was engineered and it IS comments, do they qualify as "holy writ"?)</p></htmltext>
<tokenext>Functions that do n't do anything , no comments , worst piece of code ever ! Most of it does n't code proteins or any of the other things that have been reverse-engineered so far .
How do you know it 's NOT comments ?
( And if terrestrial life was engineered and it IS comments , do they qualify as " holy writ " ?
)</tokentext>
<sentencetext>Functions that don't do anything, no comments, worst piece of code ever! Most of it doesn't code proteins or any of the other things that have been reverse-engineered so far.
How do you know it's NOT comments?
(And if terrestrial life was engineered and it IS comments, do they qualify as "holy writ"?
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425</id>
	<title>How about storing it in analog format?</title>
	<author>Anonymous</author>
	<datestamp>1247488980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>Just store all that data as a chemical compound. Maybe a nucleic acid of some kind? Using two long polymers made of sugars and phosphates? I bet the whole thing could be squeezed into something smaller than the head of a pin!</p></htmltext>
<tokenext>Just store all that data as a chemical compound .
Maybe a nucleic acid of some kind ?
Using two long polymers made of sugars and phosphates ?
I bet the whole thing could be squeezed into something smaller than the head of a pin !</tokentext>
<sentencetext>Just store all that data as a chemical compound.
Maybe a nucleic acid of some kind?
Using two long polymers made of sugars and phosphates?
I bet the whole thing could be squeezed into something smaller than the head of a pin!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688493</id>
	<title>Re:DNA GATC</title>
	<author>Hurricane78</author>
	<datestamp>1247571480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, it's just the arrogance of some scientists. They later found out that all those parts that seemingly did not do anything were in fact just as relevant. Just in a different way. Whoops!</p></htmltext>
<tokenext>Actually it 's just the arrogance of some scientist .
Who later found out , that all those parts who seemingly did not do anything , were in fact just as relevant .
Just in a different way .
Whoops !</tokentext>
<sentencetext>Actually it's just the arrogance of some scientist.
Who later found out, that all those parts who seemingly did not do anything, were in fact just as relevant.
Just in a different way.
Whoops!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28689201</id>
	<title>Where's Nedry ?</title>
	<author>ciderVisor</author>
	<datestamp>1247578260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Check the vending machines !</p></htmltext>
<tokenext>Check the vending machines !</tokentext>
<sentencetext>Check the vending machines !</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688741</id>
	<title>Re:How do you know it's NOT comments?</title>
	<author>Anonymous</author>
	<datestamp>1247574300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I thought that most people unlearned the "live until 100, then die" assumption by the time they were eight years old...</p><p>Human life expectancy is still averaging around 60, even in rich countries.</p></htmltext>
<tokenext>I thought that most people unlearned the " live until 100 , then die " assumption by the time they were eight years old ... Human life expectancy is still averaging around 60 , even in rich countries .</tokentext>
<sentencetext>I thought that most people unlearned the "live until 100, then die" assumption by the time they were eight years old... Human life expectancy is still averaging around 60, even in rich countries.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688511</id>
	<title>Genome as a cause?</title>
	<author>Hurricane78</author>
	<datestamp>1247571660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, how about pollution, processed food, and all that trash being the main reason we get cancer?</p><p>Cancer was not even a known disease, a century ago, because nobody had it. (And if people get cancer now, way before the average age of death a century ago, then it can't be that it is because we now get older.)</p><p>But I guess there is no money in that. Right?</p></htmltext>
<tokenext>Well , how about pollution , processed food , and all that trash being the main reason we get cancer ? Cancer was not even a known disease , a century ago , because nobody had it .
( And if people get cancer now , way before the average age of death a century ago , then it ca n't be that it is because we now get older .
) But I guess there is no money in that .
Right ?</tokentext>
<sentencetext>Well, how about pollution, processed food, and all that trash being the main reason we get cancer? Cancer was not even a known disease, a century ago, because nobody had it.
(And if people get cancer now, way before the average age of death a century ago, then it can't be that it is because we now get older.)
But I guess there is no money in that.
Right?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685063</id>
	<title>Re:Data analysis a rapidly growing problem in Biol</title>
	<author>olsmeister</author>
	<datestamp>1247493840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>A good example of this is the <a href="http://www.pancreasexpression.org/" title="pancreasexpression.org" rel="nofollow">Pancreas Expression Database</a> [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.</p></div><p> <a href="http://news.yahoo.com/s/nm/20090713/wl_nm/us_korea_north_5" title="yahoo.com" rel="nofollow">
Kim Jong-il</a> [yahoo.com] will be ecstatic to hear that.  Dear Leader can't very well put the Grim Reaper into political prison....</p>
	</htmltext>
<tokenext>A good example of this is the Pancreas Expression Database [ pancreasexpression.org ] , which some pancreatic cancer researchers are getting very excited about .
Kim Jong-il [ yahoo.com ] will be ecstatic to hear that .
Dear Leader ca n't very well put the Grim Reaper into political prison... .</tokentext>
<sentencetext>A good example of this is the Pancreas Expression Database [pancreasexpression.org], which some pancreatic cancer researchers are getting very excited about.
Kim Jong-il [yahoo.com] will be ecstatic to hear that.
Dear Leader can't very well put the Grim Reaper into political prison....
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694671</id>
	<title>Re:Here's what I want to know...</title>
	<author>bogado</author>
	<datestamp>1247601720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Each person has a different sequence, while the first time around they sequenced just one of the billions of "human genomes". Sequencing different people could help us find what makes one person different from another and, on the other hand, what makes us similar.<nobr> <wbr></nobr>:-)</p></htmltext>
<tokenext>Each person have a different sequence , while the first time they sequenced one of the billions " human genomes " .
Doing different people could help finding what makes one person different from another and on the other hand what make us similar .
: - )</tokentext>
<sentencetext>Each person have a different sequence, while the first time they sequenced one of the billions "human genomes".
Doing different people could help finding what makes one person different from another and on the other hand what make us similar.
:-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684617</id>
	<title>Re:Here's what I want to know...</title>
	<author>K. S. Kyosuke</author>
	<datestamp>1247490120000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>"Suppose they sequence a specific human's genome. Now they do it again. Will the two sequences be the same?"</p><p>

<a href="http://en.wikipedia.org/wiki/Mosaicism" title="wikipedia.org">Not</a> [wikipedia.org] <a href="http://en.wikipedia.org/wiki/Chimera_(genetics)" title="wikipedia.org">necessarily</a> [wikipedia.org].<nobr> <wbr></nobr>;-)</p></htmltext>
<tokenext>" Suppose they sequence a specific human 's genome .
Now they do it again .
Will the two sequences be the same ?
" Not [ wikipedia.org ] necessarily [ wikipedia.org ] .
; - )</tokentext>
<sentencetext>"Suppose they sequence a specific human's genome.
Now they do it again.
Will the two sequences be the same?
"

Not [wikipedia.org] necessarily [wikipedia.org].
;-)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685385</id>
	<title>Re:Data analysis a rapidly growing problem in Biol</title>
	<author>Daniel Dvorkin</author>
	<datestamp>1247497200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>The vast majority of genes only have effects when translated into protein</i></p><p>That depends on your definition.  If you <b>define</b> a gene as "stretch of DNA that is translated into protein," which until fairly recently was the going definition, then of course your statement is tautologically true (replacing "the vast majority of" with "all.")  But if you define it as "a stretch of DNA that does something biologically interesting," then it's no longer at all clear.  Given the number of regulatory elements not directly associated with genes, sections of DNA that code for RNAzymes, etc., it may well be that the majority of "genes" are not protein-coding at all.  Going back to the Mendelian definition of a gene as a unit of inheritance, this looks more and more likely.</p></htmltext>
<tokenext>The vast majority of genes only have effects when translated into protein . That depends on your definition .
If you define a gene as " stretch of DNA that is translated into protein , " which until fairly recently was the going definition , then of course your statement is tautologically true ( replacing " the vast majority of " with " all .
" ) But if you define it as " a stretch of DNA that does something biologically interesting , " then it 's no longer at all clear .
Given the number of regulatory elements not directly associated with genes , sections of DNA that code for RNAzymes , etc. , it may well be that the majority of " genes " are not protein-coding at all .
Going back to the Mendelian definition of a gene as a unit of inheritance , this looks more and more likely .</tokentext>
<sentencetext>The vast majority of genes only have effects when translated into protein. That depends on your definition.
If you define a gene as "stretch of DNA that is translated into protein," which until fairly recently was the going definition, then of course your statement is tautologically true (replacing "the vast majority of" with "all.
")  But if you define it as "a stretch of DNA that does something biologically interesting," then it's no longer at all clear.
Given the number of regulatory elements not directly associated with genes, sections of DNA that code for RNAzymes, etc., it may well be that the majority of "genes" are not protein-coding at all.
Going back to the Mendelian definition of a gene as a unit of inheritance, this looks more and more likely.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684443</id>
	<title>Re:Here's what I want to know...</title>
	<author>blackbearnh</author>
	<datestamp>1247489160000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext>I wondered the same thing, so I asked.  From the article:

And between two cells, one cell right next to the other, they should be identical copies of each other. But sometimes mistakes are made in the process of copying the DNA. And so some differences may exist. However, we're not at present currently sequencing single cells. We'll collect a host of cells and isolate the DNA from a host of cells. So what you end up is with when you read the sequence out on these things is, essentially, an average of this DNA sequence. Well, I mean it's digital in that eventually you get down to a single piece of DNA. But once you align these things back, if you see 30 reads that all align to the same region of the genome and only one of them has an A at the position and all of the others have a T at that position, you can't say whether that A was actually some small change between one cell and its 99 closest neighbors or whether that was just an error in the sequencing. So it's hard to say cell-to-cell how much difference there is. But, of course, that difference does exist, otherwise that's mutation and that's what eventually leads to cancer and other diseases.</htmltext>
<tokenext>I wondered the same thing , so I asked .
From the article : And between two cells , one cell right next to the other , they should be identical copies of each other .
But sometimes mistakes are made in the process of copying the DNA .
And so some differences may exist .
However , we 're not at present currently sequencing single cells .
We 'll collect a host of cells and isolate the DNA from a host of cells .
So what you end up is with when you read the sequence out on these things is , essentially , an average of this DNA sequence .
Well , I mean it 's digital in that eventually you get down to a single piece of DNA .
But once you align these things back , if you see 30 reads that all align to the same region of the genome and only one of them has an A at the position and all of the others have a T at that position , you ca n't say whether that A was actually some small change between one cell and its 99 closest neighbors or whether that was just an error in the sequencing .
So it 's hard to say cell-to-cell how much difference there is .
But , of course , that difference does exist , otherwise that 's mutation and that 's what eventually leads to cancer and other diseases .</tokentext>
<sentencetext>I wondered the same thing, so I asked.
From the article:

And between two cells, one cell right next to the other, they should be identical copies of each other.
But sometimes mistakes are made in the process of copying the DNA.
And so some differences may exist.
However, we're not at present currently sequencing single cells.
We'll collect a host of cells and isolate the DNA from a host of cells.
So what you end up is with when you read the sequence out on these things is, essentially, an average of this DNA sequence.
Well, I mean it's digital in that eventually you get down to a single piece of DNA.
But once you align these things back, if you see 30 reads that all align to the same region of the genome and only one of them has an A at the position and all of the others have a T at that position, you can't say whether that A was actually some small change between one cell and its 99 closest neighbors or whether that was just an error in the sequencing.
So it's hard to say cell-to-cell how much difference there is.
But, of course, that difference does exist, otherwise that's mutation and that's what eventually leads to cancer and other diseases.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28690887</id>
	<title>Re:DNA GATC</title>
	<author>darthpenguin</author>
	<datestamp>1247585640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p> <i>Another problem they never mention are artifacts from the chemical protocol; just the other day we found a very unusual anomaly that indicated the first 1/3 of all our reads was absolutely crap (usually only the last few bases are unreliable); turned out our slight modification of the Illumina protocol to tailor it to studying epigenomic effects had quite large effects of the sequencing reactions later on.</i> </p><p>Do you have any more details about this?  I'm working on solexa sequencing of ChIP DNA with (modified) histone and transcription factor targets.  These runs are expensive so it would be nice to avoid problems that someone else has already gone through.</p></htmltext>
<tokenext>Another problem they never mention are artifacts from the chemical protocol ; just the other day we found a very unusual anomaly that indicated the first 1/3 of all our reads was absolutely crap ( usually only the last few bases are unreliable ) ; turned out our slight modification of the Illumina protocol to tailor it to studying epigenomic effects had quite large effects of the sequencing reactions later on .
Do you have any more details about this ?
I 'm working on Solexa sequencing of ChIP DNA with ( modified ) histone and transcription factor targets .
These runs are expensive so it would be nice to avoid problems that someone else has already gone through .</tokentext>
<sentencetext> Another problem they never mention is artifacts from the chemical protocol; just the other day we found a very unusual anomaly that indicated the first 1/3 of all our reads was absolutely crap (usually only the last few bases are unreliable); it turned out our slight modification of the Illumina protocol to tailor it to studying epigenomic effects had quite large effects on the sequencing reactions later on.
Do you have any more details about this?
I'm working on Solexa sequencing of ChIP DNA with (modified) histone and transcription factor targets.
These runs are expensive so it would be nice to avoid problems that someone else has already gone through.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686613
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684443
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685311
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685271
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685385
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688493
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685233
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686605
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28689965
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688511
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685391
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28692629
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28687447
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685833
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685447
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684339
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686803
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684591
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694671
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28690887
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685063
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684923
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684617
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694457
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694905
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685293
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688741
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688061
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_13_2129229_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685195
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684613
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684551
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688511
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28689965
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684425
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685195
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686803
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684513
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685233
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684591
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684603
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28692629
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685385
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685063
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684403
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694457
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686605
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28690887
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685271
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688493
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684727
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686567
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694905
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688741
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28688061
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684923
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684339
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685447
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685833
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686047
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684423
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684443
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28686613
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684669
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685293
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685311
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28694671
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28684617
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_13_2129229.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685211
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28685391
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_13_2129229.28687447
</commentlist>
</conversation>
