<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_11_09_2335214</id>
	<title>The NoSQL Ecosystem</title>
	<author>kdawson</author>
	<datestamp>1257786720000</datestamp>
	<htmltext>abartels writes <i>'Unprecedented data volumes are driving businesses to look at <a href="http://www.rackspacecloud.com/blog/2009/11/09/nosql-ecosystem/">alternatives to the traditional relational database technology</a> that has served us well for over thirty years.  Collectively, these alternatives have become known as NoSQL databases. The fundamental problem is that relational databases cannot handle many modern workloads. There are three specific problem areas: scaling out to data sets like Digg's (3 TB for green badges) or Facebook's (50 TB for inbox search) or eBay's (2 PB overall); per-server performance; and rigid schema design.'</i></htmltext>
<tokentext>abartels writes 'Unprecedented data volumes are driving businesses to look at alternatives to the traditional relational database technology that has served us well for over thirty years .
Collectively , these alternatives have become known as NoSQL databases .
The fundamental problem is that relational databases can not handle many modern workloads .
There are three specific problem areas : scaling out to data sets like Digg 's ( 3 TB for green badges ) or Facebook 's ( 50 TB for inbox search ) or eBay 's ( 2 PB overall ) ; per-server performance ; and rigid schema design .
'</tokentext>
<sentencetext>abartels writes 'Unprecedented data volumes are driving businesses to look at alternatives to the traditional relational database technology that has served us well for over thirty years.
Collectively, these alternatives have become known as NoSQL databases.
The fundamental problem is that relational databases cannot handle many modern workloads.
There are three specific problem areas: scaling out to data sets like Digg's (3 TB for green badges) or Facebook's (50 TB for inbox search) or eBay's (2 PB overall); per-server performance; and rigid schema design.
'</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042774</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>Prodigy Savant</author>
	<datestamp>1257794340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>What you are suggesting is to mimic a key-value design with something like a json or serialized data as the value.</p><p>This would work if you never had to index on any of the values in the json. All your sql queries must have there where parts running off the key.</p><p>This is a problem that couchdb and mongodb solve.</p><p>I am not trying to paint SQL in an unflattering shade -- there would still be a lot of situations where an RDBMS design would be optimal. Infact, I am currently working on a mongodb/mysql hybrid solution for a large web site (larger than<nobr> <wbr></nobr>/. )</p></htmltext>
<tokentext>What you are suggesting is to mimic a key-value design with something like a json or serialized data as the value.This would work if you never had to index on any of the values in the json .
All your sql queries must have there where parts running off the key.This is a problem that couchdb and mongodb solve.I am not trying to paint SQL in an unflattering shade -- there would still be a lot of situations where an RDBMS design would be optimal .
Infact , I am currently working on a mongodb/mysql hybrid solution for a large web site ( larger than / .
)</tokentext>
<sentencetext>What you are suggesting is to mimic a key-value design with something like a json or serialized data as the value.This would work if you never had to index on any of the values in the json.
All your sql queries must have there where parts running off the key.This is a problem that couchdb and mongodb solve.I am not trying to paint SQL in an unflattering shade -- there would still be a lot of situations where an RDBMS design would be optimal.
Infact, I am currently working on a mongodb/mysql hybrid solution for a large web site (larger than /.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</parent>
</comment>
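Editor's sketch (not part of the original thread): the key-value-plus-JSON layout discussed in the comment above, using SQLite purely for illustration. The table name and sample document are invented for the example. A lookup that filters on the key hits the primary-key index; any field inside the JSON value is opaque to the database, which is exactly the indexing limitation the commenter raises.

```python
# Minimal key-value table whose value column holds a JSON document.
# Lookups must run off the indexed key; fields inside the JSON are not indexed.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

doc = {"name": "Ada", "karma": 42}
conn.execute("INSERT INTO kv VALUES (?, ?)", ("user:1", json.dumps(doc)))

# Fast path: the WHERE clause runs off the primary key.
row = conn.execute("SELECT value FROM kv WHERE key = ?", ("user:1",)).fetchone()
karma = json.loads(row[0])["karma"]
print(karma)  # 42

# Filtering on a field inside the JSON document would force a full table scan
# in this schema, which is the gap document stores like CouchDB and MongoDB
# address with indexes over document fields.
```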
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044470</id>
	<title>Re:Why worry?</title>
	<author>Prototerm</author>
	<datestamp>1257861660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>dBase3 FTW</p></htmltext>
<tokentext>dBase3 FTW</tokentext>
<sentencetext>dBase3 FTW</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044260</id>
	<title>Re:bad design</title>
	<author>Tim C</author>
	<datestamp>1257859500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>Also, when was the last time you tried to visit Facebook and it was down?</i></p><p>Well it's been a couple of months, but it does happen. Also one of my friends used to have a problem with her profile being unavailable for hours at a time quite frequently.</p><p>Not major issues, it's true, and they're doing a great job, but don't think that they're perfect because they're not. (But then all software has problems from time to time of course)</p></htmltext>
<tokentext>Also , when was the last time you tried to visit Facebook and it was down ? Well it 's been a couple of months , but it does happen .
Also one of my friends used to have a problem with her profile being unavailable for hours at a time quite frequently.Not major issues , it 's true , and they 're doing a great job , but do n't think that they 're perfect because they 're not .
( But then all software has problems from time to time of course )</tokentext>
<sentencetext>Also, when was the last time you tried to visit Facebook and it was down?Well it's been a couple of months, but it does happen.
Also one of my friends used to have a problem with her profile being unavailable for hours at a time quite frequently.Not major issues, it's true, and they're doing a great job, but don't think that they're perfect because they're not.
(But then all software has problems from time to time of course)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564</id>
	<title>Hashes are your friend</title>
	<author>KalvinB</author>
	<datestamp>1257791400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>In the example of inbox's no user has to look at another user's inbox so the first step is to simply find the current user's mail.</p><p>I typically use MD5 since it's very good at evenly distributing information.  For example stock symbols are heavily weighted to common letters so there are lots of stock symbols that start with "s".  But, if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables.  You could also just put it all in one massive table and use the first two characters in their own column with an index.  The advantage of using multiple tables is that it's easier to later split the tables onto multiple physical systems.</p><p>So MD5 the Facebook user ID.  Use the first four characters to pick the database server.  Use the next four characters to pick the table and then select from there.  By the time you're even referencing the table you're down to a handful of accounts sharing one table.  Searching the User's email is then trivial as the dataset is small.</p><p>Another example of MD5 awesomeness is finding a URL and associated data very quickly (useful for DMOZ data).  In MySQL varchars can be up to 255 characters while URLs with various parameters can be any length so you could try to index the TEXT field OR you simply hash the URL and when you want to look up a URL you search for the easily indexed hash.</p><p>Working with large sets of data is only a problem if you don't devise ways to break up the data.  If Facebook needs to search all the user's email for various stuff then they can run a script that goes through every table in every database.  They don't have to run a single query which would take forever.  With distinct sets of data you can quickly start getting results to verify your code is accurate and start digging through the results while the script continues to run.</p></htmltext>
<tokentext>In the example of inbox 's no user has to look at another user 's inbox so the first step is to simply find the current user 's mail.I typically use MD5 since it 's very good at evenly distributing information .
For example stock symbols are heavily weighted to common letters so there are lots of stock symbols that start with " s " .
But , if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables .
You could also just put it all in one massive table and use the first two characters in their own column with an index .
The advantage of using multiple tables is that it 's easier to later split the tables onto multiple physical systems.So MD5 the Facebook user ID .
Use the first four characters to pick the database server .
Use the next four characters to pick the table and then select from there .
By the time you 're even referencing the table you 're down to a handful of accounts sharing one table .
Searching the User 's email is then trivial as the dataset is small.Another example of MD5 awesomeness is finding a URL and associated data very quickly ( useful for DMOZ data ) .
In MySQL varchars can be up to 255 characters while URLs with various parameters can be any length so you could try to index the TEXT field OR you simply hash the URL and when you want to look up a URL you search for the easily indexed hash.Working with large sets of data is only a problem if you do n't devise ways to break up the data .
If Facebook needs to search all the user 's email for various stuff then they can run a script that goes through every table in every database .
They do n't have to run a single query which would take forever .
With distinct sets of data you can quickly start getting results to verify your code is accurate and start digging through the results while the script continues to run .</tokentext>
<sentencetext>In the example of inbox's no user has to look at another user's inbox so the first step is to simply find the current user's mail.I typically use MD5 since it's very good at evenly distributing information.
For example stock symbols are heavily weighted to common letters so there are lots of stock symbols that start with "s".
But, if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables.
You could also just put it all in one massive table and use the first two characters in their own column with an index.
The advantage of using multiple tables is that it's easier to later split the tables onto multiple physical systems.So MD5 the Facebook user ID.
Use the first four characters to pick the database server.
Use the next four characters to pick the table and then select from there.
By the time you're even referencing the table you're down to a handful of accounts sharing one table.
Searching the User's email is then trivial as the dataset is small.Another example of MD5 awesomeness is finding a URL and associated data very quickly (useful for DMOZ data).
In MySQL varchars can be up to 255 characters while URLs with various parameters can be any length so you could try to index the TEXT field OR you simply hash the URL and when you want to look up a URL you search for the easily indexed hash.Working with large sets of data is only a problem if you don't devise ways to break up the data.
If Facebook needs to search all the user's email for various stuff then they can run a script that goes through every table in every database.
They don't have to run a single query which would take forever.
With distinct sets of data you can quickly start getting results to verify your code is accurate and start digging through the results while the script continues to run.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044106</id>
	<title>Neo4J is really interesting here...</title>
	<author>Anonymous</author>
	<datestamp>1257857400000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>All these programmers that know how to create tables and normalize a DB but that don't really understand advanced programming techniques are cornering themselves in the "Vietnam of software development": endless OO-to-RDB plumbing.  They invent lots of tools to ease their immediate pain, without looking at the big picture: OO and RDB do not match.</p><p>You have hierarchical datas?  Then learn something new: learn what OO really is about, use an OO DB.  It has proven to be really fast.</p><p>But I don't expect this to become mainstream: most people don't understand advanced programming techniques.  They don't understand OO, they don't understand multi-threaded programming.  Hence they rely on the SQL DB to "keep things in synch" and to "organize" their data.  It kinda works, for naive stuff.</p><p>Once it comes to real amount of data, then the relational paradigm and especially the SQL implementation of that relational paradigm simply ain't cutting it anymore.</p></htmltext>
<tokentext>All these programmers that know how to create tables and normalize a DB but that do n't really understand advanced programming techniques are cornering themselves in the " Vietnam of software development " : endless OO-to-RDB plumbing .
They invent lots of tools to ease their immediate pain , without looking at the big picture : OO and RDB do not match.You have hierarchical datas ?
Then learn something new : learn what OO really is about , use an OO DB .
It has proven to be really fast.But I do n't expect this to become mainstream : most people do n't understand advanced programming techniques .
They do n't understand OO , they do n't understand multi-threaded programming .
Hence they rely on the SQL DB to " keep things in synch " and to " organize " their data .
It kinda works , for naive stuff.Once it comes to real amount of data , then the relational paradigm and especially the SQL implementation of that relational paradigm simply ai n't cutting it anymore .</tokentext>
<sentencetext>All these programmers that know how to create tables and normalize a DB but that don't really understand advanced programming techniques are cornering themselves in the "Vietnam of software development": endless OO-to-RDB plumbing.
They invent lots of tools to ease their immediate pain, without looking at the big picture: OO and RDB do not match.You have hierarchical datas?
Then learn something new: learn what OO really is about, use an OO DB.
It has proven to be really fast.But I don't expect this to become mainstream: most people don't understand advanced programming techniques.
They don't understand OO, they don't understand multi-threaded programming.
Hence they rely on the SQL DB to "keep things in synch" and to "organize" their data.
It kinda works, for naive stuff.Once it comes to real amount of data, then the relational paradigm and especially the SQL implementation of that relational paradigm simply ain't cutting it anymore.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042740</id>
	<title>Curiously spurious</title>
	<author>KeensMustard</author>
	<datestamp>1257794040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Collectively, these alternatives have become known as NoSQL databases. The fundamental problem is that relational databases cannot handle many modern workloads.</p> </div><p>I'm sceptical. Why is the problem worse now then in the past?

Relational theory in practice is abstracting the data such that a human/application can understand it as logical constructs. How the data is PHYSICALLY organised is a matter of implementation - the relational theory doesn't place any constraint (!) on how the data is organised/retrieved/updated - except that by giving a broad design pattern , duplication is minmised, and so then is processing overhead. MPP (Parallel Processing) lends itself quite neatly to any large set of data - many implementations will continue to scale linearly above the PB size (e.g Teradata).

Looks to me like a sales pitch.</p></p>
	</htmltext>
<tokentext>Collectively , these alternatives have become known as NoSQL databases .
The fundamental problem is that relational databases can not handle many modern workloads .
I 'm sceptical .
Why is the problem worse now then in the past ?
Relational theory in practice is abstracting the data such that a human/application can understand it as logical constructs .
How the data is PHYSICALLY organised is a matter of implementation - the relational theory does n't place any constraint ( !
) on how the data is organised/retrieved/updated - except that by giving a broad design pattern , duplication is minmised , and so then is processing overhead .
MPP ( Parallel Processing ) lends itself quite neatly to any large set of data - many implementations will continue to scale linearly above the PB size ( e.g Teradata ) .
Looks to me like a sales pitch .</tokentext>
<sentencetext>Collectively, these alternatives have become known as NoSQL databases.
The fundamental problem is that relational databases cannot handle many modern workloads.
I'm sceptical.
Why is the problem worse now then in the past?
Relational theory in practice is abstracting the data such that a human/application can understand it as logical constructs.
How the data is PHYSICALLY organised is a matter of implementation - the relational theory doesn't place any constraint (!
) on how the data is organised/retrieved/updated - except that by giving a broad design pattern , duplication is minmised, and so then is processing overhead.
MPP (Parallel Processing) lends itself quite neatly to any large set of data - many implementations will continue to scale linearly above the PB size (e.g Teradata).
Looks to me like a sales pitch.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044436</id>
	<title>Re:bad design</title>
	<author>Muad'Dave</author>
	<datestamp>1257861240000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p><nobr> <wbr></nobr><i>...they call themselves "No SQL" and then lash out at relational databases.</i> </p><p>Had you read the article, you would've seen that the "No" in NoSQL stands for Not Only, not No, as in none whatsoever. I welcome any and all research into better, tighter synergy between databases and object persistence.</p></htmltext>
<tokentext>...they call themselves " No SQL " and then lash out at relational databases .
Had you read the article , you would 've seen that the " No " in NoSQL stands for Not Only , not No , as in none whatsoever .
I welcome any and all research into better , tighter synergy between databases and object persistence .</tokentext>
<sentencetext> ...they call themselves "No SQL" and then lash out at relational databases.
Had you read the article, you would've seen that the "No" in NoSQL stands for Not Only, not No, as in none whatsoever.
I welcome any and all research into better, tighter synergy between databases and object persistence.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044886</id>
	<title>Re:NoSQL? That'd Be DL/I, Right?</title>
	<author>Anonymous</author>
	<datestamp>1257864960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>And for the pro-relational, nosql croud, IBM also produced Model 204 in 1968 for the NSA and it is still in use today in many intelligence and other government agencies. Uses an attribute-value structure and repeating groups in record sets, makes for extremely fast procedure based database, but again, with flexibility comes development cost....</p></htmltext>
<tokentext>And for the pro-relational , nosql croud , IBM also produced Model 204 in 1968 for the NSA and it is still in use today in many intelligence and other government agencies .
Uses an attribute-value structure and repeating groups in record sets , makes for extremely fast procedure based database , but again , with flexibility comes development cost... .</tokentext>
<sentencetext>And for the pro-relational, nosql croud, IBM also produced Model 204 in 1968 for the NSA and it is still in use today in many intelligence and other government agencies.
Uses an attribute-value structure and repeating groups in record sets, makes for extremely fast procedure based database, but again, with flexibility comes development cost....</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053302</id>
	<title>Re:NoSQL? That'd Be DL/I, Right?</title>
	<author>Anonymous</author>
	<datestamp>1257855960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>By the way, you can do SQL queries against _that_ database<nobr> <wbr></nobr>;)</p></htmltext>
<tokentext>By the way , you can do SQL queries against _that_ database ; )</tokentext>
<sentencetext>By the way, you can do SQL queries against _that_ database ;)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042924</id>
	<title>Re:bad design</title>
	<author>syousef</author>
	<datestamp>1257796320000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p><i>Ever heard of bloom filters? Sharding? Indexes? </i></p><p>What does World of Warcraft have to do with it?</p></htmltext>
<tokentext>Ever heard of bloom filters ?
Sharding ? Indexes ?
What does World of Warcraft have to do with it ?</tokentext>
<sentencetext>Ever heard of bloom filters?
Sharding? Indexes?
What does World of Warcraft have to do with it?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042656</id>
	<title>Everything old is new again</title>
	<author>QuoteMstr</author>
	<datestamp>1257792660000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>We didn't start with relationship databases. RDBMSes were responses to the seductive but unmanageable <a href="http://en.wikipedia.org/wiki/Navigational_database" title="wikipedia.org">navigational databases</a> [wikipedia.org] that preceded them. There were good reasons for moving to relational databases, and those reasons are still valid today.</p><p>Computer Science doesn't change because we're writing in Javascript now instead of PL/1.</p></htmltext>
<tokentext>We did n't start with relationship databases .
RDBMSes were responses to the seductive but unmanageable navigational databases [ wikipedia.org ] that preceded them .
There were good reasons for moving to relational databases , and those reasons are still valid today.Computer Science does n't change because we 're writing in Javascript now instead of PL/1 .</tokentext>
<sentencetext>We didn't start with relationship databases.
RDBMSes were responses to the seductive but unmanageable navigational databases [wikipedia.org] that preceded them.
There were good reasons for moving to relational databases, and those reasons are still valid today.Computer Science doesn't change because we're writing in Javascript now instead of PL/1.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044348</id>
	<title>Re:And I am missing it greatly on Linux</title>
	<author>mugurel</author>
	<datestamp>1257860400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>most (95%) of my queries are "This table/index. Number 5 <i>please</i>."</p></div><p>Admirable! Despite the strong desire for efficiency, you still have the prudence to phrase you queries <i>politely</i>.</p></p>
	</htmltext>
<tokentext>most ( 95 % ) of my queries are " This table/index .
Number 5 please. " Admirable !
Despite the strong desire for efficiency , you still have the prudence to phrase you queries politely .</tokentext>
<sentencetext>most (95%) of my queries are "This table/index.
Number 5 please."Admirable!
Despite the strong desire for efficiency, you still have the prudence to phrase you queries politely.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043874</id>
	<title>One?</title>
	<author>Chapter80</author>
	<datestamp>1257854400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>E-Mail servers associate data with only <b>one</b> index: the e-mail address.</p></div><p>...Valid points, except for your use of the word "one".  My email can be retrieved by my email address, but also selected by the folder that it's in, sorted by sender, subject, date or priority, and searched by keyword.</p><p>There are only a couple of handfuls of thing that need to be indexed, but certainly more than 1.</p></p>
	</htmltext>
<tokentext>E-Mail servers associate data with only one index : the e-mail address....Valid points , except for your use of the word " one " .
My email can be retrieved by my email address , but also selected by the folder that it 's in , sorted by sender , subject , date or priority , and searched by keyword.There are only a couple of handfuls of thing that need to be indexed , but certainly more than 1 .</tokentext>
<sentencetext>E-Mail servers associate data with only one index: the e-mail address....Valid points, except for your use of the word "one".
My email can be retrieved by my email address, but also selected by the folder that it's in, sorted by sender, subject, date or priority, and searched by keyword.There are only a couple of handfuls of thing that need to be indexed, but certainly more than 1.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042644</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042982</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>TubeSteak</author>
	<datestamp>1257883740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>Let's fiddle with and stretch RDBMS before outright tossing them.</p></div><p>This isn't "it ain't broke, don't fix it"<br>Instead we're dealing with "I have a hammer, so every problem looks like a nail"</p><p>The desire to "fiddle with and stretch" software instead of sinking dollars into something new is<br>part of the reason we have a clusterfark of decades old technologies &amp; hardware that won't go away.<br>Sometimes you have to accept that a hammer isn't the right tool for the job.</p></p>
	</htmltext>
<tokentext>Let 's fiddle with and stretch RDBMS before outright tossing them.This is n't " it ai n't broke , do n't fix it " Instead we 're dealing with " I have a hammer , so every problem looks like a nail " The desire to " fiddle with and stretch " software instead of sinking dollars into something new ispart of the reason we have a clusterfark of decades old technologies &amp; hardware that wo n't go away.Sometimes you have to accept that a hammer is n't the right tool for the job .</tokentext>
<sentencetext>Let's fiddle with and stretch RDBMS before outright tossing them.This isn't "it ain't broke, don't fix it"Instead we're dealing with "I have a hammer, so every problem looks like a nail"The desire to "fiddle with and stretch" software instead of sinking dollars into something new ispart of the reason we have a clusterfark of decades old technologies &amp; hardware that won't go away.Sometimes you have to accept that a hammer isn't the right tool for the job.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045782</id>
	<title>NoSQL? How about...</title>
	<author>Anonymous</author>
	<datestamp>1257869580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let's call it Nazgul instead? That's how I pronounce NoSQL anyway<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>Let 's call it Nazgul instead ?
That 's how I pronounce NoSQL anyway : )</tokentext>
<sentencetext>Let's call it Nazgul instead?
That's how I pronounce NoSQL anyway :)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042772</id>
	<title>10 years ago, they had the same problem</title>
	<author>johnlcallaway</author>
	<datestamp>1257794340000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>I was an admin on a system that spread the data across 10 database servers.  Each server had a complete set of some data, like accounts, but the system was designed so that ranges of accounts stored their transaction type data a specific server, and each server held about the same number of accounts and transactions.  As data came in, it was temporarily housed on the incoming server until a background process picked it up and moved it to the 'correct' one. This is a very simplistic view, but the reality was that it worked quite well. Occasionally, there was a re-balancing that had to be done. But it was very scalable. The incoming data wasn't so time sensitive that if it took a few hours to get moved, everything was still OK. When an 'online' session needed data, it knew which server to connect to to get it. Processing was done overnight on each server, then summarized and combined as needed.
<br> <br>
So yes<nobr> <wbr></nobr>..<nobr> <wbr></nobr>.people have been coming up with innovative ways to solve these problems for a very long time.
<br> <br>
And they will continue to do so.</htmltext>
<tokentext>I was an admin on a system that spread the data across 10 database servers .
Each server had a complete set of some data , like accounts , but the system was designed so that ranges of accounts stored their transaction type data a specific server , and each server held about the same number of accounts and transactions .
As data came in , it was temporarily housed on the incoming server until a background process picked it up and moved it to the 'correct ' one .
This is a very simplistic view , but the reality was that it worked quite well .
Occasionally , there was a re-balancing that had to be done .
But it was very scalable .
The incoming data was n't so time sensitive that if it took a few hours to get moved , everything was still OK .
When an 'online ' session needed data , it knew which server to connect to to get it .
Processing was done overnight on each server , then summarized and combined as needed .
So yes .. .people have been coming up with innovative ways to solve these problems for a very long time .
And they will continue to do so .</tokentext>
<sentencetext>I was an admin on a system that spread the data across 10 database servers.
Each server had a complete set of some data, like accounts, but the system was designed so that ranges of accounts stored their transaction type data on a specific server, and each server held about the same number of accounts and transactions.
As data came in, it was temporarily housed on the incoming server until a background process picked it up and moved it to the 'correct' one.
This is a very simplistic view, but the reality was that it worked quite well.
Occasionally, there was a re-balancing that had to be done.
But it was very scalable.
The incoming data wasn't so time sensitive that if it took a few hours to get moved, everything was still OK.
When an 'online' session needed data, it knew which server to connect to to get it.
Processing was done overnight on each server, then summarized and combined as needed.
So yes .. .people have been coming up with innovative ways to solve these problems for a very long time.
And they will continue to do so.</sentencetext>
</comment>
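The partitioning scheme the comment above describes can be sketched in a few lines. This is a hedged illustration only: the range size, server names, and routing function are hypothetical, not taken from the actual system.

```python
# Hypothetical sketch of range-based sharding as described above:
# account IDs fall into fixed contiguous ranges, one range per server.
RANGE_SIZE = 100_000
SHARDS = ["db1", "db2", "db3"]  # one server per account range

def shard_for_account(account_id):
    """Map an account ID to the server holding its transaction data."""
    return SHARDS[(account_id // RANGE_SIZE) % len(SHARDS)]

# Re-balancing then amounts to moving a range and updating this map.
```

An "online" session would call something like `shard_for_account` to learn which server to connect to, just as the comment describes.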
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045370</id>
	<title>CODASYL Hierarchical Databases are faster</title>
	<author>briddle</author>
	<datestamp>1257867600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>CODASYL Hierarchical Databases are faster for large complex databases.  I've supported extremely large databases and user bases with 3 second or better end-to-end response times for over 300,000 real-time customer service rep users with such software.   These databases allow precise physical positioning, including the ability to group related child record rows on the same physical page.  One I/O can retrieve the entire set.  They also support hash or other custom indexing that directly yields the physical page address instead of wading thru relational index pages to get there.

Tool support is not as good and it takes someone who understands them to get the best results.  Functionality such as producing report output is more work.  But they work great on large datasets.</htmltext>
<tokentext>CODASYL Hierarchical Databases are faster for large complex databases .
I 've supported extremely large databases and user bases with 3 second or better end-to-end response times for over 300,000 real-time customer service rep users with such software .
These databases allow precise physical positioning , including the ability to group related child record rows on the same physical page .
One I/O can retrieve the entire set .
They also support hash or other custom indexing that directly yields the physical page address instead of wading thru relational index pages to get there .
Tool support is not as good and it takes someone who understands them to get the best results .
Functionality such as producing report output is more work .
But they work great on large datasets .</tokentext>
<sentencetext>CODASYL Hierarchical Databases are faster for large complex databases.
I've supported extremely large databases and user bases with 3 second or better end-to-end response times for over 300,000 real-time customer service rep users with such software.
These databases allow precise physical positioning, including the ability to group related child record rows on the same physical page.
One I/O can retrieve the entire set.
They also support hash or other custom indexing that directly yields the physical page address instead of wading thru relational index pages to get there.
Tool support is not as good and it takes someone who understands them to get the best results.
Functionality such as producing report output is more work.
But they work great on large datasets.</sentencetext>
</comment>
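The hashed indexing the comment mentions can be illustrated with a tiny sketch. The page count and function name below are hypothetical; the point is only that the key computes the physical page address directly, with no index-page traversal.

```python
# Illustrative (hypothetical) sketch of hashed placement: the record
# key hashes straight to a physical page number, so no index pages are
# read on the way to the data.
NUM_PAGES = 1024

def page_for_key(key):
    """Compute the physical page address directly from the record key."""
    return hash(key) % NUM_PAGES

# Child records clustered on that same page can then be fetched with a
# single I/O, as the comment notes.
```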
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30050496</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257843660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Wow, you can insult people with thesaurus words like puerile, you must be better than them.  Way to really engage in a discussion.</p></htmltext>
<tokentext>Wow , you can insult people with thesaurus words like puerile , you must be better than them .
Way to really engage in a discussion .</tokentext>
<sentencetext>Wow, you can insult people with thesaurus words like puerile, you must be better than them.
Way to really engage in a discussion.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042786</id>
	<title>Oh no...</title>
	<author>Anonymous</author>
	<datestamp>1257794460000</datestamp>
	<modclass>Funny</modclass>
	<modscore>0</modscore>
	<htmltext><p>I just sharded</p></htmltext>
<tokentext>I just sharded</tokentext>
<sentencetext>I just sharded</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</id>
	<title>Why worry?</title>
	<author>Anonymous</author>
	<datestamp>1257790440000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext>Microsoft Access is here!</htmltext>
<tokentext>Microsoft Access is here !</tokentext>
<sentencetext>Microsoft Access is here!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042562</id>
	<title>Starting to love the idea</title>
	<author>Anonymous</author>
	<datestamp>1257791400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>I'm a huge PostgreSQL fan and took classes in formal database theory in college.  I'm saying this as someone who understands and thoroughly appreciates relational databases: I'm starting to love schema-less systems.  I've only been playing with CouchDB for a few weeks but can certainly see what such stores bring to the table.  Specifically, a lot of the data I've stored over the years doesn't neatly map to a predefined tuple, and while one-to-one tables can go a long way toward addressing that, they're certainly not the most elegant or efficient or convenient representation of arbitrary data.</p><p>I'm certainly not going to stop using an RDBMS for most purposes, but neither am I going to waste a lot of time trying to shoehorn an ever-changing blob into one.  Each tool has its place and I'm excited to see what niche this ecosystem evolves to fill.</p></htmltext>
<tokentext>I 'm a huge PostgreSQL fan and took classes in formal database theory in college .
I 'm saying this as someone who understands and thoroughly appreciates relational databases : I 'm starting to love schema-less systems .
I 've only been playing with CouchDB for a few weeks but can certainly see what such stores bring to the table .
Specifically , a lot of the data I 've stored over the years does n't neatly map to a predefined tuple , and while one-to-one tables can go a long way toward addressing that , they 're certainly not the most elegant or efficient or convenient representation of arbitrary data .
I 'm certainly not going to stop using an RDBMS for most purposes , but neither am I going to waste a lot of time trying to shoehorn an everchanging blob into one .
Each tool has its place and I 'm excited to see what niche this ecosystem evolves to fill .</tokentext>
<sentencetext>I'm a huge PostgreSQL fan and took classes in formal database theory in college.
I'm saying this as someone who understands and thoroughly appreciates relational databases: I'm starting to love schema-less systems.
I've only been playing with CouchDB for a few weeks but can certainly see what such stores bring to the table.
Specifically, a lot of the data I've stored over the years doesn't neatly map to a predefined tuple, and while one-to-one tables can go a long way toward addressing that, they're certainly not the most elegant or efficient or convenient representation of arbitrary data.
I'm certainly not going to stop using an RDBMS for most purposes, but neither am I going to waste a lot of time trying to shoehorn an everchanging blob into one.
Each tool has its place and I'm excited to see what niche this ecosystem evolves to fill.</sentencetext>
</comment>
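The schema-less appeal this commenter describes can be sketched without any particular document store; the store, field names, and values below are hypothetical, not CouchDB's API.

```python
# Hypothetical sketch of the schema-less appeal: each document carries
# whatever fields it happens to have, so a new field needs no migration.
docs = []

def save(doc):
    docs.append(dict(doc))  # arbitrary keys; no predefined tuple

save({"name": "widget", "price": 9.99})
save({"name": "gadget", "price": 4.50, "color": "red"})  # new field, no ALTER TABLE
```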
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30059152</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257085620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>When was the last time you tried to use Facebook or Facebook chat and didn't get failed transport requests, unsent chat messages, unavailable photos, or random blank pages?</p></div><p>Let's see...yes, never. So far.</p><p>Anecdotal "evidence" isn't.</p><p>Note: In neither direction. That it has never failed for me doesn't mean it hasn't failed for you. But the point for <b>you</b> to realize is that just because it has failed for you does not mean it necessarily has had to fail for anyone else.</p>
	</htmltext>
<tokentext>When was the last time you tried to use Facebook or Facebook chat and did n't get failed transport requests , unsent chat messages , unavailable photos , or random blank pages ?
Let 's see...yes , never .
So far .
Anecdotal " evidence " is n't .
Note : In neither direction .
That it has never failed for me does n't mean it has n't failed for you .
But the point for you to realize is that just because it has failed for you does not mean it necessarily has had to fail for anyone else .</tokentext>
<sentencetext>When was the last time you tried to use Facebook or Facebook chat and didn't get failed transport requests, unsent chat messages, unavailable photos, or random blank pages?
Let's see...yes, never.
So far.
Anecdotal "evidence" isn't.
Note: In neither direction.
That it has never failed for me doesn't mean it hasn't failed for you.
But the point for you to realize is that just because it has failed for you does not mean it necessarily has had to fail for anyone else.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>sco08y</author>
	<datestamp>1257858840000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p><i>However, the "rigid schema" claim bothers me. RDBMS can be built that have a very dynamic flavor to them. For example, treat each row as a map (associative array).</i></p><p>You described an entity attribute value model, which winds up reinventing half the DBMS, poorly. Don't worry, *everyone* does one once until they realize it's a bad idea.</p><p><i>Constraints, such as "required" or "number" can incrementally be added as the schema becomes solidified.</i></p><p>A "rigid" schema is preventing a ton of totally redundant code being written on the app side. All those constraints wind up in the schema because your UI designer doesn't want to consider that Mary might have 5 addresses or 6 mothers or work 7 jobs simultaneously. And your UI tester doesn't want to test an exploding combinatorial number of possibilities.</p><p>I'd like to see, however, a decent type system, proper logical / physical separation, etc.</p><p><i>Maybe also overhaul or enhance SQL. It's a bit long in the tooth.</i></p><p>I'm <a href="http://github.com/scooby/gybe_ls" title="github.com">starting from scratch.</a> [github.com] (Currently I'm slowly retyping about 40 pages into LaTeX...)</p></htmltext>
<tokentext>However , the " rigid schema " claim bothers me .
RDBMS can be built that have a very dynamic flavor to them .
For example , treat each row as a map ( associative array ) .
You described an entity attribute value model , which winds up reinventing half the DBMS , poorly .
Do n't worry , * everyone * does one once until they realize it 's a bad idea .
Constraints , such as " required " or " number " can incrementally be added as the schema becomes solidified .
A " rigid " schema is preventing a ton of totally redundant code being written on the app side .
All those constraints wind up in the schema because your UI designer does n't want to consider that Mary might have 5 addresses or 6 mothers or work 7 jobs simultaneously .
And your UI tester does n't want to test an exploding combinatorial number of possibilities .
I 'd like to see , however , a decent type system , proper logical / physical separation , etc .
Maybe also overhaul or enhance SQL .
It 's a bit long in the tooth .
I 'm starting from scratch .
[ github.com ] ( Currently I 'm slowly retyping about 40 pages into Latex... )</tokentext>
<sentencetext>However, the "rigid schema" claim bothers me.
RDBMS can be built that have a very dynamic flavor to them.
For example, treat each row as a map (associative array).
You described an entity attribute value model, which winds up reinventing half the DBMS, poorly.
Don't worry, *everyone* does one once until they realize it's a bad idea.
Constraints, such as "required" or "number" can incrementally be added as the schema becomes solidified.
A "rigid" schema is preventing a ton of totally redundant code being written on the app side.
All those constraints wind up in the schema because your UI designer doesn't want to consider that Mary might have 5 addresses or 6 mothers or work 7 jobs simultaneously.
And your UI tester doesn't want to test an exploding combinatorial number of possibilities.
I'd like to see, however, a decent type system, proper logical / physical separation, etc.
Maybe also overhaul or enhance SQL.
It's a bit long in the tooth.
I'm starting from scratch.
[github.com] (Currently I'm slowly retyping about 40 pages into Latex...)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</parent>
</comment>
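The entity-attribute-value pattern being criticized above is easy to sketch, and the sketch shows the objection: every lookup the DBMS would normally do has to be rebuilt by hand. The entities, attributes, and helper below are hypothetical illustration only.

```python
# Hypothetical sketch of the entity-attribute-value ("row as map")
# model: every fact is an (entity, attribute, value) triple, so new
# "columns" appear at runtime -- but each lookup is reimplemented by hand.
rows = [
    (1, "name", "Mary"),
    (1, "city", "Springfield"),
    (2, "name", "Bob"),
]

def get(entity, attribute):
    """Re-implement, poorly, what a simple SELECT would do."""
    for e, a, v in rows:
        if e == entity and a == attribute:
            return v
    return None  # a missing "column" is just Null/empty
```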
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042674</id>
	<title>Re:hmm</title>
	<author>Anonymous</author>
	<datestamp>1257793140000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Depends. We've been using Netezza with ~100T of data, and... well... it takes seconds to search tables that are 30T in size. I'd imagine Teradata, greenplum and other parallel db's get similar performance---all while using standard SQL with all the bells and whistles you'd normally expect Oracle SQL to have (windowing functions, etc.).</p></htmltext>
<tokentext>Depends .
We 've been using Netezza with ~ 100T of data , and... well... it takes seconds to search tables that are 30T in size .
I 'd imagine Teradata , greenplum and other parallel db 's get similar performance---all while using standard SQL with all the bells and whistles you 'd normally expect Oracle SQL to have ( windowing functions , etc .
) .</tokentext>
<sentencetext>Depends.
We've been using Netezza with ~100T of data, and... well... it takes seconds to search tables that are 30T in size.
I'd imagine Teradata, greenplum and other parallel db's get similar performance---all while using standard SQL with all the bells and whistles you'd normally expect Oracle SQL to have (windowing functions, etc.
).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048414</id>
	<title>Oh yeah, relational DBs so aren't up to the task</title>
	<author>Anonymous</author>
	<datestamp>1257879000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>50TB of data? OMG! WTF! MOREACRONYMSINCAPS! With an index and an average allocation unit of 1kB and no caching whatsoever, that could be, like, up to almost 37 seeks!!! OH NOES! DO WE HAVE ENOUGH POWER?!?!?</p></htmltext>
<tokentext>50TB of data ?
OMG ! WTF !
MOREACRONYMSINCAPS ! With an index and an average allocation unit of 1kB and no caching whatsoever , that could be , like , up to almost 37 seeks ! ! !
OH NOES !
DO WE HAVE ENOUGH POWER ? ! ? !
?</tokentext>
<sentencetext>50TB of data?
OMG! WTF!
MOREACRONYMSINCAPS! With an index and an average allocation unit of 1kB and no caching whatsoever, that could be, like, up to almost 37 seeks!!!
OH NOES!
DO WE HAVE ENOUGH POWER?!?!
?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044570</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>pla</author>
	<datestamp>1257862620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><i>RDBMS can be built that have a very dynamic flavor to them. For example, treat each row as a map (associative array). Non-existent columns in any given row are treated as Null/empty instead of an error. Perhaps tables can also be created just by inserting a row into the (new) target table. No need for explicit schema management.</i> <br>
<br>
Aaaaaaaand, congratulations, you've described "fixing" the problem of schema flexibility by using an RDBMS as a non-relational flat hashed memory storage area, with at <i>least</i> three layers of indirection (not even counting underlying complexity of the DB engine itself).<br>
<br>
Aside from why the hell you would <b>ever</b> do this in favor of, y'know, just using a flat block of <i>real</i> memory (since you've given the application the fun task of memory management <b>below</b> what the OS usually handles, with all the overhead of
framing each read or write as an SQL query)... Well, no.  I <i>have</i> no aside, just what I've written.<br>
<br>
Sorry, I'll grant that you have a clever solution to a problem, but a far more effective solution would throw away the problem itself and not try to frame <b>everything</b> in terms of DBM - Kinda like Amazon did.</htmltext>
<tokentext>RDBMS can be built that have a very dynamic flavor to them .
For example , treat each row as a map ( associative array ) .
Non-existent columns in any given row are treated as Null/empty instead of an error .
Perhaps tables can also be created just by inserting a row into the ( new ) target table .
No need for explicit schema management .
Aaaaaaaand , congratulations , you 've described " fixing " the problem of schema flexibility by using an RDBMS as a non-relational flat hashed memory storage area , with at least three layers of indirection ( not even counting underlying complexity of the DB engine itself ) .
Aside from why the hell you would ever do this in favor of , y'know , just using a flat block of real memory ( since you 've given the application the fun task of memory management below what the OS usually handles , with all the overhead of framing each read or write as an SQL query ) ... Well , no .
I have no aside , just what I 've written .
Sorry , I 'll grant that you have a clever solution to a problem , but a far more effective solution would throw away the problem itself and not try to frame everything in terms of DBM - Kinda like Amazon did .</tokentext>
<sentencetext>RDBMS can be built that have a very dynamic flavor to them.
For example, treat each row as a map (associative array).
Non-existent columns in any given row are treated as Null/empty instead of an error.
Perhaps tables can also be created just by inserting a row into the (new) target table.
No need for explicit schema management.
Aaaaaaaand, congratulations, you've described "fixing" the problem of schema flexibility by using an RDBMS as a non-relational flat hashed memory storage area, with at least three layers of indirection (not even counting underlying complexity of the DB engine itself).
Aside from why the hell you would ever do this in favor of, y'know, just using a flat block of real memory (since you've given the application the fun task of memory management below what the OS usually handles, with all the overhead of
framing each read or write as an SQL query)... Well, no.
I have no aside, just what I've written.
Sorry, I'll grant that you have a clever solution to a problem, but a far more effective solution would throw away the problem itself and not try to frame everything in terms of DBM - Kinda like Amazon did.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043158</id>
	<title>This again</title>
	<author>Twillerror</author>
	<datestamp>1257886560000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Wow a "object oriented" database discussion again. I've never read one of these :P I've only been doing this 15 years and I've lost count of these talks a long time ago.</p><p>What is the difference between schema less and schema rigid anyway? I don't see what that has to do with performance. The real issue is uptime and transaction support. People want to add a column or index without taking the system down. That is different than dealing with PBs of data. Most table structures can easily deal with that much data.</p><p>If you have a DB that is big you have lots of outs. Pay...get the Enterprise version of whatever. Break it into many DBs/tables and merge together. Archive. Archive I bet will get most people by. Does eBay really need all that bidding info for items over a few weeks old...only for analysis maybe. Move that old stale data out of the active, heavily hit data tiers.</p><p>The fact remains that MySQL should be able to scale to TBs of data. The fact that it can't is a failure of the product. All the others have been able to for a while. Why can't it...I don't know...the fact that it uses a F'in different file for each index on a table. If you don't understand how old school that is, start using Paradox. Just because it is open source doesn't mean it has to be so damn out of date. Please for the love of god save multiple tables/indexes in the same pre-sized file...god.</p><p>Google has all the power to go and use something different. Google gets to cheat. Google is a collection of pretty static data. They scan the internet a lot, but imagine if every time you did a search Google had to scan every web page on the planet, index them, and then give you search results. That would be impractical for sure. So for now they just store big collections of blobs and a big fast index for searching keywords and links to pages. Impressive nonetheless, but it's not like your typical app.
GMail is...funny that it is the one system they've had problems with. Even then EMAIL DOESN'T CHANGE. It's user specific, but it's still f'in static. GoogleTastic if you ask me.</p><p>The fact is people are using RDBMSs right now to solve real world problems. Some startup is finding a way to tweak MySQL to do something cool and then posting it on a blog...then all of a sudden RDBMS is dead. RDBMS is fine, it will be fine for at least 10 years if not longer. In that time it will evolve as well so that it will be around for even longer. MySQL in 5 years will have online index addition, performance-hitless online column addition, partitioning, geo indexing, XML columns, BigASS table support, Oracle RAC-like support, and a thousand other features that some RDBMSs have today and some will not see for even longer. Then developers that spent all that cash developing custom shit will revert and post comments like this one.</p><p>That's the way it goes in software development. The middle tier gets bigger, gets inept, custom shit comes out, it gets integrated into the middle tier shit....continue;</p><p>Instead of pronouncing death, start talking about how dated a two-dimensional result set is. JOINs should return N-dimensional result sets similar to XML with buttloads of metadata. ODBC/JDBC are dated...so update them.</p><p>select u.login, ul.when from users u join user_logins ul as logins.login ON ul.user_id = u.user_id where u.name = 'me' should equal something like a nested XML packet instead of duplicated crap when there is more than one user_logins.</p></htmltext>
<tokentext>Wow a " object oriented " database discussion again .
I 've never read one of these : P I 've only been doing this 15 years and I 've lost count of these talks a long time ago .
What is the difference between schema less and schema rigid anyways .
I do n't see what that has anything to do with performance .
The real issue is uptime and transaction support .
People want to add a column or index without taking the system down .
That is different then dealing with PBs of data .
Most table structures can easily deal with that much data .
If you have a DB that is big you have lots of outs .
Pay...get Enterprise version of whatever .
Break it into many DB/tables and merge together .
Archive .
Archive I bet will get most people by .
Does eBay really need all that bidding info for items over a few weeks old...only for analysis maybe .
Move that old stale data out of the active heavily hit data tiers .
The fact remains that MySQL should be able to scale to TBs of data .
The fact that it ca n't is a failure of the product .
All the others have been for a while .
Why ca n't it...I do n't know...the fact that it uses a F'in different file for each index on a table .
If you do n't understand how old school that is start using Paradox .
Just because it is open source does n't mean it has to be so damn out of date .
Please for the love of god save multiple tables/indexes in the same pre sized file...god .
Google has all the power to go and use something different .
Google gets to cheat .
Google is a collection of pretty static data .
They scan the internet a lot , but imagine if every time you did a search Google had to scan every web page on the planet , index them , and then give you search results .
That would be impractical for sure .
So for now they just store big collections of blobs and a big fast index for searching keywords and links to pages .
Impressive none the less , but it 's not like your typical app .
GMail is...funny that it is one system they 've had problem with .
Even then EMAIL DOES N'T CHANGE .
It 's user specific , but it 's still f'in static .
GoogleTastic if you ask me .
The fact is people are using RDBMS right now to solve real world problems .
Some start up is finding a way to tweek MySQL to do something cool and then posting it on a blog...then all of the sudden RDBMS is dead .
RDBMS is fine , it will be fine for at least 10 years if not longer .
In that time it will evolve as well so that it will be around for even longer .
MySQL in 5 years will have online index addition , performance hitless online column addition , partitioning , geo indexing , XML columns , BigASS table support , Oracle RAC like support , and a thousand other features that some RDBMSs have today and some will not see for even longer .
Then developers that spent all that cash developing custom shit will revert and post comments like this one .
That 's the way it goes in software development .
The middle tier gets bigger , gets inept , custom shit comes out , it gets integrated into the middle tier shit....continue ;
Instead of pronouncing death start talking about how dated a 2 dimensional result set is .
JOINs should return N dimension result sets similar to XML with butt loads of meta data .
ODBC/JDBC are dated...so updated them .
select u.login , ul.when from users u join user_logins ul as logins.login ON ul.user_id = u.user_id where u.name = 'me ' should equal something like a nested XML packet instead of duplicated crap when there is more then one user_logins .</tokentext>
<sentencetext>Wow a "object oriented" database discussion again.
I've never read one of these :P I've only been doing this 15 years and I've lost count of these talks a long time ago.
What is the difference between schema less and schema rigid anyways.
I don't see what that has anything to do with performance.
The real issue is uptime and transaction support.
People want to add a column or index without taking the system down.
That is different then dealing with PBs of data.
Most table structures can easily deal with that much data.
If you have a DB that is big you have lots of outs.
Pay...get Enterprise version of whatever.
Break it into many DB/tables and merge together.
Archive.
Archive I bet will get most people by.
Does eBay really need all that bidding info for items over a few weeks old...only for analysis maybe.
Move that old stale data out of the active heavily hit data tiers.
The fact remains that MySQL should be able to scale to TBs of data.
The fact that it can't is a failure of the product.
All the others have been for a while.
Why can't it...I don't know...the fact that it uses a F'in different file for each index on a table.
If you don't understand how old school that is start using Paradox.
Just because it is open source doesn't mean it has to be so damn out of date.
Please for the love of god save multiple tables/indexes in the same pre sized file...god.
Google has all the power to go and use something different.
Google gets to cheat.
Google is a collection of pretty static data.
They scan the internet a lot, but imagine if every time you did a search Google had to scan every web page on the planet, index them, and then give you search results.
That would be impractical for sure.
So for now they just store big collections of blobs and a big fast index for searching keywords and links to pages.
Impressive none the less, but it's not like your typical app.
GMail is...funny that it is one system they've had problem with.
Even then EMAIL DOESN'T CHANGE.
It's user specific, but it's still f'in static.
GoogleTastic if you ask me.
The fact is people are using RDBMS right now to solve real world problems.
Some start up is finding a way to tweek MySQL to do something cool and then posting it on a blog...then all of the sudden RDBMS is dead.
RDBMS is fine, it will be fine for at least 10 years if not longer.
In that time it will evolve as well so that it will be around for even longer.
MySQL in 5 years will have online index addition, performance hitless online column addition, partitioning, geo indexing, XML columns, BigASS table support, Oracle RAC like support, and a thousand other features that some RDBMSs have today and some will not see for even longer.
Then developers that spent all that cash developing custom shit will revert and post comments like this one.
That's the way it goes in software development.
The middle tier gets bigger, gets inept, custom shit comes out, it gets integrated into the middle tier shit....continue;
Instead of pronouncing death start talking about how dated a 2 dimensional result set is.
JOINs should return N dimension result sets similar to XML with butt loads of meta data.
ODBC/JDBC are dated...so updated them.
select u.login, ul.when from users u join user_logins ul as logins.login ON ul.user_id = u.user_id where u.name = 'me' should equal something like a nested XML packet instead of duplicated crap when there is more then one user_logins.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049716</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257883920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>the "No" in NoSQL stands for Not Only, not No, as in none whatsoever</p></div></p><p>That's possibly the stupidest thing I've heard all day.</p>
	</htmltext>
<tokenext>the " No " in NoSQL stands for Not Only , not No , as in none whatsoeverThat 's possibly the stupidest thing I 've heard all day .</tokentext>
<sentencetext>the "No" in NoSQL stands for Not Only, not No, as in none whatsoever
That's possibly the stupidest thing I've heard all day.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044436</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044132</id>
	<title>RDBMS do the job</title>
	<author>Stormcrow309</author>
	<datestamp>1257857760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Database size is usually not an issue for modern RDBMSs, such as Microsoft SQL, Sybase ASE, Oracle, or IBM's DB2.  I am running an ERP on Sybase with 3 TB worth of data, a datamart on Microsoft with 5 TB, a Patient Record System on Microsoft with 20 TB, an HR system with 2 TB, and a Patient Accounting system on Oracle with 8 TB of data. All of these systems talk with at least one other system, usually with the assistance of SSIS (thank god for SSIS; our ETL is heavy lifting, approx. 5 TB a night of incrementals).  With enough server hardware, we can scale up to very large levels easily.  We forecast our data size needs out for the next three years and have been very accurate, not running across SAN issues.</p><p>The only systems we have had issues with in the area of data size are MySQL and Informix.</p></htmltext>
<tokenext>Database size is usually not an issue for modern RDBMS , such as Microsoft SQL , Sybase ASE , Oracle , or IBM 's DB2 .
I am running an ERP on Sybase with 3 TB worth of data , a datamart on Microsoft with 5 TB , a Patient Record System on Microsoft with 20 TB , a HR system with 2 TB , and a Patient Accounting system on Oracle with 8 TB of data .
All of these systems talk with at least one other system , usually with the assistance of SSIS ( Thank god for SSIS , our ETL is heavy lifting , approx .
5 TB a night of incrementals ) .
With enough server hardware , we can scale up to very large levels easily .
We forcast out our data size needs out for the next three years and have been very accurate , not running across SAN issues.Only systems we have had issues with in the area of data size is MySQL and Informix .</tokentext>
<sentencetext>Database size is usually not an issue for modern RDBMS, such as Microsoft SQL, Sybase ASE, Oracle, or IBM's DB2.
I am running an ERP on Sybase with 3 TB worth of data, a datamart on Microsoft with 5 TB, a Patient Record System on Microsoft with 20 TB, a HR system with 2 TB, and a Patient Accounting system on Oracle with 8 TB of data.
All of these systems talk with at least one other system, usually with the assistance of SSIS (thank god for SSIS; our ETL is heavy lifting, approx. 5 TB a night of incrementals).
With enough server hardware, we can scale up to very large levels easily.
We forecast our data size needs out for the next three years and have been very accurate, not running across SAN issues.
The only systems we have had issues with in the area of data size are MySQL and Informix.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046550</id>
	<title>Re:bad design</title>
	<author>bartoku</author>
	<datestamp>1257872820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I want to take a crack at this; I know enough about databases to make a PHP MySQL web page is all! How can there be no clear segregation? The data is about me, about someone else, or about a group (a group being a single entity on Facebook, similar to an individual), and the data either came from me, someone else, or a group.
<br> <br>
My inbox on Facebook is no different from my email inbox: all the messages are to me, and just like my email inbox it should reside in one searchable database restricted to a certain manageable size, like every other email inbox in the world. My inbox can be on one server and my friend's on another; the whole thing should be segmented by user. Same with my outbox: just like email, I retain a copy of sent messages in my personal Facebook database. Wall posts and pictures should work the same way; anything that shows up in my profile should be copied to my personal tables and database.
<br> <br>
Now the trickiest part is when another user posts a picture and tags me in it. That picture reference should then be duplicated and placed in my picture database. This is in contrast to retaining one copy in a Facebook-wide database and searching that database for pictures of me each time someone wants to bring them up. The picture data storage can be spread across multiple servers; when someone views the pictures section of my profile, there is simply a dump of the references in my picture database, and the pictures themselves are retrieved from storage via those links and displayed. When the owner of a picture deletes it, or the owner untags me, or I untag myself, the reference would simply be deleted from my picture reference database.
<br> <br>
I am going to go watch Robert Johnson's ACM talk, because this seems easy and the segregation is as clear as good ole email. From what I can see, each individual's profile is fairly small in database terms and never viewed all at once. A profile is viewed by wall, inbox, pictures (and not even all the pictures at once), info...
<br> <br>
I am an amateur at this at best, but with my hack MySQL skills I do not see it as a big deal to create an easily scalable Facebook segmented by users.</htmltext>
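The per-user segmentation described above amounts to denormalizing: when someone tags me, the reference is copied into my own shard, and untagging only touches my shard. A toy sketch under those assumptions (all names, `shards`, `tag`, `untag`, are made up for illustration):

```python
# Each user "shard" keeps its own copy of picture references, so
# rendering a profile never requires a site-wide search. A dict of
# lists stands in for per-user database servers.
shards = {}  # user -> list of picture references owned or duplicated there

def tag(owner, tagged_user, picture_id):
    # Duplicate the reference into the tagged user's own shard.
    shards.setdefault(tagged_user, []).append(
        {"picture": picture_id, "owner": owner})

def untag(tagged_user, picture_id):
    # Removing a tag only touches the tagged user's shard; the
    # owner's original copy elsewhere is unaffected.
    shards[tagged_user] = [
        r for r in shards[tagged_user] if r["picture"] != picture_id]

tag("alice", "bob", "pic1")
tag("carol", "bob", "pic2")
untag("bob", "pic1")
```

The trade-off is the classic one: writes fan out to every tagged user's shard, in exchange for reads that stay local to one user.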
<tokenext>I want to take a crack at this , I know enough about databases to make a PHP MySQL web pageis all !
How can there be no clear segregation ?
The data is about me , about someone else , or about a group ( a group being a single entity on Facebook similar to an individual ) and the data either came from me , someone else , or a group .
My inbox in Facebook is no different than my email inbox , all the messages are to me , just like my email inbox it should reside on one search-able database restricted to a certain manageable size like every other email inbox in the world .
My inbox can be on one server and my friend 's on another , the whole thing should be segmented by user .
Same with my outbox , just like email I retain a copy of sent messages in my personal Facebook database .
Wall posts and pictures should work the same way , anything that shows up in my profile should be copied to my personal tables and database .
Now the trickiest part is when another user posts a picture and tags me in that picture .
That picture reference should then be duplicated and placed in my picture database .
This is in contrast to retaining one copy stored in a Facebook wide database and searching that database for pictures of me each time someone wants to bring up my pictures .
The picture data storage can be spread across multiple servers and when someone views the my pictures section of my profile there is simply a dump done on my picture database of references to the pictures to present , links to the pictures and the pictures are retrieved from storage and displayed .
When the owner of a picture deletes a picture or the owner untags me or I untag myself from a picture , the picture would simply be deleted from my picture reference database .
I am going to go watch Robert Johnson 's ACM talk cause this seems easy and the segregation is as clear as good ole email .
From what I can see each individual 's profile is fairly small in database terms and never viewed all at once .
A profile is viewed by wall , inbox , pictures ( and not even all the pictures at once ) , info.. . I am amateur at this at best , but with my hack mySQL skills I do not see it as a big deal to create an easily scalable Facebook segmented by users .</tokentext>
<sentencetext>I want to take a crack at this; I know enough about databases to make a PHP MySQL web page is all!
How can there be no clear segregation?
The data is about me, about someone else, or about a group (a group being a single entity on Facebook similar to an individual) and the data either came from me, someone else, or a group.
My inbox in Facebook is no different than my email inbox, all the messages are to me, just like my email inbox it should reside on one search-able database restricted to a certain manageable size like every other email inbox in the world.
My inbox can be on one server and my friend's on another, the whole thing should be segmented by user.
Same with my outbox, just like email I retain a copy of sent messages in my personal Facebook database.
Wall posts and pictures should work the same way, anything that shows up in my profile should be copied to my personal tables and database.
Now the trickiest part is when another user posts a picture and tags me in that picture.
That picture reference should then be duplicated and placed in my picture database.
This is in contrast to retaining one copy stored in a Facebook wide database and searching that database for pictures of me each time someone wants to bring up my pictures.
The picture data storage can be spread across multiple servers and when someone views the my pictures section of my profile there is simply a dump done on my picture database of references to the pictures to present, links to the pictures and the pictures are retrieved from storage and displayed.
When the owner of a picture deletes a picture or the owner untags me or I untag myself from a picture, the picture would simply be deleted from my picture reference database.
I am going to go watch Robert Johnson's ACM talk cause this seems easy and the segregation is as clear as good ole email.
From what I can see each individual's profile is fairly small in database terms and never viewed all at once.
A profile is viewed by wall, inbox, pictures (and not even all the pictures at once), info...
I am an amateur at this at best, but with my hack MySQL skills I do not see it as a big deal to create an easily scalable Facebook segmented by users.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582</id>
	<title>And I am missing it greatly on Linux</title>
	<author>Errol backfiring</author>
	<datestamp>1257849780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>MS-Access had some really great features: it could be accessed with both SQL and with a blazingly fast (because it ran almost on the bare OS) ISAM-style library. I still miss having anything like it on Linux. SQLite is a file-system database, but why on earth should it parse full-blown SQL <em>at runtime</em>, and why on earth should my program write another program in SQL <em>at runtime</em> just to load some data? Get serious. Parsing and building SQL is just overhead, and parsing SQL especially is no easy and light task.</p><p>Since I switched to OO programming, most (95%) of my queries are "This table/index. Number 5 please." In essence that is the get/put method, or the ISAM-style method. I really would like something like that to exist on Linux. The closest thing around is MySQL's HANDLER statement, but that can only be used for constant data (because it does dirty reads) and for reading only.</p><p>SQLite could even be faster if it just accepted some basic "get row by index" and "put row by index" commands that do not try to parse, optimize or outsmart anything. The problem with "modern" databases is that they are either "SQL" or "NoSQL". That's awful. Some programs speak SQL (because of compatibility, because it is a reporting program, or just because the programmer does not know anything else) and some programs are better off with direct row management. That does not mean that the data should not be accessible by both kinds of programs. I really wish that the regular SQL databases would develop ISAM-style access methods. Programming would be a hell of a lot easier then, and the programs themselves would speed up significantly as well.</p><p>This is no idle remark. I worked a lot with MS-Access, and most rants about it being slow come from the fact that most programmers treat the file-system database as a server. So it must emulate itself as a server and do a lot of household parsing, and it does not even have a physical server to relieve its load.<br>
But if you know how to program a file-system database with ISAM-style methods, MS-Access is by far the fastest database I ever encountered. No joke. Really. It can be fast because there is no <em>need</em> to do all these household jobs just to dig up a row.</p></htmltext>
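The "get/put by key, no SQL parsing" style the comment asks for can be approximated with any key-value store. A minimal sketch using Python's stdlib `dbm` module as a stand-in (this says nothing about how MS-Access or SQLite actually work internally; the path and row layout are invented for the example):

```python
import dbm
import json
import os
import tempfile

# ISAM-spirit access: no SQL text is parsed at runtime; a row is
# written and fetched straight by its key.
path = os.path.join(tempfile.mkdtemp(), "rows")
with dbm.open(path, "c") as table:
    # "put row by index"
    table["5"] = json.dumps({"name": "me", "logins": 3}).encode()
    # "get row by index" -- "This table. Number 5 please."
    row = json.loads(table["5"])
```

The point of the sketch is what is absent: no parser, no optimizer, no query plan, just a keyed lookup.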
<tokenext>MS-Access had some really great features : it could be accessed with both SQL and with a blazingly fast ( because almost running on the bare OS ) ISAM-style library .
I am still missing anything like it on Linux .
SQLite is a file-system database , but why on earth should it parse full-blown SQL at runtime and why on earth should my program write another program in SQL at runtime just to load some data ?
Get serious .
Parsing and building SQL is just overhead , and especially parsing SQL is no easy and light task.Since I switched to OO programming , most ( 95 \ % ) of my queries are " This table/index .
Number 5 please .
" In essence that is the get/put method , or the ISAM style method .
I really would like something like that to exist on Linux .
The closest thing around is MySQL 's HANDLER statement , but that can only be used for constant data ( because it does dirty reads ) and for reading only.SQLite could even be faster if it just accepted some basic " get row by index " and " put row by index " commands that do not try to parse , optimize or outsmart anything .
The problem with " modern " databases is that they are either " SQL " or " NoSQL " .
That 's awful .
Some programs speak SQL ( because of compatibility , because it is a reporting program or just because the programmer does not know anything else ) and some programs are better off with direct row management .
That does not mean that the data should not be accessible by both programs .
I really wish that the regular SQL databases would develop ISAM-style access methods .
Programming would be a hell of a lot easier then , and the programs themselves would speed up significantly was well.This is no idle remark .
I worked a lot with MS-Access and most rants about it being slow comes from the fact that most programmers treat the file-system database as a server .
So it must emulate itself as a server and do a lot of household parsing and does not even have a physical server to relieve its load .
But if you know how to program a file-system database with ISAM-style methods , MS-Access is by far the fastest database I ever encountered .
No Joke .
Really. It can be fast because there is no need to do all these household jobs to just dig up a row .</tokentext>
<sentencetext>MS-Access had some really great features: it could be accessed with both SQL and with a blazingly fast (because almost running on the bare OS) ISAM-style library.
I am still missing anything like it on Linux.
SQLite is a file-system database, but why on earth should it parse full-blown SQL at runtime and why on earth should my program write another program in SQL at runtime just to load some data?
Get serious.
Parsing and building SQL is just overhead, and parsing SQL especially is no easy and light task.
Since I switched to OO programming, most (95%) of my queries are "This table/index. Number 5 please."
In essence that is the get/put method, or the ISAM-style method.
I really would like something like that to exist on Linux.
The closest thing around is MySQL's HANDLER statement, but that can only be used for constant data (because it does dirty reads) and for reading only.
SQLite could even be faster if it just accepted some basic "get row by index" and "put row by index" commands that do not try to parse, optimize or outsmart anything.
The problem with "modern" databases is that they are either "SQL" or "NoSQL".
That's awful.
Some programs speak SQL (because of compatibility, because it is a reporting program or just because the programmer does not know anything else) and some programs are better off with direct row management.
That does not mean that the data should not be accessible by both programs.
I really wish that the regular SQL databases would develop ISAM-style access methods.
Programming would be a hell of a lot easier then, and the programs themselves would speed up significantly as well.
This is no idle remark.
I worked a lot with MS-Access, and most rants about it being slow come from the fact that most programmers treat the file-system database as a server.
So it must emulate itself as a server and do a lot of household parsing and does not even have a physical server to relieve its load.
But if you know how to program a file-system database with ISAM-style methods, MS-Access is by far the fastest database I ever encountered.
No Joke.
Really.
It can be fast because there is no need to do all these household jobs just to dig up a row.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257885180000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>"Also, when was the last time you tried to visit Facebook and it was down? They're doing quite well for people who need to stop and actually think about their "implimentation"."</p><p>When was the last time you tried to use Facebook or Facebook chat and didn't get failed transport requests, unsent chat messages, unavailable photos, or random blank pages?</p></htmltext>
<tokenext>" Also , when was the last time you tried to visit Facebook and it was down ?
They 're doing quite well for people who need to stop and actually think about their " implimentation " .
" When was the last time you tried to use Facebook or Facebook chat and did n't get failed transport requests , unsent chat messages , unavailable photos , or random blank pages ?</tokentext>
<sentencetext>"Also, when was the last time you tried to visit Facebook and it was down?
They're doing quite well for people who need to stop and actually think about their "implimentation".
"When was the last time you tried to use Facebook or Facebook chat and didn't get failed transport requests, unsent chat messages, unavailable photos, or random blank pages?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548</id>
	<title>NoSQL? That'd Be DL/I, Right?</title>
	<author>BBCWatcher</author>
	<datestamp>1257791220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>I think I've heard of non-relational databases before. There's a particularly famous one, in fact. What could <a href="http://www.ibm.com/ims" title="ibm.com">it be</a> [ibm.com]? Let's see: first started shipping in 1969, now in its eleventh major version, JDBC and ODBC access, full XML support in and out, available with an optional paired transaction manager, extremely high performance, and holds a very large chunk of the world's financial information (among other things). It also ranks up there with Microsoft Windows as among the world's all-time highest grossing software products.
</p><p>... You bet non-relational is still highly relevant and useful in many different roles. Different tools for different jobs and all.</p></htmltext>
<tokenext>I think I 've heard of non-relational databases before .
There 's a particularly famous one , in fact .
What could it be [ ibm.com ] ?
Let 's see : first started shipping in 1969 , now in its eleventh major version , JDBC and ODBC access , full XML support in and out , available with an optional paired transaction manager , extremely high performance , and holds a very large chunk of the world 's financial information ( among other things ) .
It also ranks up there with Microsoft Windows as among the world 's all-time highest grossing software products .
....You bet non-relational is still highly relevant and useful in many different roles .
Different tools for different jobs and all .</tokentext>
<sentencetext>I think I've heard of non-relational databases before.
There's a particularly famous one, in fact.
What could it be [ibm.com]?
Let's see: first started shipping in 1969, now in its eleventh major version, JDBC and ODBC access, full XML support in and out, available with an optional paired transaction manager, extremely high performance, and holds a very large chunk of the world's financial information (among other things).
It also ranks up there with Microsoft Windows as among the world's all-time highest grossing software products.
....You bet non-relational is still highly relevant and useful in many different roles.
Different tools for different jobs and all.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514</id>
	<title>hmm</title>
	<author>Anonymous</author>
	<datestamp>1257790680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>With regard to scalability, it strikes me that the problem isn't so much SQL but the fact that current SQL-based RDBMS implementations are optimized for smaller data sets.</htmltext>
<tokenext>With regard to scalability , it strikes me that the problem is n't so much SQL but the fact that current SQL-based RDBMS implementations are optimized for smaller data sets .</tokentext>
<sentencetext>With regard to scalability, it strikes me that the problem isn't so much SQL but the fact that current SQL-based RDBMS implementations are optimized for smaller data sets.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</id>
	<title>Re:bad design</title>
	<author>Zombywuf</author>
	<datestamp>1257851820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>The problem is when people don't think about the solution and apply the cargo cult mentality. Facebook uses Eeeerlaaaang, therefore we should. Facebook wrote its own database, therefore we should. People end up writing their own database engines that do exactly the same thing as modern relational engines, with all the bugs that were fixed in the relational engines 10 years ago (5 for Microsoft). Even MS SQL will split a large group-by aggregate operation (which takes 3 lines to specify) across multiple CPUs by turning it into a map-reduce problem, and it will do all this without you having to be aware of it. Oracle (and many others; Oracle's is supposed to be the best) will maintain multiple concurrent versions of your data in order to allow multiple users to work with a snapshot that doesn't change under them while others are changing the data, and this happens transparently. You can go ahead and implement all this stuff yourself if you want, in C and sockets; call me when you're done, in 10-20 years.</p><p>The real issue I have with the NoSQL people is they're a bunch of whiny babies who haven't even taken the time to understand the problem before lashing out at the first thing they see. Just the name tells you this: they call themselves "No SQL" and then lash out at relational databases. SQL is a terrible language, which really needs replacing, but it is only one possible language for querying relational databases. Relational databases represent several decades of research into how to query data in a fault-tolerant, scalable way as a standing implementation; re-implementing them is a waste of time.</p></htmltext>
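The group-by-as-map-reduce point can be made concrete with a toy sketch: partition the rows, aggregate each partition independently (what each CPU would do), then merge the partial aggregates. The chunking here is hypothetical; a real engine picks its partitions itself.

```python
from collections import Counter

# Rows of (group_key, count) standing in for a table scan feeding
# a GROUP BY key, SUM(n) aggregate.
rows = [("us", 1), ("uk", 1), ("us", 1), ("de", 1), ("uk", 1), ("us", 1)]
chunks = [rows[:3], rows[3:]]          # partition the scan (per-CPU slices)

def map_partial(chunk):
    # Map phase: each worker aggregates only its own chunk.
    counts = Counter()
    for key, n in chunk:
        counts[key] += n
    return counts

partials = [map_partial(c) for c in chunks]
merged = sum(partials, Counter())      # reduce phase: merge partial aggregates
```

Because the partial aggregates are small compared to the scanned rows, the merge step is cheap, which is why the engine can parallelize this without the query author noticing.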
<tokenext>The problem is when people do n't think about the solution and apply the cargo cult mentality .
Facebook uses Eeeerlaaaang therefore we should .
Facebook wrote it 's own database , therefore we should .
People end up writing their own database engines that do exactly the same thing as modern relational engines , with all the bugs that were fixed in the relational engines 10 years ago ( 5 for Microsoft ) .
Even MS SQL will split a large group by aggregate operation ( which takes 3 lines to specify ) across multiple CPUS by turning it into a map reduce problem , and it will do this all without you having to be aware of it .
Oracle ( and many others , Oracles is supposed to be the best ) will maintain multiple concurrent versions of your data in order to allow multiple users to work with a snapshot that does n't change under them while others are changing the data , and this happens transparently .
You can go ahead and implement all this stuff yourself if you want , in C and sockets , call me when your done , in 10-20 years.The real issue I have with the NoSQL people is they 're a bunch of whiny babies , who have n't even taken the time to understand the problem before lashing out at the first thing they see .
Just the name tells you this , they call themselves " No SQL " and then lash out at relational databases .
SQL is is a terrible language , which really needs replacing , but it is only one possible language for querying relational databases .
Relational databases represent several decades of research into how to query data in a fault tolerant scalable way as a standing implementation , re-implementing them is a waste of time .</tokentext>
<sentencetext>The problem is when people don't think about the solution and apply the cargo cult mentality.
Facebook uses Eeeerlaaaang therefore we should.
Facebook wrote its own database, therefore we should.
People end up writing their own database engines that do exactly the same thing as modern relational engines, with all the bugs that were fixed in the relational engines 10 years ago (5 for Microsoft).
Even MS SQL will split a large group-by aggregate operation (which takes 3 lines to specify) across multiple CPUs by turning it into a map-reduce problem, and it will do all this without you having to be aware of it.
Oracle (and many others, Oracles is supposed to be the best) will maintain multiple concurrent versions of your data in order to allow multiple users to work with a snapshot that doesn't change under them while others are changing the data, and this happens transparently.
You can go ahead and implement all this stuff yourself if you want, in C and sockets; call me when you're done, in 10-20 years.
The real issue I have with the NoSQL people is they're a bunch of whiny babies who haven't even taken the time to understand the problem before lashing out at the first thing they see.
Just the name tells you this, they call themselves "No SQL" and then lash out at relational databases.
SQL is a terrible language, which really needs replacing, but it is only one possible language for querying relational databases.
Relational databases represent several decades of research into how to query data in a fault-tolerant, scalable way as a standing implementation; re-implementing them is a waste of time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042800</id>
	<title>I/O bottleneck</title>
	<author>Begemot</author>
	<datestamp>1257794640000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><p>Let's not forget where the bottleneck is: the I/O. It's expensive, but once you build a fast and solid storage system, correctly configure it, and partition your data properly over a sufficiently large number of hard drives, RAIDs, LUNs, etc., you might be able to use SQL. We run a 10 TB database on MS SQL with hundreds of millions of records and an equal rate of reads and writes, and could not be happier.</p></htmltext>
<tokenext>Let 's not forget where the bottleneck is - the I/O .
It 's expensive but once you build a fast and solid storage system , correctly configure it and partition your data properly over a sufficiently large number of hard drives , RAIDs , LUNs etc. , you might be able to use SQL .
We run a database of 10TB on MS SQL with hundreds of millions of records with an equal rate of reads and writes and could not be happier .</tokentext>
<sentencetext>Let's not forget where the bottleneck is - the I/O.
It's expensive but once you build a fast and solid storage system, correctly configure it and partition your data properly over a sufficiently large number of hard drives, RAIDs, LUNs etc., you might be able to use SQL.
We run a database of 10TB on MS SQL with hundreds of millions of records with an equal rate of reads and writes and could not be happier.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044308</id>
	<title>I know the type well</title>
	<author>Anonymous</author>
	<datestamp>1257859980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The typical architect who opts for a NO-SQL approach is basing her decision on what an RDBMS can and can't do primarily on experience with MySQL.  She would never consider something much more scalable on the extreme end like Oracle or, heaven forbid, DB2.  She has never tuned, let alone touched, one of these real RDBMSs.  Similarly, her idea of hardware doesn't much transcend a set of independent servers linked with GbE.  So her hammer is anything but an RDBMS, and the conclusion is totally foregone that an RDBMS won't work.  The real conclusion is that MySQL won't work, which is totally accurate.  Go look at the Larry Ellison video of the <a href="http://www.oracle.com/database/database-machine.html" title="oracle.com" rel="nofollow">Oracle/Sun database machine</a> [oracle.com], which will eat most of these "unsolvable" problems for lunch.  Yes, it is expensive, but building an empire so your pet project can succeed is also expensive, and probably more risky as well.</p></htmltext>
<tokenext>The typical architect who opts for a NO-SQL approach is basing her decision on what an RDBMS can do / ca n't do primarily on experience with mySQL .
She would never consider something much more scaleable on the extreme like Oracle or even heaven forbid DB2 .
She has never tuned let alone touched one of these real RDBMS .
Similarly , her idea of hardware does n't much transcend a set of independent servers linked with GBE .
So her hammer is anything but an RDBMS and the conclusion is totally foregone that an RDBMS wo n't work .
The real conclusion is that mySQL wo n't work which is totally accurate .
Go look at the Larry Ellison video of the Oracle/Sun database machine [ oracle.com ] which will eat most of these " unsolveable " problems for lunch .
Yes it is expensive , but building an empire so your pet project can succeed is also expensive and probably more risky as well .</tokentext>
<sentencetext>The typical architect who opts for a NO-SQL approach is basing her decision on what an RDBMS can do / can't do primarily on experience with mySQL.
She would never consider something much more scalable on the extreme like Oracle or even heaven forbid DB2.
She has never tuned let alone touched one of these real RDBMS.
Similarly, her idea of hardware doesn't much transcend a set of independent servers linked with GBE.
So her hammer is  anything but an RDBMS and the conclusion is totally foregone that an RDBMS won't work.
The real conclusion is that mySQL won't work which is totally accurate.
Go look at the Larry Ellison video of the  Oracle/Sun database machine [oracle.com] which will eat most of these "unsolvable" problems for lunch.
Yes it is expensive, but building an empire so your pet project can succeed is also expensive and probably more risky as well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044452</id>
	<title>Re:bad design</title>
	<author>TheLink</author>
	<datestamp>1257861420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&gt; SQL is a terrible language<br><br>Yes it is. However, one of the biggest benefits of SQL and "rigid schema design" is that it forces people to conform to a standard.<br><br>Those "whiny babies" might say that's a flaw, and it is a flaw when there's just one whiny baby to keep happy. But it's often a feature when you have 100 whiny babies that want different things from a \_single\_ database.<br><br>Yes, RDBMSes might be slower. But thanks to Intel and friends, RDBMSes have been good enough for most organizations, and the performance curve stays ahead of their growth. And the performance problems tend not to be due to the RDBMSes but due to other problems, e.g. the DBA screwing up, or the app doing the wrong thing 1000 times<nobr> <wbr></nobr>:).</htmltext>
<tokenext>&gt; SQL is is a terrible languageYes it is .
However one of the biggest benefits of SQL and " rigid schema design " is it forces people to conform to a standard.Those " whiny babies " might say that 's a flaw , and it is a flaw when there 's just one whiny baby to keep happy .
But it 's often a feature when you have 100 whiny babies that want different things from a \ _single \ _ database.Yes RDBMSes might be slower .
But thanks to Intel and friends , RDBMSes have been good enough for most organizations and the performance curve stays ahead of their growth .
and the performance problems tend to not be due to the RDBMSes but due to other problems e.g .
the DBA screwing up , or the app doing the wrong thing 1000 times : ) .</tokentext>
<sentencetext>&gt; SQL is is a terrible languageYes it is.
However one of the biggest benefits of SQL and "rigid schema design" is it forces people to conform to a standard.Those "whiny babies" might say that's a flaw, and it is a flaw when there's just one whiny baby to keep happy.
But it's often a feature when you have 100 whiny babies that want different things from a \_single\_ database.Yes RDBMSes might be slower.
But thanks to Intel and friends, RDBMSes have been good enough for most organizations and the performance curve stays ahead of their growth.
and the performance problems tend to not be due to the RDBMSes but due to other problems e.g.
the DBA screwing up, or the app doing the wrong thing 1000 times :).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047546</id>
	<title>Re:And I am missing it greatly on Linux</title>
	<author>Tetsujin</author>
	<datestamp>1257876240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p><div class="quote"><p>most (95\%) of my queries are "This table/index. Number 5 <i>please</i>."</p></div><p>Admirable! Despite the strong desire for efficiency, you still have the prudence to phrase your queries <i>politely</i>.</p></div><p>Well, the database gets all sulky if it doesn't hear the magic word on a regular basis...  So it's really in one's own best interests to be polite.'); DROP TABLE stories;--</p></div>
	</htmltext>
<tokenext>most ( 95 \ % ) of my queries are " This table/index .
Number 5 please. " Admirable !
Despite the strong desire for efficiency , you still have the prudence to phrase your queries politely.Well , the database gets all sulky if it does n't hear the magic word on a regular basis... So it 's really in one 's own best interests to be polite .
' ) ; DROP TABLE stories ; --</tokentext>
<sentencetext>most (95\%) of my queries are "This table/index.
Number 5 please."Admirable!
Despite the strong desire for efficiency, you still have the prudence to phrase your queries politely.Well, the database gets all sulky if it doesn't hear the magic word on a regular basis...  So it's really in one's own best interests to be polite.
'); DROP TABLE stories;--
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044348</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048846</id>
	<title>Re:Hashes are your friend</title>
	<author>mindstrm</author>
	<datestamp>1257880500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The reason you can't just keep throwing more boxes at it boils down to CAP -<br>Consistency, Availability, and Partition Tolerance.</p><p>In designing any distributed system, you can only get two out of three.</p><p><a href="http://www.julianbrowne.com/article/viewer/brewers-cap-theorem" title="julianbrowne.com" rel="nofollow">http://www.julianbrowne.com/article/viewer/brewers-cap-theorem</a> [julianbrowne.com]</p></htmltext>
<tokenext>The reason you ca n't just keep throwing more boxes at it boils down to CAP -Consistency , Availability , and Partition Tolerance.In designing any distributed system , you can only get two out of three.http : //www.julianbrowne.com/article/viewer/brewers-cap-theorem [ julianbrowne.com ]</tokentext>
<sentencetext>The reason you can't just keep throwing more boxes at it boils down to CAP -Consistency, Availability, and Partition Tolerance.In designing any distributed system, you can only get two out of three.http://www.julianbrowne.com/article/viewer/brewers-cap-theorem [julianbrowne.com]</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042822</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</id>
	<title>Re:bad design</title>
	<author>munctional</author>
	<datestamp>1257791340000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>Ever heard of bloom filters? Sharding? Indexes? They are clearly not doing a table scan on 50gb of data every time you open your Facebook inbox.</p><p><div class="quote"><p>You know, there's a certain point where people need to stop and actually think about the implimentation.</p></div><p>Um, they do. They regularly blog about their solutions to their problems and open source their solutions and contributions to existing projects. They come up with amazing solutions to their large scale problems. They're running over five million Erlang processes for their chat system!</p><p> <a href="http://developers.facebook.com/news.php?blog=1" title="facebook.com" rel="nofollow">http://developers.facebook.com/news.php?blog=1</a> [facebook.com] </p><p> <a href="http://github.com/facebook" title="github.com" rel="nofollow">http://github.com/facebook</a> [github.com] </p><p>Also, when was the last time you tried to visit Facebook and it was down? They're doing quite well for people who need to stop and actually think about their "implimentation".</p></div>
	</htmltext>
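The Bloom filter name-dropped in the comment above is easy to sketch: it answers "possibly present" or "definitely absent" from a bit array and a few hashes, which is how a server can skip touching disk for keys that don't exist. A minimal illustration (the filter size, hash count, and SHA-256-based hashing are arbitrary choices for the sketch, not what Facebook actually uses):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: may report false positives,
    never false negatives."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k bit positions from one SHA-256 digest (4 bytes each).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # A single unset bit means the item was definitely never added.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
for user in ("alice", "bob", "carol"):
    bf.add(user)
```

A "no" answer here is authoritative, so the expensive lookup (index probe, disk seek, cross-server query) is only paid for keys that probably exist.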
<tokenext>Ever heard of bloom filters ?
Sharding ? Indexes ?
They are clearly not doing a table scan on 50gb of data every time you open your Facebook inbox.You know , there 's a certain point where people need to stop and actually think about the implimentation.Um , they do .
They regularly blog about their solutions to their problems and open source their solutions and contributions to existing projects .
They come up with amazing solutions to their large scale problems .
They 're running over five million Erlang processes for their chat system !
http : //developers.facebook.com/news.php ? blog = 1 [ facebook.com ] http : //github.com/facebook [ github.com ] Also , when was the last time you tried to visit Facebook and it was down ?
They 're doing quite well for people who need to stop and actually think about their " implimentation " .</tokentext>
<sentencetext>Ever heard of bloom filters?
Sharding? Indexes?
They are clearly not doing a table scan on 50gb of data every time you open your Facebook inbox.You know, there's a certain point where people need to stop and actually think about the implimentation.Um, they do.
They regularly blog about their solutions to their problems and open source their solutions and contributions to existing projects.
They come up with amazing solutions to their large scale problems.
They're running over five million Erlang processes for their chat system!
http://developers.facebook.com/news.php?blog=1 [facebook.com]  http://github.com/facebook [github.com] Also, when was the last time you tried to visit Facebook and it was down?
They're doing quite well for people who need to stop and actually think about their "implimentation".
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055156</id>
	<title>Re:bad design</title>
	<author>nairbv</author>
	<datestamp>1257866820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No... it wouldn't "look through" all 50 TB of data.  Actually reading all 50 TB of data would make it take a long, long time to see your inbox.
<br> <br>
I don't know what their architecture is exactly, but they probably do something like $mysql\_host = 'sqlhosts' + ($friend\_id MODULUS 1000);
<br> <br>
Sure, 100 friends might be on 100 different servers.  That doesn't mean they need to scan through every row checking to see if it's a friend of yours.  It's the same thing as using indexes, but distributed.</htmltext>
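The modulus scheme the comment guesses at can be made concrete. This is a hypothetical sketch (the `sqlhost` naming and the shard count of 1000 come from the comment's guess, not from any real deployment):

```python
NUM_SHARDS = 1000  # assumed shard count, per the comment's example

def shard_host(friend_id: int) -> str:
    """Map a user id to the database host that stores their rows.
    The application, not the database, decides where data lives."""
    return f"sqlhost{friend_id % NUM_SHARDS}"

def hosts_for(friend_ids):
    """Group ids by shard so each server gets one query, not one per row."""
    by_host = {}
    for fid in friend_ids:
        by_host.setdefault(shard_host(fid), []).append(fid)
    return by_host
```

Each shard then uses an ordinary index on `friend_id`, so no server ever scans its full table; the modulus just picks which index to consult.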
<tokenext>no... it would n't " look through " all 50 TB of data .
Actually reading all 50 TB of data would make it take a long long time to see your inbox .
I do n't know what their architecture is exactly , but they probably do something like $ mysql \ _host = 'sqlhosts ' + ( $ friend \ _id MODULUS 1000 ) ; Sure , 100 friends might be on 100 different servers .
That does n't mean they need to scan through every row checking to see if it 's a friend of yours .
It 's the same thing as using indexes , but distributed .</tokentext>
<sentencetext>no... it wouldn't "look through" all 50 TB of data.
Actually reading all 50 TB of data would make it take a long long time to see your inbox.
I don't know what their architecture is exactly, but they probably do something like $mysql\_host = 'sqlhosts' + ($friend\_id MODULUS 1000);
 
Sure, 100 friends might be on 100 different servers.
That doesn't mean they need to scan through every row checking to see if it's a friend of yours.
It's the same thing as using indexes, but distributed.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048370</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>Anonymous</author>
	<datestamp>1257878880000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You know the problem with all this object-oriented database/JSON shite. They are very interesting - but be honest... your first port of call for REALLY IMPORTANT apps is a relational database. They work. They are well understood. They've done the job for decades now. They just aren't very fashionable these days.
</p><p>People don't make reputations for genius by supporting the status quo. Hence the legions of people claiming that RDBMSes have had their time, and THEY have the right answer. That's not to say that there aren't better solutions in some cases... but really... you should view any of this stuff with a wary eye. New types of databases are a major source of CompSci quackery.</p></htmltext>
<tokenext>You know the problem with all this object-oriented database/JSON shite .
They are very interesting - but be honest... your first port of call for REALLY IMPORTANT apps is a relational database .
They work .
They are well understood .
They 've done the job for decades now .
They just are n't very fashionable these days .
People do n't make reputations for genius by supporting the status-quo .
Hence the legions of people claiming that RDBMS have had their time , and THEY have the right answer .
That 's not to say that there are n't better solutions in some cases... but really... you should view any of this stuff with a wary eye .
New types of databases are a major source of CompSci quackery .</tokentext>
<sentencetext>You know the problem with all this object-oriented database/JSON shite.
They are very interesting
- but be honest... your first port of call for REALLY IMPORTANT apps is a relational database.
They work.
They are well understood.
They've done the job for decades now.
They just aren't very fashionable these days.
People don't make reputations for genius by supporting the status-quo.
Hence the legions of people claiming that RDBMS have had their time, and THEY have the right answer.
That's not to say that there aren't better solutions in some cases... but really... you should view any of this stuff with a wary eye.
New types of databases are a major source of CompSci quackery.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043006</id>
	<title>Re:Why worry?</title>
	<author>mikael\_j</author>
	<datestamp>1257884040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sadly, there is plenty of production code that uses Access databases for things they just shouldn't be used for. At a previous job I actually built several production websites that used Access as the db backend because the client didn't want to use MySQL (Open source is scary!) and they didn't want to pay for MSSQL...</p><p>/Mikael</p></htmltext>
<tokenext>Sadly there is plenty of production code that uses Access databases for things they just should n't be used for , at a previous job I actually built several production websites that used Access as the db backend because the client did n't want to use MySQL ( Open source is scary !
) and they did n't want to pay for MSSQL.../Mikael</tokentext>
<sentencetext>Sadly there is plenty of production code that uses Access databases for things they just shouldn't be used for, at a previous job I actually built several production websites that used Access as the db backend because the client didn't want to use MySQL (Open source is scary!
) and they didn't want to pay for MSSQL.../Mikael</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045454</id>
	<title>Re:hmm</title>
	<author>awol</author>
	<datestamp>1257868020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I looked hard at Netezza for a big project with "absurd" requirements (many tens of thousands of new records per second, ad hoc queryable by clients). It seemed to be the ideal solution. Nice to see it might have worked. How fast does your data grow?</p></htmltext>
<tokenext>I looked hard at netezza for a big project with " absurd " requirements ( many 10 ^ 4 new records per second , ad hoc queryable by clients ) .
It seemed to be the ideal solution .
Nice to see it might have worked .
How fast does your data grow ?</tokentext>
<sentencetext>I looked hard at netezza for a big project with "absurd" requirements (many 10^4 new records per second, ad hoc queryable by clients).
It seemed to be the ideal solution.
Nice to see it might have worked.
How fast does your data grow?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042674</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044022</id>
	<title>solution looking for a problem?</title>
	<author>timmarhy</author>
	<datestamp>1257856380000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>SQL databases, if designed properly, DO handle enormous datasets. The problem starts when you have twits designing the database and then managers attempting to use the DB for purposes it wasn't meant for.</htmltext>
<tokenext>SQL databases if designed properly DO handle enormous datasets .
the problem starts when you have wits designing the database and then managers attempting to use the DB for purposes it was n't meant for .</tokentext>
<sentencetext>SQL databases if designed properly DO handle enormous datasets.
the problem starts when you have wits designing the database and then managers attempting to use the DB for purposes it wasn't meant for.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044466</id>
	<title>Re:bad design</title>
	<author>jcnnghm</author>
	<datestamp>1257861600000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p>Yes it does (look through 50TB of data), and how would you design it?</p></div><p>When a user posts a message, I would have the web server pass the message to a server that listens for messages that are being sent.  That server would collect the mail then place them as a payload package in the messaging queue when either a fixed number of mail recipients, probably around 500, or a fixed time passes, probably 500ms, whichever comes first.  When the payload reaches the front of the queue, the messaging server working on the payload would parse through all the messages building a model of all the data it needs to render all of the messages.  It would then send a low priority FQL multiquery requesting all of the data it needs to render and send all of the requests.  From there, the messaging server would render both the updated view of the mail when viewing the thread, and view of the thread when viewing the inbox.  These would be passed to a persistent memcached setup.</p><p>An FQL query would be generated for each user that would increment their inbox message counter, remove the memcached key of the old thread preview from the array of keys representing their inbox while prepending the new key to the array, and append the key to the array representing the thread.  When this was assembled for all mail, another low priority multi-query would be sent committing this change.</p><p>At this point I'd purge the old thread preview keys from the persistent memcached setup, and store the raw data in a table indexed by both the thread preview key, and the mail view key.
The raw data would be stored in case a design change ever necessitated re-rendering all of the mail, or in the case of a user name change.</p><p>Finally, I would generate and send an e-mail to each user telling them they have a new message.</p><p>This is complex, but it also means that to render an inbox, the only thing that has to be done is to retrieve the array of message thread preview keys, and request each thread preview by key from memcached.  Of course, this collection could also be cached.</p><p><b>Note</b>:  I intentionally left out some things in the interest of time, like sent message display, read and unread flagging, spam filtering, new message highlighting, and I'm sure others.  It shouldn't be difficult to see how this basic model can be expanded to cover these cases.</p></div>
	</htmltext>
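The first step of the design above (buffer outgoing mail, flush when either 500 recipients accumulate or 500 ms elapse, whichever comes first) is a standard batching pattern. A sketch under those assumed thresholds; for simplicity the time limit is only checked when a message arrives, whereas a real server would also run a background timer:

```python
import time

class MessageBatcher:
    """Collect messages and flush a payload when either the item
    count or the time limit is reached, whichever comes first."""
    def __init__(self, flush, max_items=500, max_wait_s=0.5):
        self.flush = flush            # called with the batched payload
        self.max_items = max_items
        self.max_wait_s = max_wait_s
        self.buffer = []
        self.first_at = None          # arrival time of oldest buffered item

    def add(self, message, now=None):
        # `now` is injectable for testing; defaults to the real clock.
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.first_at = now
        self.buffer.append(message)
        if (len(self.buffer) >= self.max_items
                or now - self.first_at >= self.max_wait_s):
            payload, self.buffer = self.buffer, []
            self.flush(payload)
```

Batching like this trades a bounded delay (at most `max_wait_s`) for far fewer downstream queries, which is the point of the 500-message payload in the comment.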
<tokenext>Yes it does ( look through 50TB of data ) , and how would you design it ? When a user posts a message , I would have the web server pass the message to a server that listens for messages that are being sent .
That server would collect the mail then place them as a payload package in the messaging queue when either a fixed number of mail recipients , probably around 500 , or a fixed time passes , probably 500ms , whichever comes first .
When the payload reaches the front of the queue , the messaging server working on the payload would parse through all the messages building a model of all the data it needs to render all of the messages .
It would then send a low priority FQL multiquery requesting all of the data it needs to render and send all of the requests .
From there , the messaging server would render both the updated view of the mail when viewing the thread , and view of the thread when viewing the inbox .
These would be passed to a persistent memcached setup.An FQL query would be generated for each user that would increment their inbox message counter , remove the memcached key of the old thread preview from the array of keys representing their inbox while prepending the new key to the array , and append the key to the array representing the thread .
When this was assembled for all mail , another low priority multi-query would be sent committing this change.At this point I 'd purge the old thread preview keys from the persistent memcached setup , and store the raw data in a table indexed by both the thread preview key , and the mail view key .
The raw data would be stored in case a design change ever necessitated re-rendering all of the mail , or in the case of a user name change.Finally , I would generate and send an e-mail to each user telling them they have a new message.This is complex , but it also means that to render an inbox , the only thing that has to be done is to retrieve the array of message thread preview keys , and request each thread preview by key from memcached .
Of course , this collection could also be cached.Note : I intentionally left out some things in the interest of time , like sent message display , read and unread flagging , spam filtering , new message highlighting , and I 'm sure others .
It should n't be difficult to see how this basic model can be expanded to cover these cases .</tokentext>
<sentencetext>Yes it does (look through 50TB of data), and how would you design it?When a user posts a message, I would have the web server pass the message to a server that listens for messages that are being sent.
That server would collect the mail then place them as a payload package in the messaging queue when either a fixed number of mail recipients, probably around 500, or a fixed time passes, probably 500ms, whichever comes first.
When the payload reaches the front of the queue, the messaging server working on the payload would parse through all the messages building a model of all the data it needs to render all of the messages.
It would then send a low priority FQL multiquery requesting all of the data it needs to render and send all of the requests.
From there, the messaging server would render both the updated view of the mail when viewing the thread, and view of the thread when viewing the inbox.
These would be passed to a persistent memcached setup.An FQL query would be generated for each user that would increment their inbox message counter, remove the memcached key of the old thread preview from the array of keys representing their inbox while prepending the new key to the array, and append the key to the array representing the thread.
When this was assembled for all mail, another low priority multi-query would be sent committing this change.At this point I'd purge the old thread preview keys from the persistent memcached setup, and store the raw data in a table indexed by both the thread preview key, and the mail view key.
The raw data would be stored in case a design change ever necessitated re-rendering all of the mail, or in the case of a user name change.Finally, I would generate and send an e-mail to each user telling them they have a new message.This is complex, but it also means that to render an inbox, the only thing that has to be done is to retrieve the array of message thread preview keys, and request each thread preview by key from memcached.
Of course, this collection could also be cached.Note:  I intentionally left out some things in the interest of time, like sent message display, read and unread flagging, spam filtering, new message highlighting, and I'm sure others.
It shouldn't be difficult to see how this basic model can be expanded to cover these cases.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046876</id>
	<title>all Java?</title>
	<author>jipn4</author>
	<datestamp>1257874020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Are all of those in Java?  What about people who want something efficient and scalable without running JVMs everywhere?  Have some of them been ported to Mono?</p></htmltext>
<tokenext>Are all of those in Java ?
What about people who want something efficient and scalable without running JVMs everywhere ?
Have some of them been ported to Mono ?</tokentext>
<sentencetext>Are all of those in Java?
What about people who want something efficient and scalable without running JVMs everywhere?
Have some of them been ported to Mono?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043502</id>
	<title>Re:Why worry?</title>
	<author>Linker3000</author>
	<datestamp>1257848820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Oh Great! I have just migrated 5 offices from a veterinary management system based around Access 97 onto the new, MS-SQL-based one.</p><p>How can I expect to maintain my value to the company if they stick with old, reliable systems instead of moving onto more sophisticated 'solutions' that require a shit-load of tweaking and technical guesswork to keep them running smoothly?</p></htmltext>
<tokenext>Oh Great !
I have just migrated 5 offices from a veterinary management system based around Access 97 onto the new , MS-SQL-based one.How can I expect to maintain my value to the company if they stick with old , reliable systems instead of moving onto more sophisticated 'solutions ' that require a shit-load of tweaking and technical guesswork to keep them running smoothly ?</tokentext>
<sentencetext>Oh Great!
I have just migrated 5 offices from a veterinary management system based around Access 97 onto the new, MS-SQL-based one.How can I expect to maintain my value to the company if they stick with old, reliable systems instead of moving onto more sophisticated 'solutions' that require a shit-load of tweaking and technical guesswork to keep them running smoothly?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042822</id>
	<title>Re:Hashes are your friend</title>
	<author>Anonymous</author>
	<datestamp>1257795120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's very clever and all (and I'm sure quite effective), but it doesn't address the original issue: RDBMSs suck at scaling.  We <i>should</i> be able to throw a rack of servers with a load balancer and a SAN at the problem and have it go away.  We shouldn't have to rewrite our application logic to scale it out any more than we currently have to write special code because our hard drives are in RAID5 (read: not at all).</p><p>The storage engines and their indexing should take care of all of this nonsense automatically.  You might have to help them out by being a bit more specific than <tt>key `user\_id` (`user\_id`)</tt> (your stock tickers are a good example), but fundamentally the code that helps scale out a database should be part of the database and not the application that's using it.</p><p>But, life isn't so kind to us. Oh well, maybe in time.</p></htmltext>
<tokenext>That 's very clever and all ( and I 'm sure quite effective ) , but it does n't address the original issue : RDBMSs suck at scaling .
We should be able to throw a rack of servers with a load balancer and a SAN at the problem and have it go away .
We should n't have to rewrite our application logic to scale it out any more than we currently have to write special code because our hard drives are in RAID5 ( read : not at all ) .The storage engines and their indexing should take care of all of this nonsense automatically .
You might have to help them out by being a bit more specific than key ` user \ _id ` ( ` user \ _id ` ) ( your stock tickers are a good example ) , but fundamentally the code that helps scale out a database should be part of the database and not the application that 's using it.But , life is n't so kind to us .
Oh well , maybe in time .</tokentext>
<sentencetext>That's very clever and all (and I'm sure quite effective), but it doesn't address the original issue: RDBMSs suck at scaling.
We should be able to throw a rack of servers with a load balancer and a SAN at the problem and have it go away.
We shouldn't have to rewrite our application logic to scale it out any more than we currently have to write special code because our hard drives are in RAID5 (read: not at all).The storage engines and their indexing should take care of all of this nonsense automatically.
You might have to help them out by being a bit more specific than key `user\_id` (`user\_id`) (your stock tickers are a good example), but fundamentally the code that helps scale out a database should be part of the database and not the application that's using it.But, life isn't so kind to us.
Oh well, maybe in time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546</id>
	<title>Dynamic Relational: change it, DON'T toss it</title>
	<author>Tablizer</author>
	<datestamp>1257791160000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>The performance claims will probably be disputed by Oracle whizzes. However, the "rigid schema" claim bothers me. RDBMS can be built that have a very dynamic flavor to them. For example, treat each row as a map (associative array). Non-existent columns in any given row are treated as Null/empty instead of an error. Perhaps tables can also be created just by inserting a row into the (new) target table. <b>No need for explicit schema management</b>. Constraints, such as "required" or "number" can incrementally be added as the schema becomes solidified. We have dynamic app languages, so why not dynamic RDBMS also? <b>Let's fiddle with and stretch RDBMS before outright tossing them.</b> Maybe also overhaul or enhance SQL. It's a bit long in the tooth.</p><p>More at:<br><a href="http://geocities.com/tablizer/dynrelat.htm" title="geocities.com" rel="nofollow">http://geocities.com/tablizer/dynrelat.htm</a> [geocities.com]<br>(And you thought geocities was dead.)</p></htmltext>
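The "dynamic relational" idea above (rows as maps, missing columns read as Null, constraints bolted on later as the schema solidifies) can be sketched in a few lines. A toy illustration only; the API names (`insert`, `require`, `select`) are invented for the sketch:

```python
class DynamicTable:
    """Toy 'dynamic relational' table: rows are maps, absent columns
    compare as None, and constraints can be added incrementally."""
    def __init__(self):
        self.rows = []
        self.constraints = []   # list of (column, check) pairs

    def insert(self, **row):
        for column, check in self.constraints:
            if not check(row.get(column)):
                raise ValueError(f"constraint failed on {column!r}")
        self.rows.append(row)

    def require(self, column, check):
        """Solidify the schema later: existing rows must already pass."""
        for row in self.rows:
            if not check(row.get(column)):
                raise ValueError(f"existing row violates {column!r}")
        self.constraints.append((column, check))

    def select(self, **where):
        # Non-existent columns read as None instead of raising an error.
        return [r for r in self.rows
                if all(r.get(k) == v for k, v in where.items())]
```

The point of the sketch is the lifecycle: insert freely while exploring, then call `require` to lock down a column once its shape is known, which mirrors how dynamic languages let types firm up over time.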
<tokenext>The performance claims will probably be disputed by Oracle whizzes .
However , the " rigid schema " claim bothers me .
RDBMS can be built that have a very dynamic flavor to them .
For example , treat each row as a map ( associative array ) .
Non-existent columns in any given row are treated as Null/empty instead of an error .
Perhaps tables can also be created just by inserting a row into the ( new ) target table .
No need for explicit schema management .
Constraints , such as " required " or " number " can incrementally be added as the schema becomes solidified .
We have dynamic app languages , so why not dynamic RDBMS also ?
Let 's fiddle with and stretch RDBMS before outright tossing them .
Maybe also overhaul or enhance SQL .
It 's a bit long in the tooth.More at : http : //geocities.com/tablizer/dynrelat.htm [ geocities.com ] ( And you thought geocities was de</tokentext>
<sentencetext>The performance claims will probably be disputed by Oracle whizzes.
However, the "rigid schema" claim bothers me.
RDBMS can be built that have a very dynamic flavor to them.
For example, treat each row as a map (associative array).
Non-existent columns in any given row are treated as Null/empty instead of an error.
Perhaps tables can also be created just by inserting a row into the (new) target table.
No need for explicit schema management.
Constraints, such as "required" or "number" can incrementally be added as the schema becomes solidified.
We have dynamic app languages, so why not dynamic RDBMS also?
Let's fiddle with and stretch RDBMS before outright tossing them.
Maybe also overhaul or enhance SQL.
It's a bit long in the tooth.More at:http://geocities.com/tablizer/dynrelat.htm [geocities.com](And you thought geocities was de</sentencetext>
</comment>
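Tablizer's row-as-map proposal above can be sketched in a few lines: rows are associative arrays, non-existent columns read back as Null/empty instead of erroring, and constraints are bolted on incrementally as the schema solidifies. This is a hypothetical illustration assuming nothing beyond the comment itself; the names (`DynamicTable`, `require`) are invented, not from any real product:

```python
class DynamicTable:
    """Toy 'dynamic relational' table: each row is a map."""

    def __init__(self):
        self.rows = []
        self.constraints = {}          # column name -> validator function

    def insert(self, **row):
        # Constraints, once added, are enforced on future inserts.
        for col, check in self.constraints.items():
            if not check(row.get(col)):
                raise ValueError(f"constraint failed on {col!r}")
        self.rows.append(row)

    def select(self, col):
        # Non-existent columns come back as None instead of an error.
        return [r.get(col) for r in self.rows]

    def require(self, col, check):
        # Incrementally added constraint, as the schema becomes solidified.
        self.constraints[col] = check


t = DynamicTable()
t.insert(name="alice", age=30)
t.insert(name="bob")               # no 'age' column: allowed, reads as None
print(t.select("age"))             # [30, None]
t.require("age", lambda v: isinstance(v, int))
```

After the `require` call, an insert with a non-integer (or missing) `age` raises, which is the "constraints added as the schema solidifies" step of the proposal.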
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30058150</id>
	<title>Re:bad design</title>
	<author>cheekyboy</author>
	<datestamp>1257075180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>arggg</p><p>look, the reason people write their own stuff is because the normal stuff won't scale for the same price.</p><p>A custom solution could use 5x less disk space, so if it's 50TB compared to 250TB, then one is cheaper<br>to keep, back up, and verify.</p><p>And the bottom line is, managers count money cost first.</p><p>Yeah, Oracle can do the same as an xyz custom solution, but it would take 5x the disk space, 10x more servers, and require 15x more POWER.</p><p>50TB can easily fit in one rack fridge, but 250 starts to require more space/power/yearly costs and HD replacements.</p><p>It's like saying VBasic can still be used to write a Quake game; sure, but it would be 1fps.</p><p>Maybe it's just too easy to make the wrong design/solution in an SQL db, and it takes real talent to do it right.</p><p>Not everyone has a spare 50TB to play with to 'try things', or even 250 if 50 isn't enough.</p><p>Maybe it's database SQL gurus getting scared of losing that $$$ contract design work to old-school programmers.</p></htmltext>
<tokenext>arggglook , the reason people write their own stuff , is because the normal stuff wont scale for the same price.A custom solution could use 5x less disk space , so if its 50tb compared to 250tb , then one is cheaperto keep , backup , verify.And bottom line is , managers count money cost first.Yeah Oracle can do the same as xyz custom solution , but it would take 5x disk space , 10x more servers , require 15x more POWER.50tb can easily bit in one rack fridge , but 250 is starting to require more space/power/yearly costs , hd replacements.Its like saying VBasic can still be used to write a quake game , sure but it would be 1fps.Maybe its just too easy to make the wrong design/solution in SQLdb , and takes real talent to do it right.Not everyone has a spare 50tb to play with to 'try things ' , or even 250 if 50 isnt enough.Maybe its database sql gurus getting scared of loosing that $ $ $ contract design work to old school programmers .</tokentext>
<sentencetext>arggglook, the reason people write their own stuff, is because the normal stuff wont scale for the same price.A custom solution could use 5x less disk space, so if its 50tb compared to 250tb, then one is cheaperto keep, backup, verify.And bottom line is, managers count money cost first.Yeah Oracle can do the same as xyz custom solution, but it would take 5x disk space, 10x more servers, require 15x more POWER.50tb can easily bit in one rack fridge, but 250 is starting to require more space/power/yearly costs, hd replacements.Its like saying VBasic can still be used to write a quake game, sure but it would be 1fps.Maybe its just too easy to make the wrong design/solution in SQLdb, and takes real talent to do it right.Not everyone has a spare 50tb to play with to 'try things', or even 250 if 50 isnt enough.Maybe its database sql gurus getting scared of loosing that $$$ contract design work to old school programmers.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30058382</id>
	<title>Why choose when you can have both?</title>
	<author>ivoras</author>
	<datestamp>1257078240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>PostgreSQL at least, and probably other databases, has a generic "key-value store" data type: <a href="http://www.postgresql.org/docs/8.3/interactive/datatype.html" title="postgresql.org">http://www.postgresql.org/docs/8.3/interactive/datatype.html</a> [postgresql.org]. With it, rows can contain some strictly-typed data (such as IDs, types, other metadata) and also contain a field (or many fields) which store all other loosely-typed data. And since it's PostgreSQL, all data is safe, can be replicated, you can have complex indexes, full text search, etc.</htmltext>
<tokenext>PostgreSQL at least , and probably other databases , has a generic " key-value store " data type : http : //www.postgresql.org/docs/8.3/interactive/datatype.html [ postgresql.org ] .
With it , rows can contain some strictly-typed data ( such as IDs , types , other metadata ) and also contain a field ( or many fields ) which store all other loosely-typed data .
And since it 's PostgreSQL , all data is safe , can be replicated , you can have complex indexes , full text search , etc .</tokentext>
<sentencetext>PostgreSQL at least, and probably other databases, has a generic "key-value store" data type: http://www.postgresql.org/docs/8.3/interactive/datatype.html [postgresql.org].
With it, rows can contain some strictly-typed data (such as IDs, types, other metadata) and also contain a field (or many fields) which store all other loosely-typed data.
And since it's PostgreSQL, all data is safe, can be replicated, you can have complex indexes, full text search, etc.</sentencetext>
</comment>
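The hybrid schema ivoras describes — strictly-typed columns for IDs and metadata next to a loosely-typed key-value field — can be approximated with nothing but the standard library. The linked feature (hstore) is Postgres-specific; this sqlite3-plus-JSON sketch only illustrates the shape of such a table, not the real feature:

```python
import json
import sqlite3

# Strict columns (id, kind) coexist with one loosely-typed blob (attrs)
# holding arbitrary key-value data, mirroring the hstore pattern.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        id    INTEGER PRIMARY KEY,   -- strictly typed
        kind  TEXT NOT NULL,         -- strictly typed
        attrs TEXT NOT NULL          -- loosely typed key-value data (JSON)
    )
""")
conn.execute(
    "INSERT INTO items (kind, attrs) VALUES (?, ?)",
    ("user", json.dumps({"badge": "green", "karma": 42})),
)

row = conn.execute("SELECT kind, attrs FROM items").fetchone()
attrs = json.loads(row[1])
print(row[0], attrs["badge"])     # user green
```

The payoff is the one the comment names: the strict columns stay indexable and constrained by the relational engine, while the loose field absorbs schema churn.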
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044502</id>
	<title>Re:I know the type well</title>
	<author>Anonymous</author>
	<datestamp>1257861960000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>At my company, we don't call those kind of people "architects". Instead, we call them "fucktards".</p></htmltext>
<tokenext>At my company , we do n't call those kind of people " architects " .
Instead , we call them " fucktards " .</tokentext>
<sentencetext>At my company, we don't call those kind of people "architects".
Instead, we call them "fucktards".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044308</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042526</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257790860000</datestamp>
	<modclass>Troll</modclass>
	<modscore>-1</modscore>
	<htmltext><p>That would require intelligence. Haven't looked outside of the basement lately, have you?</p></htmltext>
<tokenext>That would require intelligence .
Havent looked outside of the basement lately have you ?</tokentext>
<sentencetext>That would require intelligence.
Havent looked outside of the basement lately have you?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051426</id>
	<title>Re:bad design</title>
	<author>MikeBabcock</author>
	<datestamp>1257847560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have that happen quite regularly as well.  I often click one of the tabs beside a friend's name and have nothing happen at all, or have the wrong data appear.  It's great fun when "Info" shows up after clicking "Photos".</p><p>Obviously they're doing a better job of handling that amount of data than most people would be, but not as good at it as, say, VISA is.</p></htmltext>
<tokenext>I have that happen quite regularly as well .
I often click one of the tabs beside a friend 's name and have nothing happen at all , or have the wrong data appear .
Its great fun when " Info " shows up after clicking " Photos " .Obviously they 're doing a better job of handling that amount of data than most people would be , but not as good at it as say VISA is .</tokentext>
<sentencetext>I have that happen quite regularly as well.
I often click one of the tabs beside a friend's name and have nothing happen at all, or have the wrong data appear.
Its great fun when "Info" shows up after clicking "Photos".Obviously they're doing a better job of handling that amount of data than most people would be, but not as good at it as say VISA is.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043000</id>
	<title>Re:bad design</title>
	<author>Hal_Porter</author>
	<datestamp>1257883980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>They're running over five million Erlang processes for their chat system!</p><p> <a href="http://developers.facebook.com/news.php?blog=1" title="facebook.com" rel="nofollow">http://developers.facebook.com/news.php?blog=1</a> [facebook.com] </p></div><p>If the objective were to maximize the number of Erlang processes, that would indeed be an impressive achievement.</p>
	</htmltext>
<tokenext>They 're running over five million Erlang processes for their chat system !
http : //developers.facebook.com/news.php ? blog = 1 [ facebook.com ] If the objective were to maximize the number of Erlang processes , that would be an indeed be an impressive achievement .</tokentext>
<sentencetext>They're running over five million Erlang processes for their chat system!
http://developers.facebook.com/news.php?blog=1 [facebook.com] If the objective were to maximize the number of Erlang processes, that would be an indeed be an impressive achievement.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30062518</id>
	<title>Re:bad design</title>
	<author>convolvatron</author>
	<datestamp>1257100440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>actually, fuck off. language design people have distilled the semantics of sql down to something which is actually composable, compilable, able to be evaluated in a distributed context, and actually... useful. datalog. mercury.</p><p>sql is a festering sore. it's a poorly conceived idea that for random reasons survived</p></htmltext>
<tokenext>actually , fuck off .
language design people have distilled the semantics of sql down to something with is actually composable , compilable , able to be evaluated in a distributed context , and actually..useful .
datalog. mercury.sql is a festering sore .
its a poorly conceived idea that for random reasons survived</tokentext>
<sentencetext>actually, fuck off.
language design people have distilled the semantics of sql down to something with is actually composable, compilable, able to be evaluated in a distributed context, and actually..useful.
datalog. mercury.sql is a festering sore.
its a poorly conceived idea that for random reasons survived</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042648</id>
	<title>Semi-Dupe?</title>
	<author>Tablizer</author>
	<datestamp>1257792480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There was a similar story on Slashdot a few months ago:</p><p><a href="http://tech.slashdot.org/story/09/07/02/219247/Enthusiasts-Convene-To-Say-No-To-SQL-Hash-Out-New-DB-Breed" title="slashdot.org" rel="nofollow">http://tech.slashdot.org/story/09/07/02/219247/Enthusiasts-Convene-To-Say-No-To-SQL-Hash-Out-New-DB-Breed</a> [slashdot.org]</p></htmltext>
<tokenext>There was a similar story on Slashdot a few months ago : http : //tech.slashdot.org/story/09/07/02/219247/Enthusiasts-Convene-To-Say-No-To-SQL-Hash-Out-New-DB-Breed [ slashdot.org ]</tokentext>
<sentencetext>There was a similar story on Slashdot a few months ago:http://tech.slashdot.org/story/09/07/02/219247/Enthusiasts-Convene-To-Say-No-To-SQL-Hash-Out-New-DB-Breed [slashdot.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30060290</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257091200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A more relevant question is actually "when was the last time your business failed b/c you were on FaceBook and got failed transport requests, unsent chat messages, unavailable photos, or random blank pages?"</p></htmltext>
<tokenext>A more relevant question is actually " when was the last time your business failed b/c you were on FaceBook and got failed transport requests , unsent chat messages , unavailable photos , or random blank pages ?
"</tokentext>
<sentencetext>A more relevant question is actually "when was the last time your business failed b/c you were on FaceBook and got failed transport requests, unsent chat messages, unavailable photos, or random blank pages?
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047008</id>
	<title>Re:bad design</title>
	<author>CodeBuster</author>
	<datestamp>1257874440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Relational databases represent several decades of research into how to query data in a fault tolerant scalable way as a standing implementation, re-implementing them is a waste of time.</p></div><p>Except that worthless trade rags keep publishing bullshit PR articles on "the death of SQL" and the "next great database tech" (paid for by the company plugging that next great database tech). What happens next? Some B-Level executive reads this bullshit article on his next airline flight because somebody left the magazine in the seat pocket next to the sky-mall and when he gets back the engineers have to waste a bunch of time defending proven technologies, namely relational databases and SQL, from attacks by this B-Level executive who, armed with his bullshit trade rag knowledge, accuses the engineers of "thinking inside the box". If the executive wins the argument by twisting a few arms (aka cage match negotiator) then he gets promoted and by the time everyone realizes that the next great database tech really <b> <i>isn't</i> </b> he has moved on and the engineers are left holding the bag of shit leftover from the uninformed meddling of MBA asshats.</p><p>For those of you wondering, <b>cage match negotiator</b> refers to a management <a href="http://en.wikipedia.org/wiki/Antipattern" title="wikipedia.org">anti-pattern</a> [wikipedia.org] where the negotiator (the manager) takes a "win the argument at any cost" approach to dispute resolution, up to and including driving other team members off the project. The name comes from the <a href="http://en.wikipedia.org/wiki/Cage_match#Cages" title="wikipedia.org">cage match</a> [wikipedia.org] format in wrestling where multiple wrestlers enter the cage but only one exits victorious when the match is finished.</p>
	</htmltext>
<tokenext>Relational databases represent several decades of research into how to query data in a fault tolerant scalable way as a standing implementation , re-implementing them is a waste of time.Except that worthless trade rags keep publishing bullshit PR articles on " the death of SQL " and the " next great database tech " ( paid for by the company plugging that next great database tech ) .
What happens next ?
Some B-Level executive reads this bullshit article on his next airline flight because somebody left the magazine in the seat pocket next to the sky-mall and when he gets back the engineers have to waste a bunch of time defending proven technologies , namely relational databases and SQL , from attacks by this B-Level executive who , armed with his bullshit trade rag knowledge , accuses the engineers of " thinking inside the box " .
If the executive wins the argument by twisting a few arms ( aka cage match negotiator ) then he gets promoted and by the time everyone realizes that the next great database tech really is n't he has moved on and the engineers are left holding the bag of shit leftover from the uninformed meddling of MBA asshats.For those of you wondering , cage match negotiator refers to a management anti-pattern [ wikipedia.org ] where the negotiator ( the manager ) takes a " win the argument at any cost " approach to dispute resolution , up to and including driving other team members off the project .
The name comes from the cage match [ wikipedia.org ] format in wrestling where multiple wrestlers enter the cage but only one exits victorious when the match is finished .</tokentext>
<sentencetext>Relational databases represent several decades of research into how to query data in a fault tolerant scalable way as a standing implementation, re-implementing them is a waste of time.Except that worthless trade rags keep publishing bullshit PR articles on "the death of SQL" and the "next great database tech" (paid for by the company plugging that next great database tech).
What happens next?
Some B-Level executive reads this bullshit article on his next airline flight because somebody left the magazine in the seat pocket next to the sky-mall and when he gets back the engineers have to waste a bunch of time defending proven technologies, namely relational databases and SQL, from attacks by this B-Level executive who, armed with his bullshit trade rag knowledge, accuses the engineers of "thinking inside the box".
If the executive wins the argument by twisting a few arms (aka cage match negotiator) then he gets promoted and by the time everyone realizes that the next great database tech really  isn't  he has moved on and the engineers are left holding the bag of shit leftover from the uninformed meddling of MBA asshats.For those of you wondering, cage match negotiator refers to a management anti-pattern [wikipedia.org] where the negotiator (the manager) takes a "win the argument at any cost" approach to dispute resolution, up to and including driving other team members off the project.
The name comes from the cage match [wikipedia.org] format in wrestling where multiple wrestlers enter the cage but only one exits victorious when the match is finished.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046220</id>
	<title>I keep MongoDB, Sesame, and CouchDB always running</title>
	<author>MarkWatson</author>
	<datestamp>1257871440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>MongoDB starts as a service on my MacBook, and on my local network I always keep services for Sesame (RDF data store, SPARQL endpoint), MongoDB, and CouchDB running.</p><p>It is easier to use NoSQL datastores (when they are appropriate) if you always have them running, have client libraries in place, etc.</p><p>If you want to use a relational database, you don't have to stop to install it, get client libraries, etc. I think the same 'ready at hand-ness' should apply to whatever NoSQL datastores meet your needs.</p></htmltext>
<tokenext>MongoDB starts as a service on my MacBook and on my local network I always keep services for Sesame ( RDF data store , SPARQL endpoint ) , MongoDB , and CouchDB running.It is easier to use NoSQL datastores ( when they are appropriate ) if you always have them running , have client libraries in place , etc.If you want to use a relational database , you do n't have to stop to install it , get client libraires , etc .
I think the same 'ready at hand-ness ' shoud apply to whatever NoSQL datastores that meet your needs .</tokentext>
<sentencetext>MongoDB starts as a service on my MacBook and on my local network I always keep services for Sesame (RDF data store, SPARQL endpoint), MongoDB, and CouchDB running.It is easier to use NoSQL datastores (when they are appropriate) if you always have them running, have client libraries in place, etc.If you want to use a relational database, you don't have to stop to install it, get client libraires, etc.
I think the same 'ready at hand-ness' shoud apply to whatever NoSQL datastores that meet your needs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30065426</id>
	<title>Re:bad design</title>
	<author>moosesocks</author>
	<datestamp>1257068760000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Also, when was the last time you tried to visit Facebook and it was down? They're doing quite well for people who need to stop and actually think about their "implementation".</p></div><p>You bring up an excellent point.  I've been using Facebook since shortly after its launch in 2004, and can't remember <i>any</i> downtime over the course of its (very impressive) growth.</p><p>I don't know of any other site with a track record like that.  Even GMail has had a few (fairly severe) outages over its history.</p>
	</htmltext>
<tokenext>Also , when was the last time you tried to visit Facebook and it was down ?
They 're doing quite well for people who need to stop and actually think about their " implimentation " .You bring up an excellent point .
I 've been using Facebook since shortly after its launch in 2004 , and ca n't remember any downtime over the course of its ( very impressive ) growth.I do n't know of any other site with a track record like that .
Even GMail has had a few ( fairly severe ) outages over its history .</tokentext>
<sentencetext>Also, when was the last time you tried to visit Facebook and it was down?
They're doing quite well for people who need to stop and actually think about their "implimentation".You bring up an excellent point.
I've been using Facebook since shortly after its launch in 2004, and can't remember any downtime over the course of its (very impressive) growth.I don't know of any other site with a track record like that.
Even GMail has had a few (fairly severe) outages over its history.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257791880000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext>Yes it does (look through 50TB of data), and how would you design it? It has to access all of your friends and find their postings.

Robert Johnson gave an excellent talk on facebook's design two weeks ago at OOPSLA (it should be in the ACM digital library soon). He stated that there is no clear segregation of data, the (friend) network is too connected and extracting groups of friends isn't possible.

Basically they have a huge mysql farm with memcached on top. Loading an inbox will hit multiple servers (maybe even a different server for each of your friends) across the farm.</htmltext>
<tokenext>Yes it does ( look through 50TB of data ) , and how would you design it ?
It has to access all of your friends and find their postings .
Robert Johnson gave an excellent talk on facebook 's design two weeks ago at OOPSLA ( it should be in the ACM digital library soon ) .
He stated that there is no clear segregation of data , the ( friend ) network is too connected and extracting groups of friends is n't possible .
Basically they have a huge mysql farm with memcached on top .
Loading an inbox will hit multiple servers ( maybe even a different server for each of your friends ) across the farm .</tokentext>
<sentencetext>Yes it does (look through 50TB of data), and how would you design it?
It has to access all of your friends and find their postings.
Robert Johnson gave an excellent talk on facebook's design two weeks ago at OOPSLA (it should be in the ACM digital library soon).
He stated that there is no clear segregation of data, the (friend) network is too connected and extracting groups of friends isn't possible.
Basically they have a huge mysql farm with memcached on top.
Loading an inbox will hit multiple servers (maybe even a different server for each of your friends) across the farm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510</parent>
</comment>
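The architecture described above — a huge MySQL farm with memcached on top, where loading one inbox fans out across many servers — can be caricatured in a few lines. Everything here (`Farm`, `shard_for`, hash-mod placement) is an invented toy for illustration, not Facebook's actual design:

```python
NUM_SHARDS = 4

def shard_for(user_id: int) -> int:
    # Hash-mod placement: deterministic, but friends land on different
    # shards, so one inbox load touches many servers.
    return user_id % NUM_SHARDS

class Farm:
    def __init__(self):
        self.shards = [dict() for _ in range(NUM_SHARDS)]  # shard -> {user: posts}
        self.cache = {}                                    # memcached stand-in

    def post(self, user_id, msg):
        self.shards[shard_for(user_id)].setdefault(user_id, []).append(msg)
        self.cache.pop(user_id, None)                      # invalidate cache

    def postings(self, user_id):
        # Check the cache first; fall through to the owning shard on a miss.
        if user_id not in self.cache:
            self.cache[user_id] = self.shards[shard_for(user_id)].get(user_id, [])
        return self.cache[user_id]

    def load_inbox(self, friend_ids):
        # Fans out across every shard that holds one of the friends.
        return [m for f in friend_ids for m in self.postings(f)]

farm = Farm()
farm.post(1, "hi")
farm.post(6, "yo")                 # lands on a different shard than user 1
print(farm.load_inbox([1, 6]))     # ['hi', 'yo']
```

This is exactly why the comment says there is no clean segregation of data: the friend graph decides which shards an inbox load must touch, and the cache layer only hides, not removes, the fan-out.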
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042942</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257796680000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><p><a href="http://en.wikipedia.org/wiki/Bloom\_filter" title="wikipedia.org" rel="nofollow">Bloom Filter</a> [wikipedia.org]</p><p>Been around before it was used to describe computer graphics lighting effects.</p></htmltext>
<tokenext>Bloom Filter [ wikipedia.org ] Been around before it was used to describe computer graphics lighting effects .</tokentext>
<sentencetext>Bloom Filter [wikipedia.org]Been around before it was used to describe computer graphics lighting effects.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798</parent>
</comment>
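For readers following the link, a minimal Bloom filter is only a few lines: k hash functions set k bits per added key, and membership tests can give false positives but never false negatives. Sizes and hashing choices below are arbitrary illustration, not a production configuration:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0              # an int used as an m-bit array

    def _positions(self, key: str):
        # Derive k independent bit positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key: str) -> bool:
        # All k bits set -> "probably present"; any bit clear -> definitely absent.
        return all(self.bits & (1 << p) for p in self._positions(key))

bf = BloomFilter()
bf.add("inbox:alice")
print(bf.might_contain("inbox:alice"))   # True (never a false negative)
print(bf.might_contain("inbox:zzz"))     # almost certainly False
```

This is the trick that lets an inbox-search backend skip disk reads for servers that definitely hold no data for a user, at the cost of an occasional wasted lookup.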
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051016</id>
	<title>The examples use sql</title>
	<author>Anonymous</author>
	<datestamp>1257846000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>&gt; Digg's (3 TB for green badges) or Facebook's (50 TB for inbox search) or eBay's (2 PB overall)</p><p>All of these use sql though.</p></htmltext>
<tokenext>&gt; Digg 's ( 3 TB for green badges ) or Facebook 's ( 50 TB for inbox search ) or eBay 's ( 2 PB overall ) All of these use sql though .</tokentext>
<sentencetext>&gt; Digg's (3 TB for green badges) or Facebook's (50 TB for inbox search) or eBay's (2 PB overall)All of these use sql though.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044264</id>
	<title>Re:Starting to love the idea</title>
	<author>butlerdi</author>
	<datestamp>1257859560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Also Neo4j and a multitude of high performance RDF/Owl data stores which are into the billions of rows with reasoning.</htmltext>
<tokenext>Also Neo4j and a multitude of high performance RDF/Owl data stores which are into the billions of rows with reasoning .</tokentext>
<sentencetext>Also Neo4j and a multitude of high performance RDF/Owl data stores which are into the billions of rows with reasoning.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042562</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046286</id>
	<title>Re:bad design</title>
	<author>MarkWatson</author>
	<datestamp>1257871740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I would mod you up as "mucho interesting" if I had the points...</p><p>I also appreciate how open Facebook is on their techniques to solve problems, open sourcing things like Cassandra, etc.</p></htmltext>
<tokenext>I would mod you up as " mucho interesting " if I had the points...I also appreciate how open Facebook is on their techniques to solve problems , open sourcing things like Cassandra , etc .</tokentext>
<sentencetext>I would mod you up as "mucho interesting" if I had the points...I also appreciate how open Facebook is on their techniques to solve problems, open sourcing things like Cassandra, etc.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055724</id>
	<title>Regarding Entity/Attribute/Value model</title>
	<author>Tablizer</author>
	<datestamp>1257870540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>You described an entity attribute value model</p></div></blockquote><p>No. The EAV model creates a row-centric view of attributes. My suggestion keeps the traditional column-centric view intact. Other than being more careful about implied types when comparing and asterisk usage, most <b>SQL will look just like it does in a "static" RDBMS</b>. This is not the case with EAVs; they completely change the way one queries.</p>
	</htmltext>
<tokenext>You described an entity attribute value modelNo .
The EAV model creates a row-centric view of attributes .
My suggestion keeps the traditional column-centric view intact .
Other than being more careful about implied types when comparing and asterisk usage , most SQL will look just like it does in a " static " RDBMS .
This is not the case with EAV 's ; they completely change the way one queries .</tokentext>
<sentencetext>You described an entity attribute value modelNo.
The EAV model creates a row-centric view of attributes.
My suggestion keeps the traditional column-centric view intact.
Other than being more careful about implied types when comparing and asterisk usage, most SQL will look just like it does in a "static" RDBMS.
This is not the case with EAV's; they completely change the way one queries.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510</id>
	<title>bad design</title>
	<author>girlintraining</author>
	<datestamp>1257790680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>So... every time I open my inbox in Facebook, it has to search through 50TB of data? That sounds like a design problem. What has always floored me is why people think everything needs to be stuffed into a database. Terabyte-sized binary blobs? You know, there's a certain point where people need to stop and actually think about the implementation.</p></htmltext>
<tokenext>So... every time I open my inbox in Facebook , it has to search through 50TB of data ?
That sounds like a design problem .
What has always floored me is why people think everything needs to be stuffed into a database .
Terabyte-sized binary blobs ?
You know , there 's a certain point where people need to stop and actually think about the implementation .</tokentext>
<sentencetext>So... every time I open my inbox in Facebook, it has to search through 50TB of data?
That sounds like a design problem.
What has always floored me is why people think everything needs to be stuffed into a database.
Terabyte-sized binary blobs?
You know, there's a certain point where people need to stop and actually think about the implementation.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042580</id>
	<title>hi monkeys</title>
	<author>Anonymous</author>
	<datestamp>1257791760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Hi Monkeys.  There are MPP databases that scale way past this and give you speedy access, including ANSI SQL access (petabytes in Teradata's case).   The newer compressed column-store engines destroy Hadoop in many analytics use cases, both in performance and in needing far fewer machines, plus the ability to use SQL.</p><p>Stop the tripe hype.</p></htmltext>
<tokenext>Hi Monkeys .
There are MPP databases that scale way past this and give you speedy access , including ANSI SQL access ( petabytes in Teradata 's case ) .
The newer compressed column-store engines destroy Hadoop in many analytics use cases , both in performance and in needing far fewer machines , plus the ability to use SQL .
Stop the tripe hype .</tokentext>
<sentencetext>Hi Monkeys.
There are MPP databases that scale way past this and give you speedy access, including ANSI SQL access (petabytes in Teradata's case).
The newer compressed column-store engines destroy Hadoop in many analytics use cases, both in performance and in needing far fewer machines, plus the ability to use SQL.
Stop the tripe hype.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044304</id>
	<title>Re:Hashes are your friend</title>
	<author>davidbrit2</author>
	<datestamp>1257859980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>But, if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables.</p></div></blockquote><p>That works great until you decide to use the R in RDBMS and actually join some tables. Plus you'd be using all sorts of dynamic SQL to allow every query to pick the appropriate table, putting yourself at risk of SQL injection vulnerabilities. You don't want a bunch of interns coding dynamic SQL against a system big enough and important enough to warrant this kind of data partitioning.</p><p>If you really need to split a single table's data across multiple file/disk systems, use a DBMS that supports this at the physical storage level, rather than forcing you to do it logically with 256 tables. SQL Server, for example, allows creating file groups, which can contain multiple files on different file systems. Assign a table to a specific file group, and it will get spread across all those files. Or if you need finer control, use table partitioning which allows you to pick which file group each specific range of data is stored in. This works great, because the data is physically stored as though it were in multiple tables/indexes, allowing you to very quickly narrow your searches based on the partitioning key, and thus isolating all the I/O to a specific partition.</p><p>But 256 separate tables? Egad. It's irritating enough working with our ERP system, which splits most data into separate "open" and "historic" tables. If I had to deal with 256 of them, I'd probably quit.</p>
	</htmltext>
<tokenext>But , if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables .
That works great until you decide to use the R in RDBMS and actually join some tables .
Plus you 'd be using all sorts of dynamic SQL to allow every query to pick the appropriate table , putting yourself at risk of SQL injection vulnerabilities .
You do n't want a bunch of interns coding dynamic SQL against a system big enough and important enough to warrant this kind of data partitioning .
If you really need to split a single table 's data across multiple file/disk systems , use a DBMS that supports this at the physical storage level , rather than forcing you to do it logically with 256 tables .
SQL Server , for example , allows creating file groups , which can contain multiple files on different file systems .
Assign a table to a specific file group , and it will get spread across all those files .
Or if you need finer control , use table partitioning which allows you to pick which file group each specific range of data is stored in .
This works great , because the data is physically stored as though it were in multiple tables/indexes , allowing you to very quickly narrow your searches based on the partitioning key , and thus isolating all the I/O to a specific partition .
But 256 separate tables ?
Egad .
It 's irritating enough working with our ERP system , which splits most data into separate " open " and " historic " tables .
If I had to deal with 256 of them , I 'd probably quit .</tokentext>
<sentencetext>But, if you MD5 the stock symbol you get an even distribution based on the first two hash characters to put the historical data into 256 tables.
That works great until you decide to use the R in RDBMS and actually join some tables.
Plus you'd be using all sorts of dynamic SQL to allow every query to pick the appropriate table, putting yourself at risk of SQL injection vulnerabilities.
You don't want a bunch of interns coding dynamic SQL against a system big enough and important enough to warrant this kind of data partitioning.
If you really need to split a single table's data across multiple file/disk systems, use a DBMS that supports this at the physical storage level, rather than forcing you to do it logically with 256 tables.
SQL Server, for example, allows creating file groups, which can contain multiple files on different file systems.
Assign a table to a specific file group, and it will get spread across all those files.
Or if you need finer control, use table partitioning which allows you to pick which file group each specific range of data is stored in.
This works great, because the data is physically stored as though it were in multiple tables/indexes, allowing you to very quickly narrow your searches based on the partitioning key, and thus isolating all the I/O to a specific partition.
But 256 separate tables?
Egad.
It's irritating enough working with our ERP system, which splits most data into separate "open" and "historic" tables.
If I had to deal with 256 of them, I'd probably quit.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564</parent>
</comment>
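The MD5 bucketing scheme debated in the comment above can be sketched in a few lines; a minimal illustration, assuming the first two hex characters of the digest pick one of 256 buckets (the `history_XX` table-naming convention is hypothetical, not from the thread):

```python
import hashlib
from collections import Counter

def bucket_for(symbol: str) -> int:
    """Map a stock symbol to one of 256 buckets via its MD5 prefix."""
    digest = hashlib.md5(symbol.encode("utf-8")).hexdigest()
    return int(digest[:2], 16)  # first two hex chars -> 0..255

def table_for(symbol: str) -> str:
    # Hypothetical naming convention for the 256 historical tables.
    return f"history_{bucket_for(symbol):02x}"

# MD5 spreads keys roughly evenly, which is the whole appeal of the scheme.
symbols = [f"SYM{i}" for i in range(10_000)]
counts = Counter(bucket_for(s) for s in symbols)
print(len(counts), min(counts.values()), max(counts.values()))
```

The even spread is real, but as the reply notes, it buys you nothing that native table partitioning on a hash or range key would not, and it costs you straightforward joins.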
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30052092</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257850320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>They use bloom filters for messaging? What for?</p></div><p>Damn you NVidia, shaders are like crack to developers...</p>
	</htmltext>
<tokenext>They use bloom filters for messaging ?
What for ?
Damn you NVidia , shaders are like crack to developers ...</tokentext>
<sentencetext>They use bloom filters for messaging?
What for?
Damn you NVidia, shaders are like crack to developers...
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042974</id>
	<title>30 Years?</title>
	<author>uncqual</author>
	<datestamp>1257883620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>...the traditional relational database technology that has served us well for over thirty years...</p></div></blockquote><p>Hmm... Before 1979, market share for RDBMS was TINY. It really didn't begin to "serve us well" until the mid 80's.</p>
	</htmltext>
<tokenext>...the traditional relational database technology that has served us well for over thirty years ...
Hmm ... Before 1979 , market share for RDBMS was TINY .
It really did n't begin to " serve us well " until the mid 80 's .</tokentext>
<sentencetext>...the traditional relational database technology that has served us well for over thirty years...
Hmm... Before 1979, market share for RDBMS was TINY.
It really didn't begin to "serve us well" until the mid 80's.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043240</id>
	<title>Re:bad design</title>
	<author>donaggie03</author>
	<datestamp>1257844560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>None of that ever happens to me, and I use facebook all the time. Maybe facebook just doesn't like you!</htmltext>
<tokenext>None of that ever happens to me , and I use facebook all the time .
Maybe facebook just does n't like you !</tokentext>
<sentencetext>None of that ever happens to me, and I use facebook all the time.
Maybe facebook just doesn't like you!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046780</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>Tablizer</author>
	<datestamp>1257873660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>You described an entity attribute value model, which winds up reinventing half the DBMS, poorly.</p></div> </blockquote><p>May I ask for a scenario demonstrating "poorly"? It can still do joins, aggregation, and indexing just like any other RDBMS. It's not clear to me what you think is missing.</p><blockquote><div><p>A "rigid" schema is preventing a ton of totally redundant code being written on the app side.</p></div></blockquote><p>Depends on the project. Sometimes nimbleness is a strategic advantage, such as prototyping or ad-hoc analysis.</p><blockquote><div><p>I'm starting from scratch [query language]. (Currently I'm slowly retyping about 40 pages into Latex...)</p></div></blockquote><p>I couldn't open any of those for some reason. But I too have formed a draft query language influenced by IBM's BS2. See <a href="http://c2.com/cgi/wiki?TqlRoadmap" title="c2.com" rel="nofollow">http://c2.com/cgi/wiki?TqlRoadmap</a> [c2.com]</p><p>Here's a sample that returns the top 6 earners in each company department:</p><blockquote><div><p> <tt>srt = orderBy(Employees, [dept, salary], order)<br>top = group(srt, [(dept) dept2, max(order) order])<br>join(srt, top, a.dept=b.dept2 and b.order - a.order &lt; 6)</tt></p></div> </blockquote><p>("a" and "b" represent the left and right side of the join parameters.) But SQL is too entrenched and could probably be stretched a bit further with some enhancements.</p>
	</htmltext>
<tokenext>You described an entity attribute value model , which winds up reinventing half the DBMS , poorly .
May I ask for a scenario demonstrating " poorly " ?
It can still do joins , aggregation , and indexing just like any other RDBMS .
It 's not clear to me what you think is missing .
A " rigid " schema is preventing a ton of totally redundant code being written on the app side .
Depends on the project .
Sometimes nimbleness is a strategic advantage , such as prototyping or ad-hoc analysis .
I 'm starting from scratch [ query language ] .
( Currently I 'm slowly retyping about 40 pages into Latex ... )
I could n't open any of those for some reason .
But I too have also formed a draft query language influenced from IBM 's BS2 .
See http://c2.com/cgi/wiki?TqlRoadmap [ c2.com ]
Here 's a sample that returns the top 6 earners in each company department :
srt = orderBy ( Employees , [ dept , salary ] , order )
top = group ( srt , [ ( dept ) dept2 , max ( order ) order ] )
join ( srt , top , a.dept = b.dept2 and b.order - a.order &lt; 6 )
( " a " and " b " represent the left and right side of the join parameters . )
But SQL is too entrenched and could probably be stretched a bit further with some enhancements .</tokentext>
<sentencetext>You described an entity attribute value model, which winds up reinventing half the DBMS, poorly.
May I ask for a scenario demonstrating "poorly"?
It can still do joins, aggregation, and indexing just like any other RDBMS.
It's not clear to me what you think is missing.
A "rigid" schema is preventing a ton of totally redundant code being written on the app side.
Depends on the project.
Sometimes nimbleness is a strategic advantage, such as prototyping or ad-hoc analysis.
I'm starting from scratch [query language].
(Currently I'm slowly retyping about 40 pages into Latex...)
I couldn't open any of those for some reason.
But I too have formed a draft query language influenced by IBM's BS2.
See http://c2.com/cgi/wiki?TqlRoadmap [c2.com]
Here's a sample that returns the top 6 earners in each company department: srt = orderBy(Employees, [dept, salary], order); top = group(srt, [(dept) dept2, max(order) order]); join(srt, top, a.dept=b.dept2 and b.order - a.order &lt; 6)
("a" and "b" represent the left and right side of the join parameters.)
But SQL is too entrenched and could probably be stretched a bit further with some enhancements.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202</parent>
</comment>
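The orderBy/group/join pipeline in the TQL sample above (top 6 earners per department) can be mirrored in ordinary code; a minimal sketch, assuming hypothetical employee records with `dept` and `salary` fields:

```python
from itertools import groupby
from operator import itemgetter

def top_earners(employees, n=6):
    """Top-n earners per department, mirroring the TQL
    orderBy / group / rank-join pipeline."""
    # orderBy(Employees, [dept, salary]) -- salary descending within dept
    srt = sorted(employees, key=lambda e: (e["dept"], -e["salary"]))
    result = []
    for dept, rows in groupby(srt, key=itemgetter("dept")):
        # keep ranks 0..n-1, i.e. the "b.order - a.order < 6" predicate
        result.extend(list(rows)[:n])
    return result

# Hypothetical sample data: 10 employees in each of two departments.
staff = [{"name": f"e{i}", "dept": d, "salary": 1000 + i}
         for d in ("eng", "sales") for i in range(10)]
print(len(top_earners(staff)))  # 6 per department -> 12 total
```

The same result is reachable in standard SQL with a correlated subquery or a window function, which supports the comment's closing point that SQL could be stretched rather than replaced.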
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744</id>
	<title>One big problem with SQL is ...</title>
	<author>Skapare</author>
	<datestamp>1257864000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task.  I see so many databases where there is little or no hint of any relationships even being involved.  Some forums, for example, store postings in a database where the message content is a blob and it is indexed by a number.  To get a post, look by number.  While an SQL database can do this, so can many other database types.  There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships).  I'd sooner do this with a B-tree based filesystem.</p></htmltext>
<tokenext>... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task .
I see so many databases where there is little or no hint of any relationships even being involved .
Some forums , for example , store postings in a database where the message content is a blob and it is indexed by a number .
To get a post , look by number .
While an SQL database can do this , so can many other database types .
There 's no complex relational searching with this ; it 's just basic indexing ( with maybe a tree of index relationships ) .
I 'd sooner do this with a B-tree based filesystem .</tokentext>
<sentencetext>... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task.
I see so many databases where there is little or no hint of any relationships even being involved.
Some forums, for example, store postings in a database where the message content is a blob and it is indexed by a number.
To get a post, look by number.
While an SQL database can do this, so can many other database types.
There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships).
I'd sooner do this with a B-tree based filesystem.</sentencetext>
</comment>
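The post-by-number pattern described above needs nothing more than an ordered index; a minimal sketch, with a `dict` plus a sorted key list standing in for the B-tree (class and method names are hypothetical):

```python
import bisect

class PostStore:
    """Posts indexed by number -- plain key lookup, no relational queries."""
    def __init__(self):
        self._posts = {}   # id -> content blob
        self._ids = []     # sorted ids; B-tree stand-in for range scans

    def put(self, post_id: int, content: bytes) -> None:
        if post_id not in self._posts:
            bisect.insort(self._ids, post_id)
        self._posts[post_id] = content

    def get(self, post_id: int) -> bytes:
        # "To get a post, look by number."
        return self._posts[post_id]

    def range(self, lo: int, hi: int):
        """Ids in [lo, hi) -- the kind of scan a B-tree gives for free."""
        i = bisect.bisect_left(self._ids, lo)
        j = bisect.bisect_left(self._ids, hi)
        return self._ids[i:j]

store = PostStore()
for n in (5, 1, 9):
    store.put(n, f"post {n}".encode())
print(store.get(5), store.range(1, 9))
```

As the reply below argues, the trade-off is that an RDBMS gives you this for free with portability and language bindings, so rolling your own only pays off at unusual scale.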
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049662</id>
	<title>Re:bad design</title>
	<author>Eravnrekaree</author>
	<datestamp>1257883740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I certainly agree. Every time we hear these NoSQL arguments, it's all disinformation from people who don't know what they are talking about, apparently don't know anything about SQL or relational databases, or don't use them properly. There is nothing in particular about SQL that is inefficient. In fact, it offers lots of opportunity to really speed things up. The multicolumn lookup on SQL does not slow down the speed of a single-column lookup, so you get added benefit over a single-column "nonsql" db without a downside. NonSQL brings back problems that were solved by SQL; it's a regression, and without any valid reason.</p></htmltext>
<tokenext>I certainly agree .
Every time we hear these NoSQL arguments , it 's all disinformation from people who do n't know what they are talking about , apparently do n't know anything about SQL or relational databases , or do n't use them properly .
There is nothing in particular about SQL that is inefficient .
In fact , it offers lots of opportunity to really speed things up .
The multicolumn lookup on SQL does not slow down the speed of a single column lookup , so you get added benefit over a single column " nonsql " db without a downside .
NonSQL brings back problems that were solved by SQL ; it 's a regression , and without any valid reason .</tokentext>
<sentencetext>I certainly agree.
Every time we hear these NoSQL arguments, it's all disinformation from people who don't know what they are talking about, apparently don't know anything about SQL or relational databases, or don't use them properly.
There is nothing in particular about SQL that is inefficient.
In fact, it offers lots of opportunity to really speed things up.
The multicolumn lookup on SQL does not slow down the speed of a single column lookup, so you get added benefit over a single column "nonsql" db without a downside.
NonSQL brings back problems that were solved by SQL; it's a regression, and without any valid reason.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047292</id>
	<title>Re:One big problem with SQL is ...</title>
	<author>Abcd1234</author>
	<datestamp>1257875340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><i>While an SQL database can do this, so can many other database types. There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships). I'd sooner do this with a B-tree based filesystem.</i></p><p>And how many B-tree-based filesystems are out there?  And how portable are their implementations?  Yeah, exactly.</p><p>Even for straight, non-relational, tabular data, the fact is, your average RDBMS provides decent performance and portability while allowing you to leverage the facilities your pet language of choice offers for database integration.  That's a *huge* win, particularly if your alternative is to roll your own custom storage solution (as you suggest).</p><p>Frankly, I'd seriously question someone's competence if they chose to write their own filesystem-based storage layer over leveraging an existing solution like an RDBMS, *particularly* for simple applications that don't have special performance or data storage/querying requirements.</p></htmltext>
<tokenext>While an SQL database can do this , so can many other database types .
There 's no complex relational searching with this ; it 's just basic indexing ( with maybe a tree of index relationships ) .
I 'd sooner do this with a B-tree based filesystem .
And how many B-tree-based filesystems are out there ?
And how portable are their implementations ?
Yeah , exactly .
Even for straight , non-relational , tabular data , the fact is , your average RDBMS provides decent performance and portability while allowing you to leverage the facilities your pet language of choice offers for database integration .
That 's a * huge * win , particularly if your alternative is to roll your own custom storage solution ( as you suggest ) .
Frankly , I 'd seriously question someone 's competence if they chose to write their own filesystem-based storage layer over leveraging an existing solution like an RDBMS , * particularly * for simple applications that do n't have special performance or data storage/querying requirements .</tokentext>
<sentencetext>While an SQL database can do this, so can many other database types.
There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships).
I'd sooner do this with a B-tree based filesystem.
And how many B-tree-based filesystems are out there?
And how portable are their implementations?
Yeah, exactly.
Even for straight, non-relational, tabular data, the fact is, your average RDBMS provides decent performance and portability while allowing you to leverage the facilities your pet language of choice offers for database integration.
That's a *huge* win, particularly if your alternative is to roll your own custom storage solution (as you suggest).
Frankly, I'd seriously question someone's competence if they chose to write their own filesystem-based storage layer over leveraging an existing solution like an RDBMS, *particularly* for simple applications that don't have special performance or data storage/querying requirements.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024</id>
	<title>Re:bad design</title>
	<author>Anonymous</author>
	<datestamp>1257865740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><blockquote><div><p> <i>The real issue I have with the NoSQL people is they're a bunch of whiny babies, who haven't even taken the time to understand the problem before lashing out at the first thing they see.</i></p></div> </blockquote><p>Yeah, some NoSQL advocates are certainly like that, but do you know what's even worse?  The whiny ACID-babies who haven't taken the time to understand the problems that most NoSQL systems are really trying to solve, and are just deathly afraid of having to learn something besides their comfortable RDBMS-tweaking ways.  Things really do get a bit different when you have petabytes spread across multiple sites, where even millions spent on Oracle licenses won't really yield a solution.  A lot of this work is based on Eric Brewer's <a href="http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf" title="berkeley.edu" rel="nofollow">CAP Theorem</a> [berkeley.edu], which was presented as a keynote at PODC in 2000 and has since been formally proven.  How's that for a bunch of folks who didn't know what's already out there?  Brewer's work was in turn informed by Lamport's (e.g. vector clocks and eventual consistency), who in turn built on others going back at least as far as Codd and the relational model.  The simple <b>fact</b> is that you can't have all of C/A/P, some people legitimately value A/P more than C, and C (consistency) in this context includes the I (isolation) of which you seem so enamored.  <a href="http://cacm.acm.org/blogs/blog-cacm/50678-the-nosql-discussion-has-nothing-to-do-with-sql/fulltext" title="acm.org" rel="nofollow">Stonebraker</a> [acm.org] already made the point that this has nothing to do with SQL, and much better than you, but it does have to do with ACID and ACID is simply irreconcilable with some needs.
Raising facile objections to the name is a poor substitute for tackling the real issues.</p><p>I've written about the <a href="http://pl.atyp.us/wordpress/?p=2368" title="pl.atyp.us" rel="nofollow">cargo cult mentality</a> [pl.atyp.us] myself, even in this particular context, anticipating your remarks by more than a month.  Someone here has indeed not taken time to understand the problem before lashing out: <b>you</b>.  Please get over the puerile attitude that different knowledge must be inferior knowledge, and educate yourself a little.</p>
	</htmltext>
<tokenext>The real issue I have with the NoSQL people is they 're a bunch of whiny babies , who have n't even taken the time to understand the problem before lashing out at the first thing they see .
Yeah , some NoSQL advocates are certainly like that , but do you know what 's even worse ?
The whiny ACID-babies who have n't taken the time to understand the problems that most NoSQL systems are really trying to solve , and are just deathly afraid of having to learn something besides their comfortable RDBMS-tweaking ways .
Things really do get a bit different when you have petabytes spread across multiple sites , where even millions spent on Oracle licenses wo n't really yield a solution .
A lot of this work is based on Eric Brewer 's CAP Theorem [ berkeley.edu ] , which was presented as a keynote at PODC in 2000 and has since been formally proven .
How 's that for a bunch of folks who did n't know what 's already out there ?
Brewer 's work was in turn informed by Lamport 's ( e.g. vector clocks and eventual consistency ) , who in turn built on others going back at least as far as Codd and the relational model .
The simple fact is that you ca n't have all of C/A/P , some people legitimately value A/P more than C , and C ( consistency ) in this context includes the I ( isolation ) of which you seem so enamored .
Stonebraker [ acm.org ] already made the point that this has nothing to do with SQL , and much better than you , but it does have to do with ACID and ACID is simply irreconcilable with some needs .
Raising facile objections to the name is a poor substitute for tackling the real issues .
I 've written about the cargo cult mentality [ pl.atyp.us ] myself , even in this particular context , anticipating your remarks by more than a month .
Someone here has indeed not taken time to understand the problem before lashing out : you .
Please get over the puerile attitude that different knowledge must be inferior knowledge , and educate yourself a little .</tokentext>
<sentencetext> The real issue I have with the NoSQL people is they're a bunch of whiny babies, who haven't even taken the time to understand the problem before lashing out at the first thing they see.
Yeah, some NoSQL advocates are certainly like that, but do you know what's even worse?
The whiny ACID-babies who haven't taken the time to understand the problems that most NoSQL systems are really trying to solve, and are just deathly afraid of having to learn something besides their comfortable RDBMS-tweaking ways.
Things really do get a bit different when you have petabytes spread across multiple sites, where even millions spent on Oracle licenses won't really yield a solution.
A lot of this work is based on Eric Brewer's CAP Theorem [berkeley.edu], which was presented as a keynote at PODC in 2000 and has since been formally proven.
How's that for a bunch of folks who didn't know what's already out there?
Brewer's work was in turn informed by Lamport's (e.g. vector clocks and eventual consistency), who in turn built on others going back at least as far as Codd and the relational model.
The simple fact is that you can't have all of C/A/P, some people legitimately value A/P more than C, and C (consistency) in this context includes the I (isolation) of which you seem so enamored.
Stonebraker [acm.org] already made the point that this has nothing to do with SQL, and much better than you, but it does have to do with ACID and ACID is simply irreconcilable with some needs.
Raising facile objections to the name is a poor substitute for tackling the real issues.
I've written about the cargo cult mentality [pl.atyp.us] myself, even in this particular context, anticipating your remarks by more than a month.
Someone here has indeed not taken time to understand the problem before lashing out: you.
Please get over the puerile attitude that different knowledge must be inferior knowledge, and educate yourself a little.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043742</id>
	<title>Re:TFA is bullshit</title>
	<author>sohp</author>
	<datestamp>1257852300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Object databases could be a nice idea, but not for performance or scaling reasons.</p></div><p>[citation needed]</p><p>You're just talking out of your ass.</p>
	</htmltext>
<tokenext>Object databases could be a nice idea , but not for performance or scaling reasons .
[ citation needed ]
You 're just talking out of your ass .</tokentext>
<sentencetext>Object databases could be a nice idea, but not for performance or scaling reasons.
[citation needed]
You're just talking out of your ass.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043324</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044920</id>
	<title>MySQL Sucks</title>
	<author>democritus</author>
	<datestamp>1257865200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Actually, the real problem is that MySQL sucks. Sure, you can patch over some of its suck with Memcache, but at some point you're still stuck waiting 30 seconds for a query to return, no matter how optimized you make it. Yes, it's trivial to get Oracle and MSSQL to scale to billions of rows, but those cost money no one is willing to spend. NoSQL is wonderful in that it scales easily and is free.</p><p>Sure, you have to denormalize your data, but you probably already were, to try to squeeze the last bit of performance out of MySQL.</p><p>You want people to use RDBMS? Make a free one that doesn't suck donkey balls and they will.</p></htmltext>
<tokenext>Actually , the real problem is that MySQL sucks .
Sure , you can patch over some of its suck with Memcache , but at some point you 're still stuck waiting 30 seconds for a query to return , no matter how optimized you make it .
Yes , it 's trivial to get Oracle and MSSQL to scale to billions of rows , but those cost money no one is willing to spend .
NoSQL is wonderful in that it scales easily and is free .
Sure , you have to denormalize your data , but you probably already were , to try to squeeze the last bit of performance out of MySQL .
You want people to use RDBMS ?
Make a free one that does n't suck donkey balls and they will .</tokentext>
<sentencetext>Actually, the real problem is that MySQL sucks.
Sure, you can patch over some of its suck with Memcache, but at somepoint your still stuck waiting 30 seconds for a query to return, no matter how optimized you make it.
Yes, it's trivial to get Oracle and MSSQL to scale to billions of rows, but those cost money no one is willing to spend.
NoSQL is wonderful in that it scales easily and is free.Sure, you have to denormalize your data, but you probably already were to try to squeeze the last bit of performance out of MySQL.You want people to use RDBMS?
Make a free one that doesn't suck donkey balls and they will.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055400</id>
	<title>Re:bad design</title>
	<author>thethibs</author>
	<datestamp>1257868260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Relational databases represent several decades of research into designing and building relational databases.</p><p>There are other kinds of databases, and problems to be solved that aren't easily modeled as tables. There's a reason for the agony over "object-relational impedance mismatch."  The common answer--to build object models that are restricted to looking like relations (findByXXX is the smoking gun)--gives up much of the power of objects.</p><p>Relational databases solve problems that look like relations. That's a subset of all the problems to be solved--and it's shrinking.</p></htmltext>
<tokenext>Relational databases represent several decades of research into designing and building relational databases.There are other kinds of databases , and problems to be solved that are n't easily modeled as tables .
There 's a reason for the agony over " object-relational impedance mismatch .
" The common answer--to build object models that are restricted to looking like relations ( findByXXX is the smoking gun ) gives up much of the power of objects.Relational databases solve problems that look like relations .
That 's a subset of all the problems to be solved--and it 's shrinking .</tokentext>
<sentencetext>Relational databases represent several decades of research into designing and building relational databases.There are other kinds of databases, and problems to be solved that aren't easily modeled as tables.
There's a reason for the agony over "object-relational impedance mismatch.
"  The common answer--to build object models that are restricted to looking like relations (findByXXX is the smoking gun) gives up much of the power of objects.Relational databases solve problems that look like relations.
That's a subset of all the problems to be solved--and it's shrinking.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043324</id>
	<title>TFA is bullshit</title>
	<author>WarwickRyan</author>
	<datestamp>1257846000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've seen OLAP systems in the 100TB range which work fantastically well on Oracle.</p><p>Object databases could be a nice idea, but not for performance or scaling reasons.  An object-oriented database would be beneficial as a way to sidestep ORM.  So you can, effortlessly and <b>without any significant amount of extra work</b>, persist the state of your objects.</p><p>Then you can build POxOs to represent your objects and just implement a few lines of code to have them persisted.</p><p>Not sure if anything like that already exists.  I certainly don't know of anything in the C# world, but I expect there's some funkily named Java project which does it.</p></htmltext>
<tokenext>I 've seen OLAP systems in the 100TB range which work fantastically well on Oracle.Object databases could be a nice idea , but not for performance or scaling reasons .
An object oriented database would be beneficial as a method to sidestep ORM .
So you can , effortlessly and without any significant amount extra work persist the state of your objects.Then you can build POxOs to represent your objects and just implement a few lines of code to have them persisted.Not sure if anything like that already exists .
I certainly do n't know of anything in the C # world , but I expect there 's some funky named java project which does it .</tokentext>
<sentencetext>I've seen OLAP systems in the 100TB range which work fantastically well on Oracle.Object databases could be a nice idea, but not for performance or scaling reasons.
An object oriented database would be beneficial as a method to sidestep ORM.
So you can, effortlessly and without any significant amount extra work persist the state of your objects.Then you can build POxOs to represent your objects and just implement a few lines of code to have them persisted.Not sure if anything like that already exists.
I certainly don't know of anything in the C# world, but I expect there's some funky named java project which does it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044472</id>
	<title>Re:bad design</title>
	<author>BitZtream</author>
	<datestamp>1257861660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And you have once again shown that they are likely doing it wrong.</p><p>According to their own page, they have 300 million 'active' users, 50% of whom log on each day, so 150 million users log in each day.  Nowhere near that many are going to 'chat' each day.</p><p>So what you are saying is they get 3 users to each chat process if EVERY user that logs in that day chats at the same time, which doesn't happen.</p><p>Yes, they are doing it wrong.</p></htmltext>
<tokenext>And you have once again shown that they are likely doing it wrong.According to their own page , they have 300 million 'active ' users , 50 \ % log on each day , so 150 million users login each day .
No where near that are going to to 'chat ' each day.So what you are saying is they get 3 users to each chat process if EVERY user that logs in that day chats at the same time , which does n't happen.Yes , they are doing it wrong .</tokentext>
<sentencetext>And you have once again shown that they are likely doing it wrong.According to their own page, they have 300 million 'active' users, 50\% log on each day, so 150 million users login each day.
No where near that are going to to 'chat' each day.So what you are saying is they get 3 users to each chat process if EVERY user that logs in that day chats at the same time, which doesn't happen.Yes, they are doing it wrong.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046194</id>
	<title>Re:bad design</title>
	<author>kthejoker</author>
	<datestamp>1257871380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>No, the problem is that the "real issues" you are talking about are things that 99% of your typical DBAs will never see in their lifetime, because they work at a church or a pharmacy or a box factory.</p><p>It's great that Facebook and Google and eBay need map-reduce and Erlang and something more scalable than SQL Server Express or Berkeley DB. But they are the exception, not the rule. Excoriating people for pointing that out is, at best, irrelevant and at worst harmful to the idea of alternative data storage mechanisms.</p><p>I'm not picking on you directly; I see it as a larger symptom: that somehow, because SQL/RDBMS is not ideal for certain projects, it should be abandoned at all levels, sooner rather than later, even though there's 40+ years of RDBMS architecture manuals, best practices, knowledge bases, 3rd party apps, "SQL for Dummies", and so on to help the involuntary DBA succeed without having to figure out Cassandra.</p><p>I guess my concern is that a lot of small businesses and shops will see something like this, will think, "You know, our Access database sucks," and try to port themselves over to this, and guess what? The learning curve here is a lot steeper than SQL (the *academic* side of SQL-alternatives is just now getting into 3rd gear), the business case for it is pretty poor in most cases, and you'll end up with a lot of people wasting time trying to get Erlang processes going instead of just migrating to MySQL and carrying on. There's way too much "Rah, Rah, Death to SQL" being attached to these new things, and to me it seems overblown.</p><p>But you know, I'm optimistic. 5 years from now, it may be a different ball game altogether, and then we DBAs just have more things to learn and to do.</p></htmltext>
<tokenext>No , the problem is that the " real issues " you are talking about are things that 99 \ % of your typical DBAs will never see in their lifetime , because they work at a church or a pharmacy or a box factory.It 's great that Facebook and Google and eBay need map-reduce and Erlang and something more scalable than SQL Server Express or Berkeley DB .
But they are the exception , not the rule .
Excoriating people for pointing that out is , at best , irrelevant and at worst harmful to the idea of alternative data storage mechanisms.I 'm not picking on you directly , I see it as a larger symptom , that somehow because SQL/RBDMS is not ideal for certain projects , that it should be abandoned at all levels , sooner rather than later , even though there 's 40 + years of RDBMS architecture manuals , best practices , knowledge bases , 3rd party apps , " SQL for Dummies " , and so on to help the involuntary DBA succeed without having to figure out Cassandra.I guess my concern is that a lot of small businesses and shops will see something like this , will think , " You know , our Access database sucks , " and try to port themselves over to this , and guess what ?
The learning curve here is a lot steeper than SQL ( the * academic * side of SQL-alternatives is just now getting into 3rd gear ) , the business case for it is pretty poor in most cases , and you 'll end up with a lot of people wasting time trying to get Erlang processes going instead of just migrating to MySQL and keep on carrying on .
There 's way too much " Rah , Rah , Death to SQL " being attached to these new things , and to me it seems overblown.But you know , I 'm optimistic .
5 years from now , it may be a different ball game altogether , and then us DBAs just have more things to learn and to do .</tokentext>
<sentencetext>No, the problem is that the "real issues" you are talking about are things that 99\% of your typical DBAs will never see in their lifetime, because they work at a church or a pharmacy or a box factory.It's great that Facebook and Google and eBay need map-reduce and Erlang and something more scalable than SQL Server Express or Berkeley DB.
But they are the exception, not the rule.
Excoriating people for pointing that out is, at best, irrelevant and at worst harmful to the idea of alternative data storage mechanisms.I'm not picking on you directly, I see it as a larger symptom, that somehow because SQL/RBDMS is not ideal for certain projects, that it should be abandoned at all levels, sooner rather than later, even though there's 40+ years of RDBMS architecture manuals, best practices, knowledge bases, 3rd party apps, "SQL for Dummies", and so on to help the involuntary DBA succeed without having to figure out Cassandra.I guess my concern is that a lot of small businesses and shops will see something like this, will think, "You know, our Access database sucks," and try to port themselves over to this, and guess what?
The learning curve here is a lot steeper than SQL (the *academic* side of SQL-alternatives is just now getting into 3rd gear), the business case for it is pretty poor in most cases, and you'll end up with a lot of people wasting time trying to get Erlang processes going instead of just migrating to MySQL and keep on carrying on.
There's way too much "Rah, Rah, Death to SQL" being attached to these new things, and to me it seems overblown.But you know, I'm optimistic.
5 years from now, it may be a different ball game altogether, and then us DBAs just have more things to learn and to do.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049188</id>
	<title>Re:And I am missing it greatly on Linux</title>
	<author>Anonymous</author>
	<datestamp>1257881760000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I guarantee you the overhead of preparing queries is not the bottleneck.  But if you really need it, you can send SQLite's VDBE interpreter raw bytecode, and nothing prevents you from writing your own language that generates it.</p><p>The funny thing is, aside from having fairly verbose identifiers, SQL is actually pretty concise.  About the only practical thing that would significantly shrink the actual syntax elements would be some kind of join shorthand people would want to use (there's NATURAL JOIN, but it sucks because it's by name and not structural declarations like foreign keys).</p><p>Access isn't even particularly fast when it comes to ISAM since it goes through standard Windows filesystem APIs, which aren't particularly tuned for that sort of thing.</p></htmltext>
<tokenext>I guarantee you the overhead of preparing queries is not the bottleneck .
But if you really need it , you can send sqlite 's VDBE interpreter raw bytecode , and nothing prevents you from writing your own language that generates it.The funny thing is , aside from having fairly verbose identifiers , sql is actually pretty concise .
About the only practical thing that would significantly shrink the actual syntax elements would be some kind of join shorthand people would want to use ( there 's NATURAL JOIN but it sucks because it 's by name and not structural declarations like foreign keys ) Access is n't even particularly fast when it comes to ISAM since it goes through standard windows filesystem APIs , which are n't particularly tuned for that sort of thing .</tokentext>
<sentencetext>I guarantee you the overhead of preparing queries is not the bottleneck.
But if you really need it, you can send sqlite's VDBE interpreter raw bytecode, and nothing prevents you from writing your own language that generates it.The funny thing is, aside from having fairly verbose identifiers, sql is actually pretty concise.
About the only practical thing that would significantly shrink the actual syntax elements would be some kind of join shorthand people would want to use (there's NATURAL JOIN but it sucks because it's by name and not structural declarations like foreign keys)Access isn't even particularly fast when it comes to ISAM since it goes through standard windows filesystem APIs, which aren't particularly tuned for that sort of thing.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582</parent>
</comment>
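The prepared-query point in the comment above can be illustrated with a short sketch (not from the thread; all table and column names are made up for the example). Python's `sqlite3` module compiles a parameterized SQL string once and reuses the compiled statement internally when the same SQL text is executed again, so only the bound parameter changes between calls:

```python
import sqlite3

# Minimal sketch of reusing one parameterized statement in SQLite.
# The "posts" table and its contents are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO posts (body) VALUES (?)",
                 [("first",), ("second",)])

# Identical SQL text, different bound parameter each time: the module's
# internal statement cache avoids re-preparing the query on each call.
for post_id in (1, 2):
    row = conn.execute("SELECT body FROM posts WHERE id = ?",
                       (post_id,)).fetchone()
    print(row[0])
```

This is also why, as the commenter says, preparation overhead is rarely the bottleneck: the parse/compile step happens once, and subsequent executions pay only for the lookup itself.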
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045962</id>
	<title>NoSQL - good tech, bad name</title>
	<author>Dominican</author>
	<datestamp>1257870480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At the ACM site Michael Stonebraker wrote an article titled "The "NoSQL" Discussion has Nothing to Do With SQL" where he discusses how the NoSQL group is solving real problems, but using a name that, well, really has nothing to do with the problems getting solved.</p><p>http://cacm.acm.org/blogs/blog-cacm/50678-the-nosql-discussion-has-nothing-to-do-with-sql/fulltext</p><p>For anyone not familiar with Stonebraker:<br>http://en.wikipedia.org/wiki/Michael_Stonebraker</p><p>Great article from someone who truly knows what he is talking about.</p></htmltext>
<tokenext>At the ACM site Michael Stonebraker wrote an article titled " The " NoSQL " Discussion has Nothing to Do With SQL " where he discusses how the NoSQL group is solving real problems , but using a name.. that well.. really has nothing to do with the problems getting solved.http : //cacm.acm.org/blogs/blog-cacm/50678-the-nosql-discussion-has-nothing-to-do-with-sql/fulltextFor anyone not familiar with Stonebreaker..http : //en.wikipedia.org/wiki/Michael \ _StonebrakerGreat article from someone who truly knows what he is talking about .</tokentext>
<sentencetext>At the ACM site Michael Stonebraker wrote an article titled "The "NoSQL" Discussion has Nothing to Do With SQL" where he discusses how the NoSQL group is solving real problems, but using a name.. that well.. really has nothing to do with the problems getting solved.http://cacm.acm.org/blogs/blog-cacm/50678-the-nosql-discussion-has-nothing-to-do-with-sql/fulltextFor anyone not familiar with Stonebreaker..http://en.wikipedia.org/wiki/Michael\_StonebrakerGreat article from someone who truly knows what he is talking about.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042878</id>
	<title>You pick the DBMS that works for you</title>
	<author>mkairys</author>
	<datestamp>1257795900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Most RDBMS implementations on the web are generally only used to store data and perform very basic queries such as get and store operations. Personally I don't really see the issue of using one for web applications, since they are proven to work well and, with the right design and caching solution, are more than capable of handling a popular website such as Digg or Facebook. The only real issue with these sites is that to prevent bottlenecks you would generally need to throw more hardware at them than may be necessary (although memory is very cheap these days, so it's a non-issue for most companies).
<br> <br>

Memcached has been shown to really help solve many performance issues for relational databases, since the database won't constantly perform complex queries to grab data; it will just pull the result from a hashed index stored in memory. MemcachedDB <a href="http://memcachedb.org/memcachedb-guide-1.0.pdf" title="memcachedb.org" rel="nofollow">http://memcachedb.org/memcachedb-guide-1.0.pdf</a> [memcachedb.org] is looking very promising as a way to get rid of an RDBMS altogether for certain data such as user sessions, since it focuses on performance rather than functionality. Even then, I think it all really boils down to choosing the right tool for the job: if there's data that you know is going to be a performance bottleneck in the database, you look for more creative solutions to store and process that data. There's nothing stopping you from running two or more different types of databases for the task at hand.</htmltext>
<tokenext>Most RDBMS implementations on the web are generally only used to store data and perform very basic queries such as get and store operations .
Personally I do n't really see the issue of using one for a web applications since they are proven to work well and with the right design and caching solution are more than capable of handling a popular website such as Digg or Facebook .
The only real issue with these sites is to prevent bottlenecks you would generally need to throw more hardware at it than may be necessary ( although memory is very cheap these days so its a non-issue for most companies ) .
Memcached has shown to really help solve many performance issues for relational databases since the database wo n't constantly perform complex queries to grab data , it will just pull the result from a hashed index stored in memory .
MemcachedDB http : //memcachedb.org/memcachedb-guide-1.0.pdf [ memcachedb.org ] is looking very promising to use to get rid of a RDBMS all together for certain data such as user sessions since it focuses on performance rather than functionality .
Even then I think it all really boils down to choosing the right tool for the job , if there 's data that you know is going to be a performance bottleneck in the database , you look for more creative solutions to store and process that data .
There 's nothing stopping you from running two or more different types of databases for the task at hand .</tokentext>
<sentencetext>Most RDBMS implementations on the web are generally only used to store data and perform very basic queries such as get and store operations.
Personally I don't really see the issue of using one for a web applications since they are proven to work well and with the right design and caching solution are more than capable of handling a popular website such as Digg or Facebook.
The only real issue with these sites is to prevent bottlenecks you would generally need to throw more hardware at it than may be necessary (although memory is very cheap these days so its a non-issue for most companies).
Memcached has shown to really help solve many performance issues for relational databases since the database won't constantly perform complex queries to grab data, it will just pull the result from a hashed index stored in memory.
MemcachedDB http://memcachedb.org/memcachedb-guide-1.0.pdf [memcachedb.org] is looking very promising to use to get rid of a RDBMS all together for certain data such as user sessions since it focuses on performance rather than functionality.
Even then I think it all really boils down to choosing the right tool for the job, if there's data that you know is going to be a performance bottleneck in the database, you look for more creative solutions to store and process that data.
There's nothing stopping you from running two or more different types of databases for the task at hand.</sentencetext>
</comment>
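The caching pattern described in the comment above is the usual cache-aside scheme: check the in-memory cache first, and only on a miss run the real query and populate the cache. A minimal sketch (not from the thread), with a plain dict standing in for Memcached and a hypothetical `query_database` standing in for the expensive SQL call:

```python
# Cache-aside sketch: a dict plays the role of Memcached, and
# query_database is a made-up stand-in for an expensive SQL query.
cache = {}

def query_database(key):
    # Placeholder for the slow relational query.
    return f"row-for-{key}"

def get(key):
    # 1. Try the in-memory cache first.
    if key in cache:
        return cache[key]
    # 2. On a miss, hit the database and populate the cache.
    value = query_database(key)
    cache[key] = value
    return value

print(get("user:42"))  # first call: cache miss, hits the "database"
print(get("user:42"))  # second call: served straight from memory
```

Real deployments add expiry and invalidation on writes, which is exactly where the pattern gets hard; the sketch only shows the read path the comment describes.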
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043216</id>
	<title>SQL is NOT the Physical Storage or RDBMS engine</title>
	<author>Invisible Now</author>
	<datestamp>1257844200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Can we agree that SQL is a high-level language for capturing the set-theory query logic and is COMPLETELY INDEPENDENT of the engine and physical storage that actually generates the query plan and makes the heads fly to cache and return data?<br>
<br>
Structured<br>
Query<br>
Language<br>
<br>
not<br>
<br>
Stupid<br>
Quixotic<br>
Layout <br>
(Of tables, pages, indexes, drives, heads, spindles, SANs, etc...)<br>
<br>
Right?</htmltext>
<tokenext>Can we agree that SQL is a high level language for capturing the set theory query logic and is COMPLETELY INDEPENDENT of the engine and physical storage that actually generates the query plan and makes the heads fly to cache and return data ?
Structured Query Language not Stupid Quixotic Layout ( Of tables , pages , indexes , drives , heads,spindles , SANs , etc... ) Right ?</tokentext>
<sentencetext>Can we agree that SQL is a high level language for capturing the set theory query logic and is COMPLETELY INDEPENDENT of the engine and physical storage that actually generates the query plan and makes the heads fly to cache and return data?
Structured
Query
Language

not

Stupid
Quixotic
Layout 
(Of tables, pages, indexes, drives, heads,spindles, SANs, etc...)

Right?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053690</id>
	<title>Re:Dynamic Relational: change it, DON'T toss it</title>
	<author>clockwise_music</author>
	<datestamp>1257858060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>sco08y is completely correct. Everyone thinks of this once until someone points out that it's a bad idea.</htmltext>
<tokenext>sco08y is completely correct .
Everyone thinks of this once until someone points out that it 's a bad idea .</tokentext>
<sentencetext>sco08y is completely correct.
Everyone thinks of this once until someone points out that it's a bad idea.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042644</id>
	<title>Re:hmm</title>
	<author>KalvinB</author>
	<datestamp>1257792480000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>For the vast majority of use cases, large data sets can be made logically small with indexes or physically small with hashes.</p><p>If you're dealing with massive data you're probably not dealing with complex relationships.  E-mail servers associate data with only one index: the e-mail address.  Google only associates content with keywords.  E-mail servers logically and physically separate email folders.  Google logically and physically separates the datasets for various keywords.  So by the time you hit it, it knows instantly where to look for what you want.  You don't have a whole complex system of relationships between the data.  It looks at the keywords, finds the predetermined results for each and combines the results.</p></htmltext>
<tokenext>For the vast majority of use cases , large data sets can be made logically small with indexes or physically small with hashes.If you 're dealing with massive data you 're probably not dealing with complex relationships .
E-Mail servers associate data with only one index : the e-mail address .
Google only associates content with keywords .
E-mail servers logically and physically separate email folders .
Google logically and physically separates the datasets for various keywords .
So by the time you hit it , it knows instantly where to look for what you want .
You do n't have a whole complex system of relationships between the data .
It looks at the keywords , finds the predetermined results for each and combines the results .</tokentext>
<sentencetext>For the vast majority of use cases, large data sets can be made logically small with indexes or physically small with hashes.If you're dealing with massive data you're probably not dealing with complex relationships.
E-Mail servers associate data with only one index: the e-mail address.
Google only associates content with keywords.
E-mail servers logically and physically separate email folders.
Google logically and physically separates the datasets for various keywords.
So by the time you hit it, it knows instantly where to look for what you want.
You don't have a whole complex system of relationships between the data.
It looks at the keywords , finds the predetermined results for each and combines the results.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514</parent>
</comment>
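The comment above argues that data with a single index key (an e-mail address, a keyword) can be physically partitioned so a lookup touches only one small shard. A minimal sketch of that hash-partitioning idea (not from the thread; shard count and key names are illustrative):

```python
import hashlib

# Hash-partitioning sketch: route each e-mail address to one of
# NUM_SHARDS small partitions by hashing the sole index key.
NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # Deterministic hash of the key picks the shard.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def store(address: str, message: str) -> None:
    shards[shard_for(address)].setdefault(address, []).append(message)

def fetch(address: str) -> list:
    # Exactly one shard is consulted; the others are never touched.
    return shards[shard_for(address)].get(address, [])

store("alice@example.com", "hello")
print(fetch("alice@example.com"))  # → ['hello']
```

Because every lookup is answered from a single partition, each shard stays "physically small" even as the total dataset grows, which is the commenter's point about e-mail servers and keyword indexes.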
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045682</id>
	<title>Re:MySQL Sucks</title>
	<author>TheSunborn</author>
	<datestamp>1257869040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The problem is not the database size as such; it is the requirement to do 50,000 inserts/updates and 200,000 selects (reads) each second that kills the current SQL implementations.</p></htmltext>
<tokenext>The problem is not the database size as such , it is the requirements to do 50000 inserts/updates and 200000 select ( Read ) each second that kill the current sql implementations .</tokentext>
<sentencetext>The problem is not the database size as such, it is the requirements to do 50000 inserts/updates and 200000 select(Read) each second that kill the current sql implementations.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044920</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051368</id>
	<title>Re:One big problem with SQL is ...</title>
	<author>shutdown -p now</author>
	<datestamp>1257847380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task. I see so many databases where there is little or no hint of any relationships even being involved. Some forums, for example, store postings in a database where the message content is a blob and it is indexed by a number. To get a post, look by number. While an SQL database can do this, so can many other database types. There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships). I'd sooner do this with a B-tree based filesystem.</p></div><p>It's not any different than using XML as a default format for any data that can be more or less reasonably shoved into it.</p><p>And I don't think that it's really a problem. Yes, you do not normally use a screwdriver to hammer nails, you use a hammer. But what if the screwdriver is in fact convenient enough for that, and standardized to boot, while hammers are all different, and everyone is prone to making their own?</p><p>Using stock solutions - even if they do 3x more than you actually need - is often a lot of time saved for negligible (in the big scheme of things) performance losses. Hence XML. Hence SQL.</p>
	</htmltext>
<tokenext>... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task .
I see so many databases where there is little or no hint of any relationships even being involved .
Some forums , for example , store postings in a database where the message content is a blob and it is indexed by a number .
To get a post , look by number .
While an SQL database can do this , so can many other database types .
There 's no complex relational searching with this ; it 's just basic indexing ( with maybe a tree of index relationships ) .
I 'd sooner do this with a B-tree based filesystem.It 's not any different than using XML as a default format for any data that can be more or less reasonably shoved into it.And I do n't think that it 's really a problem .
Yes , you do not normally use a screwdriver to hammer nails , you use a hammer .
But what if screwdriver is in fact convenient enough for that , and standardized to boot , while hammers are all different , and everyone is prone to making their own ? Using stock solutions - even if they do 3x more than you actually need to do - is often a lot of time saved for negligible ( in the big scheme of things ) performance losses .
Hence XML .
Hence SQL .</tokentext>
<sentencetext> ... that too many developers and integrators will just use an SQL database by default without considering whether or not it is appropriate for the task.
I see so many databases where there is little or no hint of any relationships even being involved.
Some forums, for example, store postings in a database where the message content is a blob and it is indexed by a number.
To get a post, look by number.
While an SQL database can do this, so can many other database types.
There's no complex relational searching with this; it's just basic indexing (with maybe a tree of index relationships).
I'd sooner do this with a B-tree based filesystem.It's not any different than using XML as a default format for any data that can be more or less reasonably shoved into it.And I don't think that it's really a problem.
Yes, you do not normally use a screwdriver to hammer nails, you use a hammer.
But what if screwdriver is in fact convenient enough for that, and standardized to boot, while hammers are all different, and everyone is prone to making their own?
Using stock solutions - even if they do 3x more than you actually need to do - is often a lot of time saved for negligible (in the big scheme of things) performance losses.
Hence XML.
Hence SQL.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043160</id>
	<title>Re:bad design</title>
	<author>oldhack</author>
	<datestamp>1257886560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>To reverse the polarity when the flux capacitor is overloaded.</htmltext>
<tokenext>To reverse the polarity when the flux capacitor is overloaded .</tokentext>
<sentencetext>To reverse the polarity when the flux capacitor is overloaded.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043296</id>
	<title>Re:Why worry?</title>
	<author>rainhill</author>
	<datestamp>1257845640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Seriously... why the +5 funny?</p></htmltext>
<tokenext>Seriously... why the + 5 funny ?</tokentext>
<sentencetext>Seriously... why the +5 funny?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046746</id>
	<title>IMS Fastpath</title>
	<author>phulax</author>
	<datestamp>1257873540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The world relies on IMS</htmltext>
<tokenext>The world relies on IMS</tokentext>
<sentencetext>The world relies on IMS</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798</id>
	<title>Re:bad design</title>
	<author>kestasjk</author>
	<datestamp>1257794640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>They use bloom filters for messaging? What for?</htmltext>
<tokenext>They use bloom filters for messaging ?
What for ?</tokentext>
<sentencetext>They use bloom filters for messaging?
What for?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30065426
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043240
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30050496
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045454
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042674
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042982
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044308
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043000
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043742
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043324
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046780
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044470
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051426
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046194
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044452
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055724
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046550
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043006
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044260
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049662
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048846
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042822
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30060290
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053690
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047292
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042924
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042942
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045682
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044920
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055156
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044886
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047546
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044348
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048370
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043874
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042644
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044466
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055400
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30062518
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30058150
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053302
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042526
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042786
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042774
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044264
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042562
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051368
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046286
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30059152
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049188
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043296
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043502
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044472
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043160
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049716
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044436
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044304
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30052092
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044570
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_11_09_2335214_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047008
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042800
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042772
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042564
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042822
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048846
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044304
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042656
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043158
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044744
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047292
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051368
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042546
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30048370
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044570
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044202
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046780
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055724
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053690
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042774
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042982
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045962
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042562
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044264
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044022
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043324
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043742
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044308
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044502
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042488
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043296
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043006
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043582
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044348
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047546
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049188
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043502
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044470
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042548
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30053302
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044886
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042514
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042674
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045454
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042644
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043874
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044106
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_11_09_2335214.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042510
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042526
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042600
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046550
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055156
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044466
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042560
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043000
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043718
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049662
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30058150
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044436
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30049716
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045024
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046194
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30050496
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044452
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30047008
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30062518
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044920
----http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30045682
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30055400
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30065426
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044260
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30046286
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042924
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042786
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043072
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30059152
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30051426
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043240
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30060290
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30044472
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042798
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30043160
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30042942
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_11_09_2335214.30052092
</commentlist>
</conversation>
