<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_10_20_1733228</id>
	<title>How Do You Manage Dev/Test/Production Environments?</title>
	<author>timothy</author>
	<datestamp>1256061480000</datestamp>
	<htmltext>An anonymous reader writes <i>"I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal). I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com). Developers also get FTP access and MySQL access (through phpMyAdmin). Additional scripts check in files to the repository and move files/DBs through the different environments. I'm finding as our company grows (we currently host 50+ sites) it is cumbersome to manage all sites by hacking away at the command prompt. I would like to find a solution with a relatively easy-to-use user interface that provisions dev/test/live environments. The <a href="http://groups.drupal.org/aegir-hosting-system">Aegir</a> project is a close fit, but is only for Drupal sites and still under heavy development. Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"</i></htmltext>
<tokentext>An anonymous reader writes " I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers ( mostly Drupal ) .
I 've written a few scripts that check out dev/test/production environments from our repository , so web developers can access the site they 're working on from a URL ( ex : site1.developer.example.com ) .
Developers also get FTP access and MySQL access ( through phpMyAdmin ) .
Additional scripts check in files to the repository and move files/DBs through the different environments .
I 'm finding as our company grows ( we currently host 50 + sites ) it is cumbersome to manage all sites by hacking away at the command prompt .
I would like to find a solution with a relatively easy-to-use user interface that provisions dev/test/live environments .
The Aegir project is a close fit , but is only for Drupal sites and still under heavy development .
Another option is to completely rewrite the scripts ( or hire someone to do it for me ) , but I would much rather use something OSS so I can give back to the community .
How have fellow slashdotters managed this process , what systems/scripts have you used , and what advice do you have ?
"</tokentext>
<sentencetext>An anonymous reader writes "I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal).
I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com).
Developers also get FTP access and MySQL access (through phpMyAdmin).
Additional scripts check in files to the repository and move files/DBs through the different environments.
I'm finding as our company grows (we currently host 50+ sites) it is cumbersome to manage all sites by hacking away at the command prompt.
I would like to find a solution with a relatively easy-to-use user interface that provisions dev/test/live environments.
The Aegir project is a close fit, but is only for Drupal sites and still under heavy development.
Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community.
How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?
"</sentencetext>
</article>
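The per-site checkout scripts the submitter describes can be sketched roughly as below. This is a guess at the shape of such a script, not the submitter's actual code: the repository URL and docroot layout are invented, and only the URL scheme (site1.developer.example.com) comes from the article.

```shell
#!/bin/sh
# Rough sketch of a per-site provisioning script like the submitter describes.
# Repo URL and docroot layout are hypothetical.
site_url() { printf '%s.%s.example.com' "$1" "$2"; }    # args: site, subdomain
checkout_path() { printf '/var/www/%s/%s' "$2" "$1"; }  # args: site, env

# The actual provisioning step would then be something like (needs a real repo):
#   svn checkout "http://svn.example.com/$1/trunk" "$(checkout_path "$1" dev)"
```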
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811695</id>
	<title>puppet</title>
	<author>Anonymous</author>
	<datestamp>1256065800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not sure if this is what you're looking for, but a common solution is a configuration management tool.</p><p>Try Puppet: http://reductivelabs.com/products/puppet<br>It's simple, fast, and written in Ruby.</p></htmltext>
<tokentext>Not sure if this is what you 're looking for , but a common solution is a configuration management tool.Try puppet http : //reductivelabs.com/products/puppetIt 's simple , fast , and written in ruby</tokentext>
<sentencetext>Not sure if this is what you're looking for, but a common solution is a configuration management tool. Try puppet: http://reductivelabs.com/products/puppet. It's simple, fast, and written in ruby</sentencetext>
</comment>
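For context on the suggestion above: a Puppet manifest of that era is just declared resources. A minimal sketch, assuming a Debian-style LAMP box (package and service names are assumptions, not from the comment):

```puppet
# Minimal sketch: keep Apache installed and running on every managed node.
package { "apache2":
  ensure => installed,
}
service { "apache2":
  ensure  => running,
  require => Package["apache2"],
}
```

The same pattern extends to vhost files, PHP packages, and MySQL, which is what makes it attractive for a fleet of 50+ similar LAMP sites.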
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820303</id>
	<title>Re:SVN etc.</title>
	<author>Anonymous</author>
	<datestamp>1256157120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p># Deployment to live servers via SVN checkout when the time comes</p></div><p>Wait, <strong>checkout</strong>, are you serious? Are the .svn folders on the server? If they are, make sure that the webserver does not serve the files, because otherwise everyone can see your source code!</p></htmltext>
<tokentext># Deployment to live servers via SVN checkout when the time comesWait , checkout , are you serious ?
Are the .svn folders on the server ?
If they are make sure that the webserver does not serve the files , because otherwise everyone can see your source code !</tokentext>
<sentencetext># Deployment to live servers via SVN checkout when the time comes. Wait, checkout, are you serious?
Are the .svn folders on the server?
If they are, make sure that the webserver does not serve the files, because otherwise everyone can see your source code!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839</parent>
</comment>
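The usual fixes for the problem this comment raises are either to deploy with `svn export` (which writes no `.svn` metadata at all) or to block the metadata at the web server. A sketch of the latter in Apache 2.2-era syntax, which matches the period of this thread:

```apache
# Refuse to serve Subversion metadata directories anywhere under the docroot.
<DirectoryMatch "\.svn">
    Order allow,deny
    Deny from all
</DirectoryMatch>
```

This goes in the server or vhost config; `.htaccess` variants exist but blocking it globally is harder to forget.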
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812963</id>
	<title>If it's anything like where I work ...</title>
	<author>Anonymous</author>
	<datestamp>1256070480000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The Dev environment has better hardware than the Production environment.  God forbid Dev and Test actually have the same specs so, you know, you can TEST ON IT!</p><p>Naming of servers ends in "dev" for Development, "tst" for Test, and "prd" for Prod, except for where they end in "d", "t", "p", "pg", "02"...  Unless it's SQL Server, in which case it's ending in "Dev", "Tst", "Prod" ... mostly.</p><p>All three of them have different names for Roles granting the same access to the same tables.  And I have to fill out authorization forms before the databases are built, so good luck on schema names.  And we're in a project where the technical spec was just rewritten yesterday, and process testing (that thing after dev) was supposed to finish 2 weeks from now.  We had to start writing test cases two weeks ago, before I had access to the database, and I might as well throw a good chunk of the work away.</p></htmltext>
<tokentext>The Dev environment has better hardware than the Production environment .
God forbid Dev and Test actually have the same specs so , you know , you can TEST ON IT ! Naming of servers end in " dev " for Development , " tst " for Test , and " prd " for Prod , except for where they end in " d " , " t " , " p " , " pg " , " 02 " ... Unless it 's SQL Server , in which case it 's ending in " Dev " , " Tst " , " Prod " ... mostly.All three of them have different names for Roles granting the same access to the same tables .
And I have to fill out authorization forms before the databases are built , so good luck on schema names .
And we 're in a project where the technical spec was just rewritten yesterday , and process testing ( that thing after dev ) was supposed to finish 2 weeks from now .
We had to start writing test cases two weeks ago , before I had access to the database , and I might as well throw a good chunk of the work away .</tokentext>
<sentencetext>The Dev environment has better hardware than the Production environment.
God forbid Dev and Test actually have the same specs so, you know, you can TEST ON IT! Naming of servers ends in "dev" for Development, "tst" for Test, and "prd" for Prod, except for where they end in "d", "t", "p", "pg", "02"...  Unless it's SQL Server, in which case it's ending in "Dev", "Tst", "Prod" ... mostly. All three of them have different names for Roles granting the same access to the same tables.
And I have to fill out authorization forms before the databases are built, so good luck on schema names.
And we're in a project where the technical spec was just rewritten yesterday, and process testing (that thing after dev) was supposed to finish 2 weeks from now.
We had to start writing test cases two weeks ago, before I had access to the database, and I might as well throw a good chunk of the work away.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812399</id>
	<title>Re:Most important thing in my book</title>
	<author>mcrbids</author>
	<datestamp>1256068200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Dang. Out of mod points, so I'll reply.</p><p>Parent covers an EXCELLENT point. We've gone to great lengths to replicate data from production to test/dev modes. We have scripts set up so that in just a few commands, we can replicate data from production to test/dev, and that do data checks to make sure that something stupid isn't done. (EG: copying a customer's data from test -&gt; production and wiping out current data with something 2 weeks old, etc)</p><p>In our case, each customer has their own database, and their own set of files. A single command sends it all, EG:</p><p><div class="quote"><p>production$ senddata.sh customer3 testserver;</p> </div><p>And that sends all the data for "customer3" to the test server, to a temp folder where it can be loaded as needed. This last bit is important, because often, when testing data, you screw things up and need to "start fresh" without having to wait another hour for the data to re-replicate over rsync. In order to keep things fast, all customers' data gets sent over to the test server nightly, and to the dev server weekly. (a la cron) By keeping the data off-site fresh, it takes some 8-12 hours to get all of our customers by rsync at night.</p><p><div class="quote"><p>testserver$ loaddata customer3;</p> </div><p>That loads the data for customer3 from the temp directory into the test server. We have similar interfaces for publishing scripts from our dev server to test and production servers. We do something similar for backups, which are off-site to a separate location, behind a strict firewall, mirrors across multiple drives. (no, not RAID, 3 actual separate copies on separate disks) We back up our entire SVN repo, all scripts, all databases, and all files for all customers offsite nightly.</p><p>We have our test environment virtually identical to our production, only with fewer servers in the cluster. 
In this way, we have a "hot fail" server that has recent data at all times, and enough performance to do a meaningful job if we should somehow lose our primary production cluster.</p><p>All 4 environments would have to be compromised before we lose meaningful amounts of data. We have a tested and continuously verified D/R server that doubles as our test environment. We use SVN in our dev environment so that we can all work together smoothly.</p><p>All with virtually zero administration overhead after setup. It's amazing what you can do with bash, cron and a few PHP/perl scripts!</p>
	</htmltext>
<tokentext>Dang .
Out of mod points , so I 'll reply.Parent covers an EXCELLENT point .
We 've gone to great lengths to replicate data from production to test/dev modes .
We have scripts set up so that in just a few commands , we can replicate data from production to test/dev , and that do data checks to make sure that something stupid is n't done .
( EG : copying a customer 's data from test - &gt; production and wiping out current data with something 2 weeks old , etc ) In our case , each customer has their own database , and their own set of files .
A single command sends it all , EG : production $ senddata.sh customer3 testserver ; And that sends all the data for " customer3 " to the test server , to a temp folder where it can be loaded as needed .
This last bit is important , because often , when testing data , you screw things up and need to " start fresh " without having to wait another hour for the data to re-replicate over rsync .
In order to keep things fast , all customers ' data gets sent over to the test server nightly , and to the dev server weekly .
( a la cron ) By keeping the data off-site fresh , it takes some 8-12 hours to get all of our customers by rsync at night.testserver $ loaddata customer3 ; That loads the data for customer3 from the temp directory into the test server .
We have similar interfaces for publishing scripts from our dev server to test and production servers .
We do something similar for backups , which are off-site to a separate location , behind a strict firewall , mirrors across multiple drives .
( no , not RAID , 3 actual separate copies on separate disks ) We back up our entire SVN repo , all scripts , all databases , and all files for all customers offsite nightly.We have our test environment virtually identical to our production , only with fewer servers in the cluster .
In this way , we have a " hot fail " server that has recent data at all times , and enough performance to do a meaningful job if we should somehow lose our primary production cluster.All 4 environments would have to be compromised before we lose meaningful amounts of data .
We have a tested and continuously verified D/R server that doubles as our test environment .
We use SVN in our dev environment so that we can all work together smoothly.All with virtually zero administration overhead after setup .
It 's amazing what you can do with bash , cron and a few PHP/perl scripts !</tokentext>
<sentencetext>Dang.
Out of mod points, so I'll reply. Parent covers an EXCELLENT point.
We've gone to great lengths to replicate data from production to test/dev modes.
We have scripts set up so that in just a few commands, we can replicate data from production to test/dev, and that do data checks to make sure that something stupid isn't done.
(EG: copying a customer's data from test -&gt; production and wiping out current data with something 2 weeks old, etc) In our case, each customer has their own database, and their own set of files.
A single command sends it all, EG: production$ senddata.sh customer3 testserver; And that sends all the data for "customer3" to the test server, to a temp folder where it can be loaded as needed.
This last bit is important, because often, when testing data, you screw things up and need to "start fresh" without having to wait another hour for the data to re-replicate over rsync.
In order to keep things fast, all customers' data gets sent over to the test server nightly, and to the dev server weekly.
(a la cron) By keeping the data off-site fresh, it takes some 8-12 hours to get all of our customers by rsync at night. testserver$ loaddata customer3; That loads the data for customer3 from the temp directory into the test server.
We have similar interfaces for publishing scripts from our dev server to test and production servers.
We do something similar for backups, which are off-site to a separate location, behind a strict firewall, mirrors across multiple drives.
(no, not RAID, 3 actual separate copies on separate disks) We back up our entire SVN repo, all scripts, all databases, and all files for all customers offsite nightly. We have our test environment virtually identical to our production, only with fewer servers in the cluster.
In this way, we have a "hot fail" server that has recent data at all times, and enough performance to do a meaningful job if we should somehow lose our primary production cluster. All 4 environments would have to be compromised before we lose meaningful amounts of data.
We have a tested and continuously verified D/R server that doubles as our test environment.
We use SVN in our dev environment so that we can all work together smoothly. All with virtually zero administration overhead after setup.
It's amazing what you can do with bash, cron and a few PHP/perl scripts!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
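The data check mcrbids describes (refusing to copy stale data "backwards" into production) can be sketched as a small shell function. The host names and the `senddata.sh` interface are the commenter's; the implementation here is an invented illustration:

```shell
#!/bin/sh
# Sketch of the sanity check the comment describes: data may flow out of
# production, but nothing is allowed to overwrite production.
check_direction() {
  src="$1" dst="$2"
  case "$src:$dst" in
    production:*) echo ok ;;        # production -> test/dev is fine
    *:production) echo refused ;;   # never clobber production data
    *)            echo ok ;;        # test <-> dev is fine
  esac
}

# senddata.sh would run this before rsyncing, e.g.:
#   [ "$(check_direction "$SRC" "$DST")" = ok ] || exit 1
```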
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813753</id>
	<title>Re:Hilarity</title>
	<author>plague3106</author>
	<datestamp>1256030580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Um, the OSS community IS a free labor force.</p></htmltext>
<tokentext>Um , the OSS community IS a free labor force .</tokentext>
<sentencetext>Um, the OSS community IS a free labor force.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811973</id>
	<title>Tools, Practices and Standards</title>
	<author>HogGeek</author>
	<datestamp>1256066640000</datestamp>
	<modclass>Informative</modclass>
	<modscore>5</modscore>
	<htmltext><p>We utilize a number of tools depending on the site, but generally:</p><p>Version Control (Subversion) for management of the code base (PHP, CSS, HTML, Ruby, PERL,...) - <a href="http://subversion.tigris.org/" title="tigris.org">http://subversion.tigris.org/</a> [tigris.org]<br>BCFG2 for management of the system(s) patches and configurations (Uses svn for managing the files) - <a href="http://trac.mcs.anl.gov/projects/bcfg2" title="anl.gov">http://trac.mcs.anl.gov/projects/bcfg2</a> [anl.gov]<br>Capistrano/Webistrano for deployment (Webistrano is a nice GUI to capistrano) - <a href="http://www.capify.org/" title="capify.org">http://www.capify.org/</a> [capify.org] / <a href="http://labs.peritor.com/webistrano" title="peritor.com">http://labs.peritor.com/webistrano</a> [peritor.com]</p><p>However, all of the tools above mean nothing without defining very good standards and practices for your organization. Only you and your organization can figure those out...</p></htmltext>
<tokentext>We utilize a number of tools depending on the site , but generally : Version Control ( Subversion ) for management of the code base ( PHP , CSS , HTML , Ruby , PERL,... ) - http : //subversion.tigris.org/ [ tigris.org ] BCFG2 for management of the system ( s ) patches and configurations ( Uses svn for managing the files ) - http : //trac.mcs.anl.gov/projects/bcfg2 [ anl.gov ] Capistrano/Webistrano for deployment ( Webistrano is a nice GUI to capistrano - http : //www.capify.org/ [ capify.org ] / http : //labs.peritor.com/webistrano [ peritor.com ] However , all of the tools above mean nothing without defining very good standards and practices for your organization .
Only you and your organization can figure those out.. .</tokentext>
<sentencetext>We utilize a number of tools depending on the site, but generally: Version Control (Subversion) for management of the code base (PHP, CSS, HTML, Ruby, PERL,...) - http://subversion.tigris.org/ [tigris.org]; BCFG2 for management of the system(s) patches and configurations (Uses svn for managing the files) - http://trac.mcs.anl.gov/projects/bcfg2 [anl.gov]; Capistrano/Webistrano for deployment (Webistrano is a nice GUI to capistrano) - http://www.capify.org/ [capify.org] / http://labs.peritor.com/webistrano [peritor.com]. However, all of the tools above mean nothing without defining very good standards and practices for your organization.
Only you and your organization can figure those out...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812911</id>
	<title>Re:KVM/Vmware/OpenSolaris zfs go virtual</title>
	<author>garaged</author>
	<datestamp>1256070240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Just to be the anal one in the thread, have you heard of linux-vserver or openvz? Sticking with Linux would be much more efficient with those than vmware or kvm.</p></htmltext>
<tokentext>justo to be the anal in the thread , have you head of linux-vserver or openvz ? ?
sticking with linux would be much more efficient with those that vmware or kvm</tokentext>
<sentencetext>Just to be the anal one in the thread, have you heard of linux-vserver or openvz?
Sticking with Linux would be much more efficient with those than vmware or kvm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812573</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811871</id>
	<title>Leverage your issue tracking and cvs</title>
	<author>dkh2</author>
	<datestamp>1256066280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>If you're able to script deployments from a configuration management host you can script against your CVS (SVN, SourceSafe, whatever-you're-using).</p><p>There are a lot of ways to automate the management of what file version is in each environment but a smart choice is to tie things to an issue tracking system.  My company uses MKS (http://mks.com) but BugTracker or BugZilla will do just as well.</p><p>Your scripted interface can check-out/export the specified version from controlled source and FTP/SFTP/XCOPY/whatever to the specified destination environment.  For issue-tracker backed systems you can even have this process driven by issue-id to automatically select the correct version based on issues to be elevated.  Additionally, the closing task for the elevation process can then update the issue tracking system as needed.</p><p>Many issue tracking systems will allow you to integrate your source management and deployment management tools. It's a beautiful thing when you get it set up.</p></htmltext>
<tokentext>If you 're able to script deployments from a configuration management host you can script against your CVS ( SVN , SourceSafe , whatever-you 're-using ) .There are a lot of ways to automate the management of what file version is in each environment but a smart choice is to tie things to an issue tracking system .
My company uses MKS ( http : //mks.com ) but BugTracker or BugZilla will do just as well.Your scripted interface can check-out/export the specified version from controlled source and FTP/SFTP/XCOPY/whatever to the specified destination environment .
For issue-tracker backed systems you can even have this processes driven by issue-id to automatically select the correct version based on issues to be elevated .
Additionally , the closing task for the elevation process can then update the issue tracking system as needed.Many issue tracking systems will allow you to integrate your source management and deployment management tools .
It 's a beautiful thing when you get it set up .</tokentext>
<sentencetext>If you're able to script deployments from a configuration management host you can script against your CVS (SVN, SourceSafe, whatever-you're-using). There are a lot of ways to automate the management of what file version is in each environment but a smart choice is to tie things to an issue tracking system.
My company uses MKS (http://mks.com) but BugTracker or BugZilla will do just as well. Your scripted interface can check-out/export the specified version from controlled source and FTP/SFTP/XCOPY/whatever to the specified destination environment.
For issue-tracker backed systems you can even have this process driven by issue-id to automatically select the correct version based on issues to be elevated.
Additionally, the closing task for the elevation process can then update the issue tracking system as needed. Many issue tracking systems will allow you to integrate your source management and deployment management tools.
It's a beautiful thing when you get it set up.</sentencetext>
</comment>
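The issue-driven selection step this comment describes could look something like the sketch below. The issue IDs, revisions, and inline listing are all invented stand-ins for whatever the tracker actually exports, and the svn line is only indicative:

```shell
#!/bin/sh
# Sketch of "deploy by issue id". The inline listing stands in for an
# export from the issue tracker; IDs and revision numbers are invented.
issue_revisions() {
  printf 'ISSUE-123 r4502\nISSUE-124 r4510\n'
}

# Look up the repository revision recorded against an issue id.
revision_for_issue() {
  issue_revisions | awk -v id="$1" '$1 == id { sub(/^r/, "", $2); print $2 }'
}

# The elevation step would then export exactly that revision, e.g.:
#   svn export -r "$(revision_for_issue ISSUE-123)" "$REPO_URL" /var/www/site1
```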
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29815197</id>
	<title>Re:Separate SVN deploys</title>
	<author>Nefarious Wheel</author>
	<datestamp>1256035500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>There are so many things to do to get this right - basically I'd add that you need a three-silo people model to match the three silos of dev,test,prod.  Your development side are the creative ones;  give them the tools they ask for and let them play.  You need a critical, intelligent and demanding test manager in the middle, and for the production gatekeeper you need someone with absolutely no imagination at all (follow the rules, tick the boxes, or *zero* chance of advance to production.  Seriously.  Tell the people who think otherwise the *boat*will*sink* if all the test boxes aren't ticked. Just don't try to run development that way, mutiny isn't pretty.)</p><p>For the test through production phases, make sure you have your servers virtualised.  Test because test environments tend to proliferate (try, just try to keep a baseline somewhere? Careful when you update it...) and Production because you'll need speedy rollback if you auger it, and strict version control on the server versions.  This applies whether your virtual:physical server ratio is 1:1 or otherwise.</p><p>Look into data de-duplication solutions to keep the total disk space used by virtual images down.</p><p>ITIL is nice but it's really only the scaffolding.  You still have to provide the cathedral.</p></htmltext>
<tokentext>There are so many things to do to get this right - basically I 'd add that you need a three-silo people model to match the three silos of dev,test,prod .
Your development side are the creative ones ; give them the tools they ask for and let them play .
You need a critical , intelligent and demanding test manager in the middle , and for the production gatekeeper you need someone with absolutely no imagination at all ( follow the rules , tick the boxes , or * zero * chance of advance to production .
Seriously. Tell the people who think otherwise the * boat * will * sink * if all the test boxes are n't ticked .
Just do n't try to run development that way , mutiny is n't pretty .
) For the test through production phases , make sure you have your servers virtualised .
Test because test environments tend to proliferate ( try , just try to keep a baseline somewhere ?
Careful when you update it... ) and Production because you 'll need speedy rollback if you auger it , and strict version control on the server versions .
This applies whether your virtual : physical server ratio is 1 : 1 or otherwise.Look into data de-duplication solutions to keep the total disk space used by virtual images down .
ITIL is nice but it 's really only the scaffolding .
You still have to provide the cathedral .</tokentext>
<sentencetext>There are so many things to do to get this right - basically I'd add that you need a three-silo people model to match the three silos of dev,test,prod.
Your development side are the creative ones;  give them the tools they ask for and let them play.
You need a critical, intelligent and demanding test manager in the middle, and for the production gatekeeper you need someone with absolutely no imagination at all (follow the rules, tick the boxes, or *zero* chance of advance to production.
Seriously.  Tell the people who think otherwise the *boat*will*sink* if all the test boxes aren't ticked.
Just don't try to run development that way, mutiny isn't pretty.)
For the test through production phases, make sure you have your servers virtualised.
Test because test environments tend to proliferate (try, just try to keep a baseline somewhere?
Careful when you update it...) and Production because you'll need speedy rollback if you auger it, and strict version control on the server versions.
This applies whether your virtual:physical server ratio is 1:1 or otherwise. Look into data de-duplication solutions to keep the total disk space used by virtual images down.
ITIL is nice but it's really only the scaffolding.
You still have to provide the cathedral.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811547</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029</id>
	<title>Look at Capistrano, steal ideas from Rails</title>
	<author>Anonymous</author>
	<datestamp>1256066820000</datestamp>
	<modclass>Informative</modclass>
	<modscore>4</modscore>
	<htmltext><p>Capistrano started life as a deployment tool for Ruby on Rails, but has grown into a useful general-purpose tool for managing multiple machines with multiple roles in multiple environments.  It is absolutely the tool you will want to use for deploying a complex set of changes across one-to-several machines.  You will want to keep code changes and database schema mods in sync, and this can help.</p><p>Ruby on Rails has the concepts of development, test, and production baked into the default app framework, and people generally add a 'staging' environment to it as well.  I'm sure the mention of any particular technology on slashdot will serve as flamebait - but putting that aside, look at the ideas here and steal them liberally.</p><p>You can be uber cool and do it on the super-cheap if you use Amazon EC2 to build a clone of your server environment, deploy to it for staging/acceptance testing/etc, and then deploy into production.  A few hours of a test environment that mimics your production environment will cost you less than a cup of coffee.</p><p>I have tried to set up staging environments on the same production hardware using apache's virtual hosts... and while this works really well for some things, other things (like an apache or apache module, or third party software upgrade) are impossible to test when staging and production are on the same box.</p></htmltext>
<tokentext>Capistrano started life as a deployment tool for Ruby on Rails , but has grown into a useful general-purpose tool for managing multiple machines with multiple roles in multiple environments .
It is absolutely the tool you will want to use for deploying a complex set of changes across one-to-several machines .
You will want to keep code changes and database schema mods in sync , and this can help.Ruby on Rails has the concepts of development , test , and production baked into the default app framework , and people generally add a 'staging ' environment to it as well .
I 'm sure the mention of any particular technology on slashdot will serve as flamebait - but putting that aside , look at the ideas here and steal them liberally.You can be uber cool and do it on the super-cheap if you use Amazon EC2 to build a clone of your server environment , deploy to it for staging/acceptance texting/etc , and then deploy into production .
A few hours of a test environment that mimicks your production environment will cost you less than a cup of coffee.I have tried to set up staging environments on the same production hardware using apache 's virtual hosts... and while this works really well for some things , other things ( like an apache or apache module , or third party software upgrade ) are impossible to test when staging and production are on the same box .</tokentext>
<sentencetext>Capistrano started life as a deployment tool for Ruby on Rails, but has grown into a useful general-purpose tool for managing multiple machines with multiple roles in multiple environments.
It is absolutely the tool you will want to use for deploying a complex set of changes across one-to-several machines.
You will want to keep code changes and database schema mods in sync, and this can help.Ruby on Rails has the concepts of development, test, and production baked into the default app framework, and people generally add a 'staging' environment to it as well.
I'm sure the mention of any particular technology on slashdot will serve as flamebait - but putting that aside, look at the ideas here and steal them liberally.You can be uber cool and do it on the super-cheap if you use Amazon EC2 to build a clone of your server environment, deploy to it for staging/acceptance testing/etc, and then deploy into production.
A few hours of a test environment that mimics your production environment will cost you less than a cup of coffee.I have tried to set up staging environments on the same production hardware using Apache's virtual hosts... and while this works really well for some things, other things (like an Apache or Apache module, or third party software upgrade) are impossible to test when staging and production are on the same box.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29821539</id>
	<title>Re:You are not a n00b</title>
	<author>fuzzyfuzzyfungus</author>
	<datestamp>1256128680000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>How Socratic...</htmltext>
<tokenext>How Socratic.. .</tokentext>
<sentencetext>How Socratic...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813057</id>
	<title>Hudson Build Promotion</title>
	<author>bihoy</author>
	<datestamp>1256070900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>One solution that I have implemented at several companies is to use <a href="http://hudson-ci.org/" title="hudson-ci.org">Hudson</a> [hudson-ci.org] and the Hudson <a href="http://wiki.hudson-ci.org/display/HUDSON/Promoted+Builds+Plugin" title="hudson-ci.org">Promoted Builds</a> [hudson-ci.org] plugin. Read this <a href="http://configmanag.blogspot.com/2008/08/build-promotion-with-hudson.html" title="blogspot.com">brief introduction</a> [blogspot.com] to the concept.</p></htmltext>
<tokenext>One solution that I have implemented at several companies is to use Hudson [ hudson-ci.org ] and the Hudson Promoted Builds [ hudson-ci.org ] plugin .
Read this brief introduction [ blogspot.com ] to the concept .</tokentext>
<sentencetext>One solution that I have implemented at several companies is to use Hudson [hudson-ci.org] and the Hudson Promoted Builds [hudson-ci.org] plugin.
Read this brief introduction [blogspot.com] to the concept.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813437</id>
	<title>Poorly</title>
	<author>Anonymous</author>
	<datestamp>1256029380000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>-1</modscore>
	<htmltext><p>nuf said.</p></htmltext>
<tokenext>nuf said .</tokentext>
<sentencetext>nuf said.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811665</id>
	<title>Re:You are not a n00b</title>
	<author>sakdoctor</author>
	<datestamp>1256065740000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext><p>They could be a n00b.<br>If they only have a wooden sword and shield, and 100 gold pieces then they're probably a n00b.</p></htmltext>
<tokenext>They could be a n00b.If they only have a wooden sword and shield , and 100 gold pieces then they 're probably a n00b .</tokentext>
<sentencetext>They could be a n00b.If they only have a wooden sword and shield, and 100 gold pieces then they're probably a n00b.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812765</id>
	<title>Re:Most important thing in my book</title>
	<author>Anonymous</author>
	<datestamp>1256069640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>And don't forget: it's DTAP:</p><p>
&nbsp; Development -&gt; Test -&gt; ACCEPT -&gt; Production</p><p>
&nbsp; and  vv.</p></htmltext>
<tokenext>And do n't forget : it 's DTAP :   Development - &gt; Test - &gt; ACCEPT - &gt; Production   and vv .</tokentext>
<sentencetext>And don't forget: it's DTAP:
  Development -&gt; Test -&gt; ACCEPT -&gt; Production
  and  vv.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820713</id>
	<title>Re:SVN etc.</title>
	<author>Anonymous</author>
	<datestamp>1256119200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Versioning databases, a hack:</p><p>We write all our database changes in uniquely numbered scripts with useful, long names.  We have a single way to execute them through a piece of code that logs the name to the database we're updating.</p><p>This way we avoid double runs and can always see exactly which version of the database we're running by comparing update scripts to the source repository, whether on dev, test or live.</p></htmltext>
<tokenext>Versioning databases , a hack : We write all our database changes in uniquely numbered scripts with useful , long names .
We have a single way to execute them through a piece of code that logs the name to the database we 're updating.This way we avoid double runs and can always see exactly which version of the database we 're running by comparing update scripts to the source repository whether on dev , test or live .</tokentext>
<sentencetext>Versioning databases, a hack:We write all our database changes in uniquely numbered scripts with useful, long names.
We have a single way to execute them through a piece of code that logs the name to the database we're updating.This way we avoid double runs and can always see exactly which version of the database we're running by comparing update scripts to the source repository whether on dev, test or live.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839</parent>
</comment>
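The numbered-script scheme above can be sketched in a few lines of shell. This is a minimal illustration, not the poster's actual code: the migration file names are hypothetical, and a flat `applied.log` file stands in for the tracking table that would live in the real database.

```shell
# Scratch area with two dummy migration scripts (hypothetical names).
workdir=$(mktemp -d)
mkdir "$workdir/migrations"
: > "$workdir/migrations/001_create_foo.sql"
: > "$workdir/migrations/002_rename_bar.sql"
applied="$workdir/applied.log"   # stand-in for the log table in the target DB
touch "$applied"

run_migrations() {
  for script in "$workdir"/migrations/*.sql; do
    name=$(basename "$script")
    # Already recorded? Skip it -- this is the guard against double runs.
    grep -qxF "$name" "$applied" && continue
    # mysql "$DB" < "$script"    # the real apply step would go here
    echo "$name" >> "$applied"
  done
}

run_migrations
run_migrations   # second run is a no-op, because every name is already logged
```

Comparing `applied.log` (or the real log table) against the repository's migration directory then shows exactly which schema version each of dev, test, and live is running.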
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29818887</id>
	<title>A few tips</title>
	<author>benjto</author>
	<datestamp>1256055720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>A few tips about the code.  Feel free to replace SVN with your SCM of choice.

<ol>
<li>Deployments should be a no-thinking zone.  That means if the process requires any variation or creativity, it is bad.  Lots of thinking is required to achieve this state.</li><li>Everything lives in SVN, including your deploy scripts.  Keep as few as possible and set up directories for each project with config files.</li>
<li>Developers work with SVN, period.</li>
<li>Tag and build once for dev.  Package it and store it somewhere (tar.gz, war, etc).  Treat that as your "binary" and store the tag in it.  Name the file using the tag.  Make the tag visible from the application itself.  <a href="http://myhost/myapp/version" title="myhost" rel="nofollow">http://myhost/myapp/version</a> [myhost] works well.  If using a date, use a format that sorts naturally (20091001, not 10-01-2009).</li>
<li>Never build that version again.  From here on out, move the archive, do not rebuild.</li>
<li>As requested by your testing / dev teams, do the same for each environment.  By the time you get to production you have done it at least 3 times.  No surprises.  You always know the version and can track it fully.</li>
</ol><p>

Don't take shortcuts.  It just bites you in the end.

Of course, this puts some burden on the dev teams.  No environmental config in the project.  There are many ways to accomplish this.  In Java, we put properties files into the server's classpath.  Figure out a similar mechanism for your platform.</p></htmltext>
<tokenext>A few tips about the code .
Feel free to replace SVN with your SCM of choice .
Deployments should be a no-thinking zone .
That means if the process requires any variation or creativity , it is bad .
Lots of thinking is required to achieve this state.Everything lives in SVN , including your deploy scripts .
Keep as few as possible and set up directories for each project with config files .
Developers work with SVN , period .
Tag and build once for dev .
Package it and store it somewhere ( tar.gz , war , etc ) .
Treat that as your " binary " and store the tag in it .
Name the file using the tag .
Make the tag visible from the application itself .
http : //myhost/myapp/version [ myhost ] works well .
If using a date , use a format that sorts naturally ( 20091001 , not 10-01-2009 ) .
Never build that version again .
From here on out , move the archive , do not rebuild .
As requested by your testing / dev teams , do the same for each environment .
By the time you get to production you have done it at least 3 times .
No surprises .
You always know the version and can track it fully .
Do n't take shortcuts .
It just bites you in the end .
Of course , this puts some burden on the dev teams .
No environmental config in the project .
There are many ways to accomplish this .
In Java , we put properties files into the server 's classpath .
Figure out a similar mechanism for your platform .</tokentext>
<sentencetext>A few tips about the code.
Feel free to replace SVN with your SCM of choice.
Deployments should be a no-thinking zone.
That means if the process requires any variation or creativity, it is bad.
Lots of thinking is required to achieve this state.Everything lives in SVN, including your deploy scripts.
Keep as few as possible and set up directories for each project with config files.
Developers work with SVN, period.
Tag and build once for dev.
Package it and store it somewhere (tar.gz, war, etc).
Treat that as your "binary" and store the tag in it.
Name the file using the tag.
Make the tag visible from the application itself.
http://myhost/myapp/version [myhost] works well.
If using a date, use a format that sorts naturally (20091001, not 10-01-2009).
Never build that version again.
From here on out, move the archive, do not rebuild.
As requested by your testing / dev teams, do the same for each environment.
By the time you get to production you have done it at least 3 times.
No surprises.
You always know the version and can track it fully.
Don't take shortcuts.
It just bites you in the end.
Of course, this puts some burden on the dev teams.
No environmental config in the project.
There are many ways to accomplish this.
In Java, we put properties files into the server's classpath.
Figure out a similar mechanism for your platform.</sentencetext>
</comment>
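The "build once, then only move the archive" tip can be sketched as a few shell steps. The app name, tag, and directory layout here are hypothetical; the point is that the tag names the artifact, the tag travels inside it, and promotion copies the same file rather than rebuilding.

```shell
# Build once, name the archive after the tag, then only ever copy it forward.
tag=20091001                       # date-style tag that sorts naturally
workdir=$(mktemp -d)
mkdir -p "$workdir/build" "$workdir/dev" "$workdir/test" "$workdir/prod"

# Stand-in for a real build; bake the tag into the artifact so the running
# app can expose it (e.g. at a /myapp/version URL).
echo "$tag" > "$workdir/build/VERSION"
tar -czf "$workdir/myapp-$tag.tar.gz" -C "$workdir/build" VERSION

# "Promotion" is moving the identical archive along, never rebuilding.
for env in dev test prod; do
  cp "$workdir/myapp-$tag.tar.gz" "$workdir/$env/"
done
```

By the time the archive lands in `prod/`, it is byte-for-byte the file that was tested, and its version is readable both from the file name and from inside it.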
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811777</id>
	<title>Start an OSS Project</title>
	<author>Anonymous</author>
	<datestamp>1256065980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>If there's not a project to fit your bill, develop it internally and release it as an OSS project. It'll add some nice OSS experience to your resume and also add visibility to your employer. If it succeeds, it'll be a big deal for your company. If it doesn't succeed, at least you got the project done. Sounds like everyone wins.<br> <br>I've never actually done this (my employer balks at the suggestion), but I'd love to have that sort of opportunity.</htmltext>
<tokenext>If there 's not a project to fit your bill , develop it internally and release it as an OSS project .
It 'll add some nice OSS experience to your resume and also add visibility to your employer .
If it succeeds , it 'll be a big deal for your company .
If it does n't succeed , at least you got the project done .
Sounds like everyone wins .
I 've never actually done this ( my employer balks at the suggestion ) , but I 'd love to have that sort of opportunity .</tokentext>
<sentencetext>If there's not a project to fit your bill, develop it internally and release it as an OSS project.
It'll add some nice OSS experience to your resume and also add visibility to your employer.
If it succeeds, it'll be a big deal for your company.
If it doesn't succeed, at least you got the project done.
Sounds like everyone wins.
I've never actually done this (my employer balks at the suggestion), but I'd love to have that sort of opportunity.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811585</id>
	<title>happy with phing</title>
	<author>tthomas48</author>
	<datestamp>1256065500000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>There's really only so much you can do generically. I'm really happy with Phing. I use the dbdeploy task to keep my databases in a similar state. I build on a local machine, deploy via SSH and then migrate the database.</p><p>I'd suggest that rather than checking out at each level you create a continuous integration machine using something like CruiseControl or Bamboo, then push out build tarballs and migrate the database.</p></htmltext>
<tokenext>There 's really only so much you can do generically .
I 'm really happy with phing .
I use the dbdeploy task to keep my databases in a similar state .
I build on a local machine , deploy via ssh and then migrate the database.I 'd suggest that rather than checkout at each level you create a continuous integration machine using something like cruise control or bamboo , then push out build tarballs and migrate the database .</tokentext>
<sentencetext>There's really only so much you can do generically.
I'm really happy with phing.
I use the dbdeploy task to keep my databases in a similar state.
I build on a local machine, deploy via ssh and then migrate the database.I'd suggest that rather than checkout at each level you create a continuous integration machine using something like cruise control or bamboo, then push out build tarballs and migrate the database.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812121</id>
	<title>Bash and git</title>
	<author>Phred T. Magnificent</author>
	<datestamp>1256067180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I do mine with ssh, bash and git, for the moment.  I'm looking at moving to something like <a href="http://reductivelabs.com/" title="reductivelabs.com" rel="nofollow">puppet</a> [reductivelabs.com] for system configuration, though.  I've also heard good things about cobbler for initial provisioning, but it's mainly aimed at an RHEL environment and that's not what we're using.</p><p>The <a href="http://2009.utosc.com/" title="utosc.com" rel="nofollow">2009 Utah Open Source Conference</a> [utosc.com] had several good presentations on infrastructure automation.  See, in particular, <a href="http://www.windley.com/docs/2009/infrastructure%20automation.pdf" title="windley.com" rel="nofollow">Phil Windley's slides on puppet and cobbler</a> [windley.com] (hopefully audio and maybe video will be available soon).</p></htmltext>
<tokenext>I do mine with ssh , bash and git , for the moment .
I 'm looking at moving to something like puppet [ reductivelabs.com ] for system configuration , though .
I 've also heard good things about cobbler for initial provisioning , but it 's mainly aimed at an RHEL environment and that 's not what we 're using.The 2009 Utah Open Source Conference [ utosc.com ] had several good presentations on infrastructure automation .
See , in particular , Phil Windley 's slides on puppet and cobbler [ windley.com ] ( hopefully audio and maybe video will be available soon ) .</tokentext>
<sentencetext>I do mine with ssh, bash and git, for the moment.
I'm looking at moving to something like puppet [reductivelabs.com] for system configuration, though.
I've also heard good things about cobbler for initial provisioning, but it's mainly aimed at an RHEL environment and that's not what we're using.The 2009 Utah Open Source Conference [utosc.com] had several good presentations on infrastructure automation.
See, in particular, Phil Windley's slides on puppet and cobbler [windley.com] (hopefully audio and maybe video will be available soon).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811981</id>
	<title>build a self service virtual lab</title>
	<author>Anonymous</author>
	<datestamp>1256066640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>If you have some $$, off-the-shelf tools like VMware Lab Manager will kick some serious butt in this type of environment.</p></htmltext>
<tokenext>If you have some $ $ , off the shelf tools like vmware lab manager will kick some serious butt in this type of environment .</tokentext>
<sentencetext>If you have some $$, off-the-shelf tools like VMware Lab Manager will kick some serious butt in this type of environment.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811725</id>
	<title>Final Solution</title>
	<author>Javarufus</author>
	<datestamp>1256065860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Put all of your application workspaces in an easily escapable situation involving an overly elaborate and exotic death.</p><p>But, if in doubt, add laser beams.</p></htmltext>
<tokenext>Put all of your application workspaces in an easily escapable situation involving an overly elaborate and exotic death.But , if in doubt , add laser beams .</tokentext>
<sentencetext>Put all of your application workspaces in an easily escapable situation involving an overly elaborate and exotic death.But, if in doubt, add laser beams.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820941</id>
	<title>Aegir is ready for production use</title>
	<author>SqyD</author>
	<datestamp>1256122260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Although Aegir is in active development, each release is stable enough for production use, and migrations between releases are supported. Although right now it only supports Drupal, the basic infrastructure is in place to support other LAMP applications in the future.</p><p>My company runs a virtual lab setup that is built on top of VMLogix LabManager. It handles automated rollouts of test machines, but you're still on your own to handle the application logic.</p></htmltext>
<tokenext>Although Aegir is in active development each release is stable enough for production use and migrations between releases are supported .
Although right now it only supports Drupal the basic infrastructure is in place to support other LAMP applications in the future.My company runs a virtual lab setup that is built on top of vmlogix labmanager .
It handles automated rollouts of test machines but you 're still on your own to handle the application logic .</tokentext>
<sentencetext>Although Aegir is in active development, each release is stable enough for production use and migrations between releases are supported.
Although right now it only supports Drupal, the basic infrastructure is in place to support other LAMP applications in the future.My company runs a virtual lab setup that is built on top of vmlogix labmanager.
It handles automated rollouts of test machines but you're still on your own to handle the application logic.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812495</id>
	<title>Use CVS if you're feeling burly</title>
	<author>rho</author>
	<datestamp>1256068560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>CVS (of whatever flavor) can help you do this. It's a pain in the ass, and everybody will hate it, but it works.

</p><p>I've done this with virtual machines as well. It's kinda whizzy to do, but probably overkill.

</p><p>The simplest way for me was to simply use rsync. Rigid delineation between live and test/dev environments is important. Use a completely separate database (not just a different schema), and if possible a completely separate database server. Changes to the database schema should be encapsulated in update scripts and tightly controlled and thoroughly tested in the test environment. Use a database that supports transactions and use them. Updating the live site should be performed by updating a clone of the live site in another directory. That way if everything goes tits up for some unexpected reason you can revert back to the old site while you lick your wounds. Virtual machines definitely make this all techy and bitchin', but editing httpd.conf and restarting Apache also works.

</p><p>The best solution is going to be customized to the needs of the project. Most projects don't need a dev/test/live arrangement. Dev and live are sufficient. The most important thing is to establish a framework of how changes are to be made to the code base or database, and stick to it. CVS will help enforce this, but at the cost of having to use CVS.</p></htmltext>
<tokenext>CVS ( of whatever flavor ) can help you do this .
It 's a pain in the ass , and everybody will hate it , but it works .
I 've done this with virtual machines as well .
It 's kinda whizzy to do , but probably overkill .
The simplest way for me was to simply use rsync .
Rigid delineation between live and test/dev environments is important .
Use a completely separate database ( not just a different schema ) , and if possible a completely separate database server .
Changes to the database schema should be encapsulated in update scripts and tightly controlled and thoroughly tested in the test environment .
Use a database that supports transactions and use them .
Updating the live site should be performed by updating a clone of the live site in another directory .
That way if everything goes tits up for some unexpected reason you can revert back to the old site while you lick your wounds .
Virtual machines definitely make this all techy and bitchin ' , but editing httpd.conf and restarting Apache also works .
The best solution is going to be customized to the needs of the project .
Most projects do n't need a dev/test/live arrangement .
Dev and live are sufficient .
The most important thing is to establish a framework of how changes are to be made to the code base or database , and stick to it .
CVS will help enforce this , but at the cost of having to use CVS .</tokentext>
<sentencetext>CVS (of whatever flavor) can help you do this.
It's a pain in the ass, and everybody will hate it, but it works.
I've done this with virtual machines as well.
It's kinda whizzy to do, but probably overkill.
The simplest way for me was to simply use rsync.
Rigid delineation between live and test/dev environments is important.
Use a completely separate database (not just a different schema), and if possible a completely separate database server.
Changes to the database schema should be encapsulated in update scripts and tightly controlled and thoroughly tested in the test environment.
Use a database that supports transactions and use them.
Updating the live site should be performed by updating a clone of the live site in another directory.
That way if everything goes tits up for some unexpected reason you can revert back to the old site while you lick your wounds.
Virtual machines definitely make this all techy and bitchin', but editing httpd.conf and restarting Apache also works.
The best solution is going to be customized to the needs of the project.
Most projects don't need a dev/test/live arrangement.
Dev and live are sufficient.
The most important thing is to establish a framework of how changes are to be made to the code base or database, and stick to it.
CVS will help enforce this, but at the cost of having to use CVS.</sentencetext>
</comment>
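The "update a clone of the live site, then swap" idea above has a classic shell-level shape: each release goes into its own directory and the web server's docroot is a symlink that gets flipped. This is a sketch under assumptions (the paths and `index.html` stand-in are hypothetical, and `echo` stands in for the rsync step), not the poster's actual setup.

```shell
# Each deploy goes into its own directory; 'current' is just a symlink
# the web server's vhost would point at.
docroot=$(mktemp -d)
mkdir "$docroot/releases"

deploy() {
  rel="$docroot/releases/$1"
  mkdir "$rel"
  echo "release $1" > "$rel/index.html"   # stand-in for: rsync -a build/ "$rel/"
  ln -sfn "$rel" "$docroot/current"       # flip the link; the old tree stays on disk
}

deploy 1
deploy 2
# Everything went tits up? Rollback is just pointing 'current' back:
ln -sfn "$docroot/releases/1" "$docroot/current"
```

The `-n` flag matters: it replaces the symlink itself instead of creating a link inside the directory it points to, which is what makes the swap (and the rollback) a single step.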
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813107</id>
	<title>Use virtualization</title>
	<author>Anonymous</author>
	<datestamp>1256071200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I work for a virtualization company that might be able to solve some of your problems.  But I don't dare mention it because I don't want to be labeled a spammer.  Do some searches, form your own opinion.</p></htmltext>
<tokenext>I work for a virtualization company that might be able to solve some of your problems .
But I do n't dare mention it because I do n't want to be labeled a spammer .
Do some searches , form your own opinion .</tokentext>
<sentencetext>I work for a virtualization company that might be able to solve some of your problems.
But I don't dare mention it because I don't want to be labeled a spammer.
Do some searches, form your own opinion.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812201</id>
	<title>TPS reports</title>
	<author>Joe The Dragon</author>
	<datestamp>1256067420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>TPS reports with lots of cover letters.</p></htmltext>
<tokenext>TPS reports with lots of cover letters .</tokentext>
<sentencetext>TPS reports with lots of cover letters.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812401</id>
	<title>Re:SVN etc.</title>
	<author>Anonymous</author>
	<datestamp>1256068260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>My company wrote a small project for this (not released in any form, though). It has a collection of SQL scripts identified by date (e.g. "2009-10-15 1415 Renamed Foo.Bar to Foo.Baz.sql") and a table with columns for script name and date applied. Any scripts it finds that aren't listed in that table, it applies in order according to the date in the script name.</p><p>You should be able to hack this together in a day or so.</p></htmltext>
<tokenext>My company wrote a small project for this ( not released in any form , though ) .
It has a collection of SQL scripts identified by date ( eg " 2009-10-15 1415 Renamed Foo.Bar to Foo.Baz.sql " ) and a table with columns for script name and date applied .
Any scripts it finds that are n't listed in that table , it applies in order according to the date in the script name.You should be able to hack this together in a day or so .</tokentext>
<sentencetext>My company wrote a small project for this (not released in any form, though).
It has a collection of SQL scripts identified by date (eg "2009-10-15 1415 Renamed Foo.Bar to Foo.Baz.sql") and a table with columns for script name and date applied.
Any scripts it finds that aren't listed in that table, it applies in order according to the date in the script name.You should be able to hack this together in a day or so.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811935</id>
	<title>I manage them with an Iron Fist Of Death</title>
	<author>wiredog</author>
	<datestamp>1256066460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Or I would if I were in management.  For some reason they won't promote me here.</p></htmltext>
<tokenext>Or I would if I were in management .
For some reason they wo n't promote me here .</tokentext>
<sentencetext>Or I would if I were in management.
For some reason they won't promote me here.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814183</id>
	<title>Re:How slashdot does it</title>
	<author>AugstWest</author>
	<datestamp>1256032020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>1) A solid, well-defined subversion structure<br>2) Ant<br>3) SSH keys that Ant can use</p><p>Done and done. I work for a major broadcast network, pushing out hundreds of Java, .Net and Oracle Forms applications day in and day out to some number of servers I haven't bothered to count.</p><p>90% of them can be pushed out from a single shell script with just a couple of command line switches. Most of this is done through identifying environments and destination paths for each of them in a build.properties file, then specifying the target server/environment at build time.</p><p>Ant (or Nant) works for pretty much any programming technology. It's primarily used for Java, but is very straightforward to adapt to everything else, and handing over an XML-formatted build script can be a *lot* cleaner than most of the perl, shell or python scripts I've dealt with in the past.</p></htmltext>
<tokenext>1 ) A solid , well-defined subversion structure2 ) Ant3 ) SSH keys that Ant can useDone and done .
I work for a major broadcast network , pushing out hundreds of Java , .Net and Oracle Forms applications day in and day out to some number of servers I have n't bothered to count.90 % of them can be pushed out from a single shell script with just a couple of command line switches .
Most of this is done through identifying environments and destination paths for each of them in a build.properties file , then specifying the target server/environment at build time.Ant ( or Nant ) works for pretty much any programming technology .
It 's primarily used for Java , but is very straightforward to adapt to everything else , and handing over an XML-formatted build script can be a * lot * cleaner than most of the perl , shell or python scripts I 've dealt with in the past .</tokentext>
<sentencetext>1) A solid, well-defined subversion structure2) Ant3) SSH keys that Ant can useDone and done.
I work for a major broadcast network, pushing out hundreds of Java, .Net and Oracle Forms applications day in and day out to some number of servers I haven't bothered to count.90% of them can be pushed out from a single shell script with just a couple of command line switches.
Most of this is done through identifying environments and destination paths for each of them in a build.properties file, then specifying the target server/environment at build time.Ant (or Nant) works for pretty much any programming technology.
It's primarily used for Java, but is very straightforward to adapt to everything else, and handing over an XML-formatted build script can be a *lot* cleaner than most of the perl, shell or python scripts I've dealt with in the past.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471</parent>
</comment>
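The build.properties-per-environment pattern described above can be sketched without Ant at all: keep one tiny properties file per environment and have the deploy script read the one named by a command-line switch. The host names, paths, and file names here are invented for illustration.

```shell
# One deploy script, many environments: settings come from a properties file
# chosen at run time, not from anything baked into the project.
confdir=$(mktemp -d)
printf 'host=test01.example.com\ndocroot=/var/www/test\n' > "$confdir/test.properties"
printf 'host=www01.example.com\ndocroot=/var/www/live\n'  > "$confdir/prod.properties"

env=test                                           # would come from a CLI switch
host=$(sed -n 's/^host=//p'    "$confdir/$env.properties")
docroot=$(sed -n 's/^docroot=//p' "$confdir/$env.properties")
echo "would push build to $host:$docroot"          # real script: scp/ssh goes here
```

Switching the target then means changing one argument, never editing the script, which is what keeps environment config out of the project itself.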
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812005</id>
	<title>Check out Springloops</title>
	<author>Fortunato_NC</author>
	<datestamp>1256066700000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext><p>It's hosted Subversion, with a slick web interface that walks you through darn near everything. You can configure development / test / production servers that can be accessed via FTP or SFTP and deploy new builds to any of them with just a couple of clicks. It integrates with Basecamp for project management, and it is really cheap - it sounds like either their Garden or Field plans would meet your needs, and they're both under $50/month.</p><p><a href="http://www.springloops.com/" title="springloops.com">Check them out here.</a> [springloops.com]</p><p>Not affiliated with them in any way, other than as a satisfied customer.</p></htmltext>
<tokenext>It 's hosted Subversion , with a slick web interface that walks you through darn near everything .
You can configure development / test / production servers that can be accessed via FTP or SFTP and deploy new builds to any of them with just a couple of clicks .
It integrates with Basecamp for project management , and it is really cheap - it sounds like either their Garden or Field plans would meet your needs , and they 're both under $ 50/month . Check them out here .
[ springloops.com ] Not affiliated with them in any way , other than as a satisfied customer .</tokentext>
<sentencetext>It's hosted Subversion, with a slick web interface that walks you through darn near everything.
You can configure development / test / production servers that can be accessed via FTP or SFTP and deploy new builds to any of them with just a couple of clicks.
It integrates with Basecamp for project management, and it is really cheap - it sounds like either their Garden or Field plans would meet your needs, and they're both under $50/month. Check them out here.
[springloops.com] Not affiliated with them in any way, other than as a satisfied customer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813215</id>
	<title>Bamboo</title>
	<author>Anonymous</author>
	<datestamp>1256071800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Check out Atlassian Bamboo. It's a build server that can plug in to your repository and automate builds and deployments.</p></htmltext>
<tokenext>Check out Atlassian Bamboo .
It 's a build server that can plug in to your repository and automate builds and deployments .</tokentext>
<sentencetext>Check out Atlassian Bamboo.
It's a build server that can plug in to your repository and automate builds and deployments.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813891</id>
	<title>If you find...</title>
	<author>Anonymous</author>
	<datestamp>1256031000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>&gt;... it is cumbersome to manage all sites by hacking away at the command prompt

<br>
you should probably get another job. Maybe Sysadmin in a Windows-only shop.</htmltext>
<tokenext>&gt; ... it is cumbersome to manage all sites by hacking away at the command prompt you should probably get another job .
Maybe Sysadmin in a Windows-only shop .</tokentext>
<sentencetext>&gt;... it is cumbersome to manage all sites by hacking away at the command prompt


you should probably get another job.
Maybe Sysadmin in a Windows-only shop.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820185</id>
	<title>Re:Look at Capistrano, steal ideas from Rails</title>
	<author>kuzb</author>
	<datestamp>1256155620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We use this where we work; it's incredibly flexible and works well.  While we do have ruby apps, we have apps in other languages as well, and use capistrano/git for deploying everything.</p><p>Frankly though, I wouldn't deploy anything to amazon's hardware due to privacy/security concerns.  It might just be me tinfoil hatting it up, but when you spend a quarter million in development time and resources on something, the last thing you want to do is run it on some other companies' machines.</p></htmltext>
<tokenext>We use this where we work ; it 's incredibly flexible and works well .
While we do have ruby apps , we have apps in other languages as well , and use capistrano/git for deploying everything . Frankly though , I would n't deploy anything to amazon 's hardware due to privacy/security concerns .
It might just be me tinfoil hatting it up , but when you spend a quarter million in development time and resources on something , the last thing you want to do is run it on some other companies ' machines .</tokentext>
<sentencetext>We use this where we work; it's incredibly flexible and works well.
While we do have ruby apps, we have apps in other languages as well, and use capistrano/git for deploying everything. Frankly though, I wouldn't deploy anything to amazon's hardware due to privacy/security concerns.
It might just be me tinfoil hatting it up, but when you spend a quarter million in development time and resources on something, the last thing you want to do is run it on some other companies' machines.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812573</id>
	<title>KVM/Vmware/OpenSolaris zfs go virtual</title>
	<author>jwhitener</author>
	<datestamp>1256068800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The simple answer?  Virtual machines.  If you have to stay with Linux, go with VMware or, for a free solution, KVM.  See http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine</p><p>If you want to run LAMP on OpenSolaris/Solaris, ZFS has very robust and easy-to-manage virtual machines called zones.  Sun also provides enterprise ops center software that can be used to manage the zones via a gui.  Copy/create/rollback, etc.</p><p>After that, smart system administration is required to keep things easy to manage.</p><p>How you choose to separate data, apps, and the OS will depend on your requirements, but in general, keeping them separate is a good idea.</p><p>Another good idea is a /mnt/safe area mounted inside the prod/test/dev boxes that is NFS-shared between them.  Oftentimes I'll make a request of my sysadmin: "Please refresh test and dev with prod."  So I copy changes or other work to /mnt/safe and then he overwrites dev and test with a recent ZFS snapshot (virtual machine snapshot) of production.</p><p>
	I see you use the word check-in/out.  I'm assuming you have subversion or something similar running that you use to check in/out to a new location.  Do your developers need access to a CVS?  If so, I'd just build it into the virtual machine, so each developer has their own subversion installation.</p><p>The only thing you need to do when using zones/virtual machines (at least in zfs) is change the hostname and IP, but that is easily scripted.</p></htmltext>
<tokenext>The simple answer ?
Virtual Machines .
If you have to stay with linux , go with vmware or for a free solution , KVM .
See http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine . If you want to run LAMP on OpenSolaris/Solaris , ZFS has very robust and easy-to-manage virtual machines called zones .
Sun also provides enterprise ops center software that can be used to manage the zones via a gui .
Copy/create/rollback , etc . After that , smart system administration is required to keep things easy to manage . How you choose to separate data , apps , and the OS will depend on your requirements , but in general , keeping them separate is a good idea . Another good idea is a /mnt/safe area mounted inside the prod/test/dev boxes that is NFS-shared between them .
Often times , I 'll make a request of my sys admin " Please refresh test and dev with prod " .
So I copy changes or other work to /mnt/safe and then he overwrites dev and test with a recent zfs snapshot ( virtual machine snapshot ) of production .
  I see you use the word check-in/out .
I 'm assuming you have subversion or something similar running that you use to check in/out to a new location .
Do your developers need access to a CVS ?
If so , I 'd just build it into the virtual machine , so each developer has their own subversion installation . The only thing you need to do when using zones/virtual machines ( at least in zfs ) is change the hostname and IP , but that is easily scripted .</tokentext>
<sentencetext>The simple answer?
Virtual Machines.
If you have to stay with linux, go with vmware or for a free solution, KVM.
See http://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine. If you want to run LAMP on OpenSolaris/Solaris, ZFS has very robust and easy-to-manage virtual machines called zones.
Sun also provides enterprise ops center software that can be used to manage the zones via a gui.
Copy/create/rollback, etc. After that, smart system administration is required to keep things easy to manage. How you choose to separate data, apps, and the OS will depend on your requirements, but in general, keeping them separate is a good idea. Another good idea is a /mnt/safe area mounted inside the prod/test/dev boxes that is NFS-shared between them.
Often times, I'll make a request of my sys admin "Please refresh test and dev with prod".
So I copy changes or other work to /mnt/safe and then he overwrites dev and test with a recent zfs snapshot (virtual machine snapshot) of production.
  I see you use the word check-in/out.
I'm assuming you have subversion or something similar running that you use to check in/out to a new location.
Do your developers need access to a CVS?
If so, I'd just build it into the virtual machine, so each developer has their own subversion installation. The only thing you need to do when using zones/virtual machines (at least in zfs) is change the hostname and IP, but that is easily scripted.</sentencetext>
</comment>
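The "refresh test and dev with prod" flow this comment describes can be sketched with plain directories: park work-in-progress in the shared /mnt/safe area, then overwrite the lower environments with a snapshot of production. The directory and file names below are illustrative stand-ins; a real zones setup would use `zfs snapshot`/`zfs clone` rather than `cp -R`.

```shell
#!/bin/sh
# Stand-in trees for the three environments plus the shared scratch area.
mkdir -p prod test dev mnt_safe
echo "live content" > prod/index.html
echo "half-finished work" > dev/wip.txt

# 1) Park anything worth keeping in the NFS-shared area first.
cp dev/wip.txt mnt_safe/

# 2) "Snapshot" prod, then overwrite dev and test with the snapshot.
cp -R prod prod_snapshot
rm -rf dev test
cp -R prod_snapshot dev
cp -R prod_snapshot test
```

After the refresh, dev and test mirror production exactly, and the only surviving copy of unfinished work is the one deliberately parked in the shared area.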
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811709</id>
	<title>Duct tape</title>
	<author>Anonymous</author>
	<datestamp>1256065860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>We do it the unix way: duct tape svn, sqsh, rsync, and sendmail together with shell scripts.  Reconciliation of what went where can be a little hairy, documentation is sparse, and some safeguards I'd like to see are not there, but it's a good base.  I'm actually taking a break from documenting the deployment system at this very moment...</p></htmltext>
<tokenext>We do it the unix way : duct tape svn , sqsh , rsync , and sendmail together with shell scripts .
Reconciliation of what went where can be a little hairy , documentation is sparse , and some safeguards I 'd like to see are not there , but it 's a good base .
I 'm actually taking a break from documenting the deployment system at this very moment ...</tokentext>
<sentencetext>We do it the unix way: duct tape svn, sqsh, rsync, and sendmail together with shell scripts.
Reconciliation of what went where can be a little hairy, documentation is sparse, and some safeguards I'd like to see are not there, but it's a good base.
I'm actually taking a break from documenting the deployment system at this very moment...</sentencetext>
</comment>
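The svn + rsync + sendmail duct tape described above has a recognizable minimal shape, sketched here in dry-run form (commands are logged, not executed) so it runs anywhere. The repo URL, host, and mail address are placeholders; logging every command is also a cheap start on the "what went where" reconciliation the commenter finds hairy.

```shell
#!/bin/sh
SITE="site1"
LOG="deploy.log"
: > "$LOG"

# Dry-run wrapper: record each step instead of executing it.
run() { echo "WOULD RUN: $*" >> "$LOG"; }

# 1) Export a clean tree (no .svn metadata) from the repository.
run svn export "https://svn.example.com/${SITE}/trunk" "/tmp/${SITE}-export"
# 2) Sync it to the live web root, deleting files removed from the repo.
run rsync -a --delete "/tmp/${SITE}-export/" "web1:/var/www/${SITE}/"
# 3) Mail a record of the deployment.
run sh -c "echo 'deployed ${SITE} to web1' | sendmail ops@example.com"
```

Dropping the `run` wrapper (or gating it on a `DRY_RUN` variable) turns the sketch into the real pipeline.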
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29822109</id>
	<title>Re:Most important thing in my book</title>
	<author>cjb110</author>
	<datestamp>1256133060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Cept when Data Protection laws/policy are properly enforced!</p><p>Just had these come down from the higher-ups to our dev team; they say 'no prod data in test', but they offer no alternatives or budget to overcome the massive limitation they've just placed upon us.<br>So in order to keep functioning, the rules will be the subject of a few poor jokes, and then ignored.</p></htmltext>
<tokenext>Cept when Data Protection laws/policy are properly enforced ! Just had these come down from the higher-ups to our dev team ; they say 'no prod data in test ' , but they offer no alternatives or budget to overcome the massive limitation they 've just placed upon us . So in order to keep functioning , the rules will be the subject of a few poor jokes , and then ignored .</tokentext>
<sentencetext>Cept when Data Protection laws/policy are properly enforced! Just had these come down from the higher-ups to our dev team; they say 'no prod data in test', but they offer no alternatives or budget to overcome the massive limitation they've just placed upon us. So in order to keep functioning, the rules will be the subject of a few poor jokes, and then ignored.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812175</id>
	<title>best practice</title>
	<author>petes_PoV</author>
	<datestamp>1256067360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>... is just to call everything beta, then you never have to bother with testing, or documenting anything (though, to be fair, you didn't ask about documentation - so I guess you'd already decided not to bother with that detail). That way you get much faster development time and keep your time to market down to the same as your competitors - who are using the same techniques.
<p>
The trick then is to move on to another outfit just before it hits the fan. Don't worry about your customers - if they are running web based businesses, chances are most of them will have gone down the tubes in a year or so. Long before they get anywhere near release 1.0.</p></htmltext>
<tokenext>... is just to call everything beta , then you never have to bother with testing , or documenting anything ( though , to be fair , you did n't ask about documentation - so I guess you 'd already decided not to bother with that detail ) .
That way you get much faster development time and keep your time to market down to the same as your competitors - who are using the same techniques .
The trick then is to move on to another outfit just before it hits the fan .
Do n't worry about your customers - if they are running web based businesses , chances are most of them will have gone down the tubes in a year or so .
Long before they get anywhere near release 1.0 .</tokentext>
<sentencetext> ... is just to call everything beta, then you never have to bother with testing, or documenting anything (though, to be fair, you didn't ask about documentation - so I guess you'd already decided not to bother with that detail).
That way you get much faster development time and keep your time to market down to the same as your competitors - who are using the same techniques.
The trick then is to move on to another outfit just before it hits the fan.
Don't worry about your customers - if they are running web based businesses, chances are most of them will have gone down the tubes in a year or so.
Long before they get anywhere near release 1.0.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813317</id>
	<title>FTP is shit</title>
	<author>binford2k</author>
	<datestamp>1256072160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Don't ever enable FTP logins.  FTP should only be used for anonymous access.  EVER.</p></htmltext>
<tokenext>Do n't ever enable FTP logins .
FTP should only be used for anonymous access .
EVER .</tokentext>
<sentencetext>Don't ever enable FTP logins.
FTP should only be used for anonymous access.
EVER.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29818067</id>
	<title>One more opinion</title>
	<author>LordThyGod</author>
	<datestamp>1256050740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>First, I can't see the need for a GUI or management tool. God gave us bash to make us happy and productive, and we should use it. Managing a mix of sites is an ideal use of scripting.

What we (small web dev company 100+ sites) do:

bazaar (bzr) for repos. I much prefer it to svn. Cleaner and smarter. Uses sftp by default. Create a repo: 'bzr init ; bzr add', and anyone with access to that system can co a copy.

Each developer checks out the devel code to their local systems. We have our nameservers set to handle custom naming, so like example.dev maps to 127.0.0.1. All the db stuff goes to a db server and everyone (in house and contractors) has access to that. Get to use all your desktop tools, etc. The windows guys have local versions of Apache and php, so everybody's happy on that end.

Next level is staging/testing. That server is a checkout of the repo. Clients have limited access.

When the time comes, a copy of the site is made, the site is grepped for no-no's and FIXME's, the new code is "cleaned", and then copied to production. This is all scripted. Permissions are reset, and development stuff (docs, fla's, psd's, etc.) is removed. We have a naming scheme, so certain directories are blown away on the live site.  The scripts issue some reminders of stuff that needs to get done at the last minute and post launch.

Loop, gotop(), and tweak script.</htmltext>
<tokenext>First , I ca n't see the need for a GUI or management tool .
God gave us bash to make us happy and productive and we should use it .
Managing a mix of sites is an ideal use of scripting .
What we ( small web dev company 100 + sites ) do : bazaar ( bzr ) for repos .
I much prefer it to svn .
Cleaner and smarter .
Uses sftp by default .
Create a repo : 'bzr init ; bzr add ' , and anyone with access to that system can co a copy .
Each developer checks out the devel code to their local systems .
We have our nameservers set to handle custom naming , so like example.dev maps to 127.0.0.1 .
All the db stuff goes to a db server and everyone ( in house and contractors ) has access to that .
Get to use all your desktop tools , etc .
The windows guys have local versions of Apache and php , so everybody 's happy on that end .
Next level is staging/testing .
That server is a checkout of the repo .
Clients have limited access .
When the time comes , a copy of the site is made , site is grepped for no-no 's and FIXME 's , the new code is " cleaned " , and then copied to production .
This is all scripted .
Permissions are reset , and development stuff ( docs , fla 's , psd 's , etc . ) is removed .
We have a naming scheme , so certain directories are blown away on the live site .
The scripts issue some reminders of stuff that needs to get done at the last minute and post launch .
Loop , gotop ( ) , and tweak script .</tokentext>
<sentencetext>First, I can't see the need for a GUI or management tool.
God gave us bash to make us happy and productive and we should use it.
Managing a mix of sites is an ideal use of scripting.
What we (small web dev company 100+ sites) do:

bazaar (bzr) for repos.
I much prefer it to svn.
Cleaner and smarter.
Uses sftp by default.
Create a repo: 'bzr init ; bzr add', and anyone with access to that system can co a copy.
Each developer checks out the devel code to their local systems.
We have our nameservers set to handle custom naming, so like example.dev maps to 127.0.0.1.
All the db stuff goes to a db server and everyone (in house and contractors) has access to that.
Get to use all your desktop tools, etc.
The windows guys have local versions of Apache and php, so everybody's happy on that end.
Next level is staging/testing.
That server is a checkout of the repo.
Clients have limited access.
When the time comes, a copy of the site is made, site is grepped for no-no's and FIXME's, the new code is "cleaned", and then copied to production.
This is all scripted.
Permissions are reset, and development stuff (docs, fla's, psd's, etc.) is removed.
We have a naming scheme, so certain directories are blown away on the live site.
The scripts issue some reminders of stuff that needs to get done at the last minute and post launch.
Loop, gotop(), and tweak script.</sentencetext>
</comment>
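The "grep for no-no's and FIXME's, clean, then copy" pass this comment describes is easy to sketch. Everything below is invented for illustration (file names, the marker list, the idea of a `BLOCKED` flag); the real script would run against the copied site tree and hold the launch when the flag is set.

```shell
#!/bin/sh
# Stand-in site copy containing one FIXME and some development-only files.
mkdir -p site_copy/docs
echo '<?php /* FIXME remove debug */ ?>' > site_copy/page.php
echo "internal notes" > site_copy/docs/notes.txt
touch site_copy/logo.psd

# 1) Flag anything still marked FIXME so the launch can be held.
BLOCKED=0
grep -rq "FIXME" site_copy && BLOCKED=1

# 2) Strip development artifacts (docs, source .psd files) from the copy.
rm -rf site_copy/docs
find site_copy -name '*.psd' -delete
```

Only after the flag is clear and the artifacts are gone does the cleaned copy get rsynced to production.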
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813361</id>
	<title>Re:You are not a n00b</title>
	<author>Anonymous</author>
	<datestamp>1256072340000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext><p>n00b is unknowable?</p></htmltext>
<tokenext>n00b is unknowable ?</tokentext>
<sentencetext>n00b is unknowable?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471</id>
	<title>How slashdot does it</title>
	<author>Anonymous</author>
	<datestamp>1256065080000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><div class="quote"><p>How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"</p></div><p>I do the same as Slashdot.org does - make the changes on live code, expect a little downtime and weird effects, and then try to fix it - while actually never fixing it. After all, the results are not that significant:</p><p>- if someone posts about it in a thread, mods will -1 offtopic it and no one will hear your complaint<br>- many people will "lol fail" at the weird effects, like when <a href="http://yro.slashdot.org/story/09/09/24/238251/CA-City-Mulls-Evading-the-Law-On-Red-Light-Cameras?from=rss" title="slashdot.org">kdawson decides to merge two different stories together</a> [slashdot.org]</p>
	</htmltext>
<tokenext>How have fellow slashdotters managed this process , what systems/scripts have you used , and what advice do you have ?
" I do the same as Slashdot.org does - make the changes on live code , expect a little downtime and weird effects and then try to fix it - while actually never fixing it .
After all , the results are not that significant : - if someone posts about it in a thread , mods will -1 offtopic it and no one will hear your complaint - many people will " lol fail " at the weird effects , like when kdawson decides to merge two different stories together [ slashdot.org ]</tokentext>
<sentencetext>How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?
"I do the same as Slashdot.org does - make the changes on live code, expect a little downtime and weird effects and then try to fix it - while actually never fixing it.
After all, the results are not that significant: - if someone posts about it in a thread, mods will -1 offtopic it and no one will hear your complaint - many people will "lol fail" at the weird effects, like when kdawson decides to merge two different stories together [slashdot.org]
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816053</id>
	<title>Clearly Draw the lines...</title>
	<author>Anonymous</author>
	<datestamp>1256039700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>My only advice is:<br>Don't let the app developers allow the marketoids to run production deliverables from development systems...</p></htmltext>
<tokenext>My only advice is : Do n't let the app developers allow the marketoids to run production deliverables from development systems ...</tokentext>
<sentencetext>My only advice is: Don't let the app developers allow the marketoids to run production deliverables from development systems...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813483</id>
	<title>Re:SVN etc.</title>
	<author>Erskin</author>
	<datestamp>1256029560000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><div class="quote"><p>Deployment to live servers via SVN checkout when the time comes</p></div><p>Side note: I humbly suggest (as someone else mentioned elsewhere) you use <em>export</em> instead of <em>checkout</em> for the live deployments.</p>
	</htmltext>
<tokenext>Deployment to live servers via SVN checkout when the time comes . Side note : I humbly suggest ( as someone else mentioned elsewhere ) you use export instead of checkout for the live deployments .</tokentext>
<sentencetext>Deployment to live servers via SVN checkout when the time comes. Side note: I humbly suggest (as someone else mentioned elsewhere) you use export instead of checkout for the live deployments.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839</parent>
</comment>
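The layout difference behind the export-vs-checkout advice, simulated with plain directories so it runs without svn installed: a checkout drops `.svn` metadata into the web root (browsable by anyone, and on old Subversion versions it held full pristine copies of every file), while an export gives you only the files.

```shell
#!/bin/sh
# Simulated result of "svn checkout": working files plus .svn metadata.
mkdir -p site_checkout/.svn site_export
echo "page" > site_checkout/index.html

# Simulated result of "svn export": the same files, no metadata.
cp site_checkout/index.html site_export/

# The real commands would be:
#   svn checkout "$REPO/trunk" site_checkout   # leaves .svn in the web root
#   svn export   "$REPO/trunk" site_export     # files only
```

If a checkout must live under the web root anyway, the server should at minimum be configured to deny requests for `.svn` paths.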
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816889</id>
	<title>version control system + build/deploy engine</title>
	<author>nfsilkey</author>
	<datestamp>1256044200000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We do this for many many Drupal sites on many horizontal web nodes via bzr + ant.  By 'sites' I mean no multi-site; each 'site' gets its own Drupal instance.  By 'Drupal instance', I mean the 'Drupal instance' is an ant-powered deploy from a branch in bzr comprised of vendor branches (core + modules) merged in plus customizations by our shop.  Each environment gets a branch, and we merge code upstream (dev -&gt; tst -&gt; prd).</p><p>The only thing 'shared' across the infrastructure is the web services and frameworks on the webapp nodes.  Ant is great at auto-magic MySQL db provisioning, Drush calls to pound the schema, APC cache flushes, Memcached bops, etc.  Also I would throw myself off a bridge if I had to manage all the complex merges across our branches and deal with updating the vendor branches.</p><p>Others here also made the comment wrt code up, content down.  Live it, love it; SERIOUSLY!  Refresh often, and give your devs anonymized slices of the db for them to keep on a laptop they will undoubtedly leave in a cab.  We're currently bending ant to perform the downstream refreshes + sanitizes.  Looks very promising.</p><p>Also if you're not able to bastardize ant to do what you want it to do, look at ant-contrib to further extend the tool.</p><p><a href="http://bazaar-vcs.org/en/" title="bazaar-vcs.org">http://bazaar-vcs.org/en/</a> [bazaar-vcs.org]<br><a href="http://ant.apache.org/" title="apache.org">http://ant.apache.org/</a> [apache.org]<br><a href="http://ant-contrib.sourceforge.net/" title="sourceforge.net">http://ant-contrib.sourceforge.net/</a> [sourceforge.net]</p><p>Slightly OT: The J2EE guys at $employer prefer a maven+ant+svn approach.  YMMV.</p><p>Have fun.  These are very interesting toys to play with, tbh.</p></htmltext>
<tokenext>We do this for many many Drupal sites on many horizontal web nodes via bzr + ant .
By 'sites ' I mean no multi-site ; each 'site ' gets its own Drupal instance .
By 'Drupal instance ' , I mean the 'Drupal instance ' is an ant-powered deploy from a branch in bzr comprised of vendor branches ( core + modules ) merged in plus customizations by our shop .
Each environment gets a branch , and we merge code upstream ( dev - &gt; tst - &gt; prd ) . The only thing 'shared ' across the infrastructure is the web services and frameworks on the webapp nodes .
Ant is great at auto-magic MySQL db provisioning , Drush calls to pound the schema , APC cache flushes , Memcached bops , etc .
Also I would throw myself off a bridge if I had to manage all the complex merges across our branches and deal with updating the vendor branches . Others here also made the comment wrt code up , content down .
Live it , love it ; SERIOUSLY !
Refresh often , and give your devs anonymized slices of the db for them to keep on a laptop they will undoubtedly leave in a cab .
We 're currently bending ant to perform the downstream refreshes + sanitizes .
Looks very promising . Also if you 're not able to bastardize ant to do what you want it to do , look at ant-contrib to further extend the tool . http://bazaar-vcs.org/en/ [ bazaar-vcs.org ] http://ant.apache.org/ [ apache.org ] http://ant-contrib.sourceforge.net/ [ sourceforge.net ] Slightly OT : The J2EE guys at $ employer prefer a maven + ant + svn approach .
YMMV . Have fun .
These are very interesting toys to play with , tbh .</tokentext>
<sentencetext>We do this for many many Drupal sites on many horizontal web nodes via bzr + ant.
By 'sites' I mean no multi-site; each 'site' gets its own Drupal instance.
By 'Drupal instance', I mean the 'Drupal instance' is an ant-powered deploy from a branch in bzr comprised of vendor branches (core + modules) merged in plus customizations by our shop.
Each environment gets a branch, and we merge code upstream (dev -&gt; tst -&gt; prd). The only thing 'shared' across the infrastructure is the web services and frameworks on the webapp nodes.
Ant is great at auto-magic MySQL db provisioning, Drush calls to pound the schema, APC cache flushes, Memcached bops, etc.
Also I would throw myself off a bridge if I had to manage all the complex merges across our branches and deal with updating the vendor branches. Others here also made the comment wrt code up, content down.
Live it, love it; SERIOUSLY!
Refresh often, and give your devs anonymized slices of the db for them to keep on a laptop they will undoubtedly leave in a cab.
We're currently bending ant to perform the downstream refreshes + sanitizes.
Looks very promising. Also if you're not able to bastardize ant to do what you want it to do, look at ant-contrib to further extend the tool. http://bazaar-vcs.org/en/ [bazaar-vcs.org] http://ant.apache.org/ [apache.org] http://ant-contrib.sourceforge.net/ [sourceforge.net] Slightly OT: The J2EE guys at $employer prefer a maven+ant+svn approach.
YMMV. Have fun.
These are very interesting toys to play with, tbh.</sentencetext>
</comment>
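The "anonymized slice of the db" idea above can be sketched as a scrub pass over a dump before any developer copy leaves the server. The table layout and the email-only scrub below are invented for the example; real sanitizing targets your actual schema (for Drupal specifically, Drush ships a sql-sanitize command for this).

```shell
#!/bin/sh
# Toy production dump with personal data in it (schema is made up).
cat > prod_dump.sql <<'EOF'
INSERT INTO users VALUES (1,'alice','alice@example.com');
INSERT INTO users VALUES (2,'bob','bob@example.com');
EOF

# Replace every quoted email with a placeholder before handing the dump out.
sed "s/'[^']*@[^']*'/'user@localhost'/" prod_dump.sql > dev_dump.sql
```

The dump that ends up on a laptop in a cab then carries no real addresses, while row counts and structure stay realistic for testing.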
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811957</id>
	<title>Puppet</title>
	<author>philipborlin</author>
	<datestamp>1256066580000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>If you are in the unix/linux world, take a look at puppet.  You provision out a set of nodes (allows node inheritance) and manage all your scripts, config files, etc. from one central location (called the puppet master).  Changes automatically propagate to all servers they apply to.  It is built around keeping the configuration files in a versioned repository and is ready to use today.</htmltext>
<tokenext>If you are in the unix/linux world take a look at puppet .
You provision out a set of nodes ( allows node inheritance ) and manage all your scripts , config files , etc from one central location ( called the puppet master ) .
Changes automatically propagate to all servers they apply to .
It is built around keeping the configuration files in a versioned repository and is ready to use today .</tokentext>
<sentencetext>If you are in the unix/linux world take a look at puppet.
You provision out a set of nodes (allows node inheritance) and manage all your scripts, config files, etc from one central location (called the puppet master).
Changes propagate automatically to all servers they apply to.
It is built around  keeping the configuration files in a versioned repository and is ready to use today.</sentencetext>
</comment>
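[Editor's note] Not from the comment above, but a minimal sketch of the kind of manifest Puppet consumes. The class name, file path, and content are hypothetical; on a real node you would apply it with `puppet apply site.pp`.

```shell
#!/bin/sh
# Write a tiny, hypothetical Puppet manifest to a scratch file for inspection.
cat > site.pp <<'EOF'
# Keep one vhost config identical on every node this class is assigned to.
class lampnode {
  file { '/etc/apache2/sites-available/site1.conf':
    ensure  => file,
    owner   => 'root',
    mode    => '0644',
    content => "# managed by puppet -- do not hand-edit\n",
  }
  service { 'apache2':
    ensure    => running,
    subscribe => File['/etc/apache2/sites-available/site1.conf'],
  }
}
EOF
echo "wrote $(wc -l < site.pp) lines of manifest"
```

The `subscribe` metaparameter is what makes the "changes propagate" claim concrete: when the puppet master ships a new config file, the service restarts on every matching node.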
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811601</id>
	<title>global config</title>
	<author>Anonymous</author>
	<datestamp>1256065560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I use a config file that sets a global variable that all scripts reference to determine what site (dev/test/prod) the code is running on. I also use CVS and RPM to handle code management and pushes.</p></htmltext>
<tokenext>I use a config file that sets a global variable that all scripts reference to determine what site ( dev/test/prod ) the code is running on .
I also use CVS and RPM to handle code management and pushes .</tokentext>
<sentencetext>I use a config file that sets a global variable that all scripts reference to determine what site (dev/test/prod) the code is running on.
I also use CVS and RPM to handle code management and pushes.</sentencetext>
</comment>
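[Editor's note] The global-variable idea above can be sketched in plain shell; the file name, hostnames, and URLs below are made up for illustration. One sourced file decides which environment a script is running in.

```shell
#!/bin/sh
# Hypothetical global config: one file, sourced by every deploy/check-out script.
cat > siteconf.sh <<'EOF'
# siteconf.sh -- set ENVIRONMENT to dev, test, or prod on each box
ENVIRONMENT=dev
case "$ENVIRONMENT" in
  dev)  DB_HOST=localhost;        BASE_URL=http://site1.developer.example.com ;;
  test) DB_HOST=db.test.internal; BASE_URL=http://site1.test.example.com ;;
  prod) DB_HOST=db.prod.internal; BASE_URL=http://www.example.com ;;
esac
EOF

# Every script starts with this line and never hardcodes an environment again.
. ./siteconf.sh
echo "deploying against $DB_HOST ($ENVIRONMENT)"
```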
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812237</id>
	<title>My advice</title>
	<author>.Bruce Perens</author>
	<datestamp>1256067600000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>You're still new.  Get out and choose a new career before you lose too much retirement.</p></htmltext>
<tokenext>You 're still new .
Get out and choose a new career before you lose too much retirement .</tokentext>
<sentencetext>You're still new.
Get out and choose a new career before you lose too much retirement.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817803</id>
	<title>Re:KISS</title>
	<author>mindstrm</author>
	<datestamp>1256048880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"You don't need development machines. Let them develop on their workstations. They're developers, they'll figure it out."</p><p>Quite often, they aren't - they are web design guys who should not be confused with programmers of any sort.</p></htmltext>
<tokenext>" You do n't need development machines .
Let them develop on their workstations .
They 're developers , they 'll figure it out .
" Quite often , they are n't - they are web design guys who should not be confused with programmers of any sort .</tokentext>
<sentencetext>"You don't need development machines.
Let them develop on their workstations.
They're developers, they'll figure it out.
"Quite often, they aren't - they are web design guys who should not be confused with programmers of any sort.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816403</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813507</id>
	<title>Try a build tool</title>
	<author>Anonymous</author>
	<datestamp>1256029620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I think you're looking for a build tool. At work we use Maven to do some of what you're doing but I think Ant would be a better choice for you. Don't worry about the fact that they are written for Java.</p><p>They allow you to move environment-specific settings into a configuration file. You tell it the different addresses, passwords and usernames to use in your development, test and production environments. You then ask for a build for the test environment and all those settings are inserted where they need to go. It comes with a lot of pre-defined tasks for checking out sources, initializing databases and publishing the files on an FTP. You can pick and choose which actions are run for which environment. Finally, if you run into a specific problem that won't be solved by one of the many, many pre-defined tasks, you can write your own task or launch a script.</p></htmltext>
<tokenext>I think you 're looking for a build tool .
At work we use Maven to do some of what you 're doing but I think Ant would be a better choice for you .
Do n't worry about the fact that they are written for Java.They allow you to move environment specific settings into a configuration file .
You tell it the different addresses , passwords and usernames to use in your development , test and production environments .
You then ask for a build for the test environment and all those settings are inserted where they need to go .
It comes with a lot of pre-defined tasks for checking out sources , initializing databases and publishing the files on an FTP .
You can pick and choose which actions are run for which environment .
Finally , if you run into a specific problem that wo n't be solved by one of the many , many , pre-defined tasks , you can write you 're own task or launch a script .</tokentext>
<sentencetext>I think you're looking for a build tool.
At work we use Maven to do some of what you're doing but I think Ant would be a better choice for you.
Don't worry about the fact that they are written for Java. They allow you to move environment-specific settings into a configuration file.
You tell it the different addresses, passwords and usernames to use in your development, test and production environments.
You then ask for a build for the test environment and all those settings are inserted where they need to go.
It comes with a lot of pre-defined tasks for checking out sources, initializing databases and publishing the files on an FTP.
You can pick and choose which actions are run for which environment.
Finally, if you run into a specific problem that won't be solved by one of the many, many pre-defined tasks, you can write your own task or launch a script.</sentencetext>
</comment>
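[Editor's note] The token-filtering the commenter describes doesn't actually require Java; a hedged plain-shell equivalent of Ant's filter behavior looks like this. All file names, tokens, and values here are invented for the sketch: one template, one properties file per environment.

```shell
#!/bin/sh
# One template with @TOKEN@ placeholders, one values file per environment.
cat > settings.php.tmpl <<'EOF'
<?php
$db_host = '@DB_HOST@';
$db_user = '@DB_USER@';
EOF
cat > test.properties <<'EOF'
DB_HOST=db.test.example.com
DB_USER=site1_test
EOF

# "Build for the test environment": substitute every KEY=value pair.
cp settings.php.tmpl settings.php
while IFS='=' read -r key value; do
  sed "s|@$key@|$value|g" settings.php > settings.php.new
  mv settings.php.new settings.php
done < test.properties
cat settings.php
```

Swapping `test.properties` for `prod.properties` is the whole environment switch, which is essentially what Ant's copy-with-filtering gives you.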
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813235</id>
	<title>Drupal: Deployment module</title>
	<author>Anonymous</author>
	<datestamp>1256071860000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The deployment framework is a series of modules which are designed to allow developers to easily stage Drupal data from one site to another. This includes content (nodes, taxonomy, users, etc.) as well as configuration (views, content types, system settings, etc.). Not only can it push new content, it can also push updates to existing content. Deploy automatically manages dependencies between nodes (i.e. nodereferences) as well as between other objects. It is designed to have a rich API which can be easily extended to be used in a variety of situations. Check out the screencast for a demo!</p><p><a href="http://drupal.org/project/deploy" title="drupal.org" rel="nofollow">http://drupal.org/project/deploy</a> [drupal.org]</p></htmltext>
<tokenext>The deployment framework is a series of modules which are designed to allow developers to easily stage Drupal data from one site to another .
This includes content ( nodes , taxonomy , users , etc ) as well as configuration ( views , content types , system settings , etc .
) Not only can it push new content , it can also push updates to existing content .
Deploy automatically manages dependencies between nodes ( IE nodereferences ) as well as between other objects .
It is designed to have a rich API which can be easily extended to be used in a variety of situations .
Check out the screencast for a demo ! http : //drupal.org/project/deploy [ drupal.org ]</tokentext>
<sentencetext>The deployment framework is a series of modules which are designed to allow developers to easily stage Drupal data from one site to another.
This includes content (nodes, taxonomy, users, etc.) as well as configuration (views, content types, system settings, etc.).
Not only can it push new content, it can also push updates to existing content.
Deploy automatically manages dependencies between nodes (IE nodereferences) as well as between other objects.
It is designed to have a rich API which can be easily extended to be used in a variety of situations.
Check out the screencast for a demo! http://drupal.org/project/deploy [drupal.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812315</id>
	<title>Hudson</title>
	<author>Anonymous</author>
	<datestamp>1256067900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Deploy a continuous build server.  You've already done most of the work by writing all of these scripts.  It will transfer nicely over to Hudson or maybe CruiseControl.  As a bonus, if you guys use Selenium tests, you can automate those too.</p><p>Also, get a ticketing system (Jira/Trac/whatever) with a configurable workflow that you're comfortable with so you can track deployments and approvals/disapprovals.  The workflow should be configurable because you never know if you'll start adding environments or release steps (your company's packaged releases won't really get deployed to production as the final step or you might add a customer environment).</p><p>Finally print out instructions for the developers and post them on a wall or wiki where they are plainly visible.  It's all useless if you're the only one who knows how the system works.</p></htmltext>
<tokenext>Deploy a continuous build server .
You 've already done most of the work by writing all of these scripts .
It will transfer nicely over to Hudson or maybe CruiseControl .
As a bonus , if you guys use Selenium tests , you can automate those too.Also , get a ticketing system ( Jira/Trac/whatever ) with a configurable workflow that you 're comfortable with so you can track deployments and approvals/disapprovals .
The workflow should be configurable because you never know if you 'll start adding environments or release steps ( your company 's packaged releases wo n't really get deployed to production as the final step or you might add a customer environment ) .Finally print out instructions for the developers and post them on a wall or wiki where they are plainly visible .
It 's all useless if you 're the only one who knows how the system works .</tokentext>
<sentencetext>Deploy a continuous build server.
You've already done most of the work by writing all of these scripts.
It will transfer nicely over to Hudson or maybe CruiseControl.
As a bonus, if you guys use Selenium tests, you can automate those too.
Also, get a ticketing system (Jira/Trac/whatever) with a configurable workflow that you're comfortable with so you can track deployments and approvals/disapprovals.
The workflow should be configurable because you never know if you'll start adding environments or release steps (your company's packaged releases won't really get deployed to production as the final step or you might add a customer environment).
Finally, print out instructions for the developers and post them on a wall or wiki where they are plainly visible.
It's all useless if you're the only one who knows how the system works.</sentencetext>
</comment>
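[Editor's note] A toy stand-in for what Hudson automates: detect that the checked-out tree changed, then run the existing build script. The file names and the checksum trick are illustrative only; a real CI server would poll the repository instead.

```shell
#!/bin/sh
# Set up a fake working copy and build script for the sketch.
mkdir -p src
echo 'v1' > src/index.php
printf 'echo built > build.log\n' > build.sh

last=""
# Fingerprint the tree; a real CI job would ask svn/git for the latest revision.
current=$(cat src/* | cksum)
if [ "$current" != "$last" ]; then
  sh build.sh        # in real life: run tests, then your deploy scripts
  last="$current"
fi
```

Hudson (or CruiseControl) is essentially this loop plus a web UI, build history, and hooks for Selenium test runs.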
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813315</id>
	<title>Re:From professional experience:</title>
	<author>Hurricane78</author>
	<datestamp>1256072160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Lool. I guess the lem&gt; was a Freudian slip. It's the exact part that I still have sleepless nights over. So please forgive me.<nobr> <wbr></nobr>:))</p></htmltext>
<tokenext>Lool .
I guess the lem &gt; was a Freudian slip .
It 's the exact part that I still have sleepless nights over .
So please forgive me .
: ) )</tokentext>
<sentencetext>Lool.
I guess the lem&gt; was a Freudian slip.
It's the exact part that I still have sleepless nights over.
So please forgive me.
:))</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813127</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817017</id>
	<title>The way we do it</title>
	<author>Anonymous</author>
	<datestamp>1256044680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Here is our entire process from test to prod deployment:</p><p>1) Unit tests<br>2) Regression tests (hacky, but use JUnit to invoke and interact with a bootstrapped version of our service)<br>3) (provided everything goes well)<br>4) Deploy to Alpha stage<br>5) Run remote regression tests. This is integration testing. Does it work with the db? All config is good? VIP settings ok?<br>6) Deploy to devo. This is essentially a mirror of the production website<br>7) Run remote regression tests (as per #2, but this is against the real McCoy). This allows us to detect configuration problems as well as how the code interacts with other incoming requests.<br>8) Promote to pre-prod. This is a box (or set of) operating with prod data but not serving prod traffic. The only difference between this and live traffic is merely the fact that our VIP doesn't know about it<br>9) Run pre-prod regression tests<br>10) Deploy to prod<br>11) Run prod regression tests<br>12) Monitor for fatal errors</p><p>All of our boxes have insane monitors on them. Disk usage, CPU usage, RAM usage, TCP socket usage, process count usage. Pretty much everything that defines the box as normal. Anything goes out of whack: page the on-call.</p><p>FYI I work for a VERY large ecommerce site. But sadly, very few teams are this rigorous.</p></htmltext>
<tokenext>Here is our entire processes from test to prod deployment1 ) Unit tests2 ) regression tests ( hacky , but use junit to invoke and interact with a bootstrapped version of our service ) 3 ) ( provided everything goes well ) 4 ) Deploy to Alpha stage5 ) run remote regression tests .
This is integration testing .
Does it work with the db ?
All config is good ?
VIP settings ok ? 4 ) deploy to devo .
This is essentially a mirror of the production website5 ) Run remote regression tests ( as per # 2 , but this is against the real mccoy ) .
This allows us to detect configuration problems as well as how the code interacts with other incoming requests.6 ) promote to pre-prod .
This is a box ( or set of ) operating with prod data but not serving prod traffic .
The only difference between this and live traffic is merely the fact that our VIP does n't know about it7 ) run preprod regression tests8 ) Deploy to prod.9 ) Run prod regression tests10 ) Monitor for fatal errorsAll of our boxes have insane monitors on them .
Disk usage , CPU usage , RAM usage , TCP socket usage , process count usage .
Pretty much everything that defines the box as normal .
Anything goes out of whack : Page the on-call.FYI I work for a VERY large ecommerce site .
But sadly , very few teams are this rigorous</tokentext>
<sentencetext>Here is our entire processes from test to prod deployment1) Unit tests2) regression tests (hacky, but use junit to invoke and interact with a bootstrapped version of our service)3) (provided everything goes well)4) Deploy to Alpha stage5) run remote regression tests.
This is integration testing.
Does it work with the db?
All config is good?
VIP settings ok?4) deploy to devo.
This is essentially a mirror of the production website5) Run remote regression tests (as per #2, but this is against the real mccoy).
This allows us to detect configuration problems as well as how the code interacts with other incoming requests.6) promote to pre-prod.
This is a box (or set of) operating with prod data but not serving prod traffic.
The only difference between this and live traffic is merely the fact that our VIP doesn't know about it7) run preprod regression tests8) Deploy to prod.9) Run prod regression tests10) Monitor for fatal errorsAll of our boxes have insane monitors on them.
Disk usage, CPU usage, RAM usage, TCP socket usage, process count usage.
Pretty much everything that defines the box as normal.
Anything goes out of whack: Page the on-call.FYI I work for a VERY large ecommerce site.
But sadly, very few teams are this rigorous</sentencetext>
</comment>
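[Editor's note] The staged promotion above can be sketched as a short shell loop: each stage gets the release only after the previous stage's tests pass. Stage names, file names, and the stub test are all invented; `run_tests` stands in for the real regression suite.

```shell
#!/bin/sh
set -e
mkdir -p release alpha devo preprod prod
echo 'site v2' > release/index.php

run_tests() {
  # Stub: the real remote regression suite runs against stage "$1" here.
  grep -q 'v2' "$1/index.php"
}

# Promote through the stages in order; stop at the first failure.
for stage in alpha devo preprod prod; do
  cp release/index.php "$stage/"
  run_tests "$stage" || { echo "tests failed in $stage, stopping"; exit 1; }
done
echo "promoted to prod"
```

The key property, matching the comment, is that prod can only receive a build that has already survived every earlier stage.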
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812269</id>
	<title>Packaging Packaging Packaging...</title>
	<author>keepper</author>
	<datestamp>1256067780000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>It's amazing how this seemingly obvious question always gets weird and overly complex answers.</p><p>Think about how every Unix OS handles this. Packaging!</p><p>Without getting into a flame war about the merits of any packaging system:</p><p>- Use your native distribution's packaging system.<br>- Create a naming convention for pkgs (i.e., web-frontend-php-1.2.4, web-prod-configs-1.27)<br>- Use meta-packages (packages whose only purpose is to list out what makes up a complete system)<br>- Make the developers package their software, or write scripts for them to do so easily (this is a lot easier than it seems)<br>- Put your packages in different repositories (dev for dev servers, stg for staging systems, qa for qa systems, prod for production, etc.)<br>- Use other system management tools to deploy said packages (either your native package manager, or puppet, cfengine, func, sshcmd scripts, etc.)</p><p>And the pluses? You always know absolutely what's running on your system. You can always reproduce and clone a system.</p><p>It takes discipline, but this is how it's done in large environments.</p></htmltext>
<tokenext>Its amazing , how this seemingly obvious question , always gets weird and overly complex answers.Think about how every unix os handles this .
Packaging ! Without getting into a flame war about the merits of any packaging systems : - Use your native distributions packaging system.- Create a naming convention for pkgs ( ie , web-fronted-php-1.2.4 , web-prod-configs-1.27 ) - Use meta-packages ( packages , whose only purpose is to list out what makes out a complete systems ) - Make the developers package their software , or write scripts for them to do so easily ( this is a lot easier than it seems ) - Put your packages in different repositories ( dev for dev servers , stg for staging systems,qa for qa systems , prod for production , etc et c- Use other system management tools to deploy said packages ( either your native package manager , or puppet , cfgengine , func , sshcmd scripts , etc ) And the pluses ?
you always know absolutely whats running on your system .
You can always reproduce and clone a systems.It takes discipline , but this is how its done in large environments.-</tokentext>
<sentencetext>Its amazing, how this seemingly obvious question, always gets weird and overly complex answers.Think about how every unix os handles this.
Packaging!Without getting into a flame war about the merits of any packaging systems:- Use your native distributions packaging system.- Create a naming convention for pkgs ( ie, web-fronted-php-1.2.4, web-prod-configs-1.27 )- Use meta-packages  ( packages, whose only purpose is to list out what  makes out a complete systems )- Make the developers package their software, or write scripts for them to do so easily ( this is a lot easier than it seems )- Put your packages in different repositories ( dev for dev servers, stg for staging systems,qa for qa systems ,  prod for production, etc et c- Use other system management tools to deploy said packages ( either your native package manager, or puppet, cfgengine, func, sshcmd scripts, etc )And the pluses?
you always know absolutely whats running on your system.
You can always reproduce and clone a systems.It takes discipline, but this is how its done in large environments.-</sentencetext>
</comment>
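[Editor's note] A minimal sketch of the meta-package idea, assuming an RPM-based distribution; the package names and versions follow the commenter's invented naming convention and are not real packages. The block only writes the spec file; building it would be `rpmbuild -bb web-prod-meta.spec` on a real system.

```shell
#!/bin/sh
# Hypothetical meta-package spec: it installs nothing itself, it only
# pins the exact set of packages that make up one complete prod web box.
cat > web-prod-meta.spec <<'EOF'
Name:     web-prod-meta
Version:  1.27
Release:  1
Summary:  Meta-package describing a complete production web server
License:  GPL
Requires: web-frontend-php = 1.2.4
Requires: web-prod-configs = 1.27

%description
Installing this package pulls in one complete, known system.

%files
EOF
echo "spec written"
```

Because the meta-package's Requires list is versioned, "what is running on this box?" reduces to querying one package.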
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812857</id>
	<title>FTP?  phpMyAdmin?</title>
	<author>Culture20</author>
	<datestamp>1256069940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Web devs need to have security enforced or they won't think about it for their sites.  Shut off FTP and enforce SFTP only.  If bandwidth is a factor in choosing FTP over SFTP, at the very least, use kerberized FTP.  Make certain that phpMyAdmin is behind https and that authentication is required.  Yes, this means they have to use two passwords.  Tough.</htmltext>
<tokenext>Web devs need to have security enforced or they wo n't think about it for their sites .
Shut off FTP and enforce SFTP only .
If bandwidth is a factor is choosing FTP over SFTP , at the very least , use kerberized FTP .
Make certain that phpMyAdmin is behind https and that authentication is required .
Yes , this means they have to use two passwords .
Tough .</tokentext>
<sentencetext>Web devs need to have security enforced or they won't think about it for their sites.
Shut off FTP and enforce SFTP only.
If bandwidth is a factor in choosing FTP over SFTP, at the very least, use kerberized FTP.
Make certain that phpMyAdmin is behind https and that authentication is required.
Yes, this means they have to use two passwords.
Tough.</sentencetext>
</comment>
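[Editor's note] The SFTP-only rule above maps to a few stock sshd_config directives. This sketch just writes the fragment to a scratch file for review; the group name `webdevs` and the chroot path are assumptions, and on a real box you would merge this into /etc/ssh/sshd_config and reload sshd.

```shell
#!/bin/sh
cat > sshd_sftp_only.conf <<'EOF'
Subsystem sftp internal-sftp

# Web devs get file transfer only: no shell, no tunnelling.
Match Group webdevs
    ForceCommand internal-sftp
    ChrootDirectory /var/www/%u
    AllowTcpForwarding no
    X11Forwarding no
EOF
echo "fragment written"
```

`internal-sftp` keeps the chroot simple because no external sftp-server binary (or its libraries) needs to exist inside the jail.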
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816403</id>
	<title>KISS</title>
	<author>SanityInAnarchy</author>
	<datestamp>1256041860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm amazed -- no one yet seems to have figured out one very obvious possibility:</p><p>You don't need development machines. Let them develop on their workstations. They're developers, they'll figure it out.</p><p>Use Git. Or, if you must use SVN, use developer-specific branches -- and after using this for a while, it should be very obvious why you should use Git. The point is that each developer should be checking into version control <i>often</i>, without having to worry about causing problems for other users. Want someone to take a look at your code? Let them merge your code (or just check out your branch) and run it on their local machine.</p><p>Then, one staging and one production. Or more, if you need to be testing multiple versions company-wide. VM images to spawn each (so staging can be cloned from production, or vice versa), and Capistrano to deploy.</p><p>Granted, this wasn't used with a particularly large team, but we didn't have anything which even hinted at scalability issues down the road.</p><p>Now, if your team is so large that you need many test and many production servers, I apologize, but it does look like something of a WTF when you have web developers who can't figure out how to set up a webserver on their machine. I mean, every Mac comes with Ruby On Rails out of the box!</p></htmltext>
<tokenext>I 'm amazed -- no one yet seems to have figured out one very obvious possibility : You do n't need development machines .
Let them develop on their workstations .
They 're developers , they 'll figure it out.Use Git .
Or , if you must use SVN , use developer-specific branches -- and after using this for awhile , it should be very obvious why you should use Git .
The point is that each developer should be checking into version control often , without having to worry about causing problems for other users .
Want someone to take a look at your code ?
Let them merge your code ( or just check out your branch ) and run it on their local machine.Then , one staging and one production .
Or more , if you need to be testing multiple versions company-wide .
VM images to spawn each ( so staging can be cloned from production , or vice versa ) , and Capistrano to deploy.Granted , this was n't used with a particularly large team , but we did n't have anything which even hinted at scalability issues down the road.Now , if your team is so large that you need many test and many production servers , I apologize , but it does look like something of a WTF when you have web developers who ca n't figure out how to set up a webserver on their machine .
I mean , every Mac comes with Ruby On Rails out of the box !</tokentext>
<sentencetext>I'm amazed -- no one yet seems to have figured out one very obvious possibility:You don't need development machines.
Let them develop on their workstations.
They're developers, they'll figure it out.Use Git.
Or, if you must use SVN, use developer-specific branches -- and after using this for awhile, it should be very obvious why you should use Git.
The point is that each developer should be checking into version control often, without having to worry about causing problems for other users.
Want someone to take a look at your code?
Let them merge your code (or just check out your branch) and run it on their local machine.Then, one staging and one production.
Or more, if you need to be testing multiple versions company-wide.
VM images to spawn each (so staging can be cloned from production, or vice versa), and Capistrano to deploy.Granted, this wasn't used with a particularly large team, but we didn't have anything which even hinted at scalability issues down the road.Now, if your team is so large that you need many test and many production servers, I apologize, but it does look like something of a WTF when you have web developers who can't figure out how to set up a webserver on their machine.
I mean, every Mac comes with Ruby On Rails out of the box!</sentencetext>
</comment>
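[Editor's note] The Capistrano deploy mentioned above boils down to a releases/current symlink layout, sketched here in plain shell. The release directory name is an invented timestamp; rollback is just repointing the link at the previous release.

```shell
#!/bin/sh
set -e
# Each deploy lands in its own timestamped directory...
mkdir -p releases/20091020
echo 'new code' > releases/20091020/index.php

# ...and "going live" is a single symlink swap the web server docroot points at.
# -n stops ln from descending into an existing "current" link (GNU/BSD ln).
ln -sfn releases/20091020 current
```

Because the cutover is one symlink update, a bad deploy never leaves the site half-copied, which is hard to guarantee with FTP pushes.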
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817175</id>
	<title>How Do You Manage Dev/Test/Production Environments</title>
	<author>Anonymous</author>
	<datestamp>1256045460000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>How Do You Manage Dev/Test/Production Environments?
<br> <br>
<i>Very carefully.</i></htmltext>
<tokenext>How Do You Manage Dev/Test/Production Environments ?
Very carefully .</tokentext>
<sentencetext>How Do You Manage Dev/Test/Production Environments?
Very carefully.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812641</id>
	<title>Quick Brief</title>
	<author>kenp2002</author>
	<datestamp>1256069100000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><p>Develop 4 Environment Structures</p><p>Development (DEV)<br>Integration Testing (INTEG)<br>Acceptance (ACPT)<br>Production (PROD)</p><p>For each system create a migration script that generically does the following:<br>(We will use SOURCE and DEST for environments. You migrate from DEV-&gt;INTEG-&gt;ACPT-&gt;PROD)</p><p>The migration script at its core does the following:</p><p>1) STOP Existing Services and Databases (SOURCE and DEST)</p><p>2) BUILD your deployment package from SOURCE (This means finalizing commits to an SVN, Creating a dump of SOURCE databases etc.) If this is a long process then you can leave the DEST running and STOP DEST at the end of the build phase. I do this as builds for my world can take 2-3 days.</p><p>3) CONDITION your deployment package to be configured for DEST environment (simple find and replace scripts to correct database names, IP address, etc. These should be config files that are read and processed.) This is common if there are different security SAPs, Certificates, Etc that need to be configured. For instance you may not have SSL enabled in DEV but you might in INTEG or ACPT.</p><p>4) BACKUP DEST information as an install package (this is identical to the BUILD done on the source. This BACKUP can be deployed to restore the previous version.) This should be the same function you ran on SOURCE with a different destination (say Backups versus Deploys)</p><p>5) MIGRATE the install package from SOURCE to DEST<br>START DEST</p><p>6) TEST TEST and RETEST</p><p>7) If all tests pass then APPROVE. This is the green light to re-start the SOURCE services so development can move on.</p><p>That is a brief of my suggestion.</p><p>DEV is obvious<br>INTEG is where you look for defects and resolve defects. Primary testing.<br>ACPT is where user and BL acceptance testing occurs and should mirror PROD in services available.<br>PROD<nobr> <wbr></nobr>... 
	yeah...</p><p>I handle about 790+ applications across 2000+ pieces of hardware so this may appear to be overkill for some but it can be as simple as 4 running instances on a single box with a<nobr> <wbr></nobr>/DEV/<nobr> <wbr></nobr>/IT/<nobr> <wbr></nobr>/ACPT/<nobr> <wbr></nobr>/PROD/</p><p>Directory structure with MySQL running 4 different databases. The "Script" could be as simple as dropping the DEST database and copying the SOURCE database with a new name. Other options are creating modification SQLs for instance that are applied onto the existing database.</p><p>e.g. STOP, UPDATE, START</p><p>to preserve existing data. In the case of Drupal your DEV might pull a nightly build and kick out a weekly IT, a biweekly ACPT, and a monthly PROD update.</p><p>JUST REMEMBER THAT YOU MUST MAKE SURE THE PROCESS IS ALWAYS REVERSIBLE!!</p><p>The script to deploy needs to handle failure. There has to be a good backout.</p><p>You should have a method to backup and restore the current state. Integrate that into the script. Always backup before you do changes and AGAIN after you change. DEV may need to look at the failed deploy data (perhaps a substitution or patch failed, they need to find out why.)</p><p>Before Backup and After Backup in the migration script.</p><p>And always 'shake out' a deployment in each environment level to make sure problems don't propagate. You find problems in IT, you test to make sure what you found in IT is resolved in ACPT. Your testers should NOT normally be finding and filing new defects in ACPT environments with the exception of inter-application communication that might not be available in earlier environments. (Great example might be ACPT has the ability to connect to say a marketing company's databases where you use dummy databases in IT and DEV.) 80/20 is the norm for IT/ACPT that I see.</p><p>Good luck. Use scripts that are consistent and invest in a good migration method. 
It works great for mainframes and works great in the distributed world too.</p><p>A special condition is needed for final production as you may need temporary redirects to be applied for online services (commonly called Gone Fishing pages or Under Construction Redirects)</p></htmltext>
<tokenext>Develop 4 Environment StructuresDevelopment ( DEV ) Integration Testing ( INTEG ) Acceptance ( ACPT ) Production ( PROD ) For each system create a migration script that generically does the following : ( We will use SOURCE and DEST for environments .
You migrate from DEV- &gt; INTEG- &gt; ACPT- &gt; PROD ) The migration script as it 's core does the following : 1 ) STOP Existing Services and Databases ( SOURCE and DEST ) 2 ) BUILD your deployment package from SOURCE ( This means finalizing commits to an SVN , Creating a dump of SOURCE databases etc .
) If this is a long process then you can leave the DEST running and STOP DEST at the end of the build phase .
I do this as builds for my world can take 2-3 days.3 ) CONDITION your deployment package to be configured for DEST environment ( simple find and replace scripts to correct database names , IP address , etc .
These should be config files that are read and processes .
) This is common if there are different security SAPs , Certificates , Etc that need to be configured .
For instance you may not have SSL enabled in DEV but you might in INTEG or ACPT.4 ) BACKUP DEST information as an install package ( this is identical to the BUILD done on the source .
This BACKUP can be deployed to restore the previous version .
) This should be the same function you ran on SOURCE with a different destination ( say Backups verus Deploys ) 5 ) MIGRATE the install package from SOURCE to DESTSTART DEST6 ) TEST TEST and RETEST7 ) If all tests pass then APPROVE .
This is the green light to re-start the SOURCE services so development can move on.That is a brief of my suggestion.DEV is obviousINTEG is where you look for defects and resolve defects .
Primary testing.ACPT is where user and BL acceptance testing occurs and should mirror PROD in services available.PROD ... yeah...I handle about 790 + applications across 2000 + pieces of hardware so this may appear to be overkill for some but it can be as simple as 4 running instances on a single box with a /DEV/ /IT/ /ACPT/ /PROD/Directory structure with MYSQL running 4 different databases .
The " Script " could be as simple as dropping the DEST database and copying the SOURCE database with a new name .
Other options are creating modification SQLS for instance that are applied onto the exist database.e.g .
STOP , UPDATE , STARTto preserve existing data .
In the case of Drupal your DEV might pull a nightly build and kick out a weekly IT , a biweekly ACPT , and a monthly PROD update.JUST REMEMBER THAT YOU MUST MAKE SURE THE PROCESS IS ALWAYS REVERSABLE !
! The script to deploy needs to handle failure .
There has to be a good backout.You should have a method to backup and restore the current state .
Integrate that into the script .
Always backup Before you do changes and AGAIN after you change .
DEV may need to look at the failed deploy data ( perhaps a substitution or patch failed , they need to find out why .
) Before Backup and After Backup in the migration script.And always 'shake out ' a deployment in each environment level to make sure problems to propogate .
You find problems in IT , you test to make sure what you found in IT is resolved in ACPT .
Your testers should NOT normally be finding and filing new defects in ACPT environments with the exception of inter-application communication that might not be available in earlier environments .
( Great example might be ACPT has the ability to connect to say a marketing companies databases where you use dummy databases in IT and DEV .
) 80/20 is the norm for IT/ACPT that I see.Good luck .
Use scripts that are consistent and invest in a good migration method .
It works great for mainframes and works great in the distributed world too.A special condition is needed for final production as you may need temporary redirects to be applied for online services ( commonly called Gone Fishing pages or Under Construction Redirects )</tokentext>
<sentencetext>Develop 4 Environment StructuresDevelopment (DEV)Integration Testing (INTEG)Acceptance (ACPT)Production (PROD)For each system create a migration script that generically does the following:(We will use SOURCE and DEST for environments.
You migrate from DEV-&gt;INTEG-&gt;ACPT-&gt;PROD)The migration script as it's core does the following:1) STOP Existing Services and Databases (SOURCE and DEST)2) BUILD your deployment package from SOURCE (This means finalizing commits to an SVN, Creating a dump of SOURCE databases etc.
) If this is a long process then you can leave the DEST running and STOP DEST at the end of the build phase.
I do this as builds for my world can take 2-3 days.3) CONDITION your deployment package to be configured for DEST environment (simple find and replace scripts to correct database names, IP address, etc.
These should be config files that are read and processes.
) This is common if there are different security SAPs, Certificates, Etc that need to be configured.
For instance you may not have SSL enabled in DEV but you might in INTEG or ACPT.4) BACKUP DEST information as an install package(this is identical to the BUILD done on the source.
This BACKUP can be deployed to restore the previous version.
) This should be the same function you ran on SOURCE with a different destination (say Backups verus Deploys)5) MIGRATE the install package from SOURCE to DESTSTART DEST6) TEST TEST and RETEST7) If all tests pass then APPROVE.
This is the green light to re-start the SOURCE services so development can move on.That is a brief of my suggestion.DEV is obviousINTEG is where you look for defects and resolve defects.
Primary testing.ACPT is where user and BL acceptance testing occurs and should mirror PROD in services available.PROD ... yeah...I handle about 790+ applications across 2000+ pieces of hardware so this may appear to be overkill for some but it can be as simple as 4 running instances on a single box with a /DEV/ /IT/ /ACPT/ /PROD/Directory structure with MYSQL running 4 different databases.
The "Script" could be as simple as dropping the DEST database and copying the SOURCE database with a new name.
Other options are creating modification SQLS for instance that are applied onto the exist database.e.g.
STOP, UPDATE, STARTto preserve existing data.
In the case of Drupal your DEV might pull a nightly build and kick out a weekly IT, a biweekly ACPT, and a monthly PROD update.JUST REMEMBER THAT YOU MUST MAKE SURE THE PROCESS IS ALWAYS REVERSABLE!
!The script to deploy needs to handle failure.
There has to be a good backout.You should have a method to backup and restore the current state.
Integrate that into the script.
Always backup Before you do changes and AGAIN after you change.
DEV may need to look at the failed deploy data (perhaps a substitution or patch failed, they need to find out why.
)Before Backup and After Backup in the migration script.And always 'shake out' a deployment in each environment level to make sure problems to propogate.
You find problems in IT, you test to make sure what you found in IT is resolved in ACPT.
Your testers should NOT normally be finding and filing new defects in ACPT environments with the exception of inter-application communication that might not be available in earlier environments.
(Great example might be ACPT has the ability to connect to say a marketing companies databases where you use dummy databases in IT and DEV.
) 80/20 is the norm for IT/ACPT that I see.Good luck.
Use scripts that are consistent and invest in a good migration method.
It works great for mainframes and works great in the distributed world too.A special condition is needed for final production as you may need temporary redirects to be applied for online services (commonly called Gone Fishing pages or Under Construction Redirects)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813373</id>
	<title>Re:Most important thing in my book</title>
	<author>Anonymous</author>
	<datestamp>1256029200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I'd have to contest the data settings...</p><p>Most SOX audits, as well as customers, demand that you NOT use live/real data for testing...<br>Develop a tool that will *morph* live data into semi-mangled data before using it in test/dev... Or just write a tool to generate good, random data for dev and test...</p></htmltext>
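A "morphing" tool of the kind suggested can be as small as a deterministic mangler that keeps the shape of live values but destroys the real content. The field names and the keep-a-prefix scheme below are illustrative assumptions, not any particular audit requirement:

```python
"""Minimal sketch of 'morphing' live data into semi-mangled test data.
Deterministic, so the same live value always maps to the same test value
(which keeps joins and foreign keys consistent across morphed tables)."""
import hashlib


def morph(value: str, keep: int = 2) -> str:
    # keep a short recognizable prefix; replace the rest with a stable hash
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:10]
    return value[:keep] + digest


live_row = {"name": "Alice Smith", "email": "alice@example.com"}
test_row = {field: morph(v) for field, v in live_row.items()}
```

Determinism is the design choice worth noting: random replacement would break referential integrity between morphed tables, a stable hash does not.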
<tokenext>I 'd have to contest the Data settings...Most SOX audits as well as customers demand that you NOT use live/real data for testing...Develop a tool that will * morph * live data to semi-mangled data , before using in test/dev... Or just write a tool to generate good / random data for dev and test.. .</tokentext>
<sentencetext>I'd have to contest the Data settings...Most SOX audits as well as customers demand that you NOT use live/real data for testing...Develop a tool that will *morph* live data to semi-mangled data, before using in test/dev... Or just write a tool to generate good / random data for dev and test...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</id>
	<title>Most important thing in my book</title>
	<author>Anonymous</author>
	<datestamp>1256066520000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>The most important thing is to treat your code and data separately.</p><p>Code:</p><p>Dev -&gt; Test -&gt; Production</p><p>Data:</p><p>Production -&gt; Test -&gt; Dev</p><p>Many developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.</p><p>And make sure you back up your Dev code and your Production Data.</p></htmltext>
<tokenext>Most important thing is to treat your code and data separately.Code : Dev - &gt; Test - &gt; ProductionData : Production - &gt; Test - &gt; DevMany developers forget to test and develop with real and current data , allowing problems to slip further downstream than they should.And make sure you backup you Dev code and you Production Data .</tokentext>
<sentencetext>Most important thing is to treat your code and data separately.Code:Dev -&gt; Test -&gt; ProductionData:Production -&gt; Test -&gt; DevMany developers forget to test and develop with real and current data, allowing problems to slip further downstream than they should.And make sure you backup you Dev code and you Production Data.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813127</id>
	<title>From professional experience:</title>
	<author>Hurricane78</author>
	<datestamp>1256071320000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>I have adapted my system from the 5 years I did this professionally.</p><p>First of all, it's a 3-stage system.<br>You have a couple of live servers, an identical staging server, and the user machines.<br>Every system has a clone of the files: the live servers have rsync copies of the stage server files, and the users all sync to the stage with Git.<br>Everyone has a local clone of the stage server software too, so he can test server-side code right on his machine.<br>That's important in every company where people could make conflicting (and even big, global) patches.</p><p>The stage server then has validity tests running: compilations and unit test cases wherever possible, including the database, the server-side code, and rendering test pages in all relevant browsers to diff the rendered versions (images) of the pages. (There's an app for that in Firefox, but otherwise it's desktop automation.)<br>There's a red alert box in the test case overview when something fails, which gets checked every evening before pushing anything to the live servers at night.<br>The only thing that turns out to be a bit hard is testing the client-side logic (e.g. of web apps) in a transparent manner (= keeping the software configurations and server-side code the same, to be able to rely on it).</p><p>Then there is an emergency push and an emergency direct live update mechanism, for cases when you quickly have to fix something that got overlooked. (Which usually should result in a new test case being written, to catch all such problems.)</p><p>A well-integrated project management system is very important. At the end of my first company, it was a self-written one with good integration. But in the beginning, something like Trac might suffice.</p><p>Then it is <em>very</em> important to have a <em>knowledge base</em> for all the things that need to be remembered. Like a meta-documentation: workflows and procedures, why the mysql server will not restart on a reboot of stage server clones. 
Little hooks and mantraps like that. I recommend a wiki.</p><p>And last but not least, never ever forget to have a Bugzilla. If you're good, you can integrate Bugzilla, the test validations, and the task/project management into one system, making the validity tests create bugs in Bugzilla, and bugs being the same as tasks (which makes test-driven development easier).</p><p>Yet all this is completely worthless if your colleagues don't use it! ;)<br>Unfortunately, I learned that when someone <em>can</em> do something wrong, he <em>will</em>.<br>So if you can't lock down possibilities to only those required, you have to be very, very careful with who you hire. Especially with "web development", where you get sinology students who learned HTML while working as a taxi driver, stating that they are "professional web developers with 5 years of experience", while honestly believing it. And team leaders believing it too, because they are just as "competent", because they themselves either started as something as simple as link collectors, or the boss of the company does not know shit about his business and hired those types. They then usually get promoted to "Head of...". It's the mother of all PHB stories. ^^</p><p>The key is: make them <em>like</em> to work the proper way. If nothing helps, money can always push them in the right direction. It's called a "bonus".<br>And make it <em>their</em> project too, by also embracing their decisions! :)</p></htmltext>
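The nightly "red alert box, then push" gate described above reduces to a tiny check. In this sketch, `test_results` and `push` are hypothetical stand-ins for the real test-case overview and the rsync from stage to live:

```python
"""Sketch of the evening gate: push to the live servers only when every
stage validity test is green. The push callback stands in for the real
stage-to-live sync."""


def nightly_push(test_results: dict, push) -> bool:
    failures = [name for name, ok in test_results.items() if not ok]
    if failures:
        # the 'red alert box': something failed, nothing goes live tonight
        print("red alert:", ", ".join(failures))
        return False
    push()
    return True
```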
<tokenext>I have adapted my system from the 5 years I professionally did it.First of all , it 's a 3-stage system.You have a couple of live servers , a identical staging server , and the user machines.Every system has a clone of the files .
the servers have rsync copies of the stage server files.And the users all sync to the stage with GIT.Everyone has a local clone of the stage server software too , so he can test server-side code right on his machine.That 's important in every company where people could do conflicting ( and even big , global ) patches.The stage server then has validity tests running .
Compilations and unit test cases wherever possible .
Including the database , the server side code , and rendering test pages in all relevant browsers , to diff the rendered versions ( images ) of the pages .
( There 's a app for that in Firefox , but otherwise it 's desktop automation .
) There 's an red alert box in the test case overview when something fails .
Which gets checked every evening , before pushing anything to the live servers at night.The only thing that turns out to be a bit hard , is to test the client-side logic ( e.g .
of web-apps ) in a transparent manner ( = keeping the software configurations and serveride code the same to be able to rely on it ) .Then there is a emergency push and and emergency direct live update mechanism , for cases when you quickly have to fix something that got overlooked .
( Which usually should result in a new test case to be written , to catch all such problems .
) A well-integrated project management system is very important .
At the end of my first company , it was a self-written one with good integration .
But in the beginning , something like Trac might suffice.Then very important is , to have a knowledge base for all things that need to be remembered .
Like a meta-documentation .
Workflows and procedures .
Why the mysql server will not restart on a reboot of stage server clones .
Little hooks and mantraps like that .
I recommend a Wiki.And last but not least , never ever forget to have a Bugzilla .
If you 're good , you can integrate Bugzilla , the test validations and the task/project management into one system .
Making the validity tests create bugs in Bugzilla , and bugs being the same as tasks ( which makes test-driven development easier ) .Yet this all is completely worthless , if your colleagues do n't use it !
; ) Unfortunately , I learned , that when someone can/em &gt; do something wrong , he will.So if you ca n't lock down possibilities to only those required , you have to be very very careful with who you hire .
Especially with " web development " , where you get sinology students who learned HTML while working as a taxi driver , stating that they are " professional web developers with 5 years of experience " , while honestly believing that .
And team leaders believing it too , because they are just as " competent " .
Because they themselves either started as something an simple as link collectors , or the boss of the company does not know shit about his business , and hired those types .
They then usually get promoted to " Head of ... " .
It 's the mother of all PHB stories .
^ ^ The key is : Make them like to work the proper way .
If nothing helps , money can always push them in the right direction .
It 's called " bonus " .And making it their project too , by also embracing their decisions !
: )</tokentext>
<sentencetext>I have adapted my system from the 5 years I professionally did it.First of all, it's a 3-stage system.You have a couple of live servers, a identical staging server, and the user machines.Every system has a clone of the files.
the servers have rsync copies of the stage server files.And the users all sync to the stage with GIT.Everyone has a local clone of the stage server software too, so he can test server-side code right on his machine.That's important in every company where people could do conflicting (and even big, global) patches.The stage server then has validity tests running.
Compilations and unit test cases wherever possible.
Including the database, the server side code, and rendering test pages in all relevant browsers, to diff the rendered versions (images) of the pages.
(There's a app for that in Firefox, but otherwise it's desktop automation.
)There's an red alert box in the test case overview when something fails.
Which gets checked every evening, before pushing anything to the live servers at night.The only thing that turns out to be a bit hard, is to test the client-side logic (e.g.
of web-apps) in a transparent manner (= keeping the software configurations and serveride code the same to be able to rely on it).Then there is a emergency push and and emergency direct live update mechanism, for cases when you quickly have to fix something that got overlooked.
(Which usually should result in a new test case to be written, to catch all such problems.
)A well-integrated project management system is very important.
At the end of my first company, it was a self-written one with good integration.
But in the beginning, something like Trac might suffice.Then very important is, to have a knowledge base for all things that need to be remembered.
Like a meta-documentation.
Workflows and procedures.
Why the mysql server will not restart on a reboot of stage server clones.
Little hooks and mantraps like that.
I recommend a Wiki.And last but not least, never ever forget to have a Bugzilla.
If you're good, you can integrate Bugzilla, the test validations and the task/project management into one system.
Making the validity tests create bugs in Bugzilla, and bugs being the same as tasks (which makes test-driven development easier).Yet this all is completely worthless, if your colleagues don't use it!
;)Unfortunately, I learned, that when someone can/em&gt; do something wrong, he will.So if you can't lock down possibilities to only those required, you have to be very very careful with who you hire.
Especially with "web development", where you get sinology students who learned HTML while working as a taxi driver, stating that they are "professional web developers with 5 years of experience", while honestly believing that.
And team leaders believing it too, because they are just as "competent".
Because they themselves either started as something an simple as link collectors, or the boss of the company does not know shit about his business, and hired those types.
They then usually get promoted to "Head of ...".
It's the mother of all PHB stories.
^^The key is: Make them like to work the proper way.
If nothing helps, money can always push them in the right direction.
It's called "bonus".And making it their project too, by also embracing their decisions!
:)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813163</id>
	<title>Re:Most important thing in my book</title>
	<author>Anonymous</author>
	<datestamp>1256071560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Don't forget that in some environments the production data cannot be given to developers, for legal, ethical or other reasons. Generating decent test data is a harder problem here.</p><p>Also, there's no guarantee your production data tests all edge cases, etc. (yet). If you don't have decent test data that actually exercises your code, you're sunk. You can't actually go into every customer account to see if your presentation code handles all the foreign characters, for example.</p><p>The production data will be useful for load and scale testing for processes that need to search or filter the entire dataset - but it'd be fairly easy to generate test data to the correct scale - and once you can generate it, you can generate bigger, so you know how big you can get before you need a bigger server.</p><p>Having said all that, I do often use the production data in my dev instance (in my application the data in question has no particular sensitivity, and I have full access to the production instance anyway).</p></htmltext>
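Generating test data "to the correct scale", including the edge cases production data may never contain, can be sketched like this. The schema and the specific edge values are hypothetical, chosen only to illustrate the idea:

```python
"""Sketch of a scalable test-data generator. Seeded, so every run is
reproducible; the first few rows are deliberate edge cases (empty name,
non-ASCII characters, embedded quote) of a kind production data might
happen to lack."""
import random
import string


def fake_customers(n: int, seed: int = 0):
    rng = random.Random(seed)
    edge_names = ["", "Ωmega Ltd", "O'Brien & Sons"]
    rows = []
    for i in range(n):
        if i < len(edge_names):
            name = edge_names[i]
        else:
            name = "".join(rng.choices(string.ascii_letters, k=8))
        rows.append({"id": i, "name": name})
    return rows
```

Because the generator takes `n`, the same tool covers both functional testing (small n, edge cases up front) and scale testing (generate bigger than production to find the ceiling).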
<tokenext>Do n't forget in some environments the production data can not be given to developers , for legal , ethical or other reasons .
Generating decent test data is a harder problem here.Also there 's no guarantee your production data tests all edge cases , etc ( yet ) .
If you do n't have decent actual test data that actually tests your code , you 're sunk .
You ca n't actually go into every customer account to see if your presentation code handles all the foreign characters for example.The production data will be useful for load and scale testing for processes that need to search or filter the entire dataset - but it 'd be fairly easy to generate test data to the correct scale - and once you can generate it , you can generate bigger so you know how big you can get before you need a bigger server.Having said all that , I do often use the production data in my dev instance ( in my application the data in question has no particular sensitivity and I have full access to the production instance anyway ) .</tokentext>
<sentencetext>Don't forget in some environments the production data cannot be given to developers, for legal, ethical or other reasons.
Generating decent test data is a harder problem here.Also there's no guarantee your production data tests all edge cases, etc (yet).
If you don't have decent actual test data that actually tests your code, you're sunk.
You can't actually go into every customer account to see if your presentation code handles all the foreign characters for example.The production data will be useful for load and scale testing for processes that need to search or filter the entire dataset - but it'd be fairly easy to generate test data to the correct scale - and once you can generate it, you can generate bigger so you know how big you can get before you need a bigger server.Having said all that, I do often use the production data in my dev instance (in my application the data in question has no particular sensitivity and I have full access to the production instance anyway).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811721</id>
	<title>go virtual</title>
	<author>Anonymous</author>
	<datestamp>1256065860000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>1</modscore>
	<htmltext><p>Have a perfect virtual machine image ready. You can bring up a new server in about 5 minutes.</p></htmltext>
<tokenext>Have a perfect virtual machine image ready .
You can bring up a new server in about 5 minutes .</tokentext>
<sentencetext>Have a perfect virtual machine image ready.
You can bring up a new server in about 5 minutes.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29815497</id>
	<title>Testing, Dev and Production can be handled by</title>
	<author>Anonymous</author>
	<datestamp>1256036820000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Virtualization, Virtualization, Virtualization.</p><p>Which specific technology you use is irrelevant, but any reasonable one should have a way to clone VMs.</p><p>Clone production, make modifications, test, push (the entire VM) back out. You should also be running your webserver and any DB on separate VMs. This allows you to use clone management to deploy site changes without running the risk of affecting the DB. Changes to the DB are more complex, but so far I have found them to be inevitable; the DB should always be cloned and backed up beforehand (ask MS/Danger/T-Mobile).</p></htmltext>
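The clone/modify/test/push cycle amounts to the loop below. `clone`, `test`, and `promote` are stubs standing in for whatever your hypervisor's tooling actually provides; nothing here names a real virtualization API:

```python
"""Sketch of the VM release cycle: clone production, change the clone,
test it, and only then push the whole VM back out. The three callbacks
stand in for hypervisor-specific commands."""


def release(site_vm: str, clone, test, promote) -> bool:
    staging_vm = clone(site_vm)     # work on a copy, never on production
    # ... apply the site changes to staging_vm here ...
    if not test(staging_vm):
        return False                # failed test: production is untouched
    promote(staging_vm, site_vm)    # push the entire VM back out
    return True
```

The invariant worth keeping is that `promote` is the only step that touches production, and it only runs after `test` passes.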
<tokenext>Virtualization , Virtualization , Virtualization.Which spacific technology you use is irrelevent , but any resonable one should have a way to clone VMs.Clone production , Make modifications , test , push ( the entire vm ) back out .
You should also be running your webserver and any DB on seperate VMs .
This allows you to use clone-management to deploy site-changes without running the risk of affecting the DB .
Changes to the DB are more complex , but so far I have found that to be inevitable and should always be cloned , and backed up beforehand ( ask MS/Danger/T-Mobile ) .</tokentext>
<sentencetext>Virtualization, Virtualization, Virtualization.Which spacific technology you use is irrelevent, but any resonable one should have a way to clone VMs.Clone production, Make modifications, test, push (the entire vm) back out.
You should also be running your webserver and any DB on seperate VMs.
This allows you to use clone-management to deploy site-changes without running the risk of affecting the DB.
Changes to the DB are more complex, but so far I have found that to be inevitable and should always be cloned, and backed up beforehand (ask MS/Danger/T-Mobile).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820101</id>
	<title>Puppet</title>
	<author>Anonymous</author>
	<datestamp>1256067540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I can't believe no one has mentioned Puppet to this guy. Works like a dream...</p></htmltext>
<tokenext>I cant believe no-one has mentioned puppet to this guy .
Works like a dream.. .</tokentext>
<sentencetext>I cant believe no-one has mentioned puppet to this guy.
Works like a dream...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817341</id>
	<title>Re:happy with phing</title>
	<author>Jaime2</author>
	<datestamp>1256046300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I can't agree more, especially for data.  I take great care in source-controlling the database, and I would never dream of auto-building a deployment package.<br>
<br>
Neither I nor any of the developers have access to the database schemas in the dev environment with our normal accounts.  I have a Subversion hook script that runs all check-ins of the database schema files against the dev DB.  If the script errors, the commit is rejected.  This guarantees that the only way to get changes into the database is to put those changes in source control.  Even with this level of control, every once in a while someone checks in a drop-create script on a table, or does similar nonsense.  Of course we have to recover the database from a backup, but the script is now forever in source control.  If I blindly turned the commits into a deployment script, there would be a lot of these things.  So I have a custom tool that finds all of the changed files in the database schema code in a range of revisions, lets me choose and reorder them, and generates a deployment script.  Sometimes the script needs hand tweaking.<br>
<br>
The process goes like this -- <br>
1.  Work on the dev server, get all unit tests to pass.<br>
2.  Create the deployment script.  Deploy to the test server, run the unit tests again.<br>
3.  Give the deployment script (and app changes) to QA.  They run them on the Stage server and do user acceptance testing.<br>
4.  If all is well, both the final code and the deployment process have been verified; roll out to production.<br>
<br>
I'm also a big fan of using test data in test, not production data (for new development; bug fixing often requires production data).  I can put far more taxing scenarios in than the users ever will.  Production data rarely tests the edge cases.  I also have a quirk in my production environment that all work begun on a given day is closed out by the end of the day, so my nightly backups rarely have any variety of data in them.<br>
<br>
I don't see any value in automating the process any further, but I do automate the heck out of compiled code and content deployment.  The best tool depends on the environment.  I currently use WiX to package static deployables as MSI packages, as I do mostly Windows development.</htmltext>
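The hook's reject-on-error logic looks roughly like the sketch below. `schema_files` and `apply_sql` are hypothetical stand-ins for the list of committed schema files and a real database invocation; this is the shape of the gate, not the actual hook:

```python
"""Sketch of the pre-commit gate: every committed schema file is applied
to the dev DB, and any error rejects the whole commit, so the only path
into the database is through source control."""


def pre_commit(schema_files, apply_sql):
    for path in schema_files:
        try:
            apply_sql(path)        # apply this schema change to the dev DB
        except Exception as exc:
            return False, f"rejected {path}: {exc}"
    return True, "ok"
```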
<tokenext>I ca n't agree more , especially for data .
I take great care in source controlling database and I would never dream of auto-building a deployment package .
Myself and all of the developers do n't have access to the database schemas in the dev environment with our normal accounts .
I have a SubVersion hook script that runs all checkins to the database schema files on the dev DB .
If the script errors , the commit is rejected .
This guarantees that the only way to get changes into the database is to put those changes in source control .
Even with this level of control , every once in a while someone checks in a drop-create script on a table , or does similar nonsense .
Of course we have to recover the database from a backup , but the script is now forever in source control .
If I blindly turned the commits into a deployment script , there would be a lot of these things .
So , I have a custom tool that allows me to find all of the changed files in the database schema code in a range of revisions , allows me to choose and reorder them , and it generates a deployment script .
Sometimes the script needs hand tweaking .
The process goes like this -- 1 .
Work on dev server , get all unit tests to pass .
2. Create deployment script .
Deploy to test server , do unit tests again .
3. Give deployment script ( and app changes ) to QA .
They run them on the Stage server and do user acceptance testing .
4. If all is well , both the final code and the deployment process have been verified , roll out to production .
I 'm also a big fan of using test data in test , not production data ( for new development , bug fixing often requires production data ) .
I can put far more taxing scenarios in than the users ever will .
Production data rarely tests the edge cases .
I also have an issue in my production environment that all work begun on a given day is closed out by the end of the day .
So , my nightly backups rarely have any variety of data in them .
I do n't see any value in automating the process any farther , but I do automate the heck out of compiled code and content deployment .
The best tool depends on the environment .
I currently use WiX to package up static deployables as MSI packages as I do mostly Windows development .</tokentext>
<sentencetext>I can't agree more, especially for data.
I take great care in source-controlling the database, and I would never dream of auto-building a deployment package.
Neither I nor any of the developers have access to the database schemas in the dev environment with our normal accounts.
I have a SubVersion hook script that runs all checked-in database schema files against the dev DB.
If the script errors, the commit is rejected.
This guarantees that the only way to get changes into the database is to put those changes in source control.
Even with this level of control, every once in a while someone checks in a drop-create script on a table, or does similar nonsense.
Of course we have to recover the database from a backup, but the script is now forever in source control.
If I blindly turned the commits into a deployment script, there would be a lot of these things.
So, I have a custom tool that allows me to find all of the changed files in the database schema code in a range of revisions, allows me to choose and reorder them, and it generates a deployment script.
Sometimes the script needs hand tweaking.
The process goes like this --
1. Work on dev server, get all unit tests to pass.
2.  Create deployment script.
Deploy to test server, do unit tests again.
3.  Give deployment script (and app changes) to QA.
They run them on the Stage server and do user acceptance testing.
4.  If all is well, both the final code and the deployment process have been verified, roll out to production.
I'm also a big fan of using test data in test, not production data, for new development (bug fixing often requires production data).
I can put far more taxing scenarios in than the users ever will.
Production data rarely tests the edge cases.
I also have an issue in my production environment that all work begun on a given day is closed out by the end of the day.
So, my nightly backups rarely have any variety of data in them.
I don't see any value in automating the process any farther, but I do automate the heck out of compiled code and content deployment.
The best tool depends on the environment.
I currently use WiX to package up static deployables as MSI packages as I do mostly Windows development.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811585</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817755</id>
	<title>My thoughts</title>
	<author>mindstrm</author>
	<datestamp>1256048700000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is no magic solution - you are talking about managing multiple environments with different requirements and technologies in some meaningful, automated way.</p><p>You're looking at home-brew here.</p><p>What you want to aim for is</p><p>0) Stop using multiple technologies if you can. If that's not an option, it just makes more work.</p><p>1) Clearly define policies regarding development, testing, and release.  These have nothing to do with tools. You build and select your tools based on these policies.</p><p>2) Automated pushbutton deployment.  You want your code releases of each new version of a site to be automated.  You also want rolling back to the previous version to be automated.  This applies for CI, QA, and whatever other stages you want, all the way to Production.</p><p>3) Automated deployment should involve at a minimum tagging a given revision and pushing it to the correct environment.</p><p>4) You can use commit hooks or some other method against TRUNK to run a CI server that continually does regression testing and other funky stuff... as well as just shows you a live version of what's in trunk "right now".</p><p>5) When working towards a target release, developers need to include any necessary scripts to update (and rollback, if necessary) their respective databases.</p><p>6) Config data... can be handled by having a separate /config folder for each environment, version controlled separately - and where access and change control are again strictly defined and limited, and well documented.  This would automatically be inserted by your pushbutton deployment process.</p></htmltext>
<tokenext>There is no magic solution - you are talking about managing multiple environments with different requirements and technologies in some meaningful , automated way .
You 're looking at home-brew here .
What you want to aim for is 0 ) Stop using multiple technologies if you can .
If that 's not an option , it just makes more work .
1 ) Clearly define policies regarding development , testing , and release .
These have nothing to do with tools .
You build and select your tools based on these policies .
2 ) Automated pushbutton deployment .
You want your code releases of each new version of a site to be automated .
You also want rolling back to the previous version to be automated .
This applies for CI , QA , and whatever other stages you want , all the way to Production .
3 ) Automated deployment should involve at a minimum tagging a given revision and pushing it to the correct environment .
4 ) You can use commit hooks or some other method against TRUNK to run a CI server that continually does regression testing and other funky stuff ... as well as just shows you a live version of what 's in trunk " right now " .
5 ) When working towards a target release , developers need to include any necessary scripts to update ( and rollback , if necessary ) their respective databases .
6 ) Config data ... can be handled by having a separate /config folder for each environment , version controlled separately - and where access and change control are again strictly defined and limited , and well documented .
This would automatically be inserted by your pushbutton deployment process .</tokentext>
<sentencetext>There is no magic solution - you are talking about managing multiple environments with different requirements and technologies in some meaningful, automated way.
You're looking at home-brew here.
What you want to aim for is 0) Stop using multiple technologies if you can.
If that's not an option, it just makes more work.
1) Clearly define policies regarding development, testing, and release.
These have nothing to do with tools.
You build and select your tools based on these policies.
2) Automated pushbutton deployment.
You want your code releases of each new version of a site to be automated.
You also want rolling back to the previous version to be automated.
This applies for CI, QA, and whatever other stages you want, all the way to Production.
3) Automated deployment should involve at a minimum tagging a given revision and pushing it to the correct environment.
4) You can use commit hooks or some other method against TRUNK to run a CI server that continually does regression testing and other funky stuff... as well as just shows you a live version of what's in trunk "right now".
5) When working towards a target release, developers need to include any necessary scripts to update (and rollback, if necessary) their respective databases.
6) Config data... can be handled by having a separate /config folder for each environment, version controlled separately - and where access and change control are again strictly defined and limited, and well documented.
This would automatically be inserted by your pushbutton deployment process.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812659</id>
	<title>Re:How slashdot does it</title>
	<author>cayenne8</author>
	<datestamp>1256069220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><i>"I do the same as Slashdot.org does - Make the changes on live code, expect a little downtime and weird effects and then try to fix"</i> <p>
That's not that far from the truth in MANY places and projects I've seen.</p><p>
I've actually come to the conclusion that on many govt/DoD projects, the dev environment in fact becomes the test and production environment!!</p><p>
I learned that it really pays, when spec'ing out the hardware and software that you need, to get as much as they will pay for, for the 'dev' machines... because it will inevitably become the production server as soon as stuff is working on it, the deadline hits, and there is suddenly no more funding for a proper test/prod environment.</p></htmltext>
<tokenext>" I do the same as Slashdot.org does - Make the changes on live code , except a little downtime and weird effects and then try to fix " That 's not that far from the truth in MANY places and projects I 've seen .
I 've actually come to the conclusion that on many govt/DoD projects , the dev environment in fact becomes the test and production environment ! !
I learned that it really pays , when spec'ing out the hardware and software that you need , to get as much as they will pay for , for the 'dev ' machines ... because it will inevitably become the production server as soon as stuff is working on it , the deadline hits , and there is suddenly no more funding for a proper test/prod environment .</tokentext>
<sentencetext>"I do the same as Slashdot.org does - Make the changes on live code, expect a little downtime and weird effects and then try to fix"
That's not that far from the truth in MANY places and projects I've seen.
I've actually come to the conclusion that on many govt/DoD projects, the dev environment in fact becomes the test and production environment!!
I learned that it really pays, when spec'ing out the hardware and software that you need, to get as much as they will pay for, for the 'dev' machines... because it will inevitably become the production server as soon as stuff is working on it, the deadline hits, and there is suddenly no more funding for a proper test/prod environment.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814807</id>
	<title>Re:Tools, Practices and Standards</title>
	<author>Anonymous</author>
	<datestamp>1256034000000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>Capistrano/Webistrano for deployment (Webistrano is a nice GUI to Capistrano - <a href="http://www.capify.org/" title="capify.org" rel="nofollow">http://www.capify.org/</a> [capify.org] / <a href="http://labs.peritor.com/webistrano" title="peritor.com" rel="nofollow">http://labs.peritor.com/webistrano</a> [peritor.com])</p></div><p>Webistrano is a *great* way to do this - I've used it for a while at our medium-sized webdev+hosting shop.  It can even do DB moves between environments with the right Recipes (code snippets called at defined points in the workflow or by themselves). Check it out.</p>
	</htmltext>
<tokenext>Capistrano/Webistrano for deployment ( Webistrano is a nice GUI to Capistrano - http : //www.capify.org/ [ capify.org ] / http : //labs.peritor.com/webistrano [ peritor.com ] ) .
Webistrano is a * great * way to do this - I 've used it for a while at our medium-sized webdev + hosting shop .
It can even do DB moves between environments with the right Recipes ( code snippets called at defined points in the workflow or by themselves ) .
Check it out .</tokentext>
<sentencetext>Capistrano/Webistrano for deployment (Webistrano is a nice GUI to Capistrano - http://www.capify.org/ [capify.org] / http://labs.peritor.com/webistrano [peritor.com]).
Webistrano is a *great* way to do this - I've used it for a while at our medium-sized webdev+hosting shop.
It can even do DB moves between environments with the right Recipes (code snippets called at defined points in the workflow or by themselves).
Check it out.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811973</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917</id>
	<title>Hilarity</title>
	<author>eln</author>
	<datestamp>1256066400000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community. How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?"</p></div><p>I'm sure you have a legitimate problem, and there are lots of ways to solve it, but this line just cracks me up.  You COULD write it yourself or pay someone but if you use someone else's Open Source work (note: nothing is said about contributing to an OSS project, just using it) you'd be "giving back to the community."
<br> <br>
Translation: I have a problem, and I don't want to spend any of my own time or money to solve it, so I'm going to try and butter up the people on Slashdot in hope of taking advantage of the free labor force that is the OSS community.
<br> <br>
Simply using Open Source software is not giving back to the community... using open source software is what gives you the moral imperative to give back to the community, which you can do through contributing code, documentation, beta testing, providing support on the mailing lists, or whatever.</p>
	</htmltext>
<tokenext>Another option is to completely rewrite the scripts ( or hire someone to do it for me ) , but I would much rather use something OSS so I can give back to the community .
How have fellow slashdotters managed this process , what systems/scripts have you used , and what advice do you have ?
" I 'm sure you have a legitimate problem , and there are lots of ways to solve it , but this line just cracks me up .
You COULD write it yourself or pay someone but if you use someone else 's Open Source work ( note : nothing is said about contributing to an OSS project , just using it ) you 'd be " giving back to the community .
Translation : I have a problem , and I do n't want to spend any of my own time or money to solve it , so I 'm going to try and butter up the people on Slashdot in hope of taking advantage of the free labor force that is the OSS community .
Simply using Open Source software is not giving back to the community...using open source software is what gives you the moral imperative to give back to the community , which you can do through contributing code , documentation , beta testing , providing support on the mailing lists , or whatever .</tokentext>
<sentencetext>Another option is to completely rewrite the scripts (or hire someone to do it for me), but I would much rather use something OSS so I can give back to the community.
How have fellow slashdotters managed this process, what systems/scripts have you used, and what advice do you have?
"I'm sure you have a legitimate problem, and there are lots of ways to solve it, but this line just cracks me up.
You COULD write it yourself or pay someone but if you use someone else's Open Source work (note: nothing is said about contributing to an OSS project, just using it) you'd be "giving back to the community.
Translation: I have a problem, and I don't want to spend any of my own time or money to solve it, so I'm going to try and butter up the people on Slashdot in hope of taking advantage of the free labor force that is the OSS community.
Simply using Open Source software is not giving back to the community...using open source software is what gives you the moral imperative to give back to the community, which you can do through contributing code, documentation, beta testing, providing support on the mailing lists, or whatever.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29821483</id>
	<title>Drupal and staging</title>
	<author>pastie</author>
	<datestamp>1256128140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've felt the pain of attempting to stage Drupal set-ups, and it's horrid.</p><p>Whilst people say "Just migrate the database from dev-&gt;staging-&gt;live" it's not that easy when it comes to the live server, because that will have changes mixed in with what you want to make changes to (e.g., user-generated content on the live site creates nodes, which share a node-numbering namespace with your dev-created new content).  This is additionally complicated by Drupal putting all the config in the database, so you'll need to make changes to that when you push it live each time (since the dev server will have dev modules enabled that you won't want on the live site, say).</p><p>Separating the database changes you've made from the ones made on the live site (to allow you to merge them) leads to a whole set of options...</p><p>Drupal has lots and lots of different methods of working around this problem (just search for "staging" and "deployment" on drupal.org), but none of them really seems to work that painlessly -- they either require you to write all your changes as SQL scripts or to not use particular features (e.g., CCK).</p><p>Do all CMSes have this problem, or is it just because of the lack of separation which Drupal provides between live data, your own created data (from dev) and config?</p></htmltext>
<tokenext>I 've felt the pain of attempting to stage Drupal set-ups , and it 's horrid .
Whilst people say " Just migrate the database from dev- &gt; staging- &gt; live " it 's not that easy when it comes to the live server , because that will have changes mixed in with what you want to make changes to ( e.g. , user-generated content on the live site creates nodes , which share a node-numbering namespace with your dev-created new content ) .
This is additionally complicated by Drupal putting all the config in the database , so you 'll need to make changes to that when you push it live each time ( since the dev server will have dev modules enabled that you wo n't want on the live site , say ) .
Separating the database changes you 've made from the ones made on the live site ( to allow you to merge them ) leads to a whole set of options ...
Drupal has lots and lots of different methods of working around this problem ( just search for " staging " and " deployment " on drupal.org ) , but none of them really seems to work that painlessly -- they either require you to write all your changes as SQL scripts or to not use particular features ( e.g. , CCK ) .
Do all CMSes have this problem , or is it just because of the lack of separation which Drupal provides between live data , your own created data ( from dev ) and config ?</tokentext>
<sentencetext>I've felt the pain of attempting to stage Drupal set-ups, and it's horrid.
Whilst people say "Just migrate the database from dev-&gt;staging-&gt;live" it's not that easy when it comes to the live server, because that will have changes mixed in with what you want to make changes to (e.g., user-generated content on the live site creates nodes, which share a node-numbering namespace with your dev-created new content).
This is additionally complicated by Drupal putting all the config in the database, so you'll need to make changes to that when you push it live each time (since the dev server will have dev modules enabled that you won't want on the live site, say).
Separating the database changes you've made from the ones made on the live site (to allow you to merge them) leads to a whole set of options...
Drupal has lots and lots of different methods of working around this problem (just search for "staging" and "deployment" on drupal.org), but none of them really seems to work that painlessly -- they either require you to write all your changes as SQL scripts or to not use particular features (e.g., CCK).
Do all CMSes have this problem, or is it just because of the lack of separation which Drupal provides between live data, your own created data (from dev) and config?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29823481</id>
	<title>Re:How slashdot does it</title>
	<author>Anonymous</author>
	<datestamp>1256140320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>cayenne8 speaks the truth.</p></htmltext>
<tokenext>cayenne8 speaks the truth .</tokentext>
<sentencetext>cayenne8 speaks the truth.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812687</id>
	<title>Re:Packaging Packaging Packaging...</title>
	<author>chrome</author>
	<datestamp>1256069280000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>+1

also, use the package signing system to verify that the packages distributed to machines are really released.
use the package dependencies to pull in all the required packages for a given system.

If you do it right, all you need is an apt repository, and you type "apt-get install prod-foobar-system" and everything will be pulled in and installed. In the correct order.

I converted a site to this method (on Fedora Core many years ago) and we went from taking a day to build machines to 30 minutes.

1) Put the mac address in the kickstart server and assign the appropriate profile.
2) Boot the machine from the network
3) Watch it build. The profile for that machine would have the packages for the environment we were building listed.
4) Reboot. Machine would have the right IP and be completely configured and running.

It just works.</htmltext>
<tokenext>+ 1 also , use the package signing system to verify that the packages distributed to machines are really released .
use the package dependencies to pull in all the required packages for a given system .
If you do it right , all you need is an apt repository , and you type " apt-get install prod-foobar-system " and everything will be pulled in and installed .
In the correct order .
I converted a site to this method ( on Fedora Core many years ago ) and we went from taking a day to build machines to 30 minutes .
1 ) Put the mac address in the kickstart server and assign the appropriate profile .
2 ) Boot the machine from the network 3 ) Watch it build .
The profile for that machine would have the packages for the environment we were building listed .
4 ) Reboot .
Machine would have the right IP and be completely configured and running .
It just works .</tokentext>
<sentencetext>+1

also, use the package signing system to verify that the packages distributed to machines are really released.
use the package dependencies to pull in all the required packages for a given system.
If you do it right, all you need is an apt repository, and you type "apt-get install prod-foobar-system" and everything will be pulled in and installed.
In the correct order.
I converted a site to this method (on Fedora Core many years ago) and we went from taking a day to build machines to 30 minutes.
1) Put the mac address in the kickstart server and assign the appropriate profile.
2) Boot the machine from the network
3) Watch it build.
The profile for that machine would have the packages for the environment we were building listed.
4) Reboot.
Machine would have the right IP and be completely configured and running.
It just works.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812269</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812663</id>
	<title>Re:You are not a n00b</title>
	<author>Anonymous</author>
	<datestamp>1256069220000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>If he is indeed allowing FTP logins over the public Internet (as the submission suggests), he is a n00b whether or not he realizes it.</p></htmltext>
<tokenext>If he is indeed allowing FTP logins over the public Internet ( as the submission suggests ) , he is a n00b whether or not he realizes it .</tokentext>
<sentencetext>If he is indeed allowing FTP logins over the public Internet (as the submission suggests), he is a n00b whether or not he realizes it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814333</id>
	<title>the php code is easy to manage</title>
	<author>Anonymous</author>
	<datestamp>1256032440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>svn or whatever works fine for that. It is the motherfucking data that is the problem, at least with drupal.</p></htmltext>
<tokenext>svn or whatever works fine for that .
It is the motherfucking data that is the problem , at least with drupal .</tokentext>
<sentencetext>svn or whatever works fine for that.
It is the motherfucking data that is the problem, at least with drupal.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812709</id>
	<title>Slashdot and this company ...</title>
	<author>Krishnoid</author>
	<datestamp>1256069400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext>Just roll them <a href="http://thedailywtf.com/Articles/The_Developmestuction_Environment.aspx" title="thedailywtf.com">into one</a> [thedailywtf.com].  It's even got a catchy name.</htmltext>
<tokenext>Just roll them into one [ thedailywtf.com ] .
It 's even got a catchy name .</tokentext>
<sentencetext>Just roll them into one [thedailywtf.com].
It's even got a catchy name.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813533</id>
	<title>You need version control and simplicity</title>
	<author>summery</author>
	<datestamp>1256029740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>#1 is version control.  Without that you are lost.  SVN, git, CVS, whatever as long as it's working for you.<br> <br>

#2 is simplicity.  Without it, your systems will simplify themselves in a sub-optimal manner (i.e. dev server becomes production server).<br> <br>

Here's what I do:<br>
 - Developers work and test on their local machines and commit using version control to a central repository<br>
 - Dev site is a sub-folder of the production site - easy way to make sure every single damn variable is the same<br>
 - When I want to upload to dev environment:<br>
     rm -r dev  (to remove all files in current dev site)<br>
     svn export file://(path) (to get a complete copy of latest code)<br>
 - When I want to upload to production environment:<br>
    source upload.sh, where upload.sh copies the existing site into a backup directory for quick access in case of disaster, then copies the dev site up into production, then re-copies a couple special files that differ between dev and production back (.htaccess, analytics.php)<br> <br>

Good luck!</htmltext>
<tokenext># 1 is version control .
Without that you are lost .
SVN , git , CVS , whatever as long as it 's working for you .
# 2 is simplicity .
Without it , your systems will simplify themselves in a sub-optimal manner ( i.e .
dev server becomes production server ) .
Here 's what I do : - Developers work and test on their local machines and commit using version control to a central repository - Dev site is a sub-folder of the production site - easy way to make sure every single damn variable is the same - When I want to upload to dev environment : rm -r dev ( to remove all files in current dev site ) svn export file : // ( path ) ( to get a complete copy of latest code ) - When I want to upload to production environment : source upload.sh , where upload.sh copies the existing site into a backup directory for quick access in case of disaster , then copies the dev site up into production , then re-copies a couple special files that differ between dev and production back ( .htaccess , analytics.php ) Good luck !</tokentext>
<sentencetext>#1 is version control.
Without that you are lost.
SVN, git, CVS, whatever as long as it's working for you.
#2 is simplicity.
Without it, your systems will simplify themselves in a sub-optimal manner (i.e. dev server becomes production server).
Here's what I do:
 - Developers work and test on their local machines and commit using version control to a central repository
 - Dev site is a sub-folder of the production site - easy way to make sure every single damn variable is the same
 - When I want to upload to dev environment:
     rm -r dev  (to remove all files in current dev site)
     svn export file://(path) (to get a complete copy of latest code)
 - When I want to upload to production environment:
    source upload.sh, where upload.sh copies the existing site into a backup directory for quick access in case of disaster, then copies the dev site up into production, then re-copies a couple special files that differ between dev and production back (.htaccess, analytics.php) 

Good luck!</sentencetext>
</comment>
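The dev-to-production step the comment above describes can be sketched as a small POSIX shell script. Everything here (the site-root layout with dev/, prod/ and backup/ side by side, and the list of production-only files) is a hypothetical stand-in, not the commenter's actual upload.sh:

```shell
#!/bin/sh
# Sketch of the upload-to-production step described above.
# Layout and filenames are hypothetical stand-ins.
set -e

# deploy SITE_ROOT: back up prod/, replace it with dev/, then restore
# the files that intentionally differ between the two environments.
deploy() {
    ROOT=$1
    STAMP=$(date +%Y%m%d%H%M%S)
    KEEP=".htaccess analytics.php"    # production-only files

    # 1. Copy the live site aside for quick access in case of disaster.
    mkdir -p "$ROOT/backup"
    cp -a "$ROOT/prod" "$ROOT/backup/prod-$STAMP"

    # 2. Replace production with a fresh copy of the dev tree.
    rm -rf "$ROOT/prod"
    cp -a "$ROOT/dev" "$ROOT/prod"

    # 3. Re-copy the production-only files from the backup.
    for f in $KEEP; do
        if [ -f "$ROOT/backup/prod-$STAMP/$f" ]; then
            cp "$ROOT/backup/prod-$STAMP/$f" "$ROOT/prod/$f"
        fi
    done
}

# Run only when a site root is given on the command line.
if [ $# -gt 0 ]; then deploy "$1"; fi
```

Paired with the `rm -r dev; svn export` refresh the comment describes, both directions stay scripted and repeatable.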
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813161</id>
	<title>Re:You are not a n00b</title>
	<author>Anonymous</author>
	<datestamp>1256071560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think the word is "rookie".</p><p>Or you could look at it in the evil way, in that he <em>is</em> a n00b, and before realizing it, he was even worse than that. (What I call a "z00b". A zero-knowledge n00b.)</p></htmltext>
<tokenext>I think the word is " rookie " .Or you could look at it in the evil way , in that he is a n00b , and before realizing it , he was even worse than that .
( What I call a " z00b " .
A zero-knowledge n00b .
)</tokentext>
<sentencetext>I think the word is "rookie".Or you could look at it in the evil way, in that he is a n00b, and before realizing it, he was even worse than that.
(What I call a "z00b".
A zero-knowledge n00b.
)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29818563</id>
	<title>Puppet and packages</title>
	<author>Etherized</author>
	<datestamp>1256053620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There are many ways to do the things you describe. I personally make extensive use of <a href="http://reductivelabs.com/products/puppet/" title="reductivelabs.com" rel="nofollow">Puppet</a> [reductivelabs.com].</p><p>This is a great solution for your configuration files, but not (directly) your code. This is where your distribution's packaging system comes in.</p><p>Build packages of your code for your OS package manager (be it RPM, portage, apt, whatever... it's usually not that difficult). Give the packages version numbers based on svn revision, if you need that granularity. Create an automated mechanism to build your package and insert it into a local repository.</p><p>Tell puppet to ensure that your 'dev' environment is always using the latest package. Tell puppet to ensure that your production and test environments are running whichever specific version they're supposed to be running.</p><p>A downside of puppet is that it's a 'pull' based system, by default every 30 minutes. For most situations, this is adequate - but not all. You might also investigate <a href="https://fedorahosted.org/func/" title="fedorahosted.org" rel="nofollow">Func</a> [fedorahosted.org] as, at the very least, a convenient way to tell a group of nodes to phone back home to puppet on demand.</p></htmltext>
<tokenext>There are many ways to do the things you describe .
I personally make extensive use of Puppet [ reductivelabs.com ] .This is a great solution for your configuration files , but note ( directly ) your code .
This is where your distribution 's packaging system comes in.Build packages of your code for your OS package manager ( be it RPM , portage , apt , whatever... it 's usually not that difficult ) .
Give the packages version numbers based on svn revision , if you need that granularity .
Create an automated mechanism to build your package and insert it into a local repository.Tell puppet to ensure that your 'dev ' environment is always using the latest package .
Tell puppet to ensure that your production and test environments are running whichever specific version they 're supposed to be running.A downside of puppet is that it 's a 'pull ' based system , by default every 30 minutes .
For most situations , this is adequate - but not all .
You might also investigate Func [ fedorahosted.org ] as , at the very least , a convenient way to tell a group of notes to phone back home to puppet on demand .</tokentext>
<sentencetext>There are many ways to do the things you describe.
I personally make extensive use of Puppet [reductivelabs.com].
This is a great solution for your configuration files, but not (directly) your code.
This is where your distribution's packaging system comes in.
Build packages of your code for your OS package manager (be it RPM, portage, apt, whatever... it's usually not that difficult).
Give the packages version numbers based on svn revision, if you need that granularity.
Create an automated mechanism to build your package and insert it into a local repository.
Tell puppet to ensure that your 'dev' environment is always using the latest package.
Tell puppet to ensure that your production and test environments are running whichever specific version they're supposed to be running.
A downside of puppet is that it's a 'pull' based system, by default every 30 minutes.
For most situations, this is adequate - but not all.
You might also investigate Func [fedorahosted.org] as, at the very least, a convenient way to tell a group of nodes to phone back home to puppet on demand.</sentencetext>
</comment>
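The "version numbers based on svn revision" suggestion above can be sketched in shell; the 1.0 base version, the `mysite.spec` name, and the rpmbuild invocation are assumptions for illustration:

```shell
#!/bin/sh
# Derive a package version string from a Subversion revision, as the
# comment suggests. The 1.0 base version is a hypothetical choice.
pkg_version() {
    REV=$1    # e.g. the output of: svnversion /path/to/checkout
    # svnversion may print modifiers (M = modified, S = switched) or a
    # mixed-revision range like 1430:1432; keep only the upper revision.
    CLEAN=$(printf '%s' "$REV" | sed 's/[MS]//g; s/.*://')
    printf '1.0.%s' "$CLEAN"
}

# Example (hypothetical spec file):
#   rpmbuild --define "pkgver $(pkg_version "$(svnversion .)")" mysite.spec
```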
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812355</id>
	<title>No, thanks</title>
	<author>Anonymous</author>
	<datestamp>1256068020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OSS CMS !? Call me old fashioned (and it won't be the first time), but I will use plain HTML, thank you.</p></htmltext>
<tokenext>OSS CMS ! ?
Call me old fashioned ( and it wo n't be the first time ) , but I will use plain HTML , thank you .</tokentext>
<sentencetext>OSS CMS !?
Call me old fashioned (and it won't be the first time), but I will use plain HTML, thank you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814701</id>
	<title>Re:You are not a n00b</title>
	<author>nametaken</author>
	<datestamp>1256033700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Or maybe they're not over public internet, or are tunneled, or they're sftp and he just calls it ftp?  We dunno.</p></htmltext>
<tokenext>Or maybe they 're not over public internet , or are tunneled , or they 're sftp and he just calls it ftp ?
We dunno .</tokentext>
<sentencetext>Or maybe they're not over public internet, or are tunneled, or they're sftp and he just calls it ftp?
We dunno.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812663</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814851</id>
	<title>Re:Look at Capistrano, steal ideas from Rails</title>
	<author>Anonymous</author>
	<datestamp>1256034180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Parent has some good ideas.</p><p>h2. How to Manage Dev, Test, Pre-Production, Production and Disaster Recovery Environments.</p><p>First, let's agree that you need most of these environments. You may be able to drop 1 of them and you may require even more to support multiple development environments for different product lines to be deployed 6 months apart. Ok?  Ok.</p><p>Next, let's agree that you have developers, Testers, QA and Production Support teams. These may overlap, but that wouldn't be a best practice. Developers should never be production support. There are many reasons this is a bad idea. See other articles for why that is.</p><p>If you aren't doing custom development, dropping many of these environments may be possible without risking everything.</p><p>Finally, let's assume you are the system administrator for all these environments<br>and don't get overruled by the development teams. You CONTROL THE OS and DEPLOYMENT to all the systems that aren't Dev.</p><p>h3. Management Best Practices<br># Use Virtual Machines for all these environments. There are many, many reasons why this is a good idea. See other blog posts for those reasons.<br># Never allow a developer root access on **any** of the systems.<br># Never allow a developer write access to any systems that aren't development. Don't let them quickly login to fix a tiny error. They need to provide deployment packages that deal with everything necessary.<br># Installations and upgrades need to be scripted and part of the development deliverables. If the installation and/or upgrade scripts don't work, then that entire build is broken. Send it back.<br># Automate server management using some kind of tool that builds a server from bare metal hardware to production ready. Only the data should be missing.  Check out Puppet for doing this, but there are other alternatives.<br># Server configurations need to be configuration managed too, just like source code for developers is. 
You should be able to deploy the server config from last month or last year or 3 days ago.<br># Instrument each of the servers with performance monitoring tools. It could be a complex solution or something fairly simple like SysUsage.<br># Instrument specific process monitoring for the main processes on a server. Web server, DB server, and any specific applications.<br># Place alarming scripts/tools that verify applications are actually working - Nagios is an option.<br># Use Distributed Version Control Systems, DVCS, for code deployment. Git and Bazaar are good choices. These allow branch and merge constantly without a big commitment to a specific branch required. Your development team may have a different tool in mind. You may use their tool, but be certain that the code to be deployed to a specific server is carefully matched.<br># Be lazy once you get this setup.  It should become 1 cmd to do everything necessary to build a server, install all necessary packages, install the custom code for the environment, configure everything, deploy data to it and verify that the application is running. Lazy is good for an admin.<br># Backup, Backup, Backup.  Then verify that the backup can be deployed. Backups that have never been validated with a recovery are \_make work\_ and not really backups. This is really important.<br># Verify that your disaster recovery plan works. Best practice is to load balance users all the time, but switching **primary** location weekly is also good enough.</p><p>Let's reiterate that if you are afraid to type **drop database your\_db\_name;** because you aren't certain you can recover, then you have failed as a system admin in my book.</p><p>Finally, document the process you use to deploy to each environment. This document should be trivial since it will be login to X-server as Y-userid and run Z-command.</p></htmltext>
<tokenext>Parent has some good ideas.h2 .
How to Manage Dev , Test , Pre-Production , Production and Disaster Recovery Environments.First , let 's agree that you need most of these environments .
You may be able to drop 1 of them and you may require even more to support multiple development environments for different product lines to be deployed 6 months apart .
Ok ? Ok.Next , let 's agree that you have developers , Testers , QA and Production Support teams .
These may overlap , but that would n't be a best practice .
Developers should never be production support .
There are many reasons this is a bad idea .
See other articles for why that is.If you are n't doing custom development , dropping many of these environments may be possible without risking everything.Finally , let 's assume you are the system administrator for all these environmentsand do n't get overruled by the development teams .
You CONTROL THE OS and DEPLOYMENT to all the systems that are n't Dev.h3 .
Maagement Best Practices # Use Virtual Machines for all these environments .
There are many , many reasons why this is a good idea .
See other blog posts for those reasons. # Never allow a developer root access on * * any * * of the systems. # Never allow a developer write access to any systems that are n't development .
Do n't let them quickly login to fix a tiny error .
They need to provide deployment packages that deal with everything necessary. # Installations and upgrades need to be scripted and part of the development deliverables .
If the installation and/or upgrade scripts do n't work , then that entire build is broken .
Send it back. # Automate server management using some kind of tool that builds a server from bare metal hardware to production ready .
Only the data should be missing .
Check out Puppet for doing this , but there are other alternatives. # Server configurations need to be configuration managed too , just like source code for developers is .
You should be able to deploy the server config from last month or last year or 3 days ago. # Instrument each of the servers with performance monitoring tools .
It could be a complex solution or something fairly simple like SysUsage. # Instrument specific process monitoring for the main processes on a server .
Web server , DB server , and any specific applications. # Place alarming scripts/tools that verify applications are actually working - naganos is an option. # Use Distributed Version Control Systems , DVCS , for code deployment .
Git and Bazaar are good choices .
These allow branch and merge constantly without a big commitment to a specific branch required .
Your development team may have a different tool in mind .
You may use their tool , but be certain that the code to be deployed to a specific server is carefully matched. # Be lazy once you get this setup .
It should become 1 cmd to do everything necessary to build a server , install all necessary packages , install the custom code for the environment , configure everything , deploy data to it and verify that the application is running .
Lazy is good for an admin. # Backup , Backup , Backup .
Then verify that the backup can be deployed .
Backups that have never been validated with a recovery are \ _make work \ _ and not really backups .
This is really important. # Verify that your disaster recovery play works .
Best practice is to load balance users all the time , but switching * * primary * * location weekly is also good enough.Let 's reiterate that if you are afraid to type * * drop database your \ _db \ _name ; * * because you are n't certain you can recover , then you have failed as a system admin in my book.Finally , document the the process you use to deploy to each environment .
This document should be trivial since it will be login to X-server as Y-userid and run Z-command .</tokentext>
<sentencetext>Parent has some good ideas.
h2. How to Manage Dev, Test, Pre-Production, Production and Disaster Recovery Environments.
First, let's agree that you need most of these environments.
You may be able to drop 1 of them and you may require even more to support multiple development environments for different product lines to be deployed 6 months apart.
Ok?  Ok.
Next, let's agree that you have developers, Testers, QA and Production Support teams.
These may overlap, but that wouldn't be a best practice.
Developers should never be production support.
There are many reasons this is a bad idea.
See other articles for why that is.
If you aren't doing custom development, dropping many of these environments may be possible without risking everything.
Finally, let's assume you are the system administrator for all these environments and don't get overruled by the development teams.
You CONTROL THE OS and DEPLOYMENT to all the systems that aren't Dev.
h3. Management Best Practices
# Use Virtual Machines for all these environments.
There are many, many reasons why this is a good idea.
See other blog posts for those reasons.
# Never allow a developer root access on **any** of the systems.
# Never allow a developer write access to any systems that aren't development.
Don't let them quickly login to fix a tiny error.
They need to provide deployment packages that deal with everything necessary.
# Installations and upgrades need to be scripted and part of the development deliverables.
If the installation and/or upgrade scripts don't work, then that entire build is broken.
Send it back.
# Automate server management using some kind of tool that builds a server from bare metal hardware to production ready.
Only the data should be missing.
Check out Puppet for doing this, but there are other alternatives.
# Server configurations need to be configuration managed too, just like source code for developers is.
You should be able to deploy the server config from last month or last year or 3 days ago.
# Instrument each of the servers with performance monitoring tools.
It could be a complex solution or something fairly simple like SysUsage.
# Instrument specific process monitoring for the main processes on a server.
Web server, DB server, and any specific applications.
# Place alarming scripts/tools that verify applications are actually working - Nagios is an option.
# Use Distributed Version Control Systems, DVCS, for code deployment.
Git and Bazaar are good choices.
These allow branch and merge constantly without a big commitment to a specific branch required.
Your development team may have a different tool in mind.
You may use their tool, but be certain that the code to be deployed to a specific server is carefully matched.
# Be lazy once you get this setup.
It should become 1 cmd to do everything necessary to build a server, install all necessary packages, install the custom code for the environment, configure everything, deploy data to it and verify that the application is running.
Lazy is good for an admin.
# Backup, Backup, Backup.
Then verify that the backup can be deployed.
Backups that have never been validated with a recovery are \_make work\_ and not really backups.
This is really important.
# Verify that your disaster recovery plan works.
Best practice is to load balance users all the time, but switching **primary** location weekly is also good enough.
Let's reiterate that if you are afraid to type **drop database your\_db\_name;** because you aren't certain you can recover, then you have failed as a system admin in my book.
Finally, document the process you use to deploy to each environment.
This document should be trivial since it will be login to X-server as Y-userid and run Z-command.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029</parent>
</comment>
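The "Backup, Backup, Backup... then verify that the backup can be deployed" point above can be sketched as a restore-and-compare check. The tar archive format and the paths are assumptions; a database backup would get the equivalent dump-and-restore treatment:

```shell
#!/bin/sh
# Verify that a backup can actually be restored, per the list above.
# Restores into a scratch directory and compares against the source.
set -e
verify_backup() {
    SRC=$1        # directory that was backed up, e.g. /var/www/site1
    ARCHIVE=$2    # tar archive created from its parent directory
    SCRATCH=$(mktemp -d)
    # Restore into scratch space, never over the live data.
    tar -xf "$ARCHIVE" -C "$SCRATCH"
    # diff exits non-zero (failing the script) if the restore differs.
    diff -r "$SRC" "$SCRATCH/$(basename "$SRC")"
    rm -rf "$SCRATCH"
}
```

Running this from cron after every backup turns "we have backups" into "we have restores", which is the comment's actual standard.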
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812657</id>
	<title>AppLogic</title>
	<author>ldgeorge85</author>
	<datestamp>1256069160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I would suggest using some virtualization technologies for that. Something that would make it easy to deploy multiple copies of the same template, easily manage different large scale architectures, and such. I have personally used 3tera's AppLogic, and have had a lot of great experiences there. With a few physical servers you can manage multiple separate VMs, create templates, automate functionality... blah blah. Good luck finding the best solution for you though.</htmltext>
<tokenext>I would suggest using some virtualization technologies for that .
Something that would make it easy to deploy multiple copies of the same template , easily manage different large scale architectures , and such .
I have personally used 3tera 's AppLogic , and have had a lot of great experiences there .
With a few physical servers you can manage multiple separate VM 's , create templates , automate functionality... blah blah .
Good luck finding the best solution for you though .</tokentext>
<sentencetext>I would suggest using some virtualization technologies for that.
Something that would make it easy to deploy multiple copies of the same template, easily manage different large scale architectures, and such.
I have personally used 3tera's AppLogic, and have had a lot of great experiences there.
With a few physical servers you can manage multiple separate VMs, create templates, automate functionality... blah blah.
Good luck finding the best solution for you though.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813151</id>
	<title>Re:Most important thing in my book</title>
	<author>zztong</author>
	<datestamp>1256071440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><p>Testing with real data is not necessarily a good practice. Consider sensitive data, such as social security numbers. Auditors may ding your development practices for providing developers access to information they do not need. You need realistic data, not necessarily the real data. If you're bringing real data from prod back to test and dev, consider having something scrub the data.</p></htmltext>
<tokenext>Testing with real data is not necessarily a good practice .
Consider sensitive data , such as social security numbers .
Auditors may ding your development practices for providing developers access to information they do not need .
You need realistic data , not necessarily the real data .
If you 're bringing real data from prod back to test and dev , consider having something scrub the data .</tokentext>
<sentencetext>Testing with real data is not necessarily a good practice.
Consider sensitive data, such as social security numbers.
Auditors may ding your development practices for providing developers access to information they do not need.
You need realistic data, not necessarily the real data.
If you're bringing real data from prod back to test and dev, consider having something scrub the data.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941</parent>
</comment>
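The scrubbing idea above can be sketched with sed; the US-style SSN pattern and the dummy replacement are illustrative assumptions, and a real scrub would cover every sensitive column, not just one pattern:

```shell
#!/bin/sh
# Replace anything shaped like a US SSN in a SQL dump with a dummy
# value before loading it into test or dev. Pattern is an assumption.
scrub_dump() {
    sed 's/[0-9]\{3\}-[0-9]\{2\}-[0-9]\{4\}/000-00-0000/g' "$1"
}

# e.g.: scrub_dump prod_dump.sql | mysql test_db
```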
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817365</id>
	<title>Re:Hilarity</title>
	<author>Anonymous</author>
	<datestamp>1256046360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>When you use OSS for a serious project, you will find bugs. The best way to deal with those bugs is to report them and share any workaround you may have found. That is already giving back to the community, and it's done out of entirely selfish motives, to make it work in your concrete project.</p></htmltext>
<tokenext>When you use OSS for a serious project , you will find bugs .
The best way to deal with those bugs is to report them and share any workaround you may have found .
That is already giving back to the community , and it 's done out of entirely selfish motives , to make it work in your concrete project .</tokentext>
<sentencetext>When you use OSS for a serious project, you will find bugs.
The best way to deal with those bugs is to report them and share any workaround you may have found.
That is already giving back to the community, and it's done out of entirely selfish motives, to make it work in your concrete project.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812081</id>
	<title>Seperate Development and Production First . . .</title>
	<author>crrkrieger</author>
	<datestamp>1256067000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>. . . everything else comes after that.  A small illustration:</p><p>When I was system admin for a small brokerage, one of my first tasks was to determine the hardware configuration of every server.  There was one particular server that I needed to shut down in the process.  I asked every employee (it was that small) if there were any critical services on that machine.  All agreed it was ok to take it off line.  For the next 15 minutes, while the machine rebooted, no trading happened because the main program was linking to some libraries that were served off of that server.</p><p>I immediately put a new task at the top of my to-do list:  reconfiguring the network.  Thereafter, production was done on one network and development on another.  The router between them would not allow nfs mounts.  Production users were not given accounts on development machines.  Developers were no longer given the root password, but it was kept in a safe for emergencies.</p><p>I know that wasn't what you were asking, but that is the first thing I would take care of.</p></htmltext>
<tokenext>.
. .
everything else comes after that .
A small illustration : When I was system admin for a small brokerage , one of my first tasks was to determine the hardware configuration of every server .
There was one particular server that I needed to shutdown in the process .
I asked every employee ( it was that small ) if there were any critical services on that machine .
All agreed it was ok to take it off line .
For the next 15 minutes , while the machine rebooted , no trading happened because the main program was linking to some libraries that were served off of that server.I immediately put a new task at the top of my to-do list : reconfiguring the network .
Thereafter , production was done on one network and development on another .
The router between them would not allow nfs mounts .
Production users were not given accounts on development machines .
Developers were no longer given the root password , but it was kept in a safe for emergencies.I know that was n't what you were asking , but that is the first thing I would take care of .</tokentext>
<sentencetext>.
. .
everything else comes after that.
A small illustration:When I was system admin for a small brokerage, one of my first tasks was to determine the hardware configuration of every server.
There was one particular server that I needed to shutdown in the process.
I asked every employee (it was that small) if there were any critical services on that machine.
All agreed it was ok to take it off line.
For the next 15 minutes, while the machine rebooted, no trading happened because the main program was linking to some libraries that were served off of that server.I immediately put a new task at the top of my to-do list:  reconfiguring the network.
Thereafter, production was done on one network and development on another.
The router between them would not allow nfs mounts.
Production users were not given accounts on development machines.
Developers were no longer given the root password, but it was kept in a safe for emergencies.I know that wasn't what you were asking, but that is the first thing I would take care of.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811669</id>
	<title>Have the hosts email problems to an email account</title>
	<author>denis-The-menace</author>
	<datestamp>1256065740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When I did this years ago, each server would run scripts to read logs, etc and if they found something bad they would email me with what they found.</p><p>Simple and scalable</p></htmltext>
<tokenext>When I did this years ago , each server would run scripts to read logs , etc and if they found something bad they would email me with what they found.Simple and scalable</tokentext>
<sentencetext>When I did this years ago, each server would run scripts to read logs, etc and if they found something bad they would email me with what they found.
Simple and scalable</sentencetext>
</comment>
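The scan-the-logs-and-email approach above can be sketched as a cron-driven script per host; the log path, search pattern, and alert address here are all hypothetical:

```shell
#!/bin/sh
# Scan a log for bad signs and mail the findings, as described above.
# All names here (log path, pattern, address) are hypothetical.
scan_log() {
    LOG=$1
    PATTERN=$2
    # Print matching lines; empty output means nothing to report.
    grep -i "$PATTERN" "$LOG" 2>/dev/null
}

HITS=$(scan_log /var/log/apache2/error.log segfault || true)
if [ -n "$HITS" ]; then
    # Only mail when something was found, so cron stays quiet otherwise.
    printf '%s\n' "$HITS" | mail -s "alert from $(hostname)" admin@example.com
fi
```

Because the script only emails on a match, it scales to many hosts without flooding the inbox, which is the "simple and scalable" property the commenter is after.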
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816447</id>
	<title>Re:How slashdot does it</title>
	<author>RichardJenkins</author>
	<datestamp>1256042040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>That's crazy. You need to have a stage and live. Fortunately these days you can do six machines on one virtual machine. I suggest you:</p><p>
&nbsp; * Buy one server (it needs to be powerful, so maybe one of those ones that goes in a rack (don't worry about a rack though))<br>
&nbsp; * Install Windows on it and VMWare Server (the free one)<br>
&nbsp; * Install 3 VMs and put Windows on each.<br>
&nbsp; * You'll need one web server and one database server for live and stage. So use two VMs for production, and the other VM and the machine with VMware server installed for stage.<br>
&nbsp; * Eventually this might not be enough performance, if so you can just add more virtual machines to spread the load. Maybe put more virtual CPUs in them?</p><p>Slashdot is a great source for expert info like this: be sure to ask about backups later too!</p></htmltext>
<tokenext>That 's crazy .
You need to have a stage and live .
Fortunatlythese daysyou can doosixmachines on one virtual machine .
I suggest you :   * Buy one server ( it needs to be powerful , so maybe one of those ones that goes in a rack ( do n't worry about a rack though )   * Install Windows on it and VMWareServer ( the free one )   * Install 3VMs and put Windows on each .
  * You 'll need one web server and one database server for live and stage .
So use two VMs for production , and the other VM and the machine with VMware server installed for stage .
  * Eventually this might not be enough performance , if so you can just add more virtual machines to spread the load .
Maybe but more virtual CPUs in them ? Slashdot is a great source for expert info like this : be sure to ask about backups later too !</tokentext>
<sentencetext>That's crazy.
You need to have a stage and live.
Fortunately these days you can do six machines on one virtual machine.
I suggest you:
  * Buy one server (it needs to be powerful, so maybe one of those ones that goes in a rack (don't worry about a rack though)
  * Install Windows on it and VMWareServer (the free one)
  * Install 3VMs and put Windows on each.
  * You'll need one web server and one database server for live and stage.
So use two  VMs for production, and the other VM and the machine with VMware server installed for stage.
  * Eventually this might not be enough performance, if so you can just add more virtual machines to spread the load.
Maybe put more virtual CPUs in them? Slashdot is a great source for expert info like this: be sure to ask about backups later too!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812863</id>
	<title>Read "Pro PHP Security"</title>
	<author>garyebickford</author>
	<datestamp>1256069940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I'm just now reading "Pro PHP Security" (Snyder &amp; Southwell, Apress), and it's got a lot of good information - hands-on examples, best practices and technical background that is useful whether you support PHP or not.  It covers both local and web-based attacks such as XSS, SQL injection, vuln exploits, etc.</p><p>Among other things, it suggests you set up virtual servers for each domain user.  You could use FreeBSD 'jails', linux virtualization tools, etc. - the book is agnostic on which ones you use, and doesn't cover a lot of detail in this area, at least so far.  Virtualization of this type almost completely eliminates the ability of any client user accessing anything outside their virtual server space.</p><p>It also suggests that you automate the creation of new domains and hosts, with a form-entry input of some kind, perhaps a well-secured web-based front end, that assures everything gets done properly.  Such a parametric front end (of whichever type) helps in preventing the sysadmin from forgetting or purposely ignoring certain setup tasks.  I have not kept up to date in this area, because it's not something I do any more, but I'm sure there are some packages that do most or all this work.</p><p>You might even use webmin for some of this.</p><p>I also have on my shelf the Cisco book "Data Center Fundamentals" available directly from Cisco.  It's $60 but has a slew of information.</p><p>I'm sure there are other books with more information, I just don't have them off the top of my head.</p></htmltext>
<tokentext>I 'm just now reading " Pro PHP Security " ( Snyder &amp; Southwell , Apress ) , and it 's got a lot of good information - hands-on examples , best practices and technical background that is useful whether you support PHP or not .
It covers both local and web-based attacks such as XSS , SQL injection , vuln exploits , etc.Among other things , it suggests you set up virtual servers for each domain user .
You could use FreeBSD 'jails ' , linux virtualization tools , etc .
- the book is agnostic on which ones you use , and does n't cover a lot of detail in this area , at least so far .
Virtualization of this type almost completely eliminates the ability of any client user accessing anything outside their virtual server space.It also suggests that you automate the creation of new domains and hosts , with a form-entry input of some kind , perhaps a well-secured web-based front end , that assures everything gets done properly .
Such a parametric front end ( of whichever type ) helps in preventing the sysadmin from forgetting or purposely ignoring certain setup tasks .
I have not kept up to date in this area , because it 's not something I do any more , but I 'm sure there are some packages that do most or all this work.You might even use webmin for some of this.I also have on my shelf the Cisco book " Data Center Fundamentals " available directly from Cisco .
It 's $ 60 but has a slew of information.I 'm sure there are other books with more information , I just do n't have them off the top of my head .</tokentext>
<sentencetext>I'm just now reading "Pro PHP Security" (Snyder &amp; Southwell, Apress), and it's got a lot of good information - hands-on examples, best practices and technical background that is useful whether you support PHP or not.
It covers both local and web-based attacks such as XSS, SQL injection, vuln exploits, etc. Among other things, it suggests you set up virtual servers for each domain user.
You could use FreeBSD 'jails', Linux virtualization tools, etc.
The book is agnostic on which ones you use, and doesn't cover a lot of detail in this area, at least so far.
Virtualization of this type almost completely eliminates the ability of any client user to access anything outside their virtual server space. It also suggests that you automate the creation of new domains and hosts, with a form-entry input of some kind, perhaps a well-secured web-based front end, that assures everything gets done properly.
Such a parametric front end (of whichever type) helps in preventing the sysadmin from forgetting or purposely ignoring certain setup tasks.
I have not kept up to date in this area, because it's not something I do any more, but I'm sure there are some packages that do most or all of this work. You might even use webmin for some of this. I also have on my shelf the Cisco book "Data Center Fundamentals", available directly from Cisco.
It's $60 but has a slew of information. I'm sure there are other books with more information; I just don't have them off the top of my head.</sentencetext>
</comment>
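The comment above suggests automating new-domain creation behind a single parametric entry point so that no setup task gets skipped. A minimal shell skeleton of that idea might look like this; the directory layout, vhost template, and `new_site` helper are all invented for illustration, not anything from the book.

```shell
set -eu
SITES_ROOT=$(mktemp -d)     # stand-in for something like /var/www

new_site() {
    site=$1
    dir="$SITES_ROOT/$site"
    mkdir -p "$dir/htdocs" "$dir/logs"
    # Every required setup step lives in this one function,
    # so it cannot be forgotten or skipped:
    cat > "$dir/vhost.conf" <<EOF
ServerName $site
DocumentRoot $dir/htdocs
EOF
    echo "$dir"             # print the new site root
}

new_site site1.developer.example.com
```

A web front end would simply validate its form input and call something like this with the site name.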
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839</id>
	<title>SVN etc.</title>
	<author>Anonymous</author>
	<datestamp>1256066220000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>My company (for upwards of 10 years) has been using:<ul>
<li>An SVN (Subversion) server on our dev box</li><li>Developer or group specific subdomains in IIS / Apache on the dev server, to which working copies are checked-out</li><li>Deployment to live servers via SVN checkout when the time comes</li><li>Global variables to check which server the app's running on, and to switch between DB connection strings etc.</li></ul><p>
Still not figured out an efficient way to version MSSQL and MySQL databases using OSS, though. Open to suggestions!</p></htmltext>
<tokentext>My company ( for upwards of 10 years ) has been using : An SVN ( Subversion ) server on our dev boxDeveloper or group specific subdomains in IIS / Apache on the dev server , to which working copies are checked-outDeployment to live servers via SVN checkout when the time comesGlobal variables to check which server the app 's running on , and to switch between DB connection strings etc .
Still not figured out an efficient way to version MSSQL and MySQL databases using OSS , though .
Open to suggestions !</tokentext>
<sentencetext>My company (for upwards of 10 years) has been using:
An SVN (Subversion) server on our dev box; developer- or group-specific subdomains in IIS / Apache on the dev server, to which working copies are checked out; deployment to live servers via SVN checkout when the time comes; global variables to check which server the app's running on, and to switch between DB connection strings etc.
Still not figured out an efficient way to version MSSQL and MySQL databases using OSS, though.
Open to suggestions!</sentencetext>
</comment>
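The "global variables to check which server the app's running on" trick above can be sketched in shell: map the host a request arrives on to a connection string. The hostnames and DSNs here are made up for illustration, not the poster's actual setup.

```shell
# Pick a DB connection string based on which environment's
# hostname the app is serving (illustrative values only).
db_dsn() {
    case "$1" in
        *.developer.example.com) echo "mysql://dev_user@dev-db/site" ;;
        *.test.example.com)      echo "mysql://test_user@test-db/site" ;;
        *)                       echo "mysql://live_user@live-db/site" ;;
    esac
}

db_dsn site1.developer.example.com   # dev connection string
db_dsn site1.example.com             # falls through to live
```

The same switch works for file paths, debug flags, or anything else that differs between dev, test, and live.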
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814001</id>
	<title>Re:You are not a n00b</title>
	<author>Dragonslicer</author>
	<datestamp>1256031480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's the difference between "n00b" and "newbie".</htmltext>
<tokentext>It 's the difference between " n00b " and " newbie " .</tokentext>
<sentencetext>It's the difference between "n00b" and "newbie".</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813161</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813957</id>
	<title>Trying too hard..</title>
	<author>theNAM666</author>
	<datestamp>1256031300000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>With no personal offense to the OP, (and noting that this is Drupal), I think the OP is trying a little too hard or suffers from inexperience. My first Drupal server hosts 100+ sites and Dev/Test/Production was rarely an issue-- which is to say, what is the OP doing that requires that level of segmentation? It's simply not that difficult on the scale mentioned.</p><p>For large sites, of course, Drupal dev/test/production is another matter-- and there is a Drupal group that handles such questions and considerations. Reading it would be useful to the OP; for most people here, it is likely so Drupal-specific as to give no lessons not already familiar.</p><p>For small-to-medium sites, keeping separate dev/test/production copies on separate subdomains, flipping between them as necessary, and maintaining a backup schedule of everything is practical. Module management is a different story, but my choice is to multisite (and rsync)-- custom modules go in each site's custom modules directory. Beyond this, again, we descend into Drupal-specific discussion that would probably be best on drupal.org-- where it has already been occurring for years.</p><p>The general question-- well, away from Drupal-- is sort of platform dependent, isn't it? On Drupal (up to D6), there is an annoying possibility of data structure collision if you have different versions running, for instance-- no easy way to merge databases unless you plan for it and, say, index the db entries (odd/even). Other systems are better at merging different copies. One should probably write a system that plans in advance.</p><p>But for 50 sites, I... just don't have a "good question here," as most of the issues should be manageable...</p><p>after you spend two days reading UNIX man pages. So there's my answer. Read-The-Fine-Man pages.</p></htmltext>
<tokentext>With no personal offense to the OP , ( and noting that this is Drupal ) , I think the OP is trying a little to hard or suffers from inexperience .
My first Drupal server hosts 100 + sites and Dev/Test/Production was rarely an issue-- which is to say , what is the OP doing that requires that level of segmentation ?
It 's simply not that difficult on the scale mentioned.For large sites , of course , Drupal dev/test/production is another matter-- and there is a Drupal group that handles such questions and considerations .
Reading it would be useful to the OP ; for most people here , it is likely so Drupal specific as to give no lessons not already familiar.For small-to-medium sites , keeping separate dev/test/production copies on separate subdomains , flipping between them as necessary , and maintaining a backup schedule of everything is practical .
Module management is a different story , but my choice is to multisite ( and rsync ) -- custom modules go in each sites 's custom modules directory .
Beyond this , again , we descend into Drupal-specific discussion that would probably be best on drupal.org-- where it has already been occurring for years.The general question-- well , away from Drupal-- is sort of platform dependent , is n't it ?
On Drupal ( up to D6 ) , there is an annoying possibility of data structure collision if you have different versions running , for instance-- no easy way to merge databases unless you plan for it and , say , index the db entries ( odd/even ) .
Other systems are better at merging different copies .
One should probably write a system that plans in advance.But for 50 sites , I ... just do n't have a " good question here , " as most of the issues should be manageable...after you spend two days reading UNIX man pages .
So there 's my answer .
Read-The-Fine-Man pages .</tokentext>
<sentencetext>With no personal offense to the OP,  (and noting that this is Drupal),  I think the OP is trying a little to hard or suffers from inexperience.
My first Drupal server hosts 100+ sites and Dev/Test/Production was rarely an issue-- which is to say,  what is the OP doing that requires that level of segmentation?
It's simply not that difficult on the scale mentioned.For large sites,  of course,  Drupal dev/test/production is another matter-- and there is a Drupal group that handles such questions and considerations.
Reading it would be useful to the OP;  for most people here,  it is likely so Drupal specific as to give no lessons not already familiar.For small-to-medium sites,  keeping separate dev/test/production copies on separate subdomains,  flipping between them as necessary,  and maintaining a backup schedule of everything is practical.
Module management is a different story,  but my choice is to multisite (and rsync)-- custom modules go in each sites's custom modules directory.
Beyond this,  again,  we descend into Drupal-specific discussion that would probably be best on drupal.org-- where it has already been occurring for years.The general question-- well,  away from Drupal-- is sort of platform dependent,  isn't it?
On Drupal (up to D6),  there is an annoying possibility of data structure collision if you have different versions running,  for instance-- no easy way to merge databases unless you plan for it and,  say,  index the db entries (odd/even).
Other systems are better at merging different copies.
One should probably write a system that plans in advance.But for 50 sites,  I ... just don't have a "good question here,"  as most of the issues should be manageable...after you spend two days reading UNIX man pages.
So there's my answer.
Read-The-Fine-Man pages.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812965</id>
	<title>A small note</title>
	<author>Anonymous</author>
	<datestamp>1256070540000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I modded, so posting as AC.</p><p>A minor technique we use is a versioned install structure, with symlinks to the current version. E.g., /opt/application_root contains /application_1023 (1023 is our build number), /application_1034, /application_1045, and /current ==&gt; /application_1045 (a symlink to one of the install trees).</p><p>This allows easy roll back if an upgrade fails without having to reload from staging. Our update scripts build the new directory, stop the server processes, re-aim the symlink, and restart the server processes. All the execution scripts use the path /opt/application_root/current/...</p></htmltext>
<tokentext>I modded , so posting as AC.A minor technique we use is a versioned install structure , with symlinks to the current version .
Eg. , /opt/application \ _root /application \ _1023 ( 1023 is our build number ) /application \ _1034 /application \ _1045 /current = = &gt; /application \ _1045 ( symlink to one of the install trees ) This allows easy roll back if an upgrade fails without having to reload from staging .
Our update scripts build the new directory , stop the server processes , re-aim the symlink , and restart the server processes .
All the execution scripts use the path /opt/application \ _root/current/.. .</tokentext>
<sentencetext>I modded, so posting as AC. A minor technique we use is a versioned install structure, with symlinks to the current version.
E.g., /opt/application_root contains /application_1023 (1023 is our build number), /application_1034, /application_1045, and /current ==&gt; /application_1045 (a symlink to one of the install trees). This allows easy roll back if an upgrade fails without having to reload from staging.
Our update scripts build the new directory, stop the server processes, re-aim the symlink, and restart the server processes.
All the execution scripts use the path /opt/application_root/current/...</sentencetext>
</comment>
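The versioned-install-plus-symlink scheme above is easy to sketch. This uses a temp directory in place of /opt/application_root, and the `deploy()` helper is our invention; the build numbers are the commenter's examples.

```shell
set -eu
ROOT=$(mktemp -d)                 # stand-in for /opt/application_root

mkdir -p "$ROOT/application_1034" "$ROOT/application_1045"

deploy() {
    # Re-aim the "current" symlink at a build. -f replaces an existing
    # link; -n treats an existing symlink-to-directory as the link
    # itself instead of descending into it.
    ln -sfn "$ROOT/application_$1" "$ROOT/current"
}

deploy 1045                       # release build 1045
deploy 1034                       # rollback: just re-aim the symlink
readlink "$ROOT/current"          # now points at application_1034
```

Execution scripts then reference only `$ROOT/current/...`, exactly as the comment describes, so a rollback never touches them.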
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811547</id>
	<title>Separate SVN deploys</title>
	<author>Foofoobar</author>
	<datestamp>1256065320000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>3</modscore>
	<htmltext>Create separate SVN deploys as separate environments. Deploy them as subdomains. If they require database access, create a test database they can share or separate test databases for each environment. Make sure the database class in the source is written as DB.bkp so when you deploy it, your deployed DB class won't be overwritten by changes to the source DB class.</htmltext>
<tokentext>Create separate SVN deploys as separate environments .
Deploy them as subdomains .
If they require database access , create a test database they can share or separate test databases for each environment .
Make sure the database class in the source is written as DB.bkp so when you deploy it , your deployed DB class wo n't be overwritten by changes to the source DB class .</tokentext>
<sentencetext>Create separate SVN deploys as separate environments.
Deploy them as subdomains.
If they require database access, create a test database they can share or separate test databases for each environment.
Make sure the database class in the source is written as DB.bkp so when you deploy it, your deployed DB class won't be overwritten by changes to the source DB class.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813175</id>
	<title>I'm dealing with the same issues</title>
	<author>Anonymous</author>
	<datestamp>1256071620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I am in a similar situation but perhaps a few more steps down the path.  We have environments for internal and external clients to develop, test, approve and release code.  These environments all run the same version of the base platform, and are all considered production systems.  We have other environments to test platform level code.  These are specifically to assist in the development and release process of individual projects.</p><p>We wrote a framework application that is a single access point to publish and commit code, and move it from environment to environment.  Each movement requires approval, and this can be customized for each customer.  To place something on the first environment, very like a sandbox, we just copy it out there.  In order to get further a commit must occur.  Each commit creates a new version of the project.  That version must then be marked before it can be loaded on a different environment.  For the second environment, test, the project manager marks the version, indicating it is ready for testing.  The tester loads the version, maintaining control over his testing environment.  Each loading requires the version be previously marked.  For release into production QA marks it as acceptable and the PM coordinates the release (load).  When a new version is created it can move along the path and then eventually replace the original version.</p><p>Our projects consist of jsps, java and configuration files, and sometimes other files.  They are stored in CVS, each project on it's own branch.  The project specific java is compiled on the fly in order to ensure compatibility with the existing codebase.  A compilation failure results in a loading failure.  We have a custom class loader written to handle the hotswapping of the resulting class files.  
Approximately 100 versions get loaded to various environments every single day, and we only restart the JVM maybe once every month.</p><p>The only other significant aspect is that we CVS tag all currently loaded files so that we can reproduce any environment's contents at any time.</p></htmltext>
<tokentext>I am in a similar situation but perhaps a few more steps down the path .
We have environments for internal and external clients to develop , test , approve and release code .
These environments all run the same version of the base platform , and are all considered production systems .
We have other environments to test platform level code .
These are specifically to assist in the development and release process of individual projects.We wrote a framework application that is a single access point to publish and commit code , and move it from environment to environment .
Each movement requires approval , and this can be customized for each customer .
To place something on the first environment , very like a sandbox , we just copy it out there .
In order to get further a commit must occur .
Each commit creates a new version of the project .
That version must then be marked before it can be loaded on a different environment .
For the second environment , test , the project manager marks the version , indicating it is ready for testing .
The tester loads the version , maintaining control over his testing environment .
Each loading requires the version be previously marked .
For release into production QA marks it as acceptable and the PM coordinates the release ( load ) .
When a new version is created it can move along the path and then eventually replace the original version.Our projects consist of jsps , java and configuration files , and sometimes other files .
They are stored in CVS , each project on it 's own branch .
The project specific java is compiled on the fly in order to ensure compatibility with the existing codebase .
A compilation failure results in a loading failure .
We have a custom class loader written to handle the hotswapping of the resulting class files .
Approximately 100 versions get loaded to various environments every single day , and we only restart the JVM maybe once every month.The only other significant aspect is that we CVS tag all currently loaded files so that we can reproduce any environment 's contents at any time .</tokentext>
<sentencetext>I am in a similar situation but perhaps a few more steps down the path.
We have environments for internal and external clients to develop, test, approve and release code.
These environments all run the same version of the base platform, and are all considered production systems.
We have other environments to test platform level code.
These are specifically to assist in the development and release process of individual projects. We wrote a framework application that is a single access point to publish and commit code, and move it from environment to environment.
Each movement requires approval, and this can be customized for each customer.
To place something on the first environment, very like a sandbox, we just copy it out there.
In order to get further a commit must occur.
Each commit creates a new version of the project.
That version must then be marked before it can be loaded on a different environment.
For the second environment, test, the project manager marks the version, indicating it is ready for testing.
The tester loads the version, maintaining control over his testing environment.
Each loading requires the version be previously marked.
For release into production, QA marks it as acceptable and the PM coordinates the release (load).
When a new version is created it can move along the path and then eventually replace the original version. Our projects consist of JSPs, Java and configuration files, and sometimes other files.
They are stored in CVS, each project on its own branch.
The project-specific Java is compiled on the fly in order to ensure compatibility with the existing codebase.
A compilation failure results in a loading failure.
We have a custom class loader written to handle the hot-swapping of the resulting class files.
Approximately 100 versions get loaded to various environments every single day, and we only restart the JVM maybe once every month. The only other significant aspect is that we CVS tag all currently loaded files so that we can reproduce any environment's contents at any time.</sentencetext>
</comment>
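The mark-before-load gating that the framework above enforces can be reduced to a toy sketch: a state file per version, checked before any environment will load it. The state names and helpers here are invented, not the commenter's actual framework.

```shell
set -eu
STATE_DIR=$(mktemp -d)            # one state file per project version

mark() {                          # PM/QA records a mark for a version
    echo "$2" > "$STATE_DIR/$1"
}

can_load() {                      # loading requires the matching mark
    [ -f "$STATE_DIR/$1" ] && [ "$(cat "$STATE_DIR/$1")" = "$2" ]
}

mark v101 ready-for-test
can_load v101 ready-for-test && echo "load v101 on test"
can_load v101 qa-approved || echo "v101 not releasable yet"
```

The real system adds approvals per customer and per environment, but the invariant is the same: a load only succeeds against a version someone has explicitly marked for that stage.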
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29818339</id>
	<title>Keep Test DB up-to-date</title>
	<author>minstrelmike</author>
	<datestamp>1256052240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>One of the things we do that solves a lot of issues is that every night we export our production data to a file, zip it, stash it in a few places,<br> <br>
and then we copy it to our Test site, drop the db and reload all the data.
<br> <br>Tells us fer sure the export was good and it makes sure we work on live data. <br>Another thing it guarantees is that you will write the correct scripts to make any db changes on production, because you have to make those changes every day on test until they go live.<br> <br>One of my co-workers suggested it a few years ago and I wouldn't go any other way. Making a backup is only half the solution. Guaranteeing the restore is the entire solution.</htmltext>
<tokentext>One of the things we do that solves a lot of issues is that every night we export our production data to a file , zip it , stash it a few places , and then we copy it to our Test site , drop the db and reload all the data .
Tells us fer sure the export was good and it makes sure we work on live data .
Another thing it guarantees is that you will write the correct scripts to make any db changes on production because you have to make those changes every day on test until they go live .
One of my co-workers suggested it a few years a go and I would n't go any other way.Making a backup is only half the solution .
Guaranteeing the restore is the entire solution .</tokentext>
<sentencetext>One of the things we do that solves a lot of issues is that every night we export our production data to a file, zip it, stash it a few places, 
and then we copy it to our Test site, drop the db and reload all the data.
Tells us fer sure the export was good and it makes sure we work on live data.
Another thing it guarantees is that you will write the correct scripts to make any db changes on production because you have to make those changes every day on test until they go live.
One of my co-workers suggested it a few years ago and I wouldn't go any other way. Making a backup is only half the solution.
Guaranteeing the restore is the entire solution.</sentencetext>
</comment>
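A hedged sketch of that nightly cycle: dump production, zip and stash the file, then drop-and-reload the test database so every backup is proven restorable. The real commands would be mysqldump/mysql, as shown in the trailing comment; here the dump and load steps are passed in as parameters so the skeleton stays self-contained, and every host, database, and path name is an assumption.

```shell
set -eu

refresh_test_db() {
    dump_cmd=$1 load_cmd=$2 archive_dir=$3
    dump="$archive_dir/prod_$(date +%F).sql"
    $dump_cmd > "$dump"        # 1. export production data to a file
    gzip -kf "$dump"           # 2. zip a copy (stash it elsewhere too)
    $load_cmd < "$dump"        # 3. reload into test: proves the restore
}

# In real use, something like:
#   refresh_test_db "mysqldump -h prod-db sitedb" \
#                   "mysql -h test-db sitedb" /backups
```

If step 3 ever fails, you find out the morning after the bad export, not on the day you actually need the backup.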
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812941</id>
	<title>Re:You are not a n00b</title>
	<author>hesaigo999ca</author>
	<datestamp>1256070360000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><p>Yeah, right, see if that one works on WoW!</p></htmltext>
<tokentext>Yeah , right , see if that one works on WoW !</tokentext>
<sentencetext>Yeah, right, see if that one works on WoW!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812305</id>
	<title>In a word...</title>
	<author>Gordonjcp</author>
	<datestamp>1256067840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Fabric.</p><p><a href="http://www.gjcp.net/articles/fabric/" title="gjcp.net">http://www.gjcp.net/articles/fabric/</a> [gjcp.net]</p><p>Saves so much hassle and buggering about.</p></htmltext>
<tokentext>Fabric.http : //www.gjcp.net/articles/fabric/ [ gjcp.net ] Saves so much hassle and buggering about .</tokentext>
<sentencetext>Fabric. http://www.gjcp.net/articles/fabric/ [gjcp.net] Saves so much hassle and buggering about.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817475</id>
	<title>senor nebuloso</title>
	<author>chef\_raekwon</author>
	<datestamp>1256047020000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>this is such a nebulous question.  you want your dev/qa/pre-prod to emulate your production environment as much as possible.  this subject in itself could fill a book on best practices, techniques, and the like.  Easiest said by saying: keep all developed code separate from 3rd-party application code.  packages/versioning/repositories are a good start.  make things relocatable, have one installer, and have it take multiple environment variables.  i.e., make environment variables 'run-time'; don't make the same mistake everyone makes and make them 'build-time'.</p><p>Best of Luck.</p></htmltext>
<tokentext>this is such a nebulous question .
you want your dev/qa/pre-prod to emulate your production environment as much as possible .
this subject in itself could fill a book on best practices , techniques , and the like .
Easiest said by saying : keep all developed code separate from 3rd party application code .
packages/versioning/repositories are a good start .
make things relocatable , have one installer , and have it take multiple environmental variables .
ie - make environment variables 'run time ' , do n't make the same mistake everyone makes - and make them 'build-time'.Best of Luck .</tokentext>
<sentencetext>this is such a nebulous question.
you want your dev/qa/pre-prod to emulate your production environment as much as possible.
this subject in itself could fill a book on best practices, techniques, and the like.
Easiest said by saying: keep all developed code separate from 3rd party application code.
packages/versioning/repositories are a good start.
make things relocatable, have one installer, and have it take multiple environmental variables.
ie - make environment variables 'run time', don't make the same mistake everyone makes - and make them 'build-time'.Best of Luck.</sentencetext>
</comment>
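The run-time vs. build-time advice above can be sketched in a few lines of shell (the `APP_ENV` variable and host names are illustrative, not from the comment): a single deployable artifact picks its database host when it starts, instead of having one baked in per environment at build time.

```shell
#!/bin/sh
# Hypothetical run-time configuration: the same script works on every
# environment because the environment is chosen when it runs, not when
# it is built or packaged.
: "${APP_ENV:=dev}"   # default to dev if the variable is unset

case "$APP_ENV" in
  dev)  DB_HOST=db.dev.example.com ;;
  test) DB_HOST=db.test.example.com ;;
  prod) DB_HOST=db.example.com ;;
  *)    echo "unknown APP_ENV: $APP_ENV" >&2; exit 1 ;;
esac

echo "connecting to $DB_HOST"
```

Promoting a site from dev to production then means setting `APP_ENV=prod` on the target box, not rebuilding or editing the code.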
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811873</id>
	<title>Ant.</title>
	<author>Anonymous</author>
	<datestamp>1256066280000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Ant.</p></htmltext>
<tokenext>Ant .</tokentext>
<sentencetext>Ant.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537</id>
	<title>You are not a n00b</title>
	<author>Anonymous</author>
	<datestamp>1256065260000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>You may be a new system administrator, but you are not a n00b.</p><p>A n00b wouldn't realize he was a n00b.</p></htmltext>
<tokenext>You may be a new system administrator , but you are not a n00b.A n00b would n't realize he was a n00b .</tokentext>
<sentencetext>You may be a new system administrator, but you are not a n00b.A n00b wouldn't realize he was a n00b.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812209</id>
	<title>I didnt know</title>
	<author>Anonymous</author>
	<datestamp>1256067480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>that<nobr> <wbr></nobr>/. allowed religious discussions</htmltext>
<tokenext>that / .
allowed religious discussions</tokentext>
<sentencetext>that /.
allowed religious discussions</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812881</id>
	<title>The answer's simple</title>
	<author>Shadow-isoHunt</author>
	<datestamp>1256070060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Segue in to a new paradigm and experience increased synergy - consolidate already!<br> <br>
Kidding, naturally.</htmltext>
<tokenext>Segue in to a new paradigm and experience increased synergy - consolidate already !
Kidding , naturally .</tokentext>
<sentencetext>Segue in to a new paradigm and experience increased synergy - consolidate already!
Kidding, naturally.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812367</id>
	<title>How do i manage those environments?</title>
	<author>Mister Whirly</author>
	<datestamp>1256068140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Carefully. And you?</htmltext>
<tokenext>Carefully .
And you ?</tokentext>
<sentencetext>Carefully.
And you?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29815929</id>
	<title>Our Method</title>
	<author>endus</author>
	<datestamp>1256038920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Put everything in test, not configured properly for production, until such time as enough people start using test that it becomes production on its own.  This usually happens slowly and organically, and usually in the middle of the night.  Once you have at least 2-3 different groups screaming at you over the lack of availability of your test system you can be reasonably confident that it is now production.</htmltext>
<tokenext>Put everything in test , not configured properly for production , until such time as enough people start using test that it becomes production on its own .
This usually happens slowly and organically , and usually in the middle of the night .
Once you have at least 2-3 different groups screaming at you over the lack of availability of your test system you can be reasonably confident that it is now production .</tokentext>
<sentencetext>Put everything in test, not configured properly for production, until such time as enough people start using test that it becomes production on its own.
This usually happens slowly and organically, and usually in the middle of the night.
Once you have at least 2-3 different groups screaming at you over the lack of availability of your test system you can be reasonably confident that it is now production.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813071</id>
	<title>Symbolic links</title>
	<author>CyberDong</author>
	<datestamp>1256070960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I've found it's useful to put any env-specific properties in external properties files, and then make a copy for each env.  On each environment, there's a one-time exercise of creating symbolic links to point to the appropriate files.<br>
&nbsp; &nbsp; e.g.<br>
&nbsp; &nbsp; &nbsp; ln -s db.properties.dev db.properties<br>
&nbsp; &nbsp; &nbsp; ln -s server.properties.dev server.properties<nobr> <wbr></nobr>...</p><p>Then just use the links in the app code.</p></htmltext>
<tokenext>I 've found it 's useful to put any env-specific properties in external properties files , and then make a copy for each env .
On each environment , there 's a one-time exercise of creating symbolic links to point to the appropriate files .
    e.g .
      ln -s db.properties.dev db.properties       ln -s server.properties.dev server.properties ...Then just use the links in the app code .</tokentext>
<sentencetext>I've found it's useful to put any env-specific properties in external properties files, and then make a copy for each env.
On each environment, there's a one-time exercise of creating symbolic links to point to the appropriate files.
    e.g.
      ln -s db.properties.dev db.properties
      ln -s server.properties.dev server.properties ...Then just use the links in the app code.</sentencetext>
</comment>
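The symlink scheme above can be shown as a runnable shell sketch (the file names and the `/tmp/symlink-demo` path are illustrative): the application only ever opens `db.properties`, and each box decides once which real per-environment file that name points to.

```shell
#!/bin/sh
# Hypothetical sketch of the symlink approach: one properties file per
# environment, with the generic name linked to the right copy once.
mkdir -p /tmp/symlink-demo && cd /tmp/symlink-demo

# One env-specific copy per environment (contents are illustrative).
echo "db.host=db.dev.example.com" > db.properties.dev
echo "db.host=db.example.com"     > db.properties.prod

# One-time step on the dev box: point the generic name at the dev copy.
ln -sf db.properties.dev db.properties

# Application code reads only the generic name.
cat db.properties
```

On the production box the one-time step would instead be `ln -sf db.properties.prod db.properties`; the application code never changes.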
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812179</id>
	<title>What's a DEV environment? =:O</title>
	<author>starglider29a</author>
	<datestamp>1256067360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>People are supposed to TEST this stuff first!?<br> <br> <br>
<i>Did he forget the Sarcasm Mark ~, or does he not know about it?</i></htmltext>
<tokenext>People are supposed to TEST this stuff first ! ?
Did he forget the Sarcasm Mark ~ , or does he not know about it ?</tokentext>
<sentencetext>People are supposed to TEST this stuff first!?
Did he forget the Sarcasm Mark ~, or does he not know about it?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811765</id>
	<title>I thought...</title>
	<author>pyrr</author>
	<datestamp>1256065980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>...testing was what the production environment was for. Nothing like having dozens of end users flooding the help desk with calls because someone messed with a server or an active database. They take care of all that pesky and tedious testing for you!

<p>/sarcasm (in case you couldn't tell)</p></htmltext>
<tokenext>...testing was what the production environment was for .
Nothing like having dozens of end users flooding the help desk with calls because someone messed with a server or an active database .
They take care of all that pesky and tedious testing for you !
/sarcasm ( in case you could n't tell )</tokentext>
<sentencetext>...testing was what the production environment was for.
Nothing like having dozens of end users flooding the help desk with calls because someone messed with a server or an active database.
They take care of all that pesky and tedious testing for you!
/sarcasm (in case you couldn't tell)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813333</id>
	<title>Badly.</title>
	<author>Elwood P Dowd</author>
	<datestamp>1256072280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>(See subject.)</p></htmltext>
<tokenext>( See subject .
)</tokentext>
<sentencetext>(See subject.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811779</id>
	<title>Acronym hell...</title>
	<author>Anonymous</author>
	<datestamp>1256065980000</datestamp>
	<modclass>Funny</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I am a n00b system administrator for a small web development company that builds and hosts <b>OSS CMSes</b> on a few <b>LAMP</b> servers (mostly Drupal). I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a <b>URL</b> (ex: site1.developer.example.com). Developers also get <b>FTP</b> access and <b>MySQL</b> access (through <b>php</b>MyAdmin). Additional scripts check in files to the repository and move files/<b>DB</b>s [...]</p></div><p>If you have a WYSIWYG front end done DIY style then you need to CYA and RTFM, simply because the newer style AJAX IDEs dont support IDEA. Make sure that you mind your P's and Q's, or the FBI will make you MIA thanks to the P.A.T.R.I.O.T. act. It's pretty much a PEBKAC issue. Oh, did I mention that you should leverage as many TLA's as possible?</p>
	</htmltext>
<tokenext>I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers ( mostly Drupal ) .
I 've written a few scripts that check out dev/test/production environments from our repository , so web developers can access the site they 're working on from a URL ( ex : site1.developer.example.com ) .
Developers also get FTP access and MySQL access ( through phpMyAdmin ) .
Additional scripts check in files to the repository and move files/DBs [ ... ] If you have a WYSIWYG front end done DIY style then you need to CYA and RTFM , simply because the newer style AJAX IDEs dont support IDEA .
Make sure that you mind your P 's and Q 's , or the FBI will make you MIA thanks to the P.A.T.R.I.O.T .
act. It 's pretty much a PEBKAC issue .
Oh , did I mention that you should leverage as many TLA 's as possible ?</tokentext>
<sentencetext>I am a n00b system administrator for a small web development company that builds and hosts OSS CMSes on a few LAMP servers (mostly Drupal).
I've written a few scripts that check out dev/test/production environments from our repository, so web developers can access the site they're working on from a URL (ex: site1.developer.example.com).
Developers also get FTP access and MySQL access (through phpMyAdmin).
Additional scripts check in files to the repository and move files/DBs [...]If you have a WYSIWYG front end done DIY style then you need to CYA and RTFM, simply because the newer style AJAX IDEs dont support IDEA.
Make sure that you mind your P's and Q's, or the FBI will make you MIA thanks to the P.A.T.R.I.O.T.
act. It's pretty much a PEBKAC issue.
Oh, did I mention that you should leverage as many TLA's as possible?
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29821899</id>
	<title>Nolio may be the answer.</title>
	<author>Anonymous</author>
	<datestamp>1256131560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You should take a look at Nolio at noliosoft.com.  It's a pretty full-featured automation platform that should be able to handle everything you might need.</p></htmltext>
<tokenext>You should take a look at Nolio at noliosoft.com .
It 's a pretty full-featured automation platform that should be able to handle everything you might need .</tokentext>
<sentencetext>You should take a look at Nolio at noliosoft.com.
It's a pretty full-featured automation platform that should be able to handle everything you might need.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820159</id>
	<title>Re:Quick Brief</title>
	<author>kuzb</author>
	<datestamp>1256068680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>...and then you realize he's part of a small web firm, and probably doesn't have the budget for a full-blown QA team (which is often expensive, as it's a tedious job).</htmltext>
<tokenext>...and then you realize he 's part of a small web firm , and probably does n't have the budget for a full-blown QA team ( which is often expensive , as it 's a tedious job ) .</tokentext>
<sentencetext>...and then you realize he's part of a small web firm, and probably doesn't have the budget for a full-blown QA team (which is often expensive, as it's a tedious job).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812641</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29862605</id>
	<title>Re:How slashdot does it</title>
	<author>Anonymous</author>
	<datestamp>1256410980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><tt>Don't forget about licensing! Make sure you follow the EULAs and pay for every copy of Windows.<br><br>But seriously, using Windows for virtualisation is a royal pain. I don't want to have to call my lawyer every time I set up a server or clone an existing VM.<br><br>Want something simple, just use Linux and KVM/Xen/VMware and be happy. If you really need Windows for a project, you can still have it for that specific VM.</tt></htmltext>
<tokenext>Do n't forget about licensing !
Make sure you follow the EULAs and pay for every copy of Windows.But seriously , using Windows for virtualisation is a royal pain .
I do n't want to have to call my lawyer every time I set up a server or clone an existing VM.Want something simple , just use Linux and KVM/Xen/VMware and be happy .
If you really need Windows for a project , you can still have it for that specific VM .</tokentext>
<sentencetext>Don't forget about licensing!
Make sure you follow the EULAs and pay for every copy of Windows.But seriously, using Windows for virtualisation is a royal pain.
I don't want to have to call my lawyer every time I set up a server or clone an existing VM.Want something simple, just use Linux and KVM/Xen/VMware and be happy.
If you really need Windows for a project, you can still have it for that specific VM.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816447</parent>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812663
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814701
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820303
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813127
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813315
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813163
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813483
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816447
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29862605
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29822109
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811547
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29815197
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812765
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813373
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812573
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812911
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820713
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812941
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813151
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820185
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812659
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29823481
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812399
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816403
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817803
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813161
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814001
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813753
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811665
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29821539
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811585
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817341
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812269
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812687
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814183
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813361
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811973
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814807
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817365
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812709
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814851
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812401
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_10_20_1733228_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812641
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820159
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813317
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811779
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811537
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811665
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812663
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814701
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29821539
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812941
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813161
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814001
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813361
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811547
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29815197
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812029
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814851
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820185
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813175
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812081
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811669
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811973
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814807
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811917
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813753
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817365
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812573
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812911
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816403
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817803
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811941
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29822109
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812765
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813163
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812399
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813373
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813151
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812641
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820159
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811585
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29817341
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812269
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812687
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811721
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813957
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811471
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29814183
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29816447
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29862605
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812659
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29823481
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812005
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811777
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813127
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813315
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812209
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_10_20_1733228.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29811839
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820713
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29813483
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29820303
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_10_20_1733228.29812401
</commentlist>
</conversation>
