<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_07_11_2017214</id>
	<title>How Do You Create Config Files Automatically?</title>
	<author>timothy</author>
	<datestamp>1247302980000</datestamp>
	<htmltext>An anonymous reader writes <i>"When deploying a new server/servergroup/cluster to your IT infrastructure, deployment (simplified) consists of the following steps: OS installation: to do it over the network, the boot server must be configured for this new server/servergroup/cluster; configuration/package management: the configuration server has to be aware of the newcomer(s); monitoring and alerting: monitoring software must be reconfigured; and performance metrics: the tool for collecting data must be reconfigured. There are many excellent software solutions for each of these particular jobs, say configuration management (Puppet, Chef, cfengine, bcfg2), monitoring of hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc.), and performance metrics (Ganglia, etc.). But each of these tools has to be configured independently, or at least its configuration has to be generated. What tools do you use to achieve this? For example, when you have to deploy a new server, how do you create configs for, let's say, the PXE boot server, Puppet, Nagios, and Ganglia, all at once?"</i></htmltext>
<tokentext>An anonymous reader writes " When deploying new server/servergroup/cluster to your IT infrastructure , deployment ( simplified ) consist of following steps : OS installation : to do it over network , boot server must be configured for this new server/servergroup/cluster ; configuration/package management : configuration server has to be aware of the newcomer ( s ) ; monitoring and alerting : monitoring software must be reconfigured ; and performance metrics : a tool for collecting data must be reconfigured .
There are many excellent software solutions for those particular jobs , say configuration management ( Puppet , Chef , cfengine , bcfg2 ) , monitoring hosts and services ( Nagios , Zabbix , OpenNMS , Zenoss , etc ) and performance metrics ( Ganglia , etc. ) .
But each of these tools has to be configured independently or at least configuration has to be generated .
What tools do you use to achieve this ?
For example , when you have to deploy a new server , how do you create configs for , let 's say , PXE boot server , Puppet , Nagios and Ganglia , at once ?
"</tokentext>
<sentencetext>An anonymous reader writes "When deploying new server/servergroup/cluster to your IT infrastructure, deployment (simplified) consist of following steps: OS installation: to do it over network, boot server must be configured for this new server/servergroup/cluster; configuration/package management: configuration server has to be aware of the newcomer(s); monitoring and alerting: monitoring software must be reconfigured; and performance metrics: a tool for collecting data must be reconfigured.
There are many excellent software solutions for those particular jobs, say configuration management (Puppet, Chef, cfengine, bcfg2), monitoring hosts and services (Nagios, Zabbix, OpenNMS, Zenoss, etc) and performance metrics (Ganglia, etc.).
But each of these tools has to be configured independently or at least configuration has to be generated.
What tools do you use to achieve this?
For example, when you have to deploy a new server, how do you create configs for, let's say, PXE boot server, Puppet, Nagios and Ganglia, at once?
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664951</id>
	<title>Your configuration management toolkit should..</title>
	<author>bol</author>
	<datestamp>1247326320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Puppet can do all of that for you, including adding the host to Nagios, if you manage Nagios's configuration with Puppet, that is.</p><p>For my installations I'm currently using Cobbler to deploy a base install, which handles installing the OS and its configuration (IP, hostname, etc.). Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor-specific drivers/packages (e.g. the HP PSP) and do an initial run of Puppet, which automatically registers with the puppetmaster. The node will pull down everything else it needs based on its standard configuration and any assigned classes. Cobbler can also control Puppet, via external files, to allow all of this to be configured via Cobbler on the command line when you add a host. If you control Nagios via Puppet, it can generate all of the Nagios configurations for it as well.</p><p>As far as I'm concerned, generating configuration files lies solely with the configuration management system, e.g. Puppet or your own tools (stored in version control!). I use Puppet for everything possible, and for things that I am too lazy to put together in Puppet I generate them via custom tools and have the output stored in svn (Apache vhosts, etc.).</p><p>It's also important to make things as generic as possible and to use standard tools wherever possible, e.g. SNMP for monitoring.</p></htmltext>
<tokentext>Puppet can do all of that for you , including adding the host to nagios if you manage nagios 's configuration with Puppet that is.For my installations I 'm currently using Cobbler to deploy a base install , which handles installing the OS and its configuration ( IP , hostname , etc .
) Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor specific drivers/packages ( eg the HP PSP ) and does an initial run of puppet , which automatically registers with puppermaster .
The node will pull down everything else it needs based on its standard configuration and any assigned classes .
Cobbler can also control Puppet , via external files , to allow all of this to be configured via Cobbler on the command line when you add a host .
If you control Nagios via Puppet , it can generate all of the nagios configurations for it as well.As far as I 'm concerned generating configuration files lies solely with the configuration management system , eg Puppet or your own tools ( stored in version control !
) I use Puppet for everything possible and for things that I am too lazy to put together in Puppet I generate them via custom tools and have the output stored in svn ( apache vhosts , etc .
) It 's also important to make things as generic as possible and try to use standard tools wherever possible , eg SNMP for monitoring .</tokentext>
<sentencetext>Puppet can do all of that for you, including adding the host to nagios if you manage nagios's configuration with Puppet that is.For my installations I'm currently using Cobbler to deploy a base install, which handles installing the OS and its configuration (IP, hostname, etc.
) Cobbler also installs a number of post-install scripts which then run on first boot to install things like vendor specific drivers/packages (eg the HP PSP) and does an initial run of puppet, which automatically registers with puppermaster.
The node will pull down everything else it needs based on its standard configuration and any assigned classes.
Cobbler can also control Puppet, via external files, to allow all of this to be configured via Cobbler on the command line when you add a host.
If you control Nagios via Puppet, it can generate all of the nagios configurations for it as well.As far as I'm concerned generating configuration files lies solely with the configuration management system, eg Puppet or your own tools (stored in version control!
) I use Puppet for everything possible and for things that I am too lazy to put together in Puppet I generate them via custom tools and have the output stored in svn (apache vhosts, etc.
)It's also important to make things as generic as possible and try to use standard tools wherever possible, eg SNMP for monitoring.</sentencetext>
</comment>
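The comment above has Puppet/Cobbler emitting the Nagios configuration; the core of that step is just template expansion over an inventory. A minimal POSIX sh sketch of the idea (the `hosts.txt` inventory file, its one-pair-per-line format, and the `generic-host` template name are all assumptions for illustration, not anything from the thread):

```shell
#!/bin/sh
# Hypothetical inventory: one "hostname ipaddress" pair per line.
printf 'web01 10.0.0.1\ndb01 10.0.0.2\n' > hosts.txt

# Emit one Nagios host definition per inventory line.
while read -r name addr; do
  cat <<EOF
define host {
    use        generic-host
    host_name  $name
    address    $addr
}
EOF
done < hosts.txt > hosts.cfg
```

A configuration management system does the same expansion, but with the inventory coming from its node database rather than a flat file.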
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663669</id>
	<title>too variable to automate</title>
	<author>bzipitidoo</author>
	<datestamp>1247310240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>In the small shops where I have worked, I find the uses and specific hardware a little too variable to easily automate configurations.  One machine is a database server, another is part of a file server cluster, another is a web server, and yet another is a firewall and spam filter.  One will have a single large hard drive, another will use software RAID, and the others will have hardware RAID.  Some have multiple network connections.  A large organization that sets up many identical servers every day might find automatic configuration useful.  But in that case, why not just use imaging?  It's much faster than installing an OS over and over.</p><p>If that isn't enough, things change quickly.  New versions of OSes come out a few times a year.  Specific hardware might be available only in a six-month window.  Expect any automatic configuration to take lots of maintenance or to quickly rot.</p></htmltext>
<tokentext>In the small shops where I have worked , I find the uses and specific hardware a little too variable to easily automate configurations .
One machine is a database server , another is part of a file server cluster , another is a web server , and yet another is a firewall and spam filter .
One will have a single large hard drive , another will use software RAID , the others will have hardware RAID .
Some have multiple network connections .
A large organization that sets up many identical servers every day might find automatic configuration useful .
But in that case , why not just use imaging ?
Much faster than installing an OS over and over .
If that is n't enough , things change so quickly .
New versions of OSes come out a few times a year .
Specific hardware might be available only in a 6 month window .
Expect any automatic configuration to take lots of maintenance or quickly rot .</tokentext>
<sentencetext>In the small shops where I have worked, I find the uses and specific hardware a little too variable to easily automate configurations.
One machine is a database server, another is part of a file server cluster, another is a web server, and yet another is a firewall and spam filter.
One will have a single large hard drive, another will use software RAID, the others will have hardware RAID.
Some have multiple network connections.
A large organization that sets up many identical servers every day might find automatic configuration useful.
But in that case, why not just use imaging?
Much faster than installing an OS over and over.
If that isn't enough, things change so quickly.
New versions of OSes come out a few times a year.
Specific hardware might be available only in a 6 month window.
Expect any automatic configuration to take lots of maintenance or quickly rot.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668495</id>
	<title>UniCluster</title>
	<author>CE@UIC</author>
	<datestamp>1247425320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There is an open source cluster management stack called UniCluster available at http://grid.org (disclosure: I work for the company that makes UniCluster). It's intended for managing HPC clusters, but it can do everything that you're looking for in one tool. It has support for Ganglia, Nagios, and Cacti already built in, and adding new third-party components is pretty simple. It has a tool to push config files around and will do bare-metal provisioning (i.e. set up PXE and kickstart for you).</p><p>Tom</p></htmltext>
<tokentext>There is an open source cluster management stack called UniCluster available at http : //grid.org .
( disclosure : I work for the company that makes UniCluster ) .
Its intended for managing HPC clusters but it can do everything that you 're looking for in one tool .
It has support for ganglia , nagios , cacti already built in and adding new third party components is pretty simple .
It has a tool to push config files around and will do bare metal provisioning ( ie .
setup PXE and kickstart for you ) .Tom</tokentext>
<sentencetext>There is an open source cluster management stack called UniCluster available at http://grid.org.
(disclosure:  I work for the company that makes UniCluster).
Its intended for managing HPC clusters but it can do everything that you're looking for in one tool.
It has support for ganglia, nagios, cacti already built in and adding new third party components is pretty simple.
It has a tool to push config files around and will do bare metal provisioning (ie.
setup PXE and kickstart for you).Tom</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665615</id>
	<title>Re:LDAP</title>
	<author>ckaminski</author>
	<datestamp>1247337840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Have you done this or are you just talking out of your ass? j/k :) Make sure your app doesn't "seek()"? How would this work with Apache?</htmltext>
<tokentext>Have you done this or are you just talking out of your ass ?
j/k : ) Make sure your app does n't " seek ( ) " ?
How 'd this work with apache ?
?</tokentext>
<sentencetext>Have you done this or are you just talking out of your ass?
j/k :) Make sure your app doesn't "seek()"?
How'd this work with apache?
?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607</id>
	<title>M4 baby, M4</title>
	<author>Anonymous</author>
	<datestamp>1247309700000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>Everyone seems to have forgotten about M4, an extremely handy standard Unix tool for when you need a text file with some parts changed on a regular basis. I'm a developer and I use M4 in my projects.</p><p>In a build process, for example, you often have text files which are the input for some specialized tool. These could be XML text files for your object-relational mapping tool. These probably won't support any kind of variable input, and this is where M4 comes in handy.</p><p>Create a file with the extension ".m4" containing macros like these (mind the quotes; M4 is kind of picky about that):</p><p>
&nbsp; &nbsp; define(`PREFIX', `jackv')</p><p>Then let M4 replace all instances of PREFIX:</p><p>
&nbsp; &nbsp; $ m4 mymacros.m4 orm-tool.xml</p><p>By default, m4 prints to the screen (standard output). Use the shell to redirect to a new file:</p><p>
&nbsp; &nbsp; $ m4 mymacros.m4 orm-tool.xml &gt; personalized-orm-tool.xml</p><p>Sometimes, it's nice to define a macro based on an environment variable. That's possible too. The following command would suit your needs:</p><p>
&nbsp; &nbsp; [jackv@testbox1]$ m4 -DPREFIX="$USERNAME" mymacros.m4 orm-tool.xml<br>The shell expands the variable $USERNAME, and the -D option tells M4 that the macro PREFIX is defined as its value (here, jackv).</p></htmltext>
<tokentext>Everyone seems to have forgotten about M4 , an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis .
I 'm a developer and I used M4 in my projects.In a build process for example you often have text files which are the input for some specialized tool .
These could be text files in XML for your object-relational mapping tool .
These probably wo n't support some kind of variable input and this is where M4 comes in handy.Create a file with the extension " .m4 " containing macro 's like these ( mind the quotes , M4 is kind of picky on that ) :     define ( ` PREFIX ' , ` jackv ' ) Then let M4 replace all instances of PREFIX :     $ m4 mymacros.m4 orm-tool.xmlBy default , m4 prints to the screen ( standard output ) .
Use the shell to redirect to a new file :     $ m4 mymacros.m4 orm-tool.xml &gt; personalized-orm-tool.xmlSometimes , it 's nice to define a macro based on an environment variable .
That 's possible too .
The following command would suit your needs :     [ jackv @ testbox1 ] $ m4 -DPREFIX = " $ USERNAME " mymacros.m4 orm-tool.xmlThe shell will expand the variable $ USERNAME and the -D option tells M4 that the macro PREFIX is defined as jackv .</tokentext>
<sentencetext>Everyone seems to have forgotten about M4, an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis.
I'm a developer and I used M4 in my projects.In a build process for example you often have text files which are the input for some specialized tool.
These could be text files in XML for your object-relational mapping tool.
These probably won't support some kind of variable input and this is where M4 comes in handy.Create a file with the extension ".m4" containing macro's like these (mind the quotes, M4 is kind of picky on that):
    define(`PREFIX', `jackv')Then let M4 replace all instances of PREFIX:
    $ m4 mymacros.m4 orm-tool.xmlBy default, m4 prints to the screen (standard output).
Use the shell to redirect to a new file:
    $ m4 mymacros.m4 orm-tool.xml &gt; personalized-orm-tool.xmlSometimes, it's nice to define a macro based on an environment variable.
That's possible too.
The following command would suit your needs:
    [jackv@testbox1]$ m4 -DPREFIX="$USERNAME" mymacros.m4 orm-tool.xmlThe shell will expand the variable $USERNAME and the -D option tells M4 that the macro PREFIX is defined as jackv.</sentencetext>
</comment>
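The M4 workflow in the comment above can be run end to end as a small script. A sketch, assuming m4 is installed; the file names follow the comment, but the `<mapping>` template line is a hypothetical stand-in:

```shell
#!/bin/sh
# Macro definitions; note M4's `...' quoting, and the trailing `dnl'
# which suppresses the blank line the define() would otherwise emit.
cat > mymacros.m4 <<'EOF'
define(`PREFIX', `jackv')dnl
EOF

# A template file that uses the macro (hypothetical content).
cat > orm-tool.xml <<'EOF'
<mapping user="PREFIX"/>
EOF

# m4 prints to standard output; redirect to create the personalized file.
m4 mymacros.m4 orm-tool.xml > personalized-orm-tool.xml
```

After this, `personalized-orm-tool.xml` contains the template with every `PREFIX` replaced by `jackv`.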
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664003</id>
	<title>Re:Create a single boot image</title>
	<author>SanityInAnarchy</author>
	<datestamp>1247313240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>Boot to ramdisk... Depending on how big your image is and how much RAM you've got.</p></div><p>In what way is that better than booting to ramfs? Then, if you have a local disk, map it as swap. Done.</p></htmltext>
<tokentext>Boot to ramdisk... Depending on how big your image is and how much ram you 've got.In what way is that better than booting to ramfs ?
Then , if you have a local disk , map it as swap .
Done .</tokentext>
<sentencetext>Boot to ramdisk... Depending on how big your image is and how much ram you've got.In what way is that better than booting to ramfs?
Then, if you have a local disk, map it as swap.
Done.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663685</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663183</id>
	<title>Here, let me google that for you</title>
	<author>Anonymous</author>
	<datestamp>1247306820000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>-1</modscore>
	<htmltext><p><a href="http://www.google.com/search?q=how+do+you+create+config+files+automatically" title="google.com" rel="nofollow">http://www.google.com/search?q=how+do+you+create+config+files+automatically</a> [google.com]</p></htmltext>
<tokentext>http : //www.google.com/search ? q = how + do + you + create + config + files + automatically [ google.com ]
<sentencetext>http://www.google.com/search?q=how+do+you+create+config+files+automatically [google.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663775</id>
	<title>Novell ZENwork Linux Management</title>
	<author>Anonymous</author>
	<datestamp>1247311320000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Novell's ZENworks Linux Management (ZLM) is great for deployment, patching, and configuration management.  It works with SUSE Linux Enterprise and Red Hat Enterprise Linux.  Combine this with AutoYaST and a network install point, and it should do everything you need and more.<br>I use it to manage a large deployment of SUSE Linux Enterprise, with a small number of Red Hat systems thrown in.  It has a steep learning curve and is poorly documented, but once you have it up and running, it will make your life much easier.</p></htmltext>
<tokentext>Novell 's ZENworks Linux Management ( ZLM ) is great for deployment , patching , and configuration management .
It works with SUSE Linux Enterprise and Redhat Linux Enterprise .
Combine this with Autoyast and a network install point,and it should do everything you need and more.I use it to manage a large deployment of SUSE Linux Enterprise , with a small number of Redhat systems thrown in .
It has a steep learning curve and is poorly documented , but once you have it up and running , it will make your life much easier .</tokentext>
<sentencetext>Novell's ZENworks Linux Management (ZLM) is great for deployment, patching, and configuration management.
It works with SUSE Linux Enterprise and Redhat Linux Enterprise.
Combine this with Autoyast and a network install point,and it should do everything you need and more.I use it to manage a large deployment of SUSE Linux Enterprise, with a small number of Redhat systems thrown in.
It has a  steep learning curve and is poorly documented, but once you have it up and running, it will make your life much easier.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663415</id>
	<title>xorg</title>
	<author>FudRucker</author>
	<datestamp>1247308440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>#!/bin/sh
X -configure
cp /root/xorg.conf.new /etc/X11/xorg.conf</htmltext>
<tokentext># ! /bin/sh X -configure \ cp /root/xorg.conf.new /etc/X11/xorg.conf</tokentext>
<sentencetext>#!/bin/sh
X -configure \
cp /root/xorg.conf.new /etc/X11/xorg.conf</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668551</id>
	<title>Re:Templates</title>
	<author>vrmlguy</author>
	<datestamp>1247425800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like <a href="http://genshi.edgewall.org/" title="edgewall.org">Genshi</a> [edgewall.org].  Run it periodically against the database, check in changes and email diffs to the admin.</p></div><p>I've always used <a href="http://gcc.gnu.org/onlinedocs/cpp/" title="gnu.org">cpp</a> [gnu.org] as my template engine, but then again, I've been doing this since the '80s.</p></htmltext>
<tokentext>I 've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [ edgewall.org ] .
Run it periodically against the database , check in changes and email diffs to the admin.I 've always used cpp [ gnu.org ] as my template engine , but then again , I 've been doing this since the '80 's .</tokentext>
<sentencetext>I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org].
Run it periodically against the database, check in changes and email diffs to the admin.I've always used cpp [gnu.org] as my template engine, but then again, I've been doing this since the '80's.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663423</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28669659</id>
	<title>Re:M4 baby, M4</title>
	<author>arth1</author>
	<datestamp>1247392320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>And this is easier than creating a batch script HOW, exactly?</p><p>I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything.  His idea was to use substitutions like you describe, thinking it was easier that way.  I told him I could do the same with a single sed line.  He then said "A-ha, but what if you need a second replacement -- all *I* have to do is add two lines to my m4 source file and regenerate it!!!" (yes, he would speak with multiple exclamation points).  Whereupon I pointed out that all I had to do was add /one/ more line to the sed...  And that in all likelihood, when a new and incompatible version of the config file comes out with the next version of the software, the .m4 will have to be rewritten, while the simple sed script will likely keep on working.</p><p>There /is no/ substitute for understanding.  Any attempt at introducing automation without understanding will invariably introduce more points of failure, and make it harder to upgrade, migrate, or troubleshoot.  And if you understand, why, then you don't /need/ abstractions.  They get in the way of quicker and less fragile methods.</p><p>Old-school sysadmin:  Spends 7 hours on understanding something, then 5 minutes on writing a script, and 25 minutes rewriting it to be self-documenting and take into account any possible contingencies or race conditions.  Management thinks he's slacking, because he is only doing productive work for an hour a day.</p><p>New-school sysadmin:  Spends 5 minutes not understanding something, 5 minutes on Google, then two full days on obtaining and installing OTS software to do magic for him, then applies for a training course to use that software.  Management thinks he's the bee's knees, 'cause not only does he do productive work much more of the time, but he also proactively seeks out training!  And the software ends up running with horrible default configurations, because he never got that training BEFORE he had to use the software the first time.</p></htmltext>
<tokentext>And this is easier than creating a batch script HOW , exactly ? I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything .
His idea was to use substitutions like you subscribe , thinking it was easier that way .
I told him I could do the same with a single sed line .
He then said " A-ha , but what if you need a second replacements -- all * I * have to do is add two lines to my m4 source file and regenerate it ! ! !
" ( yes , he would speak with multiple exclamation points ) .
Whereupon I pointed out that all I had to do was add /one/ more line to the sed... And that in all likelihood , when a new and incompatible version of the config file comes out with the next version of the software , the .m4 will have to be rewritten , while the simple sed script likely will keep on working.There /is no/ substitute for understanding .
Any attempts of introducing automation without understanding will invariably introduce more points of failure , and make it harder to upgrade , migrate , or troubleshoot .
And if you understand , why then you do n't /need/ abstractions .
They get in the way of quicker and less fragile methods.Old school sysadmin : Spends 7 hours on understanding something , then 5 minutes on writing a script , and 25 minutes rewriting it to be self-documenting and take into account any possible contingencies or race conditions .
Management thinks he 's slacking , because he is only doing productive work for an hour a day.New school sysadmin : Spends 5 minutes not understanding something , 5 minutes on Google , then two full days on obtaining and installing OTS software to do magic for him , then applies for a training course to use that software .
Management thinks he 's the bee 's knees , cause not only does he do productive work much more of the time , but he also proactively seeks out training !
And the software ends up running with horrible default configurations , because he never got that training BEFORE he had to use the software the first time .</tokentext>
<sentencetext>And this is easier than creating a batch script HOW, exactly?I had a discussion with a sysadmin-wannabe who wanted to use abstractions on absolutely everything.
His idea was to use substitutions like you subscribe, thinking it was easier that way.
I told him I could do the same with a single sed line.
He then said "A-ha, but what if you need a second replacements -- all *I* have to do is add two lines to my m4 source file and regenerate it!!!
" (yes, he would speak with multiple exclamation points).
Whereupon I pointed out that all I had to do was add /one/ more line to the sed...  And that in all likelihood, when a new and incompatible version of the config file comes out with the next version of the software, the .m4 will have to be rewritten, while the simple sed script likely will keep on working.There /is no/ substitute for understanding.
Any attempts of introducing automation without understanding will invariably introduce more points of failure, and make it harder to upgrade, migrate, or troubleshoot.
And if you understand, why then you don't /need/ abstractions.
They get in the way of quicker and less fragile methods.Old school sysadmin:  Spends 7 hours on understanding something, then 5 minutes on writing a script, and 25 minutes rewriting it to be self-documenting and take into account any possible contingencies or race conditions.
Management thinks he's slacking, because he is only doing productive work for an hour a day.New school sysadmin:  Spends 5 minutes not understanding something, 5 minutes on Google, then two full days on obtaining and installing OTS software to do magic for him, then applies for a training course to use that software.
Management thinks he's the bee's knees, cause not only does he do productive work much more of the time, but he also proactively seeks out training!
And the software ends up running with horrible default configurations, because he never got that training BEFORE he had to use the software the first time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607</parent>
</comment>
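The "single sed line" approach the comment above argues for can be sketched as follows; the `@HOSTNAME@`/`@PORT@` placeholder convention and the file names are assumptions for illustration. Adding another replacement is one more `-e` expression, which is the point the commenter makes against the m4 setup:

```shell
#!/bin/sh
# Hypothetical template with two placeholders.
cat > httpd.conf.tmpl <<'EOF'
ServerName @HOSTNAME@
Listen @PORT@
EOF

# One sed invocation performs all substitutions in a single pass.
sed -e 's/@HOSTNAME@/web01.example.com/' \
    -e 's/@PORT@/8080/' \
    httpd.conf.tmpl > httpd.conf
```

Because sed only rewrites the placeholders it finds, an updated upstream template usually keeps working without changes to the script.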
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664619</id>
	<title>Re:A Database w/ Config File Generators</title>
	<author>TooMuchToDo</author>
	<datestamp>1247320500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Have you thought about using Rocks or Red Hat's Spacewalk to manage the server configs/kickstarts/etc. and then kick that info over to Nagios?</htmltext>
<tokentext>Have you thought about using Rocks or Redhat 's Spacewalk to manage the server configs/kickstarts/etc and then kick that info over to Nagios ?</tokentext>
<sentencetext>Have you thought about using Rocks or Redhat's Spacewalk to manage the server configs/kickstarts/etc and then kick that info over to Nagios?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663347</id>
	<title>a bit of a special case</title>
	<author>ILongForDarkness</author>
	<datestamp>1247307960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>but at my work we use PXE boot and cfengine on one of our CentOS clusters. The nodes PXE boot off of the disk array of the cluster; after the install, the next stage of the PXE/kickstart script installs and runs cfengine, which gives the node all its NFS mounts, etc. I don't see why you couldn't do a similar thing for Nagios configuration and Ganglia. In fact, for clusters, I think that Rocks, which uses CentOS, PXE, and Sun Grid Engine just like our cluster, has the option of having Ganglia for monitoring too, so you can probably steal their setup and see how they automated it.</htmltext>
<tokenext>but at my work we use PXE boot and cfengine on one of our centos clusters .
The nodes PXE boot off of the disk array of the cluster , after the install the next stage of the PXE/kickstart script installs and runs cfengine which gives the node all its NFS mounts , etc .
I do n't see why you could n't do a similar thing for nagios configuration and ganglia .
In fact for clusters I think that Rocks which uses centos , PXE , and Sun Grid Engine just like our cluster has the option of having ganglia for monitoring too so you probably can steal their setup and see how they automated it .</tokentext>
<sentencetext>but at my work we use PXE boot and cfengine on one of our centos clusters.
The nodes PXE boot off of the disk array of the cluster; after the install, the next stage of the PXE/kickstart script installs and runs cfengine, which gives the node all its NFS mounts, etc.
I don't see why you couldn't do a similar thing for nagios configuration and ganglia.
In fact, for clusters, I think that Rocks, which uses centos, PXE, and Sun Grid Engine just like our cluster, has the option of having ganglia for monitoring too, so you probably can steal their setup and see how they automated it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663423</id>
	<title>Templates</title>
	<author>Bogtha</author>
	<datestamp>1247308560000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>
I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like <a href="http://genshi.edgewall.org/" title="edgewall.org">Genshi</a> [edgewall.org].  Run it periodically against the database, check in changes and email diffs to the admin.
</p></htmltext>
<tokenext>I 've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [ edgewall.org ] .
Run it periodically against the database , check in changes and email diffs to the admin .</tokentext>
<sentencetext>
I've had good results with some home-grown scripts that grab the project-specific details from a database and then generate the relevant config files using a templating system like Genshi [edgewall.org].
Run it periodically against the database, check in changes and email diffs to the admin.
</sentencetext>
</comment>
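The database-plus-templates approach described above can be sketched in a few lines. This is a minimal illustration using Python's stdlib `string.Template` rather than Genshi itself; the host rows and the Nagios host-definition layout are invented stand-ins for a query against the project database.

```python
from string import Template

# Hypothetical per-host rows, standing in for a query against the project DB.
hosts = [
    {"hostname": "web01", "address": "10.0.0.11", "hostgroup": "webservers"},
    {"hostname": "db01", "address": "10.0.0.21", "hostgroup": "databases"},
]

# A minimal Nagios host-definition template; a real Genshi template works the
# same way in spirit (placeholders filled from DB fields) with richer syntax.
NAGIOS_HOST = Template(
    "define host {\n"
    "    use         generic-host\n"
    "    host_name   $hostname\n"
    "    address     $address\n"
    "    hostgroups  $hostgroup\n"
    "}\n"
)

def render_nagios_config(rows):
    """Generate one hosts.cfg body from the database rows."""
    return "\n".join(NAGIOS_HOST.substitute(row) for row in rows)

if __name__ == "__main__":
    print(render_nagios_config(hosts))
```

The same row set can feed a second template for Ganglia or PXE, which is what makes the single-database approach pay off: one record per host, many generated configs, and the periodic run plus check-in gives you diffs for free.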
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663323</id>
	<title>Generate config files</title>
	<author>atomic-penguin</author>
	<datestamp>1247307780000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>4</modscore>
	<htmltext><p>That is what configuration management is supposed to do; as far as I know, puppet and cfengine do this already.  I believe puppet compiles configuration changes and sends its hosts their configuration automatically, every 30 minutes.</p><p>Don't know what Unix or Linux vendor you're using puppet with.  Whenever you do your network install, assuming you have some unattended install process, there should be some way to run post installation scripts.  Create a post install script that will join your newly installed hosts to your puppet server.  Run this post install script with kickstart, preseed, etc. at the end of the install process.  Once newly installed hosts are joined to your central puppet server, then puppet can manage the rest of the configurations.</p></htmltext>
<tokenext>That is what configuration management is supposed to do , as far as I know puppet and cfengine do this already .
I believe puppet compiles configuration changes and sends its hosts their configuration automatically , every 30 minutes.Do n't know what Unix or Linux vendor you 're using puppet with .
Whenever you do your network install , assuming you have some unattended install process , there should be some way to run post installation scripts .
Create a post install script that will join your newly installed hosts to your puppet server .
Run this post install script with kickstart , preseed , etc .
at the end of the install process .
Once newly installed hosts are joined to your central puppet server , then puppet can manage the rest of the configurations .</tokentext>
<sentencetext>That is what configuration management is supposed to do, as far as I know puppet and cfengine do this already.
I believe puppet compiles configuration changes and sends its hosts their configuration automatically, every 30 minutes.Don't know what Unix or Linux vendor you're using puppet with.
Whenever you do your network install, assuming you have some unattended install process, there should be some way to run post installation scripts.
Create a post install script that will join your newly installed hosts to your puppet server.
Run this post install script with kickstart, preseed, etc. at the end of the install process.
Once newly installed hosts are joined to your central puppet server, then puppet can manage the rest of the configurations.</sentencetext>
</comment>
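The post-install flow described above (unattended install, then a %post hook that enrolls the host with the puppet master) can be sketched as a kickstart fragment. This is a hedged illustration, not the poster's actual script: the master name `puppet.example.com` is invented, and package/service names vary by distro and puppet version.

```shell
%post
# Hypothetical kickstart %post sketch: join the freshly installed host
# to a puppet master so configuration management takes over from here.
# "puppet.example.com" is a placeholder; adjust package manager and
# service commands for your distribution and puppet version.
yum -y install puppet
cat >> /etc/puppet/puppet.conf <<'EOF'
[main]
server = puppet.example.com
EOF
chkconfig puppet on
%end
```

On Debian-family systems the equivalent would go in a preseed late_command with apt-get and update-rc.d instead.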
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663591</id>
	<title>Pick and Choose the best</title>
	<author>Anonymous</author>
	<datestamp>1247309640000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Just go with whatever works best for your environment.</p><p>OpenNMS for example uses discovery tools to automatically find new hosts, which works well unless you have a couple of hosts that have specific 1-off monitoring requirements.  That makes it a heck of a lot easier to use compared to Nagios, which is a pain to install and manage.</p></htmltext>
<tokenext>Just go with whatever works best for your environment.OpenNMS for example uses discovery tools to automatically find new hosts , which works well unless you have a couple of hosts that have specific 1-off monitoring requirements .
That makes it a heck of a lot easier to use compared to Nagios , which is a pain to install and manage .</tokentext>
<sentencetext>Just go with whatever works best for your environment.OpenNMS for example uses discovery tools to automatically find new hosts, which works well unless you have a couple of hosts that have specific 1-off monitoring requirements.
That makes it a heck of a lot easier to use compared to Nagios, which is a pain to install and manage.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28667965</id>
	<title>Reading it again</title>
	<author>mindstrm</author>
	<datestamp>1247420040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Reading the original post again - I'm a little unclear what the question is.</p><p>If the question is "How can I manage all this stuff" - you can manage it through puppet.</p><p>If the question is "Is there something that can automatically do EVERYTHING for me" then the answer is "No" - no matter how much you want to abstract things, at some point, you are going to have to plan and put the system together.</p><p>You could roll something sweet with OpenQRM to make it all drag and drop - but you'd have to put in the wrench time to model it after the types of things your organisation has/needs, and you'd have to roll quite a bit of infrastructure out underneath it to make it work.</p><p>What you are really asking, I think, is are you missing something in the big picture - and I don't think you are - it's just a matter of scale.</p></htmltext>
<tokenext>Reading the original post again - I 'm a little unclear what the question is.If the question is " How can I manage all this stuff " - you can manage it through puppet.If the question is " Is there something that can automaticaly do EVERYTHING for me " then the answer is " No " - no matter how much you want to abstract things , at some point , you are going to have to plan and put the system together.You could roll something sweet with OpenQRM to make it all drag and drop - but you 'd have to put in the wrench time to model it after the types of things your organisation has/needs , and you 'd have to roll quite a bit of infrastructure out underneath it to make it work.What you are really asking , I think , is are you missing something in the big picture - and I do n't think you are - it 's just a matter of scale .</tokentext>
<sentencetext>Reading the original post again - I'm a little unclear what the question is.If the question is "How can I manage all this stuff" - you can manage it through puppet.If the question is "Is there something that can automaticaly do EVERYTHING for me" then the answer is "No" - no matter how much you want to abstract things, at some point, you are going to have to plan and put the system together.You could roll something sweet with OpenQRM to make it all drag and drop - but you'd have to put in the wrench time to model it after the types of things your organisation has/needs, and you'd have to roll quite a bit of infrastructure out underneath it to make it work.What you are really asking, I think, is are you missing something in the big picture  - and I don't think you are - it's just a matter of scale.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28666519</id>
	<title>Re:M4 baby, M4</title>
	<author>Bazer</author>
	<datestamp>1247401440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You'd get a cookie if I had my mod points.
I would be twice as productive if I knew all the tool sets that come with a standard Unix installation.
Problem is, most of those tools are older than me and getting to know them takes a lot of time.</htmltext>
<tokenext>You 'd get a cookie if I had my mod points .
I would be twice as productive if I knew all the tool sets that come with a standard Unix installation .
Problem is , most of those tools are older then me and getting to know them takes a lot of time .</tokentext>
<sentencetext>You'd get a cookie if I had my mod points.
I would be twice as productive if I knew all the tool sets that come with a standard Unix installation.
Problem is, most of those tools are older than me and getting to know them takes a lot of time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665743</id>
	<title>PECL</title>
	<author>Anonymous</author>
	<datestamp>1247340300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>.pl's and a PHP interface that calls them.</p></htmltext>
<tokenext>.pl 's and a PHP interface that calls them .</tokentext>
<sentencetext>.pl's and a PHP interface that calls them.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668595</id>
	<title>Wrong direction</title>
	<author>vlm</author>
	<datestamp>1247426340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>But each of these tools has to be configured independently or at least configuration has to be generated.</p></div><p>You write that like it's bad or something.  Decentralized is always more reliable overall.</p><p>The correct way is to work it through in reverse.  Automated tools should find things they can monitor, and then humans think about what to do.</p><p>NMAP periodically dumps its results in a DB.  Watch your CDP too.  Maybe sample your ARP cache on your switches.  And keep an eye on your RANCID router configs.</p><p>One simple script analyzes the nagios config and emails a complaint to either one individual, a mailing list, or a gateway that autogenerates a ticket.  The script sends one alert for each issue it finds, something like "WTF nmap found a device at 10.11.12.13 that is not configured or commented as ignore in Nagios".  I haven't met a plain text config file yet that doesn't allow comments, so if you desire not to monitor something you have a syntax in the config file "# ignore 10.11.12.14" and your script understands that.</p><p>Nothing wrong with your script generating alerts that contain sample "cut-n-paste" info to add to your configs.</p><p>Repeat for reverse DNS, the munin monitoring system, MRTG polling of anything with an open SNMP port, etc.</p><p>Also you need a well-backed-up and replicated wiki with a page for every device your network monitoring tool detects.</p><p>Finally don't forget that if something has been "red" in nagios for perhaps a week and/or it's gone from the ARP table for a week, maybe it's time to formally delete it, also necessitating alert emails.</p><p>Conveniently this scheme also "forces" people to explain what they think they are doing, to at least one other sentient being, which can be very educational for all concerned if the end users are doing something crazy.</p>
	</htmltext>
<tokenext>But each of these tools has to be configured independently or at least configuration has to be generated.You write that like its bad or something .
Decentralized is always more reliable overall.The correct way is to work it thru in reverse .
Automated tools should find things they can monitor , and then humans think about what to do.NMAP periodically dumps its results in a DB .
Watch your CDP too .
Maybe sample your ARP cache on your switches .
And keep an eye on your RANCID router configs.One simple script analyzes the nagios config and emails a complaint to either one individual , a mailing list , or a gateway that autogenerates a ticket .
The script sends one alert for each issue it finds , something like " WTF nmap found a device at 10.11.12.13 that is not configured or commented as ignore in Nagios " .
I have n't met a plain text config file yet , that does n't allow comments , so if you desire not to monitor something you have a syntax in the config file " # ignore 10.11.12.14 " and your script understands that.Nothing wrong with your script generating alerts that contain sample " cut-n-paste " info to add to your configs.Repeat for reverse DNS , munin monitoring system , MRTG polling of anything with an open SNMP port , etc.Also you need well backed up and replicated wiki with a page for every device your network monitoring tool detects.Finally do n't forget that if something has been " red " in nagios for perhaps a week and/or its gone from the ARP table for a week , maybe it 's time to formally delete it , also necessitating alert emails.Conveniently this scheme also " forces " people to explain what they think they are doing , to at least one other sentient being , which can be very educational for all concerned if the end users are doing something crazy .</tokentext>
<sentencetext>But each of these tools has to be configured independently or at least configuration has to be generated.You write that like its bad or something.
Decentralized is always more reliable overall.The correct way is to work it thru in reverse.
Automated tools should find things they can monitor, and then humans think about what to do.NMAP periodically dumps its results in a DB.
Watch your CDP too.
Maybe sample your ARP cache on your switches.
And keep an eye on your RANCID router configs.One simple script analyzes the nagios config and emails a complaint to either one individual, a mailing list, or a gateway that autogenerates a ticket.
The script sends one alert for each issue it finds, something like "WTF nmap found a device at 10.11.12.13 that is not configured or commented as ignore in Nagios".
I haven't met a plain text config file yet, that doesn't allow comments, so if you desire not to monitor something you have a syntax in the config file "# ignore 10.11.12.14" and your script understands that.Nothing wrong with your script generating alerts that contain sample "cut-n-paste" info to add to your configs.Repeat for reverse DNS, munin monitoring system, MRTG polling of anything with an open SNMP port, etc.Also you need well backed up and replicated wiki with a page for every device your network monitoring tool detects.Finally don't forget that if something has been "red" in nagios for perhaps a week and/or its gone from the ARP table for a week, maybe it's time to formally delete it, also necessitating alert emails.Conveniently this scheme also "forces" people to explain what they think they are doing, to at least one other sentient being, which can be very educational for all concerned if the end users are doing something crazy.
	</sentencetext>
</comment>
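The cross-check script described above (discovery results versus the nagios config, with a "# ignore <ip>" comment convention for deliberate omissions) can be sketched briefly. The `address` directive matching and the ignore-comment syntax follow the comment's own example; the file layout in the demo is otherwise invented.

```python
import re

def unaccounted_hosts(discovered_ips, nagios_cfg_text):
    """Return discovered IPs that are neither monitored nor ignored.

    Conventions from the comment above: monitored hosts appear in
    'address <ip>' directives; deliberate omissions are marked with
    a '# ignore <ip>' comment in the same config file.
    """
    monitored = set(re.findall(r"^\s*address\s+(\S+)", nagios_cfg_text, re.M))
    ignored = set(re.findall(r"^\s*#\s*ignore\s+(\S+)", nagios_cfg_text, re.M))
    return sorted(set(discovered_ips) - monitored - ignored)

if __name__ == "__main__":
    cfg = """\
define host {
    host_name  web01
    address    10.11.12.10
}
# ignore 10.11.12.14
"""
    for ip in unaccounted_hosts(["10.11.12.10", "10.11.12.13", "10.11.12.14"], cfg):
        print(f"WTF nmap found a device at {ip} that is not in Nagios")
```

A real version would pull `discovered_ips` from the nmap results DB and hand each finding to the mail or ticket gateway, optionally attaching a cut-and-paste host definition as the comment suggests.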
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668417</id>
	<title>Zenoss/Puppet</title>
	<author>F.O.Dobbs</author>
	<datestamp>1247424660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>There's a Zenoss/Puppet integration here: <a href="http://github.com/mamba/puppet-zenoss/tree/master" title="github.com" rel="nofollow">http://github.com/mamba/puppet-zenoss/tree/master</a> [github.com]</p></htmltext>
<tokenext>There 's a Zenoss/Puppet integration here : http : //github.com/mamba/puppet-zenoss/tree/master [ github.com ]</tokentext>
<sentencetext>There's a Zenoss/Puppet integration here: http://github.com/mamba/puppet-zenoss/tree/master [github.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663361</id>
	<title>OpenNMS</title>
	<author>Anonymous</author>
	<datestamp>1247308020000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>OpenNMS runs a scan every 10 hours on my network.  You tell it what your network ranges are and it finds hosts and brings them into the configuration by itself without having to generate config files.  If you partition your network correctly and only use certain IP ranges for production hosts you can bring a system into monitoring quickly.  Depending on the size of the netblocks you could also set OpenNMS to scan more frequently.  Let's say you assign a window of 8 hours for a host to be in production.  Just have OpenNMS scan every 8 hours and you won't be bugged by the NOC paging you about the new server you keep rebooting.</p></htmltext>
<tokenext>OpenNMS runs a scan every 10 hours on my network .
You tell it what your network ranges are and it finds hosts and brings them into the configuration by itself without having to generate config files .
If you partition your network correctly and only use certain IP ranges for production hosts you can bring a system into monitoring quickly .
Depending on the size of the netblocks you could also set OpenNMS to scan more frequently .
Lets say you assign a window of 8 hours for a host to be in production .
Just have openNMS scan every 8 hours and you wo n't be bugged by the NOC paging you about the new server you keep rebooting .</tokentext>
<sentencetext>OpenNMS runs a scan every 10 hours on my network.
You tell it what your network ranges are and it finds hosts and brings them into the configuration by itself without having to generate config files.
If you partition your network correctly and only use certain IP ranges for production hosts you can bring a system into monitoring quickly.
Depending on the size of the netblocks you could also set OpenNMS to scan more frequently.
Let's say you assign a window of 8 hours for a host to be in production.
Just have openNMS scan every 8 hours and you won't be bugged by the NOC paging you about the new server you keep rebooting.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665049</id>
	<title>Trade secret</title>
	<author>Anonymous</author>
	<datestamp>1247328180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>When did Slashdot become #techsupport for #india?</p><p>Seriously, I've done the R&amp;D to find out what works and doesn't.  Why should I tell you, Mr. Anonymous?  Why not hire someone instead of insulting them.</p></htmltext>
<tokenext>When did Slashdot become # techsupport for # india ? Seriously , I 've done the R&amp;D to find out what works and does n't .
Why should I tell you , Mr. Anonymous ? Why not hire someone instead of insulting them .</tokentext>
<sentencetext>When did Slashdot become #techsupport for #india?Seriously, I've done the R&amp;D to find out what works and doesn't.
Why should I tell you, Mr. Anonymous?  Why not hire someone instead of insulting them.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28723501</id>
	<title>Re:M4 baby, M4</title>
	<author>Anonymous</author>
	<datestamp>1247742300000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I use M4 and rdist driven by make.  The whole deal is available at http://ftp.npcguild.org/pub/2008/.</p><p>Each file on the target hosts is represented by a directory under /usr/msrc/path/to/target/; in that directory I use marked-up (m4) files to build the correct instance for each target machine, a make recipe file to drive that creation process, and another make recipe file to drive the installation and upkeep on the target host.  For details see http://msrc.npcguild.org/local/sbin/msrc/msrc.html.</p><p>This tactic also depends on a list of your hosts with some annotations (m4 macro defines) to specify some details about which hosts need which special details.  I've used this for years and never found a file I couldn't master with make+m4+make.</p><p>I have quite a few tools up on the above ftp server that create RPMs for distribution, as most of those are not custom per-host.  But they use the same "msrc" tactic to build because it is so powerful.   -- kevin.braunsdorf@gmail.com</p></htmltext>
<tokenext>I use M4 and rdist driven by make , The whole deal is available at http : //ftp.npcguild.org/pub/2008/.Each file on the target hosts is represented by a directory under /usr/msrc/path/to/target/ , in thatdirectory I use marked-up ( m4 ) files to build the correct instance for each target machine , a makerecipe file to drive that creation process , and another make recipe file to drive the installation andupkeep on the target host .
For details see http : //msrc.npcguild.org/local/sbin/msrc/msrc.html.This tactic also depends on a list of your hosts with some annoatations ( m4 macro defines ) tospecify some details about which hosts need which special details .
I 've used this for years andnever found a file I could n't master with make + m4 + make.I have quite a few tools up on the above ftp server that create RPMs for distribution , as most ofthose are not customer per-host .
But they use the same " msrc " tactic to build because it isso powerful .
-- kevin.braunsdorf @ gmail.com</tokentext>
<sentencetext>I use M4 and rdist driven by make,  The whole deal is available at http://ftp.npcguild.org/pub/2008/.Each file on the target hosts is represented by a directory under /usr/msrc/path/to/target/, in thatdirectory I use marked-up (m4) files to build the correct instance for each target machine, a makerecipe file to drive that creation process, and another make recipe file to drive the installation andupkeep on the target host.
For details see http://msrc.npcguild.org/local/sbin/msrc/msrc.html.This tactic also depends on a list of your hosts with some annoatations (m4 macro defines) tospecify some details about which hosts need which special details.
I've used this for years andnever found a file I couldn't master with make+m4+make.I have quite a few tools up on the above ftp server that create RPMs for distribution, as most ofthose are not customer per-host.
But they use the same "msrc" tactic to build because it isso powerful.
-- kevin.braunsdorf@gmail.com</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607</parent>
</comment>
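The msrc tactic above (per-host m4 markup expanded by make, with the result pushed to the host by rdist) can be illustrated with a tiny, hypothetical exports template; the macro names `HOSTNAME` and `IS_COMPUTE_NODE` are invented for the example and would be defined per host in the annotated host list.

```m4
dnl Hypothetical fragment of an m4-marked-up /etc/exports template.
dnl Expand per host with the macros from the annotated host list, e.g.
dnl   m4 -DHOSTNAME=node01 -DIS_COMPUTE_NODE exports.m4 > exports.node01
/export/home  HOSTNAME`'ifdef(`IS_COMPUTE_NODE', `(rw,no_root_squash)', `(ro)')
```

A make recipe runs this expansion for each target and the second recipe installs the generated file, so one template plus a host list yields a correct, host-specific config on every machine.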
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664699</id>
	<title>Look at SME Server for Inspiration</title>
	<author>grcumb</author>
	<datestamp>1247321940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If you want inspiration about automated configuration management done right, take a look at <a href="http://www.contribs.org/" title="contribs.org">SME Server</a> [contribs.org]. It's got a <a href="http://wiki.contribs.org/SME_Server:Documentation:Developers_Manual:Section2" title="contribs.org">template-based, event-driven configuration management system</a> [contribs.org] with a mature, well-documented API that could easily be appropriated for in-house use.</p><p>The SME Server distro itself is a general-purpose small office server, so it's likely not appropriate for your shop, but their approach to configuration management is simple, well-designed and extremely well-implemented.</p><p> <strong>Full disclosure:</strong> I worked for the company that developed SME Server for a couple of years, and I continue to deploy and support it widely.</p></htmltext>
<tokenext>If you want inspiration about automated configuration management done right , take a look at SME Server [ contribs.org ] .
It 's got a template-based , event-driven configuration management system [ contribs.org ] with a mature , well-documented API that could easily be appropriated for in-house use.The SME Server distro itself is a general-purpose small office server , so it 's likely not appropriate for your shop , but their approach to configuration management is simple , well-designed and extremely well-implemented .
Full disclosure : I worked for the company that developed SME Server for a couple of years , and I continue to deploy and support it widely .</tokentext>
<sentencetext>If you want inspiration about automated configuration management done right, take a look at SME Server [contribs.org].
It's got a template-based, event-driven configuration management system [contribs.org] with a mature, well-documented API that could easily be appropriated for in-house use.
The SME Server distro itself is a general-purpose small office server, so it's likely not appropriate for your shop, but their approach to configuration management is simple, well-designed and extremely well-implemented.
Full disclosure: I worked for the company that developed SME Server for a couple of years, and I continue to deploy and support it widely.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664663</id>
	<title>Reminds me of a sysadmin koan...</title>
	<author>ghostis</author>
	<datestamp>1247321400000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext><p>Reminds me of a sysadmin koan I once found...</p><p>Junior admin: "How do I configure this server?"<br>Master: "Turn it on"</p><p><a href="http://bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/" title="wordpress.com">http://bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/</a> [wordpress.com]</p></htmltext>
<tokenext>Reminds me of a sysadmin koan I once found...Junior admin : " How do I configure this server ?
" Master : " Turn it on " http : //bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/ [ wordpress.com ]</tokentext>
<sentencetext>Reminds me of a sysadmin koan I once found...Junior admin: "How do I configure this server?
"Master: "Turn it on"http://bashedupbits.wordpress.com/2008/07/09/systems-administration-koans/ [wordpress.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665879</id>
	<title>Re:Sounds like an Ubuntu user</title>
	<author>palegray.net</author>
	<datestamp>1247430180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Right, because Debian isn't a mature operating system, and Ubuntu couldn't possibly be based on Debian...<br> <br>

That aside, good luck with your pretty point-and-click crud on servers that don't have X installed (about 99% of deployed Linux servers, probably).</htmltext>
<tokenext>Right , because Debian is n't a mature operating system , and Ubuntu could n't possibly be based on Debian.. . That aside , good luck with your pretty point-and-click crud on servers that do n't have X installed ( about 99 \ % of deployed Linux servers , probably ) .</tokentext>
<sentencetext>Right, because Debian isn't a mature operating system, and Ubuntu couldn't possibly be based on Debian... 

That aside, good luck with your pretty point-and-click crud on servers that don't have X installed (about 99% of deployed Linux servers, probably).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663653</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28666937</id>
	<title>If you have money ... Voyence</title>
	<author>DougReed</author>
	<datestamp>1247408940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>At the risk of sounding like some sort of an advertisement for EMC: if you are working for a company with money...  Voyence is a WAY cool product.  It will do just about anything you could possibly want to do to network devices.  It will even tell you if you screw up something.</p></htmltext>
<tokenext>At the risk of sounding like some sort of an advertisement for EMC , If you are working for a company with money... Voyence is a WAY cool product .
It will do just about anything you could possibly want to network devices .
It will even tell you if you screw up something .</tokentext>
<sentencetext>At the risk of sounding like some sort of an advertisement for EMC, If you are working for a company with money...  Voyence is a WAY cool product.
It will do just about anything you could possibly want to network devices.
It will even tell you if you screw up something.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663991</id>
	<title>RedHat Satellite Server</title>
	<author>giminy</author>
	<datestamp>1247313180000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>3</modscore>
	<htmltext><p>RedHat's satellite server has some pretty options for this, if you dig deeply enough.</p><p>RHSS lets you create configuration files to deploy to all of your machines.  It lets you use macros in deployed configuration files, and you can use server-specific variables (they call them Keys iirc) inside of the configuration files to be deployed on remote servers.  For example, you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED.  If the value is set, it includes an accept rule for the smb ports.  Otherwise, those lines aren't included in the deployed config.  Every server that you deploy that you expect to run an SMB server on, you set the local server variable SMBALLOWED=1.  Satellite server can also be set up to push config files via XMPP (every server on your network stays connected to the satellite via xmpp, the satellite issues commands like 'update blah_config' to the managed server, and the managed server retrieves the latest version of the config file from the satellite server).</p><p>Satellite is pretty darned fancy, but also was pretty buggy back when I used it.  Good luck!</p><p>Reid</p></htmltext>
<tokenext>RedHat 's satellite server has some pretty options for this , if you dig deeply enough.RHSS lets you create configuration files to deploy to all of your machines .
It lets you use macros in deployed configuration files , and you can use server-specific variables ( they call them Keys iirc ) inside of the configuration files to be deployed on remote servers .
For example , you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED .
If the value is set , it includes an accept rule for the smb ports .
Otherwise , those lines are n't included in the deployed config .
Every server that you deploy that you expect to run an SMB server on , you set the local server variable SMBALLOWED = 1 .
Satellite server can also be set up to push config files via XMPP ( every server on your network stays connected to the satellite via xmpp , the satellite issues commands like 'update blah_config ' to the managed server , and the managed server retrieves the latest version of the config file from the satellite server ) . Satellite is pretty darned fancy , but also was pretty buggy back when I used it .
Good luck ! Reid</tokentext>
<sentencetext>RedHat's satellite server has some pretty options for this, if you dig deeply enough. RHSS lets you create configuration files to deploy to all of your machines.
It lets you use macros in deployed configuration files, and you can use server-specific variables (they call them Keys iirc) inside of the configuration files to be deployed on remote servers.
For example, you create a generic firewall configuration with a macro block that queries the variable SMBALLOWED.
If the value is set, it includes an accept rule for the smb ports.
Otherwise, those lines aren't included in the deployed config.
Every server that you deploy that you expect to run an SMB server on, you set the local server variable SMBALLOWED=1.
Satellite server can also be set up to push config files via XMPP (every server on your network stays connected to the satellite via xmpp, the satellite issues commands like 'update blah_config' to the managed server, and the managed server retrieves the latest version of the config file from the satellite server). Satellite is pretty darned fancy, but also was pretty buggy back when I used it.
Good luck! Reid</sentencetext>
</comment>
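The macro-and-keys mechanism described in the RHSS comment above (a generic template plus per-host variables deciding which lines reach the deployed file) can be sketched in a few lines. This is illustrative only; the plain Python conditional stands in for Satellite's actual macro language, and the port numbers are the standard SMB ports, not something from the comment.

```python
# Sketch of per-host conditional config generation: one generic firewall
# template, with the SMB accept rules included only on hosts that set
# the SMBALLOWED variable (the variable name comes from the comment;
# the rule format is simplified iptables-style text, not Satellite syntax).

def render_firewall_config(host_vars):
    """Return firewall config text; SMB rules appear only if SMBALLOWED is set."""
    lines = [
        "-A INPUT -p tcp --dport 22 -j ACCEPT",   # always deployed
    ]
    if host_vars.get("SMBALLOWED") == "1":
        # Only hosts expected to run an SMB server get these lines.
        lines.append("-A INPUT -p tcp --dport 445 -j ACCEPT")
        lines.append("-A INPUT -p udp --dport 137:138 -j ACCEPT")
    lines.append("-A INPUT -j REJECT")
    return "\n".join(lines)

smb_host = render_firewall_config({"SMBALLOWED": "1"})
plain_host = render_firewall_config({})
```

Every host shares one template; only the variable set on the host record changes what gets deployed.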
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665477</id>
	<title>Re:A Database w/ Config File Generators</title>
	<author>Anonymous</author>
	<datestamp>1247334840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>That is an excellent idea! I wonder why original poster didn't think about automating the whole process!</p></htmltext>
<tokenext>That is an excellent idea !
I wonder why original poster did n't think about automating the whole process !</tokentext>
<sentencetext>That is an excellent idea!
I wonder why original poster didn't think about automating the whole process!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665693</id>
	<title>config management</title>
	<author>Sadsfae</author>
	<datestamp>1247339460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>We use a robust configuration management/provisioning system consisting of puppet, cobbler and koan.</p><p>Puppet is easily scalable for just about any sort of server need; cobbler and koan take care of the heavy lifting for provisioning.  It's also fairly easy to write your own puppet types and modules for various tasks.</p><p>With one command we are able to provision a server from bare metal (or vm) to a fully working server, complete with SAN/NAS storage, fully operational daemons and authentication.</p></htmltext>
<tokenext>We use a robust configuration management/provisioning system consisting of puppet , cobbler and koan . Puppet is easily scalable for just about any sort of server need , cobbler and koan take care of the heavy lifting for provisioning .
It 's also fairly easy to write your own puppet types and modules for various tasks . With one command we are able to provision a server from bare metal ( or vm ) to a fully working server , complete with SAN/NAS storage , fully operational daemons and authentication .</tokentext>
<sentencetext>We use a robust configuration management/provisioning system consisting of puppet, cobbler and koan. Puppet is easily scalable for just about any sort of server need, cobbler and koan take care of the heavy lifting for provisioning.
It's also fairly easy to write your own puppet types and modules for various tasks. With one command we are able to provision a server from bare metal (or vm) to a fully working server, complete with SAN/NAS storage, fully operational daemons and authentication.</sentencetext>
</comment>
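The "one command from bare metal" approach above is really the answer to the article's question: drive every tool's config from a single host record. A minimal sketch of that idea, where one dict describes the new server and small generators emit a fragment per tool (the dhcpd-style PXE entry and Nagios host block below are close to the real formats but simplified; the host values are made up):

```python
# One host record drives config generation for multiple tools.
host = {"name": "web01", "mac": "00:16:3e:aa:bb:cc", "ip": "10.0.0.21"}

def pxe_entry(h):
    # dhcpd-style static host entry used for PXE booting (simplified)
    return (f"host {h['name']} {{\n"
            f"  hardware ethernet {h['mac']};\n"
            f"  fixed-address {h['ip']};\n"
            f"}}")

def nagios_host(h):
    # Nagios host object definition (simplified)
    return (f"define host {{\n"
            f"  use        generic-host\n"
            f"  host_name  {h['name']}\n"
            f"  address    {h['ip']}\n"
            f"}}")

# Each generator reads the same record, so adding a server means
# editing one dict, not four separate config files.
configs = {"dhcpd": pxe_entry(host), "nagios": nagios_host(host)}
```

Generators for Puppet node definitions or Ganglia cluster configs would slot in the same way.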
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663653</id>
	<title>Sounds like an Ubuntu user</title>
	<author>Anonymous</author>
	<datestamp>1247310120000</datestamp>
	<modclass>None</modclass>
	<modscore>-1</modscore>
	<htmltext><p>On the mature Linux distributions (e.g. Redhat, Suse and Mandriva), there are numerous wizards, usually written in Perl, that will configure everything you can possibly dream of at the click of a mouse.  You can also use Redhat Kickstart (on any of the above distros) to automatically install and configure a system.</p><p>If you need to deploy lots of new machines, then Ubuntu is the wrong solution...</p></htmltext>
<tokenext>On the mature Linux distributions ( eg Redhat , Suse and Mandriva ) , there are numerous wizards , usually written in Perl , that will configure everything you can possibly dream of at the click of a mouse .
You can also use Redhat Kickstart ( on any of the above distros ) to automatically install and configure a system . If you need to deploy lots of new machines , then Ubuntu is the wrong solution ...</tokentext>
<sentencetext>On the mature Linux distributions (eg Redhat, Suse and Mandriva), there are numerous wizards, usually written in Perl, that will configure everything you can possibly dream of at the click of a mouse.
You can also use Redhat Kickstart (on any of the above distros) to automatically install and configure a system. If you need to deploy lots of new machines, then Ubuntu is the wrong solution...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665345</id>
	<title>Huh!</title>
	<author>liquibyte</author>
	<datestamp>1247332800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Do/you/speak/english and/or any/other/language? AYFKM!!!</htmltext>
<tokenext>Do/you/speak/english and/or any/other/language ?
AYFKM ! ! !</tokentext>
<sentencetext>Do/you/speak/english and/or any/other/language?
AYFKM!!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665699</id>
	<title>Re:too variable to automate</title>
	<author>mindstrm</author>
	<datestamp>1247339640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"We don't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we don't have time for"... ?</p><p>Puppet, for one, is very generic.  Even if you only use it to push out basic packages and standard configs, even if you don't use any of the templating and fancy hooks and stuff - you are saving yourself work down the road, whether it's moving to virtualizing, switching from linux to bsd, or requiring test/qa/production systems, or maybe even a backup solution.  It's got very little to do with rolling out systems every day, and everything to do with consistency and policy enforcement.</p><p>Yes, it will require maintenance as your requirements change - but without it, so does the ragtag set of systems you are running... and unless you are really picky with your documentation and procedures, most of the important details are probably in your head.  If you force yourself to define them in puppet (or something similar) then you can focus your efforts better.</p></htmltext>
<tokenext>" We do n't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we do n't have time for " ... ? Puppet , for one , is very generic .
Even if you only use it to push out basic packages and standard configs , even if you do n't use any of the templating and fancy hooks and stuff - you are saving yourself work down the road , whether it 's moving to virtualizing , switching from linux to bsd , or requiring test/qa/production systems , or maybe even a backup solution .
It 's got very little to do with rolling out systems every day , and everything to do with consistency and policy enforcement . Yes , it will require maintenance as your requirements change - but without it , so does the ragtag set of systems you are running ... and unless you are really picky with your documentation and procedures , most of the important details are probably in your head .
If you force yourself to define them in puppet ( or something similar ) then you can focus your efforts better .
 </tokentext>
<sentencetext>"We don't need configuration management because our configuration is an unmanaged mess and managing it would just be more overhead we don't have time for"... ? Puppet, for one, is very generic.
Even if you only use it to push out basic packages and standard configs, even if you don't use any of the templating and fancy hooks and stuff - you are saving yourself work down the road, whether it's moving to virtualizing, switching from linux to bsd, or requiring test/qa/production systems, or maybe even a backup solution.
It's got very little to do with rolling out systems every day, and everything to do with consistency and policy enforcement. Yes, it will require maintenance as your requirements change - but without it, so does the ragtag set of systems you are running... and unless you are really picky with your documentation and procedures, most of the important details are probably in your head.
If you force yourself to define them in puppet (or something similar) then you can focus your efforts better.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668889</id>
	<title>Re:M4 baby, M4</title>
	<author>illumin8</author>
	<datestamp>1247428920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Everyone seems to have forgotten about M4, an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis. I'm a developer and I used M4 in my projects.</p></div></blockquote><p>Excuse me, but I'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else's M4 macros.  M4 fails at being readable, unlike other config generating tools like Cfengine, which has code that tells even a non-programmer exactly what it does.  Have you ever tried to read sendmail.mc?  If you have you'll know what I'm talking about.</p>
	</htmltext>
<tokenext>Everyone seems to have forgotten about M4 , an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis .
I 'm a developer and I used M4 in my projects . Excuse me , but I 'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else 's M4 macros .
M4 fails at being readable , unlike other config generating tools like Cfengine , which has code that tells even a non-programmer exactly what it does .
Have you ever tried to read sendmail.mc ?
If you have you 'll know what I 'm talking about .</tokentext>
<sentencetext>Everyone seems to have forgotten about M4, an extremely handy standard Unix tool when you need a text file with some parts changed on a regular basis.
I'm a developer and I used M4 in my projects. Excuse me, but I'd rather gouge my eyeballs out of their sockets with a rusty spoon than try to read someone else's M4 macros.
M4 fails at being readable, unlike other config generating tools like Cfengine, which has code that tells even a non-programmer exactly what it does.
Have you ever tried to read sendmail.mc?
If you have you'll know what I'm talking about.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663271</id>
	<title>Dear Slashdot..</title>
	<author>Anonymous</author>
	<datestamp>1247307420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How do I automate away a sysadmin position?</p><p>Love,</p><p>Industry</p><p>--</p><p>Heh, the Captcha word is "unions"</p></htmltext>
<tokenext>How do I automate away a sysadmin position ? Love , Industry -- Heh , the Captcha word is " unions "</tokentext>
<sentencetext>How do I automate away a sysadmin position? Love, Industry -- Heh, the Captcha word is "unions"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663685</id>
	<title>Create a single boot image</title>
	<author>Colin Smith</author>
	<datestamp>1247310420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Boot to ramdisk... Depending on how big your image is and how much ram you've got.</p><p>The problem with puppet, debian/apt etc is the inevitable gradual divergence of systems as time passes; scripts fail, packages don't get installed etc. It's exactly the same problem that life faces: you'll notice that all large multicellular organisms go through a stage where there is initially only a single cell. That's because mutations creep in otherwise and the cells diverge from one another over time. Eventually you're left with a random slime which is widely divergent in code.</p><p>Apply all your updates to a single image, boot the image on all the machines you want to run it on, and they are now all running identical code. Guaranteed. Arrange your clusters such that any one machine can be offline. Plus, if you have an image you're booting, you can roll back to older versions trivially.</p></htmltext>
<tokenext>Boot to ramdisk ... Depending on how big your image is and how much ram you 've got . The problem with puppet , debian/apt etc is the inevitable gradual divergence of systems as time passes ; scripts fail , packages do n't get installed etc .
It 's exactly the same problem that life faces , you 'll notice that all large multicellular organisms go through a stage where there is initially only a single cell .
That 's because mutations creep in otherwise and the cells diverge from one another over time .
Eventually you 're left with a random slime which is widely divergent in code . Apply all your updates to a single image , boot the image on all the machines you want to run it on , they are now all running identical code .
Guaranteed. Arrange your clusters such that any one machine can be offline .
Plus , if you have an image you 're booting , you can roll back to older versions trivially .
 </tokentext>
<sentencetext>Boot to ramdisk... Depending on how big your image is and how much ram you've got. The problem with puppet, debian/apt etc is the inevitable gradual divergence of systems as time passes; scripts fail, packages don't get installed etc.
It's exactly the same problem that life faces, you'll notice that all large multicellular organisms go through a stage where there is initially only a single cell.
That's because mutations creep in otherwise and the cells diverge from one another over time.
Eventually you're left with a random slime which is widely divergent in code. Apply all your updates to a single image, boot the image on all the machines you want to run it on, they are now all running identical code.
Guaranteed. Arrange your clusters such that any one machine can be offline.
Plus, if you have an image you're booting, you can roll back to older versions trivially.
 </sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663229</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663393</id>
	<title>XCAT and post scripts</title>
	<author>clutch110</author>
	<datestamp>1247308260000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>We have XCAT and post scripts set up to do the majority of our work.  It images the machine (PXE generation, DHCP config), installs files based on group, and sets the ganglia config.  I don't have any monitoring set up on compute nodes as I have ganglia open daily to watch for cluster node failures.  Zenoss is done afterwards as I have yet to find a good way to automate that.</p></htmltext>
<tokenext>We have XCAT and post scripts set up to do the majority of our work .
It images the machine ( PXE generation , DHCP config ) , installs files based on group , and sets the ganglia config .
I do n't have any monitoring set up on compute nodes as I have ganglia open daily to watch for cluster node failures .
Zenoss is done afterwards as I have yet to find a good way to automate that .</tokentext>
<sentencetext>We have XCAT and post scripts set up to do the majority of our work.
It images the machine (PXE generation, DHCP config), installs files based on group, and sets the ganglia config.
I don't have any monitoring set up on compute nodes as I have ganglia open daily to watch for cluster node failures.
Zenoss is done afterwards as I have yet to find a good way to automate that.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28679877</id>
	<title>Re:LDAP</title>
	<author>LordKazan</author>
	<datestamp>1247511720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>add to that - live CDs or PXE booting liveCD images.</p><p>one of my previous employers had a server architecture that looked like this [after their upgrade/redesign of their cluster].</p><p>2 redirector nodes - primary and backup<br>4 app nodes - load sharing<br>2 mysql nodes - primary and backup<br>2 storage nodes - primary and backup</p><p>only machines in this cluster with harddrives - the storage nodes.  (the mysql nodes had massive ram - they could buffer most of the tables in RAM for quick access while they were writing the updates to disk on the storage nodes).</p><p>A machine's role was determined by what liveCD was put into it.  need another app node? slap in a liveCD and 30 seconds after boot it's sharing the load.  box get owned? reboot it and it's back to clean state while you roll a liveCD with the security updates.</p><p>a simple extension to this would have everything PXE booting with the default image being the app node image - adding extra capacity to the other machines would take just adding their MAC address to a list for the other image types. (not even sure there is any PXE software that supports doing that... but you can always alter it)</p></htmltext>
<tokenext>add to that - live CDs or PXE booting liveCD images . one of my previous employers had a server architecture that looked like this [ after their upgrade/redesign of their cluster ] . 2 redirector nodes - primary and backup ; 4 app nodes - load sharing ; 2 mysql nodes - primary and backup ; 2 storage nodes - primary and backup .
only machines in this cluster with harddrives - the storage nodes .
( the mysql nodes had massive ram - they could buffer most of the tables in RAM for quick access while they were writing the updates to disk on the storage nodes ) . A machine 's role was determined by what liveCD was put into it .
need another app node ?
slap in a liveCD and 30 seconds after boot it 's sharing the load .
box get owned ?
reboot it and it 's back to clean state while you roll a liveCD with the security updates . a simple extension to this would have everything PXE booting with the default image being the app node image - adding extra capacity to the other machines would take just adding their MAC address to a list for the other image types .
( not even sure there is any PXE software that supports doing that ... but you can always alter it )</tokentext>
<sentencetext>add to that - live CDs or PXE booting liveCD images. one of my previous employers had a server architecture that looked like this [after their upgrade/redesign of their cluster]: 2 redirector nodes - primary and backup; 4 app nodes - load sharing; 2 mysql nodes - primary and backup; 2 storage nodes - primary and backup.
only machines in this cluster with harddrives - the storage nodes.
(the mysql nodes had massive ram - they could buffer most of the tables in RAM for quick access while they were writing the updates to disk on the storage nodes). A machine's role was determined by what liveCD was put into it.
need another app node?
slap in a liveCD and 30 seconds after boot it's sharing the load.
box get owned?
reboot it and it's back to clean state while you roll a liveCD with the security updates. a simple extension to this would have everything PXE booting with the default image being the app node image - adding extra capacity to the other machines would take just adding their MAC address to a list for the other image types.
(not even sure there is any PXE software that supports doing that... but you can always alter it)</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663973</id>
	<title>Puppet cr@p...</title>
	<author>Anonymous</author>
	<datestamp>1247313000000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>-1</modscore>
	<htmltext><p>...submitter is a shill for Puppet.  Real admins achieve network system convergence using Cfengine.</p><p>Anyone using Puppet has been duped by its primary developer...someone who befriended Mark Burgess, author of Cfengine, and then betrayed and stole his code and ideas.</p><p>And still managed to fail at it miserably.</p></htmltext>
<tokenext>...submitter is a shill for Puppet .
Real admins achieve network system convergence using Cfengine . Anyone using Puppet has been duped by its primary developer ... someone who befriended Mark Burgess , author of Cfengine , and then betrayed and stole his code and ideas . And still managed to fail at it miserably .</tokentext>
<sentencetext>...submitter is a shill for Puppet.
Real admins achieve network system convergence using Cfengine. Anyone using Puppet has been duped by its primary developer...someone who befriended Mark Burgess, author of Cfengine, and then betrayed and stole his code and ideas. And still managed to fail at it miserably.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663187</id>
	<title>Emacs or vi...</title>
	<author>Anonymous</author>
	<datestamp>1247306820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>And I type the stuff I need.</p><p>(And I start a war on /. )</p></htmltext>
<tokenext>And I type the stuff I need .
( And I start a war on / .
)</tokentext>
<sentencetext>And I type the stuff I need.
(And I start a war on /.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663487</id>
	<title>FAI - Fully Automatic Installation</title>
	<author>Clark Rawlins</author>
	<datestamp>1247308860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I have successfully used FAI to install Debian servers in the past. For what I needed it worked great.  It is supposed to support other distributions and automatic updates as well but I haven't tried it for either of those uses.</p></htmltext>
<tokenext>I have successfully used FAI to install Debian servers in the past .
For what I needed it worked great .
It is supposed to support other distributions and automatic updates as well but I have n't tried it for either of those uses .</tokentext>
<sentencetext>I have successfully used FAI to install Debian servers in the past.
For what I needed it worked great.
It is supposed to support other distributions and automatic updates as well but I haven't tried it for either of those uses.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28670845</id>
	<title>Cobbler?</title>
	<author>apresrasage</author>
	<datestamp>1247402460000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I use cobbler and cfengine to deploy and maintain a couple of clusters including Xen virtual machines and a<br>
few labs with workstations.<br>
Cobbler does a pretty good job at deployment... cfengine a pretty good job at management...<br>
<br>
<br>
Automatic configuration... uh... I guess cobbler takes the edge off of configuring dhcp/pxe/dns/yum servers<br>
for deployment and updates. Kickstart scripts can be obtained by building one machine, grabbing the anaconda<br>
script from the root directory and fudging it to taste.<br> That's almost automatic ;-) (not really) <br>
On the downside, with cobbler, you get the overenthusiastic release sequences typical of Fedora-related<br>
projects (if it compiles and runs, it is production ready; major features introduced within a minor release and<br>
all that good stuff), so updating is a bit of an adrenalin rush.<br> But, such is the price of freedom (and free beer).
<br>
<br>
Configuring machines using cfengine is a dog (and I learned to love the pup), but it is the best dog we have.<br>
That is all but automatic. I also have puppet deployed to compare... well, it has its upsides, but it is not<br>
better than cfengine. Frankly, I do not benefit much from the main concepts and features behind cfengine<br>
and would probably be as well off with puppet, or even func and such.<br> Having a company backing cfengine<br>
makes me feel a little better now. (I was a little nervous about Mark crossing the streets every day... buses<br>
stop for no one.)<br>
<br>
I don't think that in the current state of affairs automatic configuration is even desirable, as all of the<br>
components involved very rapidly reach configuration complexity that needs auditing.<br>
<br>
I have my working setup, but the next step in improving and upgrading it is a bit of a mystery to me<br>
given the options out there.<br>
<br>
<br>
Anyway... that's my rant.</htmltext>
<tokenext>I use cobbler and cfengine to deploy and maintain a couple of clusters including Xen virtual machines and a few labs with workstations .
Cobbler does a pretty good job at deployment ... cfengine a pretty good job at management .. . Automatic configuration ... uh ... I guess cobbler takes the edge off of configuring dhcp/pxe/dns/yum servers for deployment and updates .
Kickstart scripts can be obtained by building one machine , grabbing the anaconda script from the root directory and fudging it to taste .
That 's almost automatic ; - ) ( not really ) On the downside , with cobbler , you get the overenthusiastic release sequences typical of Fedora-related projects ( if it compiles and runs , it is production ready ; major features introduced within a minor release and all that good stuff ) , so updating is a bit of an adrenalin rush .
But , such is the price of freedom ( and free beer ) .
Configuring machines using cfengine is a dog ( and I learned to love the pup ) , but it is the best dog we have .
That is all but automatic .
I also have puppet deployed to compare ... well , It has its upsides , but it is not better than cfengine .
Frankly , I do not benefit much from the main concepts and features behind cfengine and would probably be as well off with puppet , or even func and such .
Having a company backing cfengine makes me feel a little better now .
( I was a little nervous about Mark crossing the streets every day ... buses stop for no one ) .
I do n't think that in the current state of affairs automatic configuration is even desirable , as all of the components involved very rapidly reach configuration complexity that needs auditing .
I have my working setup , but the next step in improving and upgrading it is a bit of a mystery to me given the options out there .
Anyway ... that 's my rant .</tokentext>
<sentencetext>I use cobbler and cfengine to deploy and maintain a couple of clusters including Xen virtual machines and a
few labs with workstations.
Cobbler does a pretty good job at deployment ... cfengine a pretty good job at management ...


Automatic configuration ... uh ... I guess cobbler takes the edge off of configuring dhcp/pxe/dns/yum servers
for deployment and updates.
Kickstart scripts can be obtained by building one machine, grabbing the anaconda
script from the root directory and fudging it to taste.
That's almost automatic ;-) (not really) 
On the downside, with cobbler, you get the overenthusiastic release sequences typical of Fedora related
projects (if it compiles and runs, it is production ready; major features introduced within a minor release and
all that good stuff), so updating is a bit of an adrenalin rush.
But, such is the price of freedom (and free beer).
Configuring machines using cfengine is a dog (and I learned to love the pup), but it is the best dog we have.
That is all but automatic.
I also have puppet deployed to compare ... well, It has its upsides, but it is not
better than cfengine.
Frankly, I do not benefit much from the main concepts and features behind cfengine
and would probably be as well off with puppet, or even func and such.
Having a company backing cfengine
makes me feel a little better now.
(I was a little nervous about Mark crossing the streets every day ... buses
stop for no one).
I don't think that in the current state of affairs automatic configuration is even desirable, as all of the
components involved very rapidly reach configuration complexity that needs auditing.
I have my working setup, but the next step in improving and upgrading it is a bit of a mystery to me
given the options out there.
Anyway ... that's my rant.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535</id>
	<title>LDAP</title>
	<author>FranTaylor</author>
	<datestamp>1247309280000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>Keep all your config information in LDAP.</p><p>Configure your servers to get their information from LDAP wherever possible.  Then the config files are all fixed, they basically just point to your LDAP server.</p><p>If you have server apps that cannot get their configuration from LDAP, write a Perl script that generates the config file by looking up the information in LDAP.</p><p>If you are tricky you can replace the config file with a socket.  Use a perl script to generate the contents of the config file on the fly as the app asks for it, and make sure the app does not call seek() on the config file.</p></htmltext>
<tokenext>Keep all your config information in LDAP.Configure your servers to get their information from LDAP wherever possible .
Then the config files are all fixed , they basically just point to your LDAP server.If you have server apps that can not get their configuration from LDAP , write a Perl script that generates the config file by looking up the information in LDAP.If you are tricky you can replace the config file with a socket .
Use a Perl script to generate the contents of the config file on the fly as the app asks for it , and make sure the app does not call seek ( ) on the config file .</tokentext>
<sentencetext>Keep all your config information in LDAP.Configure your servers to get their information from LDAP wherever possible.
Then the config files are all fixed, they basically just point to your LDAP server.If you have server apps that cannot get their configuration from LDAP, write a Perl script that generates the config file by looking up the information in LDAP.If you are tricky you can replace the config file with a socket.
Use a Perl script to generate the contents of the config file on the fly as the app asks for it, and make sure the app does not call seek() on the config file.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665881</id>
	<title>IBM Tivoli Provisioning Manager ... if you have $$</title>
	<author>Anonymous</author>
	<datestamp>1247430180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>TPM or TPM for OSD ...</p></htmltext>
<tokenext>TPM or TPM for OSD .. .</tokentext>
<sentencetext>TPM or TPM for OSD ...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663831</id>
	<title>Gentoo Ebuilds, CVS</title>
	<author>lannocc</author>
	<datestamp>1247311860000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>3</modscore>
	<htmltext>I run Gentoo on all my systems, and since the .ebuild file format was easy for me to understand (BASH scripts) I started creating Ebuilds for everything I deploy. These ebuilds are separated into services and machines, so emerging a machine will pull in the services (and configs) that machine uses.

<p>Here's an example:
<br>- lannocc-services/dhcp
<br>- lannocc-services/dns
<br>- lannocc-servers/foobar

</p><p>On machine "foobar" I will `emerge lannocc-servers/foobar`. This pulls in my dhcp and dns profiles.

</p><p>I use CVS to track changes I make to my portage overlay (the ebuilds and config files). I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot. So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file. I've been doing this for the last few years now and it's worked out great. I get to see the progression of changes I make to my configs, and since everything is deployed as a versioned ebuild I can roll it back if necessary.</p></htmltext>
<tokenext>I run Gentoo on all my systems , and since the .ebuild file format was easy for me to understand ( BASH scripts ) I started creating Ebuilds for everything I deploy .
These ebuilds are separated into services and machines , so emerging a machine will pull in the services ( and configs ) that machine uses .
Here 's an example : - lannocc-services/dhcp - lannocc-services/dns - lannocc-servers/foobar On machine " foobar " I will ` emerge lannocc-servers/foobar ` .
This pulls in my dhcp and dns profiles .
I use CVS to track changes I make to my portage overlay ( the ebuilds and config files ) .
I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot .
So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file .
I 've been doing this for the last few years now and it 's worked out great .
I get to see the progression of changes I make to my configs , and since everything is deployed as a versioned ebuild I can roll it back if necessary .</tokentext>
<sentencetext>I run Gentoo on all my systems, and since the .ebuild file format was easy for me to understand (BASH scripts) I started creating Ebuilds for everything I deploy.
These ebuilds are separated into services and machines, so emerging a machine will pull in the services (and configs) that machine uses.
Here's an example:
- lannocc-services/dhcp
- lannocc-services/dns
- lannocc-servers/foobar

On machine "foobar" I will `emerge lannocc-servers/foobar`.
This pulls in my dhcp and dns profiles.
I use CVS to track changes I make to my portage overlay (the ebuilds and config files).
I keep config files in a files/ subdirectory beneath the ebuild that then follows the root filesystem to place the file in the right spot.
So lannocc-services/dhcp will have a files/etc/dhcp/dhcpd.conf file.
I've been doing this for the last few years now and it's worked out great.
I get to see the progression of changes I make to my configs, and since everything is deployed as a versioned ebuild I can roll it back if necessary.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663427</id>
	<title>standard VM image?</title>
	<author>Anonymous</author>
	<datestamp>1247308560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>have a standard virtual machine image, copy it and voila</p></htmltext>
<tokenext>have a standard virtual machine image , copy it and voila</tokentext>
<sentencetext>have a standard virtual machine image, copy it and voila</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663889</id>
	<title>Solution</title>
	<author>Bluebottel</author>
	<datestamp>1247312340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I found it! It's already on Slashdot! <a href="http://it.slashdot.org/story/09/07/11/2017214/How-Do-You-Create-Config-Files-Automatically" title="slashdot.org" rel="nofollow">Here's the link</a> [slashdot.org].
Oh, wait...</htmltext>
<tokenext>I found it !
It 's already on Slashdot !
Here 's the link [ slashdot.org ] .
Oh , wait.. .</tokentext>
<sentencetext>I found it!
It's already on Slashdot!
Here's the link [slashdot.org].
Oh, wait...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663229</id>
	<title>How about Debian and aptitude?</title>
	<author>Anonymous</author>
	<datestamp>1247307180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>How about <a href="http://www.debian.org/" title="debian.org">Debian</a> [debian.org], which automatically includes dpkg, aptitude and synaptic?</p><p>From my experience it would take care of most anything.</p><p>And with a good admin, even more.</p><p>.</p></htmltext>
<tokenext>How about Debian [ debian.org ] , which automatically includes dpkg , aptitude and synaptic ? From my experience it would take care of most anything.And with a good admin , even more. .</tokentext>
<sentencetext>How about Debian [debian.org], which automatically includes dpkg, aptitude and synaptic?From my experience it would take care of most anything.And with a good admin, even more..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664961</id>
	<title>ticketmaster's</title>
	<author>Anonymous</author>
	<datestamp>1247326680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>http://code.ticketmaster.com/index.php?page=spine-overview</p></htmltext>
<tokenext>http : //code.ticketmaster.com/index.php ? page = spine-overview</tokentext>
<sentencetext>http://code.ticketmaster.com/index.php?page=spine-overview</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189</id>
	<title>A Database w/ Config File Generators</title>
	<author>Anonymous</author>
	<datestamp>1247306880000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>5</modscore>
	<htmltext><p>At my institution, we run a MySQL database which we use to store information (such as their IP address, SNMP community) about network devices, linux servers, etc.  We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools, and will restart them if needed.  The idea is once you seed the initial information in the database, the config generators will pick them up and do their work so we won't have to remember to add the new hosts everywhere.</p></htmltext>
<tokenext>At my institution , we run a MySQL database which we use to store information ( such as their IP address , SNMP community ) about network devices , linux servers , etc .
We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools , and will restart them if needed .
The idea is once you seed the initial information in the database , the config generators will pick them up and do their work so we wo n't have to remember to add the new hosts everywhere .</tokentext>
<sentencetext>At my institution, we run a MySQL database which we use to store information (such as their IP address, SNMP community) about network devices, linux servers, etc.
We then have config file generators that query the database and generate the appropriate configs for Nagios and our other tools, and will restart them if needed.
The idea is once you seed the initial information in the database, the config generators will pick them up and do their work so we won't have to remember to add the new hosts everywhere.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28669659
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665699
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663669
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664619
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665477
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668889
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28723501
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664003
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663685
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663229
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665615
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665879
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663653
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668551
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663423
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28666519
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_07_11_2017214_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28679877
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663653
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665879
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663669
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665699
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663415
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663427
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664961
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663323
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663591
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663189
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665477
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664619
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663535
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665615
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28679877
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663831
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28665743
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663423
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668551
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663361
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663607
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28666519
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28723501
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28668889
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28669659
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663991
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663183
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663973
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663271
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_07_11_2017214.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663229
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28663685
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_07_11_2017214.28664003
</commentlist>
</conversation>
