====== Clean Slate ======

2007-12-15: Formatted the fresh drive and installed all the components in our second Antec case. New 250 GB drive, with the following partitions:

^ /dev/hda1 | /           | 16 GB    |
^ /dev/hda2 |             | 2 GB     |
^ /dev/hda3 | /usr/local/ | the rest |

====== Automatic Processes ======

===== Hourly Scripts =====

These are put into **/etc/cron.daily/**:

==== Check NFS Mounts ====

Using the same script as on archon (just changed the directory to the one I am using):

  #!/bin/bash
  #
  # check_nfs_mounts.sh
  # quick check if we have our NFS directories mounted... if not... mount them!
  # - dlr 2008/11/12
  MOUNTEDTEST=`df | grep '/mnt/nfs/vault' | wc -l`
  if [ $MOUNTEDTEST -ne 1 ]; then
      mount /mnt/nfs/vault
  fi

==== Set Clock Hourly ====

Set our clock hourly to NIST's time server: **/etc/cron.hourly/set_clock_via_network.sh**

  #!/bin/sh
  /usr/sbin/ntpdate time-a.nist.gov >/dev/null 2>&1

===== Monitoring Daemons =====

==== Power Disruption ====

See the [[#apcupsd]] entry for how I set up UPS and power monitoring.

==== Raid Monitoring ====

Using **mdadm**, I have the following running from **/etc/rc.d/rc.local**:

  mdadm --monitor --daemonise --mail MY.EMAIL@MY.SERVER --test /dev/md0

====== Daemon Configurations ======

===== apcupsd =====

  - Download the **apcupsd-3.14.4-1.el5.x86_64.rpm** package from the website and install it:
    cd /root/down
    yum localinstall apcupsd-3.14.4-1.el5.x86_64.rpm
  - Checked over the defaults, and they look perfect to me. The other machines in the house will use this daemon to signal whether they should turn off or not.

**NOTE:** I pulled the plug on this to test if they all at least saw the power going down. Now that I think about it, archon is going to need to go down first, so I'll need to modify the battery-time-left setting on the slave so it goes first.

Test run by pulling the plug. I saw this same message on both slaves (archon and mythtv):

  Thu Nov 13 07:21:22 EST 2008  Power failure.
  Thu Nov 13 07:21:28 EST 2008  Running on UPS batteries.
  Thu Nov 13 07:23:15 EST 2008  Mains returned. No longer on UPS batteries.
  Thu Nov 13 07:23:15 EST 2008  Power is back. UPS running on mains.

===== dhcpd =====

Since my mythbox hard drive seems to be dead (that was my DHCP server), I'll attempt to put it on here.

  - Install the **dhcpd** server:
    yum install dhcp.x86_64
  - Modified my **/etc/dhcpd.conf** to give out local IPs:
    # dhcpd.conf
    #
    # Configuration file for ISC dhcpd (see 'man dhcpd.conf')
    #
    # archon.lattice.net dhcpd.conf - dlr 20070322 (spring is here!)
    
    # If this DHCP server is the official DHCP server for the local
    # network, the authoritative directive should be uncommented.
    authoritative;
    
    # Sets the domain name and our default DNS servers
    option domain-name "lattice.net";
    option domain-name-servers 10.0.0.1, 10.0.0.2;
    option netbios-name-servers 10.0.0.1;
    option netbios-dd-server 10.0.0.1;
    option netbios-scope "";
    option netbios-node-type 8;
    
    # Sets the lease time in seconds before computers must renew their leases
    default-lease-time 86400;
    
    # Sets the maximum amount of time a PC can hold a lease
    max-lease-time 864000;
    
    # I was told to do this :)
    # ddns-update-style ad-hoc;
    ddns-update-style none;
    ddns-ttl 86400;
    
    # This is a subnet which the dhcpd server controls; note the { -- it is required
    subnet 10.0.0.0 netmask 255.255.255.0 {
        # Sets the network gateway / router
        option routers 10.0.0.1;
    
        # Sets the network broadcast address
        option broadcast-address 10.0.0.255;
    
        # Defines a range of IPs to be used as leases
        range 10.0.0.100 10.0.0.200;
    
        # specific host definitions
        host david {
            hardware ethernet 00:50:8d:ed:aa:dd;  # the MAC address of the client computer
            fixed-address 10.0.0.42;              # the IP address for david's computer
        }
    
        host krysalis {
            hardware ethernet 00:10:dc:a1:d3:aa;  # the MAC address of the client computer
            fixed-address 10.0.0.40;              # the IP address for christine's computer
        }
    
        host wirelesslan {
            hardware ethernet 00:30:bd:66:4d:b2;  # the MAC address of the client computer
            fixed-address 10.0.0.11;              # the IP address for the wireless LAN (inside)
        }
    }
  - Ran **setup** and checked the box next to **dhcpd**.
  - Started it manually:
    service dhcpd start

===== dovecot =====

  * Modified **/etc/dovecot.conf** so that only the following are enabled:
    protocols = imaps
    ssl_listen = *:993
  * Generate our self-signed certificate:
    - Move the original one:
      mv /etc/pki/dovecot/certs/dovecot.pem /etc/pki/dovecot/certs/dovecot.pem.orig
      mv /etc/pki/dovecot/private/dovecot.pem /etc/pki/dovecot/private/dovecot.pem.orig
    - Edit our configuration here:
      jed /etc/pki/dovecot/dovecot-openssl.cnf
    - Generate a new one:
      /usr/share/doc/dovecot-1.0.7/examples/mkcert.sh
    - Restart the imap service:
      service dovecot restart

===== httpd =====

Well, SELinux has to go, apparently:

  setup   # and disable it

===== mysqld =====

Need to install the server (it doesn't come by default):

  yum install mysql-server.x86_64

I am also installing [[http://www.phpmyadmin.net/]], and I needed to install the MySQL plugin for that:

  yum install php-mysql.x86_64
  yum install php-mbstring.x86_64
  service httpd restart

===== named =====

The **named.conf** file goes here: **/var/named/chroot/etc**

The directory root where all the files go is **/var/named/chroot/var/named**

  chkconfig named on

===== nfsd =====

Shares go in **/etc/exports**.

==== Restarting nfs ====

Stopping **nfs**:

  service nfslock stop
  service nfs stop
  service portmap stop
  umount /proc/fs/nfsd

Starting **nfs**:

  service portmap start
  service nfs start
  service nfslock start
  mount -t nfsd nfsd /proc/fs/nfsd

===== samba =====

For windows sharing, I'm going to set this up like the other machines. Modified **/etc/samba/smb.conf** to share up my directory on the local LAN only.
  [global]
  workgroup = DCGAMER
  netbios name = SAGE
  # wins support = yes
  server string = Sage Samba Server
  
  # don't log, we get hammered from the outside
  # log file = /var/log/samba.%m
  max log size = 50
  
  interfaces = eth1 lo
  hosts deny = ALL
  hosts allow = 10.0.0.0/24 127.
  security = share
  
  [david]
  comment = David on Sage
  path = /usr/local/home/david/
  public = yes
  only guest = yes
  writable = no
  printable = no

===== sendmail =====

Need to set the smart host for sage to archon (our mail guru). Edit **/etc/mail/sendmail.mc** so that MAILHOST is not commented out and is set to our local IP, then rebuild the config:

  m4 sendmail.mc > /etc/mail/sendmail.cf

===== snmpd =====

  - For network monitoring I installed snmp:
    yum install net-snmp.x86_64
    yum install net-snmp-utils.x86_64
  - Edit **/etc/snmp/snmpd.conf** to contain the following:
    #          sec.name   source           community
    com2sec    local      localhost        archcomm
    com2sec    mynetwork  10.0.0.0/24      archcomm
    com2sec    mynetwork  71.127.151.0/24  archcomm
    
    ####
    # Second, map the security names into group names:
    #        group      sec.model  sec.name
    group    MyRWGroup  v1         local
    group    MyRWGroup  v2c        local
    group    MyRWGroup  usm        local
    group    MyROGroup  v1         mynetwork
    group    MyROGroup  v2c        mynetwork
    group    MyROGroup  usm        mynetwork
    
    ####
    # Third, create a view for us to let the groups have rights to:
    #       name  incl/excl  subtree  mask
    view    all   included   .1       80
    
    ####
    # Finally, grant the 2 groups access to the 1 view with different
    # write permissions:
    #         group      context  sec.model  sec.level  match  read  write  notif
    access    MyROGroup  ""       any        noauth     exact  all   none   none
    access    MyRWGroup  ""       any        noauth     exact  all   all    none
    
    ######## and down the road...
    syscontact   "david "
    syslocation  "Ellicott City, MD USA"
  - Verify that the daemon is up and running in our run levels:
    chkconfig snmpd on
  - Start the service:
    service snmpd start

===== vsftpd =====

  - Set the SELinux boolean so that ftp can see into your home directory:
    /usr/sbin/setsebool -P ftp_home_dir 1
  - Modify **/etc/vsftpd/vsftpd.conf** so that at the bottom you have this:
    pam_service_name=vsftpd
    userlist_deny=NO                     # <- so the list is ONLY who CAN ftp in
    userlist_enable=YES
    userlist_file=/etc/vsftpd.ftpusers   # <- let the daemon know exactly which list you are using
    tcp_wrappers=YES
    log_ftp_protocol=YES
  - Add just the users you want to be able to ftp into the system to **/etc/vsftpd.ftpusers**.
  - Enable ftpd:
    chkconfig vsftpd on
    service vsftpd start

===== yum-cron =====

  - Install the cron-job yum update:
    yum install yum-cron.noarch
  - Turn on the cron job for our run levels:
    chkconfig yum-cron on
  - Kick-start the service manually:
    service yum-cron start

===== yum-updatesd =====

Remove this, as it appears to be broken:

  yum remove yum-updatesd

====== Device Settings ======

===== Ethernet Configuration =====

Modify the network settings for the second ethernet interface by directly editing the startup script:

  vi /etc/sysconfig/network-scripts/ifcfg-eth1

===== Networking =====

Modify **/etc/resolv.conf** to add archon as our primary DNS.
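For reference, the resulting resolv.conf might look like the sketch below. This is an assumption, not copied from the box: archon's 10.0.0.1 address, the 10.0.0.2 secondary, and the lattice.net domain are inferred from the dhcpd.conf earlier in these notes.

```shell
# Sketch only: a plausible /etc/resolv.conf after the change above.
# Addresses and domain are assumed from the dhcpd.conf in these notes,
# not verified on sage itself.
print_resolv_conf() {
    cat <<'EOF'
search lattice.net
nameserver 10.0.0.1
nameserver 10.0.0.2
EOF
}

print_resolv_conf
```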
===== Raid Setup =====

==== Drive Information ====

My old mst3k 500GB drive is:

  Vendor: ATA  Model: SAMSUNG HD501LJ  Rev: CR10
  500.1 GB, 500107862016 bytes
  255 heads, 63 sectors/track, 60801 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  
     Device Boot    Start    End      Blocks      Id  System
  /dev/sdb1         1        60801    488384001   83  Linux

The new Seagate drives are:

  Vendor: ST350063  Model: 0NS  Rev: H
  Type: Direct-Access  ANSI SCSI revision: 02
  
  Disk /dev/sda: 500.1 GB, 500107862016 bytes
  255 heads, 63 sectors/track, 60801 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes

==== Raid5 Build ====

=== My Notes Take II ===

I had the old Hitachi 500gb drive that used to be my MST3k repository die, so I think I'm going to go with a clean and unencrypted raid this time around (now that I know how to do it, I'd rather have the speed).

Creation date: --- //[[david@lattice.net|David Lloyd Rabine]] 2008/10/16 06:45//

  - Turned off the server, and swapped out the Hitachi for the new Seagate ES.2 500gb drive.
  - Create a partition on the new drive (just take up the entire disk) with **fdisk**, as the primary partition.
  - Use **mdadm** to create the array:
    mdadm --create /dev/md0 --chunk=64 --level=5 --name=spaceraid --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  - Format the device as ext3 (options must come before the device name):
    mkfs.ext3 -m 0 -L /space /dev/md0   # not leaving any reserve since this is a data drive only
  - Mount the raid:
    mount /dev/md0 /mnt/raid/space/

=== My Notes ===

Creation date: --- //[[david@lattice.net|David Lloyd Rabine]] 2008/02/12 09:49//

A lot of this comes directly from [[http://ubuntuforums.org/showthread.php?t=408461|here]] and the man page for **mdadm**.

  - Installed the 4 Seagate drives in the removable cartridges, and the Hitachi disk (as my 5th disk) in the main case below the root drive (gray SATA cable).
  - Create a partition on each drive (just take up the entire disk) with **fdisk**, as the primary partition.
  - Use **mdadm** to create the array:
    mdadm --create /dev/md0 --chunk=64 --level=5 --name=spaceraid --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  - Create a mapping from our device to an encrypted device (found this cryptsetup stuff [[http://luks.endorphin.org/dm-crypt|here]]):
    cryptsetup -c aes -h sha512 -y create chaoticspace /dev/md0
    # you will want to supply a (good) password
  - Create our partition on the encrypted device:
    mkfs.ext3 -m 0 -L /space /dev/mapper/chaoticspace
    # not leaving any reserve since this is a data drive only
  - Make a directory to mount to:
    mkdir /mnt/raid/
    mkdir /mnt/raid/space
  - Symbolically link that to our root directory:
    ln -s /mnt/raid/space /space
  - Manually mount our new data castle:
    mount /dev/mapper/chaoticspace /mnt/raid/space

**To add keys**:

  cryptsetup luksAddKey /dev/md0

**To mount** (after a reboot you're going to need to manually mount the drive, with the password):

  - Assemble the software raid:
    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  - Map the device:
    cryptsetup -c aes -h sha512 create chaoticspace /dev/md0
  - Mount the encrypted partition (via the mapper) to our mount point:
    mount /dev/mapper/chaoticspace /mnt/raid/space

**To unmount**:

  umount /mnt/raid/space
  cryptsetup remove chaoticspace

=== External Notes ===

Some notes I got from [[http://www.li5.org/?p=4|here]]:

After you've created the array it's time to encrypt it. Running the following command will create /dev/mapper/storage, which is effectively your encrypted device.

  # cryptsetup -c aes -h sha512 -y create storage /dev/md0

You will now need to create the file system. If you plan on resizing the array at any time I would recommend you use reiserfs.

  # mkreiserfs /dev/mapper/storage

Finally you will need to mount it:

  # mkdir /mnt/storage
  # mount /dev/mapper/storage /mnt/storage

After a shutdown or restart you will need to mount the storage manually.
The mdadm init script should have assembled the array; you can check by viewing /proc/mdstat. If your raid device hasn't assembled, you can do it with this command:

  # mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

Now you'll need to run cryptsetup and mount the device (via the mapper, not the raw raid device). Notice the command is slightly different to the command issued when creating the encrypted device:

  # cryptsetup -c aes -h sha512 create storage /dev/md0
  # mount /dev/mapper/storage /mnt/storage

====== Software Installed ======

===== bittorrent =====

Just use your slackware box with the package that is in the 'extras' directory (BitTorrent and BitTornado, so you can use that headless client in screen). I'd need to compile and install a fair number of packages to get this to work here.

===== firefox =====

I wanted to run the SpeakEasy speed test page, so I wanted to install a browser.

  yum install firefox
  yum remove firefox.x86_64   # the 64 bit version won't do flash

This requires [[#x11]] (and flash too).

===== flash plugin =====

Go to adobe's download site, and install the repository for adobe / flash:

  yum install flash-plugin.i386

I edited **/etc/yum.repos.d/adobe-linux-i386.repo** afterwards and disabled it by default.

===== gallery =====

I wanted to install the newest Gallery BETA software to store photos and videos online. I'm migrating this to sage as this is the faster, better machine at the moment (until Slackware 13 arrives and the big re-install on archon...).

This needed a more recent PHP, so I followed some instructions online to install a "testing" version of CentOS 5.
  * Page where I found the HOWTO: [[http://www.freshblurbs.com/install-php-5-2-centos-5-2-using-yum]]
  * Photo Repository:

==== Update PHP ====

  - Added the "testing" repository to sage by editing **** and putting this in it:
    [c5-testing]
    name=CentOS-5 Testing
    baseurl=http://dev.centos.org/centos/5/testing/$basearch/
    enabled=0
    gpgcheck=1
    gpgkey=http://dev.centos.org/centos/RPM-GPG-KEY-CentOS-testing
  - Update just PHP by enabling the repository temporarily:
    yum --enablerepo=c5-testing update php
  - Restart the webserver once PHP is installed:
    service httpd restart

==== Gallery 3 Beta 2 Install ====

  - Download the .zip file.
  - I made a virtual host: [[http://gallery.rabine.org]]
  - Modified the named configuration files and added **gallery.rabine.org**.
  - Added **/etc/httpd/conf/sage_virtual_hosts.conf** and configured the remote named virtual host.
  - Created a directory on the raid to store all the data.
  - Had to disable SELinux (probably a rule I could have used to allow it... but I'm being lazy) for apache to see the raid drive directory!? See this link: [[http://forums.devshed.com/apache-development-15/documentroot-does-not-exist-when-it-does-526847.html]]

===== gcc =====

  yum install gcc
  yum install compat-gcc*

===== jed =====

I can't seem to live without **jed**. Need to modify **/etc/yum.conf** to not check gpg keys for this to work.

  wget ftp://rpmfind.net/linux/fedora/core/development/x86_64/os/Fedora/jed-0.99.18-5.fc6.x86_64.rpm
  yum localinstall jed-0.99.18-5.fc6.x86_64.rpm

===== lynx =====

  yum install lynx

===== nwn =====

==== Install ====

For hatter, a nwn server, I compiled these libraries on the 32-bit sage before he died.
  cd /
  tar -zxvf /tmp/mysql_for_nwn.tgz   # md5sum = 1ef1a9e95a2c6c759b6da21d5298d681
  yum install compat-libstdc*

==== Making Sage Prime Server ====

  - Since this has become the main server, I migrated the servervault over using this command:
    rsync -va --checksum /mnt/nfs/vault/nwn/servervault/ /usr/local/home/nwn/servervault/
  - Modified **/etc/exports** to share up the drive for writing (since the other server will need to write here):
    # WARNING! Sage is currently WRITING TO /usr/local/home so DO NOT REMOVE or DISABLE NFS!!!
    # - dlr 20080212 (primary voting day!)
    /usr/local/home 10.0.0.1(rw,no_root_squash,async,subtree_check)
  - Enable NFS on bootup:
    chkconfig nfs on
  - Restart the NFS daemon:
    service nfs restart
  - Mount that drive on archon with the following entry in **/etc/fstab**:
    # DO NOT REMOVE without first shutting down the nwn server! It is writing to this directory
    10.0.0.2:/usr/local/home /mnt/nfs/sage/local/home nfs defaults,rw,soft 0 0
  - Modified both **nwn.ini** files to point to the same server vault:
    ### archon ###
    SERVERVAULT=/mnt/nfs/sage/local/home/nwn/servervault
    
    ### sage ###
    SERVERVAULT=/usr/local/home/nwn/servervault

===== pine =====

Found another package in the DAG repository:

  wget http://dag.wieers.com/rpm/packages/pine/pine-4.64-3.el5.rf.x86_64.rpm
  yum localinstall pine-4.64-3.el5.rf.x86_64.rpm

===== screen =====

Need screen to run the servers in the background.

  yum install screen

===== x11 =====

  yum groupinstall "X Window System"

===== xterm =====

Got to have my xterm... and you need **xauth** installed or else it won't do the X11 forwarding.

  yum install xterm
  yum install xauth

====== User Accounts ======

Here is a way to add a user from a root console (options go before the login name):

  adduser -u UID -g GROUP -s SHELL USERNAME
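As a concrete illustration of the invocation above, here is a tiny sketch that assembles the command line from its parts. The user name, UID, group, and shell below are hypothetical examples, not accounts from these notes.

```shell
# Hypothetical sketch: build the adduser command line above from a UID,
# login name, group, and shell (all example values, not real accounts).
make_adduser_cmd() {
    uid="$1"; name="$2"; group="$3"; shell="$4"
    echo "adduser -u $uid -g $group -s $shell $name"
}

make_adduser_cmd 1500 alice users /bin/bash
# prints: adduser -u 1500 -g users -s /bin/bash alice
```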