yum: prevent a package from being updated in your system

Based on an Article at http://dougsland.livejournal.com/121262.html

1) Enable the yum-plugin-versionlock

Example:
$ vi /etc/yum/pluginconf.d/versionlock.conf
[main]
enabled = 1
locklist = /etc/yum/pluginconf.d/versionlock.list
# Uncomment this to lock out “upgrade via. obsoletes” etc. (slower)
# follow_obsoletes = 1
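
If the plugin isn't installed yet, it usually comes from a separate package; a quick sketch (yum-plugin-versionlock is the package name on recent RHEL/CentOS and Fedora, adjust if your distribution names it differently):
$ yum install yum-plugin-versionlock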

2) Add the list of packages to be protected:
/etc/yum/pluginconf.d/versionlock.list

Example:
jack-audio-connection-kit-1.9.4
qjackctl-0.3.6
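
Depending on the plugin version you can also manage the lock list from the command line instead of editing the file; a hedged sketch (older versions add a lock with just yum versionlock <package>):
$ yum versionlock qjackctl
$ yum versionlock list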

How to disable password authentication for a specific user or group in sshd

This is based on some excellent information from http://serverfault.com/questions/286159/how-to-disabled-password-authentication-for-specific-users-in-sshd

Add a Match block to your sshd_config file. Something like this:


Match Group SSH_Key_Only_Users
PasswordAuthentication no

Or if it’s truly one user


Match User Bad_User
PasswordAuthentication no

See man sshd_config for more details on what you can match and what restrictions you can put in it.
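
A minimal sketch of putting the group variant into practice, assuming the group does not yet exist and a sysvinit-style system (the service is called ssh on Debian/Ubuntu, sshd on RHEL-type systems); someuser is a placeholder:

groupadd SSH_Key_Only_Users
usermod -aG SSH_Key_Only_Users someuser  # placeholder user
/usr/sbin/sshd -t                        # syntax-check the config first
service sshd reload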

Adding Postfix Template Zabbix 2.0

I wrote the original article based on Zabbix 1.8.4, and it was missing some steps, so I am updating it here and adding some triggers to the template.

Original Article: http://www.middleswarth.us/2011/01/28/adding-a-postfix-template-to-zabbix/

My source for this article:

http://www.zabbix.com/wiki/howto/monitor/mail/postfix/monitoringpostfix
and
http://www.zabbix.com/forum/showthread.php?t=19723#post74271

To summarize what is there.

# First install pflogsumm and logtail
apt-get install pflogsumm logtail

#Then add the following to /etc/zabbix/zabbix_agentd.conf and restart the agent.
nano -w /etc/zabbix/zabbix_agentd.conf

UserParameter=postfix.mailq,mailq | grep -v "Mail queue is empty" | grep -c '^[0-9A-Z]'
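
# Optionally test the new item before restarting the agent; zabbix_agentd -t evaluates a single key.
# The service name below assumes the Debian/Ubuntu zabbix-agent package.
zabbix_agentd -c /etc/zabbix/zabbix_agentd.conf -t postfix.mailq
service zabbix-agent restart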

#Then create the file
nano -w /usr/local/sbin/zabbix-postfix.sh


#!/bin/bash

MAILLOG=/var/log/mail.log
DAT1=/tmp/zabbix-postfix-offset.dat
DAT2=$(mktemp)
PFLOGSUMM=/usr/sbin/pflogsumm
ZABBIX_CONF=/etc/zabbix/zabbix_agentd.conf
DEBUG=1

# Map a pflogsumm label to a Zabbix item key and send the matching value with zabbix_sender.
function zsend {
key="postfix[`echo "$1" | tr ' -' '_' | tr '[A-Z]' '[a-z]' | tr -cd '[a-z_]'`]"
value=`grep -m 1 "$1" $DAT2 | awk '{print $1}'`
[ ${DEBUG} -ne 0 ] && echo "Send key \"${key}\" with value \"${value}\"" >&2
/usr/bin/zabbix_sender -c $ZABBIX_CONF -k "${key}" -o "${value}" >/dev/null 2>&1
}

# Summarize only the log lines added since the last run (logtail keeps the offset in $DAT1).
/usr/sbin/logtail -f$MAILLOG -o$DAT1 | $PFLOGSUMM -h 0 -u 0 --no_bounce_detail --no_deferral_detail --no_reject_detail --no_no_msg_size --no_smtpd_warnings > $DAT2

zsend received
zsend delivered
zsend forwarded
zsend deferred
zsend bounced
zsend rejected
zsend held
zsend discarded
zsend "reject warnings"
zsend "bytes received"
zsend "bytes delivered"
zsend senders
zsend recipients

rm $DAT2

#Then chmod the file.
chmod 700 /usr/local/sbin/zabbix-postfix.sh
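
# Optionally run the script once by hand; with DEBUG=1 it prints each key/value pair it sends to stderr, which is a quick sanity check.
/usr/local/sbin/zabbix-postfix.sh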

#Then add a cron entry for it. The site says every 30 min. I am doing it every minute, as my servers are busier and I want more data points in the system.
nano -w /etc/cron.d/zabbix_postfix


* * * * * root /usr/local/sbin/zabbix-postfix.sh

#Depending on your setup you might need to allow sudo access.
echo zabbix ALL = NOPASSWD: `which mailq` >> /etc/sudoers
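
# If the zabbix user can only read the queue through sudo, a hedged variant of the queue item (adjust to your setup):
UserParameter=postfix.mailq,sudo mailq | grep -v "Mail queue is empty" | grep -c '^[0-9A-Z]'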

#Then import the attached file as a template.

Although this is a good start, the template only has 3 triggers and no graphs attached to it, so I will need to add more of those at some point.

Useful grep / sort / uniq commands.

Some useful bash commands to parse Postfix logs.

Create a list of authentication failures by IP address, sorted with the most failures first.
grep "SASL LOGIN authentication failed" /var/log/mail.log | cut -d "[" -f 3 | cut -d "]" -f 1 | sort -n | uniq -c | sort -n -r

Create a list of reject_warnings and consolidate the email addresses, sorted with the most rejects first.
grep reject_warning /var/log/mail.log | cut -d "=" -f 2 | cut -d ">" -f 1 | cut -d "<" -f 2 | sort -n | uniq -c | sort -n -r
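
The same pattern extends to other questions. For example, a sketch that lists the most common recipient domains for delivered mail (it assumes standard Postfix log lines containing to=<...> and status=sent):
grep "status=sent" /var/log/mail.log | grep -o "to=<[^>]*>" | cut -d "@" -f 2 | cut -d ">" -f 1 | sort | uniq -c | sort -n -r | head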

Creating a CentOS 6 Template in oVirt 3.1

Creating a CentOS template.

1) Minimal install of CentOS 6
2) su - # Switch user to root
3) wget http://www.dreyou.org/ovirt/ovirt-dre.repo -P /etc/yum.repos.d/
4) yum install rhev-agent-pam-rhev-cred rhev-agent
5) service rhev-agentd start
6) yum -y upgrade # Install all updates.
7) Install any packages you want on the system.

8) touch /.unconfigured # Resets the root password and asks some first boot questions on next boot.
9) rm -rf /etc/ssh/ssh_host_* # remove host ssh keys.
10) rm -f /etc/udev/rules.d/70-persistent-net.rules # Remove the recorded MAC addresses from the existing system so the replacement MAC will work.
11) poweroff

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System

oVirt 3.1 is currently in beta, and compared to 3.0 it adds several new features. One of the nice new features is glusterfs support. For its intended use it works well, but I personally think it needs to be adjusted for more generic use. The two main limitations that affect me are: 1) Only oVirt nodes in the same cluster can be used as bricks to create volumes, so you can't add storage-only nodes to the same oVirt cluster as virtualization nodes. However, you can create a cluster of gluster-only nodes. 2) You cannot change what interface / IP gluster uses; it will always use the ovirtmgmt network. This is a weakness, as many people have an independent network just for storage and they can't use it with 3.1.

If you can live with these limits then gluster integration is for you. Before we start you should ask yourself whether you want to use glusterfs over NFS or posix (Native) FS. Both work and are very similar in setup, but they do behave slightly differently. Many prefer the native version, and if you are using Fedora 17 with its latest kernel then native (posix) FS actually supports direct I/O, which increases throughput a lot. However, I found that on CentOS 6.2 nodes NFS ran faster because NFS shares are cached. My testing was over a 1G network, so a faster network may yield different results. Both install methods are pretty much the same; I will note the differences where they occur.

A quick word about the OS. While there are plans to support EL6 and other Linux-based distributions, support is currently limited to Fedora 17. However, Andrey Gordeev ( http://www.dreyou.org/ovirt/ ) has created EL6 packages. Since Fedora 17 crashes on both my Dells every 10 to 12 hours, I use the CentOS builds right now. Although the CentOS builds work really well, there are a few missing features, such as live snapshots. I have done both installs and the steps outlined below work for both.

Engine Install
Node Install
Volume Setup

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Engine Install

Engine Install

So let's begin by installing the engine. The engine install is pretty standard; the only major change is turning on gluster support in the cluster tab. Let's walk through the steps for installing the engine.

  1. Do a base (Minimal) install of your chosen OS.
  2. Upgrade to the latest packages.
    yum -y upgrade
  3. Set up the oVirt repo you intend to use.
    • For RHEL/CentOs/Scientific Linux 6.2
      wget http://www.dreyou.org/ovirt/ovirt-dre.repo -P /etc/yum.repos.d/
    • For Fedora 17
      wget http://ovirt.org/releases/beta/ovirt-engine.repo -P /etc/yum.repos.d/
  4. Installing Prerequisite Packages:
    yum install -y postgresql-server postgresql-contrib
  5. Now install the engine
    yum install ovirt-engine
  6. Let's confirm the version of Java installed, since someone reported only having 1.6 installed instead of the required 1.7.
    alternatives --config java

    There are 2 programs which provide 'java'.

      Selection    Command
    -----------------------------------------------
     + 1           /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
     * 2           /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java

    Enter to keep the current selection[+], or type selection number:

    In this case you need to select #2. If jre-1.7.0-openjdk.x86_64 isn't listed you need to manually install Java 1.7:

    yum install java-1.7.0-openjdk.x86_64 java-1.7.0-openjdk-devel.x86_64
    Now re-run alternatives to confirm 1.7 is the default.

  7. Now we set up oVirt Engine. For most of the settings we will use the defaults. Note that NFS shares are needed for the export and ISO domains; you can put them on gluster, but there really isn't a need since they are only used when setting up or backing up a VM and not during normal VM usage. Because my oVirt Engine system has a few hundred gigs free, I create the NFS shares on the same server as the oVirt Engine. If you want to use a different NFS server for the ISO and export folders, simply say No when asked about creating the NFS share.

Run the engine setup

    1. engine-setup
    2. Welcome to oVirt Engine setup utility
      ovirt uses httpd to proxy requests to the application server.
      It looks like the httpd installed locally is being actively used.
      The installer can override current configuration .
      Alternately you can use JBoss directly (on ports higher than 1024)
      Do you wish to override current httpd configuration and restart the service? ['yes'| 'no'] [yes] : *yes
      *Then it will ask if you want to overwrite your current httpd configuration. Say Yes.
    3. HTTP Port [80] : 80
    4. HTTPS Port [443] : 443
    5. Host fully qualified domain name, note that this name should be fully resolvable [ovirt.example.com] : Enter
      * Note the domain name must resolve; if it doesn't, you need to add it to /etc/hosts on the engine and all the nodes (see the example entry after this walkthrough).
    6. Password for Administrator (admin@internal) : *Password
      * Make sure you pick a password you can remember; you will need it to log into the web interface.
    7. Organization Name for the Certificate: *Example Inc
      *This will be visible to anyone who uses the web interface.
    8. Since we are installing gluster I assume you won't be using FC or iSCSI; posix FS isn't an option here, but we can change it later.
      The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'] [NFS] : NFS
    9. You can do a remote server but it is up to you to get that setup and working.
      Enter DB type for installation ['remote'| 'local'] [local] : Enter
    10. Local database password :password
      *This is the database password; you only need it if you are planning on tweaking database fields. Make it strong.
    11. Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no'] [yes] : *yes
      * Unless you are planning to use a different system for NFS I recommend you answer yes.
    12. Local ISO domain path: */nfs/iso
      * Note no trailing / and make sure the target location already exists.
    13. Display name for the ISO Domain:*iso
      * Will be used later; just make sure you remember what you called your iso share.
    14. Firewall ports need to be opened.
      You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
      Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
      Configure iptables ? ['yes'| 'no']: yes
    15. oVirt Engine will be installed using the following configuration:
      ========================================================
      override-httpd-config: yes
      http-port: 80
      https-port: 443
      host-fqdn: ovirt.example.com
      auth-pass: ********
      org-name: Example Inc.
      default-dc-type: NFS
      db-remote-install: local
      db-local-pass: ********
      nfs-mp: /nfs/iso
      iso-domain-name: iso
      config-nfs: yes
      override-iptables: yes
      Proceed with the configuration listed above? (yes|no): yes
    16. Installing:
      Configuring oVirt-engine… [ DONE ]
      Creating CA… [ DONE ]
      Editing JBoss Configuration… [ DONE ]
      Setting Database Configuration… [ DONE ]
      Setting Database Security… [ DONE ]
      Creating Database… [ DONE ]
      Updating the Default Data Center Storage Type… [ DONE ]
      Editing oVirt Engine Configuration… [ DONE ]
      Editing Postgresql Configuration… [ DONE ]
      Configuring the Default ISO Domain… [ DONE ]
      Configuring Firewall (iptables)… [ DONE ]
      Starting JBoss Service… [ DONE ]
      Handling HTTPD… [ DONE ]**** Installation completed successfully ******(Please allow oVirt Engine a few moments to start up…..)

      Additional information:
      * SSL Certificate fingerprint: B1:92:11:35:A7:38:2E:8E:D2:D2:48:00:AF:68:61:0E:27:BE:1D:97
      * SSH Public key fingerprint: 2c:32:83:33:b2:db:e4:31:a3:39:e2:57:32:ad:e8:de
      * A default ISO share has been created on this host.
      If IP based access restrictions are required, please edit /nfs/iso entry in /etc/exports
      * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.024634-07022012_23666
      * The installation log file is available at: /var/log/ovirt-engine/engine-setup_2012_07_02_02_22_46.log
      * Please use the user “admin” and password specified in order to login into oVirt Engine
      * To configure additional users, first configure authentication domains using the ‘engine-manage-domains’ utility
      * To access oVirt Engine please go to the following URL: http://ovirt.example.com:80
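
As noted in step 5, the engine's fully qualified domain name must resolve on the engine and on every node. If you are not using DNS, a hypothetical /etc/hosts entry (the IP address and names are placeholders):

    192.168.100.200   ovirt.example.com   ovirt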

NFS Share Setup (Non-Gluster)

  1. Now let's clean up the NFS shares a bit. The default install works fine, but sometimes there are issues. I found the instructions at http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues useful; they help fix a lot of issues. Let's add an export share and tweak the ISO share as well to make sure it just works.
  2. First let's create /nfs/export and make sure the ownership is correct.
    cd /nfs
    mkdir -p export
    chown -R 36.36 iso export
  3. Now let's update /etc/exports
    vi /etc/exports
    Change it to the following:

    /nfs/iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
    /nfs/export *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

    vi /etc/sysconfig/nfs
    Add the following to the end of the file.

    NFS4_SUPPORT="no"

    Now lets restart NFS

    • For RHEL/CentOs/Scientific Linux 6.2
      service nfs restart
    • For Fedora 17
      systemctl restart nfs-server.service
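
    To confirm both shares are now being exported you can check with showmount (from nfs-utils on RHEL-type systems, nfs-common on Debian):

      showmount -e localhost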

Log into the website

  1. Go to your URL for the website. In my example it is http://ovirt.example.com/
  2. Select Administrator Portal
    * You will likely get an error about the certificate not being valid. That is expected. You can manually replace the cert with a signed one if you intend to allow non-technical users to log into the site.
  3. The user name is admin; the password is what you set during install.
  4. You should now see the main menu: Welcome to your oVirt Manager

Change from NFS to Posix FS (Native Gluster FS)

  1. Skip this section if you plan to access Gluster using NFS!
  2. Select the Tab Marked Data Center
  3. Select the item Default from the list.
  4. You should be able to click on Edit.
  5. Select Type and change from NFS to Posix compliant FS

Enabling Gluster Support (Both NFS and Native Posix FS)

  1. Select the tab marked Cluster.
  2. Select Edit, then check the box next to "Enable Gluster Service".
    * You can create a storage-only cluster by selecting New, unchecking Enable oVirt Service, and checking Enable Gluster Service.

Congratulations! Your engine is now set up to run gluster. The next step is to set up and configure your nodes.

Node Install

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Node Install

Node Install

So let's begin by installing a node. The node install is pretty standard, but there are a few very important steps that have to be done or it won't work correctly. All the issues I have found have been filed in Bugzilla. Before you begin your node install please make sure you have the engine set up correctly as outlined in Engine Install.

  1. Do a base (Minimal) install of your chosen OS.
  2. Upgrade to the latest packages.

    yum -y upgrade
  3. Set up a glusterfs repo. This repo was created by the same person who does the 3.2.5 packages in Fedora/EL6.
    • For RHEL/CentOs/Scientific Linux 6.2

      wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-glusterfs.repo -P /etc/yum.repos.d/
    • For Fedora 17

      wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/fedora-glusterfs.repo -P /etc/yum.repos.d/
  4. Set up the oVirt repo you intend to use.
    • For RHEL/CentOs/Scientific Linux 6.2

      wget http://www.dreyou.org/ovirt/ovirt-dre.repo -P /etc/yum.repos.d/
    • For Fedora 17

      wget http://ovirt.org/releases/beta/ovirt-engine.repo -P /etc/yum.repos.d/
  5. Now let's install some packages:

    yum clean all
    yum install vdsm vdsm-cli vdsm-gluster

Now let's get the install ready to join the cluster.

  1. Force NFS to use version 3: edit /etc/nfsmount.conf and add the following.

    [ NFSMount_Global_Options ]
    Defaultproto=tcp
    Defaultvers=3
    Nfsvers=3
  2. On my installs the management bridge fails to be created most of the time. Here is how I create it manually. The example config files all reside in /etc/sysconfig/network-scripts.
    First we edit /etc/sysconfig/network-scripts/ifcfg-eth0. On your system the device might be em0 instead of eth0; adjust the DEVICE line to match what is already there.

    DEVICE=eth0
    BOOTPROTO=none
    NM_CONTROLLED=no
    ONBOOT=yes
    BRIDGE=ovirtmgmt
  3. Now we create /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt with the following

    DEVICE=ovirtmgmt
    BOOTPROTO=static
    GATEWAY=192.168.100.254
    IPADDR=192.168.100.201
    NETMASK=255.255.255.0
    NM_CONTROLLED=no
    ONBOOT=yes
    TYPE=Bridge
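
    After creating both files, restarting the network should bring the bridge up; brctl (from bridge-utils, if installed) will then show eth0 attached to ovirtmgmt. The sysvinit form is shown below; on Fedora 17 you can use systemctl restart network.service instead.

      service network restart
      brctl show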

Now let's return to the engine and add the host into the cluster.

  1. Select the Host tab.
  2. Click New to add a new host.
  3. Name: Enter a Name you want the host to be known as.
  4. Address: Either enter the IP address or URL for the host.
  5. Root Password: Enter the root password for the host
  6. Automatically configure host firewall: Leave it checked
  7. You need to set up your power management settings (I use Drac5 on my hosts). Note that the power management check on your 1st node will fail since there isn't a 2nd node yet.
  8. This step will take about 5 minutes per node and will reboot the node. Only add one node at a time; I have had weird issues otherwise.

Repeat for each host. Note that you still need to make some changes on each host, but because the nodes need to talk to each other I suggest adding all the nodes first.

Next Step Volume Setup

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Volume Setup

Gluster Volume Setup

Now that all the nodes are set up and added to the cluster, we need to tweak a few settings, create our 1st volume, and add it to the data center. Before completing these steps please make sure you have gone through and confirmed that both the Engine and the Nodes are set up before continuing.

  1. First thing we need to do is open the firewall and get the systems to talk to each other.
  2. We need to open some ports. There are several ways to do this; I chose to edit /etc/sysconfig/iptables and add the following right above the line starting with # vdsm, before any of the reject lines.

    # glusterfs
    -A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
    -A INPUT -p tcp --dport 111 -j ACCEPT
    -A INPUT -p udp --dport 111 -j ACCEPT
    -A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT
    # vdsm
  3. Now restart iptables

    service iptables restart
  4. I prefer to create all my bricks under a folder called /data. It isn't required, but all my examples will use it, so this is a good place to create the folder. You can name it something else if you prefer.

    mkdir -p /data
    chown -R 36.36 /data
  5. Now repeat these steps on all of your nodes in your cluster.

Now we need to join all the nodes together under gluster.

  1. I have been told the peering can already be in place automatically in a very narrow setup that I have never actually seen in the wild.
  2. To be safe, check from one of the nodes whether any other nodes are already connected.

    gluster peer status

    Result: No peers present
  3. Now add your peers in.

    gluster peer probe hostname

    Result: Probe successful
  4. Repeat the last step for each node in the cluster.
  5. Now let's confirm they are connected.

    gluster peer status

    Example Results:
    Number of Peers: 2

    Hostname: host2
    Uuid: 3a1bd38c-2a54-47b2-86b9-5d56e3980a44
    State: Peer in Cluster (Connected)

    Hostname: host3
    Uuid: 7a2fd499-4db6-4810-87c9-b7c57aaebfb8
    State: Peer in Cluster (Connected)

Now it's time to log back into the engine and create a volume.

  1. Log into the engine and select the tab marked Volumes
  2. Select Create Volume
  3. Name: Volume Name
  4. Type: *Your choices are Distributed, Replicated, Striped, Distributed Striped, Distributed Replicated
    * You can read more about that on gluster.com. In my example I am going to use Distributed Replicated.
  5. It will automatically open the Add Bricks box and allow you to add the bricks for the hosts in your system.
  6. Select the node and enter the folder you want the brick created in. In my example I used host1:/data/vol1, host2:/data/vol1, host1:/vol2, host3:/vol1
  7. Select Ok to save the brick list.
  8. Select OK to create the volume.
  9. Select the volume from the list.
  10. These settings might not be needed for glusterfs, but they were set when I set up Posix (Native) FS. The 1st one is needed for NFS for certain. The others I am less sure about, but they were enabled when I got gluster working.
    1. Select the lower tab Volume Options
    2. Select Add then Select nfs.nlm set option off
    3. Select Add then Select nfs.register-with-portmap set option on
    4. Select Add then Select nfs.addr-namelookup set option off
    5. Right click on the volume and select start
  11. Now we need to chown all the gluster bricks to the KVM user so they can be mounted. This must be done on all cluster nodes. Since I store my bricks under /data I just change ownership of the entire folder.

    chown -R 36.36 /data/

Warning: if you activate more than one host under gluster native, it will rotate between the nodes.


See the Bugzilla report https://bugzilla.redhat.com/show_bug.cgi?id=835949 – [rhevm] PosixFS: contending storm when adding additional host to pool.
It has been reported fixed, but I haven't confirmed it either way. To test whether you are seeing the problem, move all nodes but one into maintenance; if everything then mounts fine, you are seeing this bug.

If you are using Posix Compatible (Native) FS

  1. Log into the engine and select the tab marked Storage
  2. Select New Domain

    1. Path: localhost:/volumename
    2. VFS Type: glusterfs
    3. Mount Options: vers=3
  3. Right Click on the new Storage Domain and select Activate
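
If the domain fails to attach, it can help to first verify by hand on one of the nodes that the volume mounts (volumename is the volume created above; unmount again when done):

    mount -t glusterfs localhost:/volumename /mnt
    df -h /mnt
    umount /mnt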

If you are using NFS

  1. Log into the engine and select the tab marked Storage
  2. Select New Domain

    1. Export Path: localhost:/volumename
  3. Right Click on the new Storage Domain and select Activate

Adding ISO and Export NFS shares.

  1. Select the tab marked Storage
  2. Select New Domain

    1. Domain Function / Storage Type: Export / NFS
    2. Export Path: enginehost:/nfs/export
  3. Right Click on the new Storage Domain and select Activate
  4. Now select the ISO Domain
  5. Right Click on the ISO Domain and select Attach
  6. Select the default data center
  7. Right Click on the ISO Domain and select Activate

You can now start to create Virtual Machines.