How to verify URIBL_SBL blocklist entry?

It took me a while of searching before I found a mailing list entry with really good instructions on how to manually determine what triggered URIBL_SBL. I needed this because we were blocking a client and had to figure out why they were getting blocked.

First, URIBL_SBL has nothing to do with the sender's email address or their mailserver IP.

It's a URI blacklist, so it only deals with URIs (more or less the same as a URL in this discussion).

Thus you need to look at all the web links in the body of the email. No part of the headers is relevant: only the body, and only something that might look like a web link to SpamAssassin's parser.

In the case of URIBL_SBL it's a little less direct than just checking openrbl or something similar, because the way this test is implemented is tricky.

First, take the target domain and find its nameserver:

$ dig ns

Next, resolve the nameserver to an IP:

Now take that IP and go to openrbl to check whether THAT is listed in the SBL (or do it yourself by reversing the IP).

Example IP:

$ dig txt

See the link for an example.
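
The domain and IP arguments from the original commands were lost, so here is a sketch of the whole lookup using example.com and a nameserver at 192.0.2.10 as stand-ins; substitute the actual domain from the email body.

# 1. Find the nameservers for the suspect domain.
$ dig ns example.com +short

# 2. Resolve one of the returned nameservers to an IP.
$ dig a ns1.example.com +short

# 3. Reverse the IP's octets and query the SBL zone directly.
#    For a nameserver at 192.0.2.10 the query becomes:
$ dig txt 10.2.0.192.sbl.spamhaus.org +short
# An answer in the 127.0.0.x range (plus a TXT record pointing at the
# SBL entry) means the nameserver's IP is listed, which is what fires
# URIBL_SBL.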

yum: prevent a package from being updated in your system

Based on an Article at

1) Install the versionlock plugin (yum install yum-plugin-versionlock) and enable it:

$ vi /etc/yum/pluginconf.d/versionlock.conf
enabled = 1
locklist = /etc/yum/pluginconf.d/versionlock.list
# Uncomment this to lock out "upgrade via. obsoletes" etc. (slower)
# follow_obsoletes = 1

2) Add the list of packages to be protected:
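
The package list from the original post did not survive, but the locklist takes one entry per line in epoch:name-version-release.arch form, with globs allowed. A sketch, assuming you want to pin the currently installed kernel:

# Let the plugin record the currently installed version of a package.
yum versionlock kernel

# Or append an entry to the locklist by hand (the version shown is only
# an example):
echo '0:kernel-2.6.32-279.*' >> /etc/yum/pluginconf.d/versionlock.list

# Confirm what is locked:
yum versionlock list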


How to disable password authentication for a specific user or group in sshd

This is based on some excellent information from

Add a Match block to your sshd_config file. Something like this:

Match Group SSH_Key_Only_Users
PasswordAuthentication no

Or if it's truly just one user:

Match User Bad_User
PasswordAuthentication no

See man sshd_config for more details on what you can match and what restrictions you can put in it.
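
To confirm the Match block behaves as intended before closing your session, you can ask sshd (OpenSSH 5.1 or later) to print the effective configuration for a simulated connection. A sketch; the user, host, and address are hypothetical values:

# Dump the settings sshd would apply to a connection by Bad_User.
sshd -T -C user=Bad_User,host=client.example.com,addr=192.0.2.5 | grep -i passwordauthentication
# should print: passwordauthentication no

# Reload sshd to apply the change without dropping existing sessions.
service sshd reload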

Adding Postfix Template Zabbix 2.0

I wrote the original article based on Zabbix 1.8.4, and the template was missing some steps, so I am updating it and adding some triggers.

Original Article:

My source for this article:

To summarize what is there:

# First install pflogsumm and logtail
apt-get install pflogsumm logtail

#Then add to /etc/zabbix/zabbix_agentd.conf and restart the agent.
nano -w /etc/zabbix/zabbix_agentd.conf

UserParameter=postfix.mailq,mailq | grep -v "Mail queue is empty" | grep -c '^[0-9A-Z]'

#Then create the file
nano -w /usr/local/sbin/



#!/bin/bash
# The paths below are assumptions for a typical Debian/Ubuntu setup; adjust
# MAILLOG, the work files, and the pflogsumm path for your distribution.
MAILLOG=/var/log/mail.log
DAT1=/tmp/zabbix-postfix.offset      # logtail offset file
DAT2=/tmp/zabbix-postfix.pflogsumm   # pflogsumm output for this run
ZABBIX_CONF=/etc/zabbix/zabbix_agentd.conf
PFLOGSUMM=/usr/sbin/pflogsumm
DEBUG=0

function zsend {
# Turn a pflogsumm label like "bytes received" into a key like postfix[bytes_received].
key="postfix[`echo "$1" | tr ' -' '_' | tr '[A-Z]' '[a-z]' | tr -cd '[a-z_]'`]"
# The matching pflogsumm summary line starts with the numeric value.
value=`grep -m 1 "$1" $DAT2 | awk '{print $1}'`
[ ${DEBUG} -ne 0 ] && echo "Send key \"${key}\" with value \"${value}\"" >&2
/usr/bin/zabbix_sender -c $ZABBIX_CONF -k "${key}" -o "${value}" >/dev/null 2>&1
}

# Summarize only the log lines added since the last run.
/usr/sbin/logtail -f$MAILLOG -o$DAT1 | $PFLOGSUMM -h 0 -u 0 --no_bounce_detail --no_deferral_detail --no_reject_detail --no_no_msg_size --no_smtpd_warnings > $DAT2

zsend received
zsend delivered
zsend forwarded
zsend deferred
zsend bounced
zsend rejected
zsend held
zsend discarded
zsend "reject warnings"
zsend "bytes received"
zsend "bytes delivered"
zsend senders
zsend recipients

rm $DAT2

#Then chmod the file.
chmod 700 /usr/local/sbin/

#Then add a cron entry for it. The site says every 30 minutes; I am running it every minute, since my servers are busier and I want more data in the system.
nano -w /etc/cron.d/zabbix_postfix

* * * * * root /usr/local/sbin/

#Depending on your setup you might need to allow sudo access.
echo zabbix ALL = NOPASSWD: `which mailq` >> /etc/sudoers
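
#Before importing the template, it is worth checking that the agent actually
#answers for the new key. A quick sketch, run from the Zabbix server, with
#mailserver standing in for your monitored host:

# Query the agent directly for the queue-length item; it should return
# the number of messages currently in the queue.
zabbix_get -s mailserver -k postfix.mailq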

#Then import the attached file as a template.

Although this is a good start, the template only has 3 triggers and no graphs attached to it, so I will need to add more of those at some point.

Useful grep / sort / uniq commands.

Some useful bash commands for parsing Postfix logs.

Create a list of failed authentications by IP address, sorted with the most failures first:
grep "SASL LOGIN authentication failed" /var/log/mail.log |cut -d "[" -f 3 |cut -d "]" -f 1 |sort -n |uniq -c |sort -n -r

Create a list of reject_warning entries and consolidate the email addresses, sorted with the most rejects first:
grep reject_warning /var/log/mail.log |cut -d "=" -f 2 |cut -d ">" -f 1 |cut -d "<" -f 2 |sort -n |uniq -c |sort -n -r
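
In the same spirit, a sketch that counts which client hosts connect most often, assuming the standard Postfix "connect from hostname[ip]" smtpd log lines:
grep "connect from" /var/log/mail.log |cut -d "[" -f 3 |cut -d "]" -f 1 |sort -n |uniq -c |sort -n -r |head -20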

Creating a CentOS 6 Template in oVirt 3.1

Creating a CentOS template.

1) Minimal install of CentOS 6
2) su - # Switch user to root
3) wget -P /etc/yum.repos.d/
4) yum install rhev-agent-pam-rhev-cred rhev-agent
5) service rhev-agentd start
6) yum -y upgrade # Install all updates.
7) Install any packages you want on the system.

8) touch /.unconfigured # Reset the root password and ask some first-boot questions.
9) rm -rf /etc/ssh/ssh_host_* # Remove host SSH keys.
10) rm -f /etc/udev/rules.d/70-persistent-net.rules # Remove the recorded MAC addresses so the clone's new MAC will work.
11) poweroff

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System

oVirt 3.1 is currently in beta, and compared to 3.0 it adds several new features. One of the nice new features is glusterfs support. For its intended use it works well, but I personally think it needs to be adjusted for more generic use. The two main limits that affect me are: 1) Only oVirt nodes in the same cluster can be used as bricks to create volumes, so you can't add storage-only nodes to the same oVirt cluster as virtualization nodes. However, you can create a cluster of gluster-only nodes. 2) You cannot change which interface / IP gluster uses; it will always use the ovirtmgmt network. This is a weakness, as many people have an independent network just for storage and they can't use it with 3.1.

If you can live with these limits, then gluster integration is for you. Before we start, you should ask yourself whether you want to use glusterfs over NFS or posix (native) fs. Both work and are very similar in setup, but they do behave slightly differently. Many prefer the native version, and if you are using Fedora 17 with its latest kernel, then native (posix) fs actually supports direct IO, which increases throughput a lot. However, I found that on CentOS 6.2 nodes NFS ran faster, because NFS shares are cached. My testing was over a 1G network, so a faster network will yield different results. Both install methods are pretty much the same; I will note their differences as we go.

A quick word about the OS. While there are plans to support EL6 and other Linux-based distributions, support is currently limited to Fedora 17. However, Andrey Gordeev has created EL6 packages. Since Fedora 17 crashes on both my Dells every 10 to 12 hours, I use the CentOS builds right now. Although the CentOS builds work really well, there are a few missing features, such as live snapshots. I have done both installs, and the steps outlined below work for both.

Engine Install
Node Install
Volume Setup

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Engine Install

Engine Install

So let's begin by installing the engine. The engine install is pretty standard; the only major change is turning on gluster support in the cluster tab. Let's walk through the steps for installing the engine.

  1. Do a base (Minimal) install of your chosen OS.
  2. Upgrade to the latest packages.
    yum -y upgrade
  3. Setup the oVirt repo you intend to use.
    • For RHEL/CentOS/Scientific Linux 6.2
      wget -P /etc/yum.repos.d/
    • For Fedora 17
      wget -P /etc/yum.repos.d/
  4. Installing Prerequisite Packages:
    yum install -y postgresql-server postgresql-contrib
  5. Now install the engine
    yum install ovirt-engine
  6. Let's confirm the version of Java installed, since someone reported having only 1.6 installed instead of the required 1.7.
    alternatives --config java

    There are 2 programs which provide 'java'.

      Selection    Command
    -----------------------------------------------
     + 1          /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
    *  2          /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java

    Enter to keep the current selection[+], or type selection number:

    In this case you need to select #2. If jre-1.7.0-openjdk.x86_64 isn't listed, you need to manually install Java 1.7:

    yum install java-1.7.0-openjdk.x86_64 java-1.7.0-openjdk-devel.x86_64
    Now re-run alternatives to confirm 1.7 is the default.

  7. Now we set up oVirt Engine. For most of the settings we will use the defaults. Note that NFS shares are needed for the export and ISO domains; you can put them under gluster, but there really isn't a need, since they are only used when setting up or backing up a VM and not during VM usage. Because my oVirt Engine system has a few hundred gigs free, I create the NFS shares on the same server as the oVirt Engine. If you want to use a different NFS server for the ISO and export folders, simply say No when asked about creating the NFS share.

Run the engine setup

    1. engine-setup
    2. Welcome to oVirt Engine setup utility
      ovirt uses httpd to proxy requests to the application server.
      It looks like the httpd installed locally is being actively used.
      The installer can override the current configuration.
      Alternately you can use JBoss directly (on ports higher than 1024)
      Do you wish to override current httpd configuration and restart the service? ['yes'|'no'] [yes] : *yes
      *Then it will ask if you want to overwrite your current httpd configuration. Say yes.
    3. HTTP Port [80] : 80
    4. HTTPS Port [443] : 443
    5. Host fully qualified domain name, note that this name should be fully resolvable [] : Enter
      * Note the domain name must resolve; if it doesn't, you need to add it to /etc/hosts on the engine and all the nodes.
    6. Password for Administrator (admin@internal) : *Password
      * Make sure you pick a password you can remember; you will need it to log into the web interface.
    7. Organization Name for the Certificate: *Example Inc
      *This will be visible to anyone who uses the web interface.
    8. Since we are installing gluster, I assume you won't be using FC or iSCSI. Posix fs isn't an option at this point, but we can change it later.
      The default storage type you will be using ['NFS'|'FC'|'ISCSI'] [NFS] : NFS
    9. You can use a remote DB server, but it is up to you to get that set up and working.
      Enter DB type for installation ['remote'|'local'] [local] : Enter
    10. Local database password :password
      *This is the database password. You only need it if you are planning on tweaking database fields. Make it strong.
    11. Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'|'no'] [yes] : *yes
      * Unless you are planning to use a different system for NFS I recommend you answer yes.
    12. Local ISO domain path: */nfs/iso
      * Note no trailing / and make sure the target location already exists.
    13. Display name for the ISO Domain:*iso
      * It will be used later; just make sure you remember what you called your iso share.
    14. Firewall ports need to be opened.
      You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
      Alternately you can configure the firewall later using an example iptables file found under /usr/share/ovirt-engine/conf/iptables.example
      Configure iptables ? ['yes'|'no']: yes
    15. oVirt Engine will be installed using the following configuration:
      override-httpd-config: yes
      http-port: 80
      https-port: 443
      auth-pass: ********
      org-name: Example Inc.
      default-dc-type: NFS
      db-remote-install: local
      db-local-pass: ********
      nfs-mp: /nfs/iso
      iso-domain-name: iso
      config-nfs: yes
      override-iptables: yes
      Proceed with the configuration listed above? (yes|no): yes
    16. Installing:
      Configuring oVirt-engine… [ DONE ]
      Creating CA… [ DONE ]
      Editing JBoss Configuration… [ DONE ]
      Setting Database Configuration… [ DONE ]
      Setting Database Security… [ DONE ]
      Creating Database… [ DONE ]
      Updating the Default Data Center Storage Type… [ DONE ]
      Editing oVirt Engine Configuration… [ DONE ]
      Editing Postgresql Configuration… [ DONE ]
      Configuring the Default ISO Domain… [ DONE ]
      Configuring Firewall (iptables)… [ DONE ]
      Starting JBoss Service… [ DONE ]
      Handling HTTPD… [ DONE ]

      **** Installation completed successfully ****

      (Please allow oVirt Engine a few moments to start up…)

      Additional information:
      * SSL Certificate fingerprint: B1:92:11:35:A7:38:2E:8E:D2:D2:48:00:AF:68:61:0E:27:BE:1D:97
      * SSH Public key fingerprint: 2c:32:83:33:b2:db:e4:31:a3:39:e2:57:32:ad:e8:de
      * A default ISO share has been created on this host.
      If IP based access restrictions are required, please edit /nfs/iso entry in /etc/exports
      * The firewall has been updated, the old iptables configuration file was saved to /usr/share/ovirt-engine/conf/iptables.backup.024634-07022012_23666
      * The installation log file is available at: /var/log/ovirt-engine/engine-setup_2012_07_02_02_22_46.log
      * Please use the user "admin" and password specified in order to log into oVirt Engine
      * To configure additional users, first configure authentication domains using the 'engine-manage-domains' utility
      * To access oVirt Engine please go to the following URL:

NFS Share Setup (Non-Gluster)

  1. Now let's clean up the NFS shares a bit. The default install works fine, but sometimes there are issues. I found the instructions at to be useful; they help fix a lot of problems. Let's add an export share and tweak the ISO share as well to make sure it just works.
  2. First lets create /nfs/export and make sure the ownership is correct.
    cd /nfs
    mkdir -p export
    chown -R 36:36 iso export
  3. Now let's update /etc/exports
    vi /etc/exports
    Change it to the following:

    /nfs/iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
    /nfs/export *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

    vi /etc/sysconfig/nfs
    Add the following to the end of the file.


    Now let's restart NFS

    • For RHEL/CentOS/Scientific Linux 6.2
      service nfs restart
    • For Fedora 17
      systemctl restart nfs-server.service
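
A quick way to confirm both shares are being exported with the options above (a sketch, run on the engine host):

# List the active exports and their options.
showmount -e localhost
exportfs -v

# After any later edit to /etc/exports you can re-export without a restart:
exportfs -ra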

Log into the website

  1. Go to the URL for the website. In my example it is
  2. Select Administrator Portal
    * You will likely get an error about the certificate not being valid. That is expected. You can manually replace the cert with a signed one if you intend to allow non-technical users to log into the site.
  3. The user name is admin; the password is what you set during install.
  4. You should now see the main menu: Welcome to your oVirt Manager

Change from NFS to Posix FS (Native Gluster FS)

  1. Skip this section if you plan to access Gluster using NFS!
  2. Select the Tab Marked Data Center
  3. Select the item Default from the list.
  4. You should be able to click on Edit.
  5. Select Type and change from NFS to Posix compliant FS

Enabling Gluster Support (Both NFS and Native Posix FS)

  1. Select tab marked cluster.
  2. Select Edit, then check the box next to "Enable Gluster Service"
    * You can create a storage-only cluster by selecting New, then unchecking Enable oVirt Service and checking Enable Gluster Service

Congratulations! Your engine is now set up to run gluster. The next step is to set up and configure your nodes.

Node Install

Installing oVirt 3.1 and glusterfs using either NFS or Posix (Native) File System Node Install

Node Install

So let's begin by installing a node. The node install is pretty standard, but there are a few very important steps that have to be done or it won't work correctly. All the issues I have found have been filed in Bugzilla. Before you begin your node install, please make sure you have the engine set up correctly as outlined in Engine Install.

  1. Do a base (Minimal) install of your chosen OS.
  2. Upgrade to the latest packages.

    yum -y upgrade
  3. Set up a glusterfs repo; this repo was created by the same person who does the 3.2.5 packages in Fedora/EL6.
    • For RHEL/CentOS/Scientific Linux 6.2

      wget -P /etc/yum.repos.d/
    • For Fedora 17

      wget -P /etc/yum.repos.d/
  4. Setup the oVirt repo you intend to use.
    • For RHEL/CentOS/Scientific Linux 6.2

      wget -P /etc/yum.repos.d/
    • For Fedora 17

      wget -P /etc/yum.repos.d/
  5. Now let's install some packages

    yum clean all
    yum install vdsm vdsm-cli vdsm-gluster

Now let's get the install ready to join the cluster.

  1. Force NFS to use version 3: edit /etc/nfsmount.conf and add:

    [ NFSMount_Global_Options ]
    Defaultvers=3
  2. On my installs the management bridge fails to be created most of the time; here is how I manually create it. The example config files all reside in /etc/sysconfig/network-scripts, and a sketch of both files follows below.
    First we edit /etc/sysconfig/network-scripts/ifcfg-eth0. On your system it might be em0 instead of eth0; adjust the device line to match what is already there.

  3. Now we create /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt with the following
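
The config listings from the original post were lost, so below is a minimal sketch of what the two files typically look like on EL6. The address, netmask, and gateway are stand-in values; substitute your own, and keep any HWADDR line your existing ifcfg-eth0 already has.

# /etc/sysconfig/network-scripts/ifcfg-eth0
# The physical NIC is enslaved to the bridge and carries no IP itself.
DEVICE=eth0
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# The bridge holds the management IP.
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.20
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
NM_CONTROLLED=no
DELAY=0

# Apply the change with: service network restart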


Now let's return to the engine and add the host into the cluster.

  1. Select the Host tab.
  2. Click New to add a new host.
  3. Name: Enter a Name you want the host to be known as.
  4. Address: Either enter the IP address or hostname for the host.
  5. Root Password: Enter the root password for the host
  6. Automatically configure host firewall: Leave it checked
  7. You need to set up your power management settings. Below is a screenshot of my Drac5 settings. Note the test on your 1st node will fail, since there isn't a 2nd node yet, as you can see from the screenshot.
  8. This step will take about 5 minutes per node and will reboot the node. Only add one node at a time; I have had weird issues otherwise.

Repeat for each host. Note you still need to make some changes on each host, but because the nodes need to talk to each other, I suggest setting up all the nodes first.

Next step: Volume Setup